Tag Archives: Props

Real-life DOR-15 bowler hat from Disney’s Meet the Robinsons

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/real-life-dor-15-bowler-hat-from-disneys-meet-the-robinsons/

Why wear a boring bowler hat when you can add technology to make one of Disney’s most evil pieces of apparel?

Meet the Robinsons

Meet the Robinsons is one of Disney’s most underrated movies. Thank you for coming to my TED talk.

What’s not to love? Experimental, futuristic technology, a misunderstood villain, lessons of love and forgiveness aplenty, and a talking T-Rex!

For me, one of the stand-out characters of Meet the Robinsons is DOR-15, a best-of-intentions experiment gone horribly wrong. Designed as a helper hat, DOR-15 instead takes over the mind of whoever is wearing it, hellbent on world domination.

Real-life DOR-15

Built using a Raspberry Pi and the MATRIX Voice development board, the real-life DOR-15, from Team MATRIX Labs, may not be ready to take over the world, but it’s still really cool.

With a plethora of built-in microphones, the MATRIX Voice directs DOR-15 towards whoever is making sound, while a series of servos wiggles its 3D‑printed legs for added creepiness.

This project uses ODAS (Open embeddeD Audition System) and some custom code to move a servo motor towards the most concentrated source of incoming sound across a 180-degree range. This enables the hat to turn and face a person calling to it.
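
Team MATRIX Labs’ actual code is in their walkthrough; purely as a flavour of the idea, here is a hypothetical Python sketch that points a servo at the loudest bearing, with an invented energies mapping standing in for parsed ODAS output and an assumed servo pin:

from gpiozero import AngularServo

# Assumed wiring: the neck servo's signal wire on GPIO18.
servo = AngularServo(18, min_angle=0, max_angle=180)

def face_loudest(energies):
    """energies: mapping of bearing (0-180 degrees) to sound energy."""
    servo.angle = max(energies, key=energies.get)

# Hypothetical readings parsed from ODAS: the hat swivels to 90 degrees.
face_loudest({0: 0.1, 90: 0.8, 180: 0.3})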

The added wiggly spider legs come courtesy of this guide by the delightful Jorvon Moss, whom HackSpace readers will remember from issue 21.

In their complete Hackster walkthrough, Team MATRIX Labs talks you through how to build your own DOR-15, including all the files needed to 3D‑print the legs.

Realising animated characters and props

So, what fictional wonder would you bring to life? Your own working TARDIS? Winifred’s spellbook? Mary Poppins’ handbag? Let us know in the comments below.

The post Real-life DOR-15 bowler hat from Disney’s Meet the Robinsons appeared first on Raspberry Pi.

Build Demolition Man’s verbal morality ticketing machine

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/build-demolition-mans-verbal-morality-ticketing-machine/

In the 1993 action movie Demolition Man, Sylvester Stallone stars as a 1990s cop transported to the near-future. Technology plays a central role in the film, often bemusing the lead character. In a memorable scene, he is repeatedly punished by a ticketing machine for using bad language (a violation of the verbal morality statute).

In the future of Demolition Man, an always-listening government machine detects every banned word and issues a fine in the form of a receipt from a wall-mounted printer. This tutorial shows you how to build your own version using a Raspberry Pi, Google’s Cloud Speech-to-Text API, and a thermal printer. Not only can it replicate the detection of banned words, but it also doubles as a handy voice-to-paper stenographer (if you want a more serious use).

Prepare the hardware

We built a full ‘boxed’ project, but you can keep it simple if you wish. Your Raspberry Pi needs a method for listening, speaking, and printing. The easiest solution is to use USB for all three.

After prototyping using Raspberry Pi 4 and various USB devices, we settled on Raspberry Pi Zero W with a small USB mic and Pimoroni Speaker pHAT to save space. A Pico HAT Hacker allowed the connection of both the printer and Speaker pHAT, as they don’t share GPIO pins. This bit of space-saving means we could install the full assembly inside the 3D-printed case along with the printer.

Connect the printer

To issue our receipts we used a thermal printer, the kind found in supermarket tills. This particular model is surprisingly versatile, handling text and graphics.

It takes standard 2.25-inch (57mm) receipt paper, available in rolls of 15 metres. When printing, it does draw a lot of current, so we advise using a separate power supply. Do not attempt to power it from your Raspberry Pi. You may need to fit a barrel connector and source a 5V/1.5A power supply. The printer uses a UART/TTL serial connection, which neatly fits on to the GPIO. Although the printer’s connection is listed as being 5V, it is in fact 3.3V, so it can be directly connected to the ground, TX, and RX pins (physical pins 6, 8, 10) on the GPIO.

Install and configure Raspbian

Get yourself a copy of Raspbian Buster Lite and burn it to a microSD card using a tool like Etcher. You can use the full version of Buster if you wish. Perform the usual steps of getting a wireless connection and then updating to the latest version using sudo apt update && sudo apt -y upgrade. From a command prompt, run sudo raspi-config and go to ‘Interfacing options’, then ‘Enable serial’. When asked if you would like the login shell to be accessible, respond ‘No’. To the next question, ‘Would you like the serial port hardware to be enabled?’, reply ‘Yes’. Now reboot your Raspberry Pi.

Test the printer

Make sure the printer is up and running. Double-check you’ve connected the header to the GPIO correctly and power up the printer. The LED on the printer should flash every few seconds. Load in the paper and make sure it’s feeding correctly. We can talk to the printer directly, but the Python ‘thermalprinter’ library makes coding for it so much easier. To install the library:

sudo apt install python3-pip
pip3 install thermalprinter
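
The official listing is in The MagPi; as a stand-in, a minimal printer.py along these lines will confirm the wiring (the serial port and baud rate below are typical for this printer, but check your own setup):

from thermalprinter import ThermalPrinter

# Assumed port and baud rate; adjust to match your wiring.
with ThermalPrinter(port='/dev/serial0', baudrate=19200) as printer:
    printer.out('Hello from your Raspberry Pi!')
    printer.feed(2)  # advance the paper past the tear bar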

Create a file called printer.py and enter the code from the relevant listing. Run the code using:

python3 printer.py

If you got a nice welcoming message, your printer is all set to go.

Test the microphone

Once your microphone is connected to Raspberry Pi, check the settings by running:

alsamixer

This utility configures your various sound devices. Press F4 to enter ‘capture’ mode (microphones), then press F6 and select your device from the list. Make sure the microphone is not muted (M key) and the levels are high, but not in the red zone.

Back at the command line, run this command:

arecord -l

This shows a list of available recording devices, one of which will be your microphone. Make a note of the card number and subdevice number.

To make a test recording, enter:

arecord --device=hw:1,0 --format S16_LE --rate 44100 -c1 test.wav

If your card and subdevice numbers were not ‘1,0’, you’ll need to change the device parameter in the above command to match.

Say a few words, then use CTRL+C to stop recording. Check the playback with:

aplay test.wav

Choose your STT provider

STT means speech to text and refers to the code that can take an audio recording and return recognised speech as plain text. Many solutions are available and can be used in this project. For the greatest accuracy, we’re going to use Google’s Cloud Speech-to-Text API. Rather than doing the complex processing locally, a compressed version of the sound file is uploaded to Google Cloud and the text is returned. However, this does mean Google gets a copy of everything ‘heard’ by the project. If this isn’t for you, take a look at Jasper, an open-source alternative that supports local processing.

Create your Google project

To use the Google Cloud API, you’ll need a Google account. Log in to the API Console at console.developers.google.com. We need to create a project here. Next to ‘Google APIs’, click the drop-down menu, then ‘New Project’. Give it a name. You’ll be prompted to enable APIs for the project. Click the link, then search for ‘speech’. Click on ‘Cloud Speech-to-Text API’, then ‘Enable’. At this point you may be prompted for billing information. Don’t worry, you can have up to 60 minutes of audio transcribed for free each month.

Get your credentials

Once the Speech API is enabled, the screen will refresh and you’ll be prompted to create credentials. This is the info our code needs to be granted access to the speech-to-text API. Click on ‘Create Credentials’ and on the next screen select ‘Cloud Speech-to-text API’. You’re asked if you’re planning to use the Compute Engine; select ‘no’. Now create a ‘service account’. Give it a different name from the one used earlier, change the role to ‘Project Owner’, leave the type of file as ‘JSON’, and click ‘Continue’. A file will be downloaded to your computer; transfer this to your Raspberry Pi.

Test Google recognition

When you’re happy with the recording levels, record a short piece of speech and save it as test.wav. We’ll send this to Google and check our access to the API is working. Install the Google Speech-To-Text Python library:

sudo apt install python3-pyaudio
pip3 install google-cloud-speech

Now set an environment variable that the libraries will use to locate your credentials JSON:

export GOOGLE_APPLICATION_CREDENTIALS="/home/pi/[FILE_NAME].json"

(Don’t forget to replace [FILE_NAME] with the actual name of the JSON file).
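
The magazine listing isn’t reproduced here, but a minimal speech_to_text.py might look something like this sketch, assuming a current google-cloud-speech release and the 44.1kHz mono WAV recorded earlier:

import io

from google.cloud import speech

client = speech.SpeechClient()  # picks up GOOGLE_APPLICATION_CREDENTIALS

with io.open('test.wav', 'rb') as audio_file:
    content = audio_file.read()

audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=44100,
    language_code='en-GB',
)

# One synchronous request: upload the clip, print each transcript returned.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)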

Using a text editor, create a file called speech_to_text.py and enter the code from the relevant listing. Then run it:

python3 speech_to_text.py

If everything is working correctly, you’ll get a text transcript back within a few seconds.

Live transcription

Amazingly, Google’s speech-to-text service also supports streaming recognition: rather than capture-then-process, the audio is sent as a stream, and an HTTP stream of the recognised text comes back. When there is a pause in the speech, the results are finalised, and we can then send them to the printer. If all the code you’ve entered so far is running correctly, all you need to do is download the stenographer.py script and start it using:

python3 stenographer.py

You are limited in how long you can record for, but this could be coupled with a ‘push to talk’ button so you can make notes using only your voice!
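
stenographer.py itself isn’t reprinted here, but the core of a streaming script looks something like this condensed sketch (assuming a current google-cloud-speech release and a PyAudio-visible microphone):

import queue

import pyaudio
from google.cloud import speech

RATE = 16000
CHUNK = RATE // 10  # 100 ms of audio per request
audio_buffer = queue.Queue()

def on_audio(in_data, frame_count, time_info, status):
    # PyAudio calls this from its own thread; just queue the raw bytes.
    audio_buffer.put(in_data)
    return None, pyaudio.paContinue

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                 frames_per_buffer=CHUNK, stream_callback=on_audio)

client = speech.SpeechClient()
streaming_config = speech.StreamingRecognitionConfig(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=RATE,
        language_code='en-GB',
    ),
    interim_results=False,  # only deliver finalised results
)

def requests():
    while True:
        yield speech.StreamingRecognizeRequest(audio_content=audio_buffer.get())

# Each finalised phrase arrives once the speaker pauses.
for response in client.streaming_recognize(streaming_config, requests()):
    for result in response.results:
        if result.is_final:
            print(result.alternatives[0].transcript)  # or send it to the printer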

Banned word game

Back to Demolition Man. We need to make an alarm sound, so install a speaker (a passive one that connects to the 3.5mm jack is ideal; we used a Pimoroni Speaker pHAT). Download the banned.py code and edit it in your favourite text editor. At the top is a list of words. You can change this to anything you like (but don’t offend anyone!). In our list, the system is listening for a few mild naughty words. In the event anyone mentions one, a buzzer will sound and a fine will be printed.

Make up your list and start the game by running:

python3 banned.py

Now try one of your banned words.
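
This isn’t the published banned.py, but the core check is simple enough to sketch: compare each finalised transcript against the list and, on a match, sound the alarm and print the fine (the alarm file, fine wording, and printer port are all assumptions):

import subprocess

from thermalprinter import ThermalPrinter

BANNED = {'darn', 'heck', 'flarn'}  # hypothetical list; edit to taste

def issue_fine(word):
    # Sound the alarm through the speaker (alarm.wav is an assumption).
    subprocess.run(['aplay', 'alarm.wav'])
    # Print the citation; port and baud rate as configured earlier.
    with ThermalPrinter(port='/dev/serial0', baudrate=19200) as printer:
        printer.out('VERBAL MORALITY STATUTE VIOLATION')
        printer.out('Infraction: ' + word)
        printer.feed(2)

def check_transcript(transcript):
    for word in transcript.lower().split():
        if word.strip('.,!?') in BANNED:
            issue_fine(word)

check_transcript('oh heck, not again')  # would trigger one fine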

Package it up

Whatever you decide to use this project for, why not finish it off with a 3D-printed case, packaging up the printer and Raspberry Pi with the recording and playback devices to create a portable unit? Ideal for pranking friends or taking notes on the move!

See if you can invent any other games using voice recognition, or investigate the graphics capability of the printer. Add a Raspberry Pi Camera Module for retro black and white photos. Combine it with facial recognition to print out an ID badge just using someone’s face. Over to you.

The MagPi magazine issue 84

This project was created by PJ Evans for The MagPi magazine issue 84, available now online, from your local newsagents, or as a free download from The MagPi magazine website.

The post Build Demolition Man’s verbal morality ticketing machine appeared first on Raspberry Pi.

Build your own animatronic GLaDOS

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/build-your-own-animatronic-glados/

It’s 11 years since Steam’s Orange Box came out, which is probably making you feel really elderly. Portal was the highlight of the game bundle for me — cue giant argument in the comments — and it still holds up brilliantly. It’s even in the Museum of Modern Art’s collection; there’s nothing that quite says you’re part of the establishment like being in a museum. Cough.

I bought an inflatable Portal turret to add to the decor in Raspberry Pi’s first office (I’m still not sure why; I just thought it was a good idea at the time, like the real-life Minecraft sword). Objects and sounds from the game have embedded themselves in pop culture; there’s a companion cube paperweight somewhere in my desk at home, and I bet you’ve encountered a cake that looks like this sometime in the last 11 years or so.

A lie

But turrets, cakes, and companion cubes pale into viral insignificance next to the game’s outstanding antagonist, GLaDOS, a psychopathic AI system who just happens to be my favourite video game bad guy of all time. So I was extremely excited to see Element14’s DJ Harrigan make an animatronic GLaDOS, powered, of course, by a Raspberry Pi.

Animatronic GLaDOS Head with Raspberry Pi

The Portal franchise is one of the most engaging puzzle games of the last decade and, beyond the mind-bending physics, is also known for its charming AI antagonist: GLaDOS. Join DJ on his journey to build yet more robotic characters from pop culture as he “brings her to life” with a Raspberry Pi and surely dooms us all.

Want to make your own? You’ll find everything you need here. I’ve been trying awfully hard not to end this post on a total cliche, but I’m failing hard: this was a triumph.

The post Build your own animatronic GLaDOS appeared first on Raspberry Pi.

Build your own South Park Buddha Box

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/south-park-raspberry-pi-buddha-box/

Escape the distractions of the world around you and focus your attention on the thing you love the most in life: your smartphone! It’s easy with the all-new Buddha Box, brought to you by South Park and the 8 Bits and a Byte team!

Introducing The All New Buddha Box | South Park

A brand new invention is sweeping South Park. The Buddha Box will let you escape from anything in the world so that you can focus on the thing you love the most… your phone.

The Buddha Box

Introduced in a recent episode of the cult show South Park, the Buddha Box is an ingenious invention that allows its user to ignore the outside world and fully immerse themselves in their smartphone. With noise-cancelling headphones and a screen so close to your eyes you’ll be seeing light spots for weeks to come, the Buddha Box is the must-have accessory for 2019.

We jest, obviously. It’s a horrible idea. And here’s how to make your own!

Build your own Buddha Box

Using a Raspberry Pi, noise-cancelling headphones, a screen, and a cardboard box, the wonderful 8 Bits and a Byte team has created a real-life Buddha Box that you definitely shouldn’t make yourself. As we said — horrible idea.

But it would be a great way to try out screensharing software on your Pi!

To make it, you’ll need to secure the headphones and a screen inside a suitably sized cardboard box, and then set up your Raspberry Pi to run Screencast.

The inside of the Raspberry Pi-enabled South Park Buddha Box showing the headphones, screen and Pi secured inside

The Screencast software allows you to cast the screen of your smartphone to the screen within the box — hence its name.

Here’s the tutorial from 8 Bits and a Byte, and a working demonstration:

South Park’s Buddha Box

A real, working version of South Park’s Buddha Box, made using a pair of headphones, an LCD screen, a power bank, and a Raspberry Pi.

If you have an Android phone that you want to use with your Raspberry Pi, check out this guide for enabling Screencast, written by Make Tech Easier. And if you want to share the screen of an iPhone with your Pi, this Instructables guide will walk you through setting up the RPlay software.

Building props

We love prop builds using Raspberry Pi — if you do too, check out the posts in our ‘props’ blog category. And if you’ve made a prop from TV or film using a Pi, be sure to share it with us!

The post Build your own South Park Buddha Box appeared first on Raspberry Pi.

Happy Birthday, Harry Potter: wizard-worthy Pi projects

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/harry-potter-raspberry-pi/

Today marks Harry Potter’s 38th birthday. And as we’re so, so very British here at Raspberry Pi, we have no choice but to celebrate the birth of The Boy Who Lived with some wonderfully magical projects from members of the community.

Harry Potter birthday Raspberry Pi

Build your own Daily Prophet

After a trip to The Wizarding World of Harry Potter, Piet Rullens Jr wanted to build something special to remember the wonderful time he and his wife had at the amusement park.

Daily Prophet poster with moving object

Daily Prophet with moving object

Piet designed and printed his own front page of The Daily Prophet, cut out the main photo, and replaced it with our official Touch Display. A Raspberry Pi hidden behind the frame runs a short Python script: when a motion sensor detects someone walking by, the screen plays video footage from the couple’s wizarding day.
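
Piet’s script isn’t included in the post, but the trigger logic amounts to only a few lines; here’s a hypothetical sketch using the gpiozero library and omxplayer, with the GPIO pin and video filename invented for illustration:

import subprocess

from gpiozero import MotionSensor

pir = MotionSensor(4)  # assumes the PIR sensor's output is on GPIO4

while True:
    pir.wait_for_motion()
    # omxplayer ships with Raspbian and plays video on the Touch Display.
    subprocess.run(['omxplayer', 'wizarding_day.mp4'])
    pir.wait_for_no_motion()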

Read more about Piet’s project on our blog here, and in The MagPi here.

Wizard duelling

Since Allen Pan is known for his tech projects based on pop culture favourites, it’s no surprise that he combined a Raspberry Pi and Harry Potter lore to build duelling gear. But were any of us expecting real spells with very real consequences such as this?

Real Life Harry Potter Wizard Duel with ELECTRICITY | Sufficiently Advanced

Harry Potter body shocking wands with speech recognition…It’s indistinguishable from magic! With the release of Fantastic Beasts and Where to Find Them, we took magic wands from Harry Potter to create a shocking new game. Follow Sufficiently Advanced! https://twitter.com/AnyTechnology https://www.facebook.com/sufficientlyadvanced https://www.instagram.com/sufficientlyadvanced/ Check out redRomina: https://www.youtube.com/user/redRomina Watch our TENS unit challenge!

When a dueller correctly pronounces one of a collection of wizard spells, their opponent gets an electric shock from a Transcutaneous Electrical Nerve Stimulation (TENS) machine.

Learn more about how the Raspberry Pi controls this rather terrifying build here, and remember: don’t try this at home — wizard duels are reserved for the Hogwarts Great Hall only!

Find family members with the Weasley clock

Curious as to where your family members are at any one time? So was Pat Peters: by replacing magic with GPS technology, Pat recreated the iconic clock from the home of the Weasley family.

Harry Potter birthday Raspberry Pi

But how does it work? Over to Pat:

This location clock works through a Raspberry Pi, which subscribes to an MQTT broker that our phones publish events to. Our phones (running the OwnTracks GPS app) send a message to the broker whenever we cross into or out of one of our waypoints that we have set up in OwnTracks; this then triggers the Raspberry Pi to run a servo that moves the clock hand to show our location.
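
Pat’s own code is linked below; as a rough sketch of the same idea (assuming the paho-mqtt 1.x API, with the broker address, topic, waypoint names, and servo pin all invented for illustration):

import json

import paho.mqtt.client as mqtt
from gpiozero import Servo

POSITIONS = {'home': -1.0, 'work': 0.0, 'school': 1.0}  # assumed waypoints
hand = Servo(17)  # assumes the clock hand's servo is on GPIO17

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # OwnTracks publishes a 'transition' message when a waypoint is crossed.
    if event.get('_type') == 'transition' and event.get('event') == 'enter':
        place = event.get('desc', '').lower()
        if place in POSITIONS:
            hand.value = POSITIONS[place]

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost')  # assumes the broker runs on the Pi itself
client.subscribe('owntracks/#')
client.loop_forever()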

Find more information, including links to the full Instructables tutorial, on our blog.

Play Wizard’s Chess!

Motors and gears and magnets, oh my! Bethanie Fentiman knows how to bring magic to Muggles with her Wizard’s Chess set.

Harry Potter birthday Raspberry Pi

We bet ten shiny Sickles that no one has ever finished reading/watching Harry Potter and the Philosopher’s Stone and not wanted to play Wizard’s Chess. Pieces moving by magic, Knights attacking Pawns — it’s entertaining mayhem for the whole family. And while Bethanie hasn’t managed to get her pieces to attack one another (yet), she’s got moving them as if by magic down to a fine art!

Learn more about Bethanie’s Wizard’s Chess set here, where you’ll also find links to the Kent Raspberry Jam community where Bethanie volunteers.

Find your house with the Sorting Hat

Whether you believe yourself to be a Gryffindor, Slytherin, Hufflepuff, or Ravenclaw, the only way to truly know is via the Hogwarts Sorting Hat.

Harry Potter birthday Raspberry Pi

Our free resource lets you code your own Sorting Hat to establish once and for all which Hogwarts house you really belong to.

I’m a Gryffindor, by the way. [Editor’s note: Alex is the most Gryffindor person I’ve ever met.]

Create a wand-controlled lamp

Visitors to The Wizarding World of Harry Potter may have found themselves in possession of souvenir interactive wands that allow them to control various displays throughout the park. Upon returning from a trip, Sean O’Brien and his daughters began planning how they could continue to use the wands at home.

They soon began work on Raspberry Potter, an automation project that uses an infrared camera and a Raspberry Pi to allow their wands to control gadgets and props around their home.



Find the full tutorial for the build here! And if you don’t have a wand to hand, here are Allen Pan and William Osman making their own out of…hotdogs?!

Hacking Wands at Harry Potter World

How to make your very own mostly-functional interactive wand. Please don’t ban me from Universal Studios. Links on my blog: http://www.williamosman.com/2017/12/hacking-harry-potter-wands.html Allen’s Channel: https://www.youtube.com/channel/UCVS89U86PwqzNkK2qYNbk5A Support us on Patreon: https://www.patreon.com/williamosman Website: http://www.williamosman.com/ Facebook: https://www.facebook.com/williamosmanscience/ InstaHam: https://www.instagram.com/crabsandscience/ CameraManJohn: http://www.johnwillner.com/

You’re a project theme, Harry

We’re sure these aren’t the only Harry Potter–themed Raspberry Pi makes in the wild. If we’ve missed any, or if you have your own ideas for a project, let us know! We will never grow tired of Harry Potter projects…

Harry Potter birthday Raspberry Pi

The post Happy Birthday, Harry Potter: wizard-worthy Pi projects appeared first on Raspberry Pi.

Ten awesome 3D-printable Raspberry Pi goodies

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/3d-printable-raspberry-pi/

3D printing has become far more accessible for hobbyists, with printer prices now in the hundreds instead of thousands of pounds. Last year, we covered some of the best 3D-printable cases for the Pi, and since then, Raspberry Pi enthusiasts have shared even more cool designs on sites such as MyMiniFactory and Thingiverse!

Here are ten of our recent favourites:

World Cup Sputnik

“With the World Cup now underway, I wanted a Russia-themed football sculpture to hang over the desk,” explains creator Ajax Jones. “What better than a football-styled Sputnik!”

Raspberry Pi 3d-printable World Cup Sputnik

The World Cup Sputnik comes complete with a Raspberry Pi that transmits the original Sputnik ‘beeps’ on an FM frequency, allowing co-workers to tune in for some 1960s nostalgia.

Radios

We see an abundance of musical Raspberry Pi projects online, and love looking out for the ones housed in interesting, unique cases like these:

Raspberry Pi 3d-printable radio
Raspberry Pi 3d-printable radio

The MiniZ is a streaming radio based on the Zenith Cube, created by Thingiverse user thisoldgeek.

This is a case for a small, retro radio powered by Logitech Media Server. It uses a Raspberry Pi Zero W and displays a radio dial (tunes via a knob), a clock, and ‘Now Playing’ album art.

For something a little simpler to use, Lukas2040’s NFC radio for children comes with illustrated, NFC-tagged cards that allow his two-year-old daughter to pick her own music to play.

Gaming

Whether it’s console replicas or tabletop arcade cabinets, the internet is awash with gaming-themed Raspberry Pi projects. Here are a few of our favourites!

The Okama Gamesphere is a fictional game console from South Park. Leodym has taken the rather stylish design and converted it into a Raspberry Pi 3 case.

Okama Gamesphere 3d-printable Raspberry Pi case
Okama Gamesphere 3d-printable Raspberry Pi case
Okama Gamesphere 3d-printable Raspberry Pi case

Canino‘s Yet Another Mini Arcade is exactly that. We really like how it reminds us of old, imported gaming consoles from our childhoods.

3d-printable Raspberry Pi arcade case

“I really love the design and look of the HP OMEN Accelerator,” writes designer STIG_. “So I decided to draw up a case for the Raspberry Pi 3 Model B.”

OMEN Accelerator 3D-printable Raspberry Pi case
OMEN Accelerator 3D-printable Raspberry Pi case
OMEN Accelerator 3D-printable Raspberry Pi case

We really love it too, STIG_. Well done.

Ironman, Ironman, does whatever an Ironman can…

atlredninja‘s Ironman Mark 7 torso housing for the Google AIY Projects Voice Kit is pretty sweet!

Iron man AIY case Neopixel Rings Adafruit

Iron man AIY case Neopixel Rings Adafruit 16 and 12 LEDS. 3d files and instructions for assembly here: https://www.thingiverse.com/thing:2950452 This is just a test to make sure the LEDs are working and the A.I. is working correctly. This took me about 3 weeks to design, print, and assemble.

This model is atlredninja‘s second version of an Ironman-themed AIY project: the first fits within a replica helmet. We’re looking forward to a possible third edition with legs. And a fourth that flies.

We can dream, can’t we?

Speaking of Marvel

How often have you looked at Thor’s hammer and thought to yourself “If only it had a Raspberry Pi inside…”

Raspberry Pi Thor case

This case from furnibird is one of several pop culture–themed Raspberry Pi cases that the designer has created. Be sure to check out the others, including a Deathstar and Pac-Man.

3D-printable bird box

chickey‘s 3D-printable Raspberry Pi Bird Box squeezes a Raspberry Pi Zero W and a camera into the lid, turning this simple nesting box into a live-streaming nature cam.

3D-printed raspberry pi bird box
3D-printed raspberry pi bird box
3D-printed raspberry pi bird box

The Raspberry Pi uploads images directly to a webpage, allowing you to check in on the feathered occupants from any computer or mobile device. Nifty.

Print a Raspberry Pi!

Using a 3D-printed Raspberry Pi in place of the real deal while you’re prototyping in the workshop may save you from accidentally damaging your tiny computer.

3D-printed Raspberry Pi 3
3D-printed Raspberry Pi 3
3D-printed Raspberry Pi 3

AlwaysComputing designed this Raspberry Pi Voxel Model using MagicaVoxel, stating “I like to tinker and play with the program MagicaVoxel. I find it therapeutic!”

What else?

What Raspberry Pi–themed 3D prints have you seen lately? Share your favourites with us in the comments, or on Twitter and Facebook.

The post Ten awesome 3D-printable Raspberry Pi goodies appeared first on Raspberry Pi.

A working original Doctor Who K-9 prop

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/k-9-working-doctor-who-prop/

When Abertay University purchased some unwanted Doctor Who props from the BBC in 2011, they could never have known that their future computer science student Gary Taylor would transform a water-damaged robot corpse into a working K-9, the cutest (and snarkiest) of all the Doctor’s companions.

K-9 Doctor Who Raspberry Pi Prop

image c/o The Courier

K-9

If you’re unfamiliar with Doctor Who, you may not be aware of the Doctor’s robotic-canine best friend, K-9. I won’t wax lyrical about the long and winding history of this iconic science fiction character (though I could), but those of you who want to learn more can watch the video below.

History of K9 – History of Doctor Who

Hello and welcome to the Whoniverse and to another instalment of the History of Doctor Who series. This time I’m not looking at a universe-conquering species but a tin dog: yes, the Doctor’s past travelling companion K9. There have been many versions of K9, and he has appeared alongside numerous Doctors and other companions.

Tl;dw: K-9 is basically a really clever, robotic dog invented in the year 5000.

Resurrecting a robotic dog

For his final-year dissertation, computer science student Gary Taylor decided to bring K-9 back to life, having discovered the prop damaged by a water leak in the university hackspace.

“I love robotics, I love programming, I love dogs, and I love Doctor Who.” Don’t we all, Gary. Don’t we all.
Image c/o The Courier

For his dissertation, titled Creating an Autonomous Robot Utilizing Raspberry Pi, Arduino, and Ultrasound Sensors for Mapping a Room, Gary used modern-day technology to rebuild K-9’s original and often unreliable radio-controlled electronics from the 1970s.

However, Gary’s K-9 is more than a simple remote-controlled robot. As the dissertation title states, the robot uses ultrasound sensors for room mapping, and this function is controlled by both an Arduino and Raspberry Pi.

A block diagram taken from Gary’s dissertation

An Arduino Mega 2560 controls the wheels and three ultrasound sensors located at the bottom of K-9’s body. It passes the sensor data to the onboard Raspberry Pi 3, and the Pi plots obstacles and walls to create a map of K-9’s surroundings.

The three ultrasonic sensors can be seen along the bottom of K-9’s body
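
Gary’s dissertation has the real implementation; the Pi side of such a pipeline can be sketched roughly like this, with the serial port, baud rate, and message format all guessed for illustration:

import math

import serial

link = serial.Serial('/dev/ttyACM0', 9600)  # assumed port and baud rate
obstacles = []

while True:
    line = link.readline().decode().strip()  # e.g. '90,142' -> 90 deg, 142 cm
    try:
        angle, distance = (float(v) for v in line.split(','))
    except ValueError:
        continue  # skip malformed lines
    # Convert the polar reading into a point on a 2D map of the room.
    obstacles.append((distance * math.cos(math.radians(angle)),
                      distance * math.sin(math.radians(angle))))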

The Raspberry Pi also connects to a smartphone via Bluetooth, where Gary runs a custom app to remotely control K-9 and view the map it creates.

More information? Affirmative!

The team at the Electronic Engineering Journal has written up a very thorough explanation of Gary’s dissertation. Those interested in the full details of the robot won’t be disappointed!

For a video of Gary and K-9 that refuses to embed itself in this blog post, head over to The Courier’s website.

And for more Doctor Who–related Raspberry Pi builds, check out Jeremy Lee’s remake of Captain Jack’s Vortex Manipulator, a synthesised rendition of the classic theme using a Raspberry Pi Zero, and a collection of builds and props in this Doctor Who roundup, including a sonic screwdriver, a Dalek, and a TARDIS in near-space.

Oh, and another thing…

The BBC released some cool behind-the-scenes images and photos from season ten of Doctor Who, including this production art for Nardole’s tracking device:

The Pi Towers staff may have let out a little squee of delight when we noticed the Raspberry Pi included within.

The post A working original Doctor Who K-9 prop appeared first on Raspberry Pi.

Build your own Solo: A Star Wars Story L3-37 droid

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/build-star-wars-l3-37-droid/

It is a truth universally acknowledged…that everyone wants their own Star Wars droid. If you’re now thinking “No, not me!”, then you obviously haven’t met the right droid yet. But Patrick ‘PatchBOTS‘ Stefanski has, and that droid is L3-37 from the newly released Solo: A Star Wars Story.

Release the droids

Visit your local maker event, such as Maker Faire, and you’re sure to meet at least one droid builder. Building a Star Wars droid is pretty much every maker’s dream, and YouTube droid-building sensation Patrick Stefanski is living that dream. On his YouTube channel PatchBOTS, Patrick showcases his maker chops with truly impressive recreations of characters such as BB-8 and our personal favourite, Chopper from Star Wars Rebels.

L3-37

Patrick’s new L3-37 build uses the free Alexa Voice Service and a Raspberry Pi 3 to augment a 3D-printed base model with robotics and AI.

Solo Star Wars Story L3-37 droid PatchBOTs

He designed L3-37’s head based on press images and trailers, and then adjusted some of the visual aesthetic after watching the movie. When he realised that the Amazon Echo Dot he’d started the build with wouldn’t allow him to implement some of the features he had planned, including a unique wake word, Patrick decided to use a Raspberry Pi instead.

Solo Star Wars Story L3-37 droid PatchBOTs

A wake word is the word a home assistant uses to recognise that you’re addressing it. For Amazon Alexa, the standard wake words are ‘Alexa’, ‘Echo’, ‘Amazon’, and ‘computer’. While these are fine for standard daily use, Patrick wanted his droid to acknowledge its own name, L3-37. He also wanted to make L3-37 react with a voice response and movement whenever it heard its name. Using the Raspberry Pi enabled him to edit the home assistant code to include these functionalities, and in this way he made L3-37 truly come to life.

Build your own L3-37 home assistant

If you’d like to build your own L3-37 (and why wouldn’t you?), Patrick is in the process of adding the complete set of instructions and code to his GitHub account. The 3D printer files are available now to get you started, along with the list of ingredients for the build, including servos, NeoPixels, and every propmaker’s staple: Rub ’n Buff.

If you want to buy the parts for this project, why not use the affiliate links Patrick provides in the L3-37 video description to help him fund future projects? And while you’re there, leave a comment to show him some love for this incredible droid build, and subscribe to his channel to see what he comes up with next.

Solo Star Wars Story L3-37 droid

We’re definitely going to be taking some of the lessons learned in this project to work on our own builds, and we hope you’ll do the same and share your work with us via social media.

The post Build your own Solo: A Star Wars Story L3-37 droid appeared first on Raspberry Pi.

Archimedes, the Google AIY Projects Vision familiar

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/archimedes-google-aiy-vision/

hackster.io‘s ‘resident hardware nerd’ Alex Glow has gifted the world of makers with Archimedes, a shoulder-mounted owl that judges your emotions using the Google AIY Project Vision Kit.

Say Hi to Archimedes – the AI Robot Owl

Say hi to Archimedes – the robot owl with a Google AIY brain. Built with Raspberry Pi + Arduino! Here are some insights into pitfalls of the build process. I made this li’l guy to demo the AIY Vision Kit for Maker Faire 2018… but he’s not going away anytime soon!

Google AIY Project Kits

Google released the Pi-powered AIY Projects Voice Kit last year, providing the entire set of build ingredients with issue 57 of The MagPi Magazine. You loved it, we loved it, and later that year they followed up the Voice Kit’s success with the Vision Kit, also based on the Raspberry Pi.

google aiy vision kit

As the name indicates, the Voice Kit completes tasks in response to voice commands, just like Amazon Alexa or Google Home. The Vision Kit allows makers to experiment with neural networking to implement image recognition in their projects.

Planning for Maker Faire

When the hackster.io team was asked to contribute a project to Google’s stand at Maker Faire Bay Area this year, their in-house self-confessed hardware and robotics nerd Alex Glow took on the challenge.

I took a really, really long time to figure out what to build — what it would look like, how it would animate, how it would dispense the stickers…in the end, I went with this cute and fairly challenging design.

And so, Alex brought Archimedes the robotic owl into the world — and the world is a cuter place for it.

Archimedes the owl

Having set up the Google AIY Vision Kit — you can find Alex’s live build video here — she raided a HackerBox for a pan/tilt gimbal. The gimbal was far more robust than simple servos, and since Alex wanted to bring Archimedes to more events after Maker Faire, she needed something that would withstand the wear and tear.

it’ll be fun trying to explain this one // i tried: bit.ly/robotowl

For Maker Faire, she modified Archimedes to be a shoulder-mounted familiar, but Alex initially mounted him on a box that would open to reveal a prize if Archimedes detected a certain facial expression. For this, she introduced an Arduino into the mix, using the board to control three servos: two for the gimbal and the third for the box lid.

Archimedes’s main objective is to hunt out faces and read their expressions. Because of this, his head is always moving so he can take in his surroundings like a real owl.

I combined the AIY Kit’s LED and Joy Detection demos (found in /gpiozero and /joy, respectively). I wanted to make the LED pin turn on when it finds a happy face, but weirdly, this code does the opposite. Someday, I will be enough of a software wizard to figure out why…

Alex designed the owl’s body using OnShape, with the intention of keeping the Raspberry Pi and AIY tech inside. Then she 3D printed the body using the Lulzbot Taz 6 and very hackster-blue filament.

Shawn Hymel on Twitter

Testing out @glowascii ‘s familiar, Archimedes. It knows when I’m sad or happy, but I have to *really* force that happy 😅 #aiy #computervision #ai #3dprinting https://t.co/77pQk9pOHm

Build your own robot familiar

For full instructions on building and coding your own Archimedes, head to Alex’s hackster.io project page. You can keep up to date on the pair’s adventures via Alex’s Twitter account.

The post Archimedes, the Google AIY Projects Vision familiar appeared first on Raspberry Pi.

This is a really lovely Raspberry Pi tricorder

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-tricorder-prop/

At the moment I’m spending my evenings watching all of Star Trek in order. Yes, I have watched it before (but with some really big gaps). Yes, including the animated series (I’m up to The Terratin Incident). So I’m gratified to find this beautiful The Original Series–style tricorder build.

Star Trek Tricorder with Working Display!

At this year’s Replica Prop Forum showcase, we meet up once again with Brian Mix, who brought his new Star Trek TOS Tricorder. This beautiful replica captures the weight and finish of the filming hand prop, and Brian has taken it one step further with some modern-day electronics!

A what now?

If you don’t know what a tricorder is, which I guess is faintly possible, the easiest way I can explain is to steal words that Liz wrote when Recantha made one back in 2013. It’s “a made-up thing used by the crew of the Enterprise to measure stuff, store data, and scout ahead remotely when exploring strange new worlds, seeking out new life and new civilisations, and all that jazz.”

A brief history of Picorders

We’ve seen other Raspberry Pi–based realisations of this iconic device. Recantha’s LEGO-cased tricorder delivered some authentic functionality, including temperature sensors, an ultrasonic distance sensor, a photosensor, and a magnetometer. Michael Hahn’s tricorder for element14’s Sci-Fi Your Pi competition in 2015 packed some similar functions, along with Original Series audio effects, into a neat (albeit non-canon) enclosure.

Brian Mix’s Original Series tricorder

Brian Mix’s tricorder, seen in the video above from Tested at this year’s Replica Prop Forum showcase, is based on a high-quality kit into which, he discovered, a Raspberry Pi just fits. He explains that the kit is the work of the late Steve Horch, a special effects professional who provided props for later Star Trek series, including the classic Deep Space Nine episode Trials and Tribble-ations.

A still from an episode of Star Trek: Deep Space Nine: Jadzia Dax, holding an Original Series-style tricorder, speaks with Benjamin Sisko

Dax, equipped for time travel

This episode’s plot required sets and props — including tricorders — replicating the USS Enterprise of The Original Series, and Steve Horch provided many of these. Thus, a tricorder kit from him is about as close to authentic as you can possibly find unless you can get your hands on a screen-used prop. The Pi allows Brian to drive a real display and a speaker: “Being the geek that I am,” he explains, “I set it up to run every single Original Series Star Trek episode.”

Even more wonderful hypothetical tricorders that I would like someone to make

This tricorder is beautiful, and it makes me think how amazing it would be to squeeze in some of the sensor functionality of the devices depicted in the show. Space in the case is tight, but it looks like there might be a little bit of depth to spare — enough for an IMU, maybe, or a temperature sensor. I’m certain the future will bring more Pi tricorder builds, and I, for one, can’t wait. Please tell us in the comments if you’re planning something along these lines, and, well, I suppose some other sci-fi franchises have decent Pi project potential too, so we could probably stand to hear about those.

If you’re commenting, no spoilers please past The Animated Series S1 E11. Thanks.

The post This is a really lovely Raspberry Pi tricorder appeared first on Raspberry Pi.

Achieving Major Stability and Performance Improvements in Yahoo Mail with a Novel Redux Architecture

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/173062946866

yahoodevelopers:

By Mohit Goenka, Gnanavel Shanmugam, and Lance Welsh

At Yahoo Mail, we’re constantly striving to upgrade our product experience. We do this not only by adding new features based on our members’ feedback, but also by providing the best technical solutions to power the most engaging experiences. As such, we’ve recently introduced a number of novel and unique revisions to the way in which we use Redux that have resulted in significant stability and performance improvements. Developers may find our methods useful in achieving similar results in their apps.

Improvements to product metrics

Last year Yahoo Mail implemented a brand new architecture using Redux. Since then, we have transformed the overall architecture to reduce latencies in various operations, reduce JavaScript exceptions, and better synchronize states. As a result, the product is much faster and more stable.

Stability improvements (reduction in errors):

  • when checking for new emails – 20%
  • when reading emails – 30%
  • when sending emails – 20%

Performance improvements:

  • 10% improvement in page load performance
  • 40% improvement in frame rendering time

We have also reduced API calls by approximately 20%.

How we use Redux in Yahoo Mail

Redux architecture is reliant on one large store that represents the application state. In a Redux cycle, action creators dispatch actions to change the state of the store. React Components then respond to those state changes. We’ve made some modifications on top of this architecture that are atypical in the React-Redux community.

For instance, when fetching data over the network, the traditional methodology is to use Thunk middleware. Yahoo Mail fetches data over the network from our API. Thunks would create an unnecessary and undesirable dependency between the action creators and our API. If and when the API changes, the action creators must then also change. To keep these concerns separate we dispatch the action payload from the action creator to store them in the Redux state for later processing by “action syncers”. Action syncers use the payload information from the store to make requests to the API and process responses. In other words, the action syncers form an API layer by interacting with the store. An additional benefit to keeping the concerns separate is that the API layer can change as the backend changes, thereby preventing such changes from bubbling back up into the action creators and components. This also allowed us to optimize the API calls by batching, deduping, and processing the requests only when the network is available. We applied similar strategies for handling other side effects like route handling and instrumentation. Overall, action syncers helped us to reduce our API calls by ~20% and bring down API errors by 20-30%.

Another change to the normal Redux architecture was made to avoid unnecessary props. The React-Redux community has learned to avoid passing unnecessary props from high-level components through multiple layers down to lower-level components (prop drilling) for rendering. We have introduced action enhancers middleware to avoid passing additional unnecessary props that are purely used when dispatching actions. Action enhancers add data to the action payload so that data does not have to come from the component when dispatching the action. This saves the component from having to receive that data through props and has improved frame rendering by ~40%. The use of action enhancers also avoids writing utility functions to add commonly-used data to each action from action creators.

In our new architecture, the store reducers accept the dispatched action via action enhancers to update the state. The store then updates the UI, completing the action cycle. Action syncers then initiate the call to the backend APIs to synchronize local changes.

Conclusion

Our novel use of Redux in Yahoo Mail has led to significant user-facing benefits through a more performant application. It has also reduced development cycles for new features due to its simplified architecture. We’re excited to share our work with the community and would love to hear from anyone interested in learning more.

The robotic teapot from your nightmares

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/robotic-teapot/

For those moments when you wish the cast of Disney’s Beauty and the Beast was real, only to realise what a nightmare that would be, here’s Paul-Louis Ageneau’s robotic teapot!

Paul-Louis Ageneau Robotic teapot Raspberry Pi Zero

See what I mean?

Tale as old as time…

It’s the classic story of guy meets digital killer teapot, digital killer teapot inspires him to 3D print his own. Loosely based on a boss level of the video game Alice: Madness Returns, Paul-Louis’s creation is a one-eyed walking teapot robot with a (possible) thirst for blood.

Kill Build the beast

“My new robot is based on a Raspberry Pi Zero W with a camera,” Paul-Louis explains on his blog. “It is connected via a serial link to an Arduino Pro Mini board, which drives servos.”

Each leg has two points of articulation: one for the knee and one for the ankle. To move all the joints, the teapot uses eight servo motors in total.

Paul-Louis Ageneau Robotic teapot Raspberry Pi Zero

Paul-Louis designed and 3D printed the body of the teapot to fit the components needed. So if you’re considering this build as a means of acquiring tea on your laziest of days, I hate to be the bearer of bad news, but the most you’ll get from your pour will be jumper leads and Pi.

Paul-Louis Ageneau Robotic Raspberry Pi Zero teapot
Paul-Louis Ageneau Robotic Raspberry Pi Zero teapot
Paul-Louis Ageneau Robotic Raspberry Pi Zero teapot

While the Arduino board controls the legs, it’s the Raspberry Pi’s job to receive user commands and tell the board how to direct the servos. The protocol for moving the servos is simple, with short lines of characters specifying instructions. First a digit from 0 to 7 selects a servo; next the angle of movement, such as 45 or 90, is input; and finally, the use of C commits the instruction.

Typing in commands is great for debugging, but you don’t want to be glued to a keyboard. Therefore, Paul-Louis continued to work on the code in order to string together several lines to create larger movements.
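
As a rough illustration of that protocol (the port, baud rate, and exact byte format here are guesses, not Paul-Louis’s actual firmware interface), sequencing several commands from Python might look like this:

import serial

# Assumed port and baud rate for the Pi-to-Arduino serial link.
link = serial.Serial('/dev/serial0', 9600)

def move(servo, angle):
    # e.g. servo 3 to 45 degrees -> '345C', per the scheme described above
    link.write(f'{servo}{angle}C'.encode())

# String several moves together into a larger gesture: crouch, then stand.
for knee, ankle in [(0, 1), (2, 3), (4, 5), (6, 7)]:
    move(knee, 45)
    move(ankle, 45)
for knee, ankle in [(0, 1), (2, 3), (4, 5), (6, 7)]:
    move(knee, 90)
    move(ankle, 90)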

Paul-Louis Ageneau Robotic teapot Raspberry Pi Zero

The final control system of the teapot runs on a web browser as a standard four-axis arrow pad, with two extra arrows for turning.

Something there that wasn’t there before

Paul-Louis also included an ‘eye’ in the side of the pot to fit the Raspberry Pi Camera Module, as another nod to the walking teapot from the video game, but with a purpose other than evil and wrongdoing. As you can see from the image above, the camera live-streams footage, allowing for remote control of the monster teapot regardless of your location.

If you like it all that much, it’s yours

In case you fancy yourself as an inventor, Paul-Louis has provided the entire build process and the code on his blog, documenting how to bring your own teapot to life. And if you’ve created any robotic household items or any props from video games or movies, we’d love to see them, so leave a link in the comments or share it with us across social media using the hashtag #IBuiltThisAndNowIThinkItIsTryingToKillMe.

The post The robotic teapot from your nightmares appeared first on Raspberry Pi.

AWS Glue Now Supports Scala Scripts

Post Syndicated from Mehul Shah original https://aws.amazon.com/blogs/big-data/aws-glue-now-supports-scala-scripts/

We are excited to announce AWS Glue support for running ETL (extract, transform, and load) scripts in Scala. Scala lovers can rejoice because they now have one more powerful tool in their arsenal. Scala is the native language for Apache Spark, the underlying engine that AWS Glue offers for performing data transformations.

Beyond its elegant language features, writing Scala scripts for AWS Glue has two main advantages over writing scripts in Python. First, Scala is faster for custom transformations that do a lot of heavy lifting because there is no need to shovel data between Python and Apache Spark’s Scala runtime (that is, the Java virtual machine, or JVM). You can build your own transformations or invoke functions in third-party libraries. Second, it’s simpler to call functions in external Java class libraries from Scala because Scala is designed to be Java-compatible. It compiles to the same bytecode, and its data structures don’t need to be converted.

To illustrate these benefits, we walk through an example that analyzes a recent sample of the GitHub public timeline available from the GitHub archive. This site is an archive of public requests to the GitHub service, recording more than 35 event types ranging from commits and forks to issues and comments.

This post shows how to build an example Scala script that identifies highly negative issues in the timeline. It pulls out issue events in the timeline sample, analyzes their titles using the sentiment prediction functions from the Stanford CoreNLP libraries, and surfaces the most negative issues.

Getting started

Before we start writing scripts, we use AWS Glue crawlers to get a sense of the data—its structure and characteristics. We also set up a development endpoint and attach an Apache Zeppelin notebook, so we can interactively explore the data and author the script.

Crawl the data

The dataset used in this example was downloaded from the GitHub archive website into our sample dataset bucket in Amazon S3, and copied to the following locations:

s3://aws-glue-datasets-<region>/examples/scala-blog/githubarchive/data/

Choose the best folder by replacing <region> with the region that you’re working in, for example, us-east-1. Crawl this folder, and put the results into a database named githubarchive in the AWS Glue Data Catalog, as described in the AWS Glue Developer Guide. This folder contains 12 hours of the timeline from January 22, 2017, and is organized hierarchically (that is, partitioned) by year, month, and day.

When finished, use the AWS Glue console to navigate to the table named data in the githubarchive database. Notice that this data has eight top-level columns, which are common to each event type, and three partition columns that correspond to year, month, and day.

Choose the payload column, and you will notice that it has a complex schema—one that reflects the union of the payloads of event types that appear in the crawled data. Also note that the schema that crawlers generate is a subset of the true schema because they sample only a subset of the data.

Set up the library, development endpoint, and notebook

Next, you need to download and set up the libraries that estimate the sentiment in a snippet of text. The Stanford CoreNLP libraries contain a number of human language processing tools, including sentiment prediction.

Download the Stanford CoreNLP libraries. Unzip the .zip file, and you’ll see a directory full of jar files. For this example, the following jars are required:

  • stanford-corenlp-3.8.0.jar
  • stanford-corenlp-3.8.0-models.jar
  • ejml-0.23.jar

Upload these files to an Amazon S3 path that is accessible to AWS Glue so that it can load these libraries when needed. For this example, they are in s3://glue-sample-other/corenlp/.

Development endpoints are static Spark-based environments that can serve as the backend for data exploration. You can attach notebooks to these endpoints to interactively send commands and explore and analyze your data. These endpoints have the same configuration as that of AWS Glue’s job execution system. So, commands and scripts that work there also work the same when registered and run as jobs in AWS Glue.

To set up an endpoint and a Zeppelin notebook to work with that endpoint, follow the instructions in the AWS Glue Developer Guide. When you are creating an endpoint, be sure to specify the locations of the previously mentioned jars in the Dependent jars path as a comma-separated list. Otherwise, the libraries will not be loaded.

After you set up the notebook server, go to the Zeppelin notebook by choosing Dev Endpoints in the left navigation pane on the AWS Glue console. Choose the endpoint that you created. Next, choose the Notebook Server URL, which takes you to the Zeppelin server. Log in using the notebook user name and password that you specified when creating the notebook. Finally, create a new note to try out this example.

Each notebook is a collection of paragraphs, and each paragraph contains a sequence of commands and the output for that command. Moreover, each notebook includes a number of interpreters. If you set up the Zeppelin server using the console, the (Python-based) pyspark and (Scala-based) spark interpreters are already connected to your new development endpoint, with pyspark as the default. Therefore, throughout this example, you need to prepend %spark at the top of your paragraphs. In this example, we omit these for brevity.

Working with the data

In this section, we use AWS Glue extensions to Spark to work with the dataset. We look at the actual schema of the data and filter out the interesting event types for our analysis.

Start with some boilerplate code to import libraries that you need:

%spark

import com.amazonaws.services.glue.DynamicRecord
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.amazonaws.services.glue.util.JsonOptions
import com.amazonaws.services.glue.types._
import org.apache.spark.SparkContext

Then, create the Spark and AWS Glue contexts needed for working with the data:

@transient val spark: SparkContext = SparkContext.getOrCreate()
val glueContext: GlueContext = new GlueContext(spark)

You need the @transient annotation on the SparkContext when working in Zeppelin; otherwise, you will run into a serialization error when executing commands.

Dynamic frames

This section shows how to create a dynamic frame that contains the GitHub records in the table that you crawled earlier. A dynamic frame is the basic data structure in AWS Glue scripts. It is like an Apache Spark data frame, except that it is designed and optimized for data cleaning and transformation workloads. A dynamic frame is well-suited for representing semi-structured datasets like the GitHub timeline.

A dynamic frame is a collection of dynamic records. In Spark lingo, it is an RDD (resilient distributed dataset) of DynamicRecords. A dynamic record is a self-describing record. Each record encodes its columns and types, so every record can have a schema that is unique from all others in the dynamic frame. This is convenient and often more efficient for datasets like the GitHub timeline, where payloads can vary drastically from one event type to another.

The following creates a dynamic frame, github_events, from your table:

val github_events = glueContext
                    .getCatalogSource(database = "githubarchive", tableName = "data")
                    .getDynamicFrame()

The getCatalogSource() method returns a DataSource, which represents a particular table in the Data Catalog. The getDynamicFrame() method returns a dynamic frame from the source.

Recall that the crawler created a schema from only a sample of the data. You can scan the entire dataset, count the rows, and print the complete schema as follows:

github_events.count
github_events.printSchema()

The data has 414,826 records. As before, notice that there are eight top-level columns, and three partition columns. If you scroll down, you’ll also notice that the payload is the most complex column.

Run functions and filter records

This section describes how you can create your own functions and invoke them seamlessly to filter records. Unlike filtering with Python lambdas, Scala scripts do not need to convert records from one language representation to another, thereby reducing overhead and running much faster.

Let’s create a function that picks only the IssuesEvents from the GitHub timeline. These events are generated whenever someone posts an issue for a particular repository. Each GitHub event record has a field, “type”, that indicates the kind of event it is. The issueFilter() function returns true for records that are IssuesEvents.

def issueFilter(rec: DynamicRecord): Boolean = { 
    rec.getField("type").exists(_ == "IssuesEvent") 
}

Note that the getField() method returns an Option[Any] type, so you first need to check that the field exists before comparing its value.
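
If you prefer explicit pattern matching over the exists() shortcut, the same predicate can be written as follows (a sketch):

// Sketch: the same check, written with an explicit match on the Option[Any].
def issueFilterVerbose(rec: DynamicRecord): Boolean = {
    rec.getField("type") match {
        case Some("IssuesEvent") => true
        case _ => false
    }
}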

You pass this function to the filter transformation, which applies the function on each record and returns a dynamic frame of those records that pass.

val issue_events = github_events.filter(issueFilter)

Now, let’s look at the size and schema of issue_events.

issue_events.count
issue_events.printSchema()

It’s much smaller (14,063 records), and the payload schema is less complex, reflecting only the schema for issues. Keep a few essential columns for your analysis, and drop the rest using the ApplyMapping() transform:

val issue_titles = issue_events.applyMapping(Seq(("id", "string", "id", "string"),
                                                 ("actor.login", "string", "actor", "string"), 
                                                 ("repo.name", "string", "repo", "string"),
                                                 ("payload.action", "string", "action", "string"),
                                                 ("payload.issue.title", "string", "title", "string")))
issue_titles.show()

The ApplyMapping() transform is quite handy for renaming columns, casting types, and restructuring records. The preceding code snippet tells the transform to select the fields (or columns) that are enumerated in the left half of the tuples and map them to the fields and types in the right half.
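
The tuple shape (source path, source type, target name, target type) also handles casting. A hedged sketch (the long target type and new column name are illustrative, not from the original walkthrough):

// Sketch: rename "id" to "event_id" and cast it from string to long in one step.
val issue_ids = issue_titles.applyMapping(Seq(("id", "string", "event_id", "long")))
issue_ids.printSchema()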

Estimating sentiment using Stanford CoreNLP

To focus on the most pressing issues, you might want to isolate the records with the most negative sentiments. The Stanford CoreNLP libraries are Java-based and offer sentiment-prediction functions. Accessing these functions through Python is possible, but quite cumbersome. It requires creating Python surrogate classes and objects for those found on the Java side. Instead, with Scala support, you can use those classes and objects directly and invoke their methods. Let’s see how.

First, import the libraries needed for the analysis:

import java.util.Properties
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import scala.collection.convert.wrapAll._

The Stanford CoreNLP libraries have a main driver that orchestrates all of their analysis. The driver setup is heavyweight, setting up threads and data structures that are shared across analyses. Apache Spark runs on a cluster with a main driver process and a collection of backend executor processes that do most of the heavy sifting of the data.

The Stanford CoreNLP shared objects are not serializable, so they cannot be distributed easily across a cluster. Instead, you need to initialize them once for every backend executor process that might need them. Here is how to accomplish that:

val props = new Properties()
props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
props.setProperty("parse.maxlen", "70")

object myNLP {
    lazy val coreNLP = new StanfordCoreNLP(props)
}

The properties tell the libraries which annotators to execute and how many words to process. The preceding code creates an object, myNLP, with a field coreNLP that is lazily evaluated. This field is initialized only when it is needed, and only once. So, when the backend executors start processing the records, each executor initializes the driver for the Stanford CoreNLP libraries only one time.
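
The same idiom works for any expensive, non-serializable resource, not just CoreNLP. A generalized sketch (HeavyClient is a hypothetical stand-in, not part of the original post):

// Sketch: wrap any expensive, non-serializable resource in an object with a
// lazy val; each executor JVM then constructs it once, on first use.
class HeavyClient {  // hypothetical placeholder for a costly resource
    println("initialized once per executor JVM")
}

object ExecutorLocal {
    lazy val client = new HeavyClient()
}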

Next is a function that estimates the sentiment of a text string. It first calls Stanford CoreNLP to annotate the text. Then, it pulls out the sentences and takes the average sentiment across all the sentences. The sentiment is a double, from 0.0 as the most negative to 4.0 as the most positive.

def estimatedSentiment(text: String): Double = {
    if ((text == null) || text.isEmpty) { return Double.NaN }
    val annotations = myNLP.coreNLP.process(text)
    val sentences = annotations.get(classOf[CoreAnnotations.SentencesAnnotation])
    sentences.foldLeft(0.0)( (csum, x) => {
        csum + RNNCoreAnnotations.getPredictedClass(x.get(classOf[SentimentCoreAnnotations.SentimentAnnotatedTree]))
    }) / sentences.length
}
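
As a quick sanity check, you can call the function directly in a notebook paragraph (a sketch; the exact scores depend on the CoreNLP models):

println(estimatedSentiment("This crashes constantly and loses my data."))  // expect a low score
println(estimatedSentiment("Works great, thank you!"))                     // expect a high score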

Now, let’s estimate the sentiment of the issue titles and add that computed field as part of the records. You can accomplish this with the map() method on dynamic frames:

val issue_sentiments = issue_titles.map((rec: DynamicRecord) => { 
    val mbody = rec.getField("title")
    mbody match {
        case Some(mval: String) => { 
            rec.addField("sentiment", ScalarNode(estimatedSentiment(mval)))
            rec }
        case _ => rec
    }
})

The map() method applies the user-provided function to every record. The function takes a DynamicRecord as an argument and returns a DynamicRecord. The code above computes the sentiment, adds it to the record as a top-level field named sentiment, and returns the record.

Count the records with sentiment and show the schema. This takes a few minutes because Spark must initialize the Stanford CoreNLP library on each executor and then run the sentiment analysis, which is computationally expensive.

issue_sentiments.count
issue_sentiments.printSchema()

Notice that all records were processed (14,063), and the sentiment value was added to the schema.

Finally, let’s pick out the titles that have the lowest sentiment (less than 1.5). Count them and print out a sample to see what some of the titles look like.

val pressing_issues = issue_sentiments.filter(_.getField("sentiment").exists(_.asInstanceOf[Double] < 1.5))
pressing_issues.count
pressing_issues.show(10)

Next, write them all to a file so that you can handle them later. (You’ll need to replace the output path with your own.)

glueContext.getSinkWithFormat(connectionType = "s3", 
                              options = JsonOptions("""{"path": "s3://<bucket>/out/path/"}"""), 
                              format = "json")
            .writeDynamicFrame(pressing_issues)
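
Changing the format argument lets the same sink emit other output types. A sketch that writes Parquet instead of JSON (replace the path with your own):

glueContext.getSinkWithFormat(connectionType = "s3",
                              options = JsonOptions("""{"path": "s3://<bucket>/out/parquet/"}"""),
                              format = "parquet")
            .writeDynamicFrame(pressing_issues)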

Take a look in the output path, and you can see the output files.

Putting it all together

Now, let’s create a job from the preceding interactive session. The following script combines all the commands from earlier. It processes the GitHub archive files and writes out the highly negative issues:

import com.amazonaws.services.glue.DynamicRecord
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.amazonaws.services.glue.util.JsonOptions
import com.amazonaws.services.glue.types._
import org.apache.spark.SparkContext
import java.util.Properties
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import scala.collection.convert.wrapAll._

object GlueApp {

    object myNLP {
        val props = new Properties()
        props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
        props.setProperty("parse.maxlen", "70")

        lazy val coreNLP = new StanfordCoreNLP(props)
    }

    def estimatedSentiment(text: String): Double = {
        if ((text == null) || text.isEmpty) { return Double.NaN }
        val annotations = myNLP.coreNLP.process(text)
        val sentences = annotations.get(classOf[CoreAnnotations.SentencesAnnotation])
        sentences.foldLeft(0.0)( (csum, x) => { 
            csum + RNNCoreAnnotations.getPredictedClass(x.get(classOf[SentimentCoreAnnotations.SentimentAnnotatedTree])) 
        }) / sentences.length
    }

    def main(sysArgs: Array[String]) {
        val spark: SparkContext = SparkContext.getOrCreate()
        val glueContext: GlueContext = new GlueContext(spark)

        val dbname = "githubarchive"
        val tblname = "data"
        val outpath = "s3://<bucket>/out/path/"

        val github_events = glueContext
                            .getCatalogSource(database = dbname, tableName = tblname)
                            .getDynamicFrame()

        val issue_events =  github_events.filter((rec: DynamicRecord) => {
            rec.getField("type").exists(_ == "IssuesEvent")
        })

        val issue_titles = issue_events.applyMapping(Seq(("id", "string", "id", "string"),
                                                         ("actor.login", "string", "actor", "string"), 
                                                         ("repo.name", "string", "repo", "string"),
                                                         ("payload.action", "string", "action", "string"),
                                                         ("payload.issue.title", "string", "title", "string")))

        val issue_sentiments = issue_titles.map((rec: DynamicRecord) => { 
            val mbody = rec.getField("title")
            mbody match {
                case Some(mval: String) => { 
                    rec.addField("sentiment", ScalarNode(estimatedSentiment(mval)))
                    rec }
                case _ => rec
            }
        })

        val pressing_issues = issue_sentiments.filter(_.getField("sentiment").exists(_.asInstanceOf[Double] < 1.5))

        glueContext.getSinkWithFormat(connectionType = "s3", 
                              options = JsonOptions(s"""{"path": "$outpath"}"""), 
                              format = "json")
                    .writeDynamicFrame(pressing_issues)
    }
}

Notice that the script is enclosed in a top-level object called GlueApp, which serves as the script’s entry point for the job. (You’ll need to replace the output path with your own.) Upload the script to an Amazon S3 location so that AWS Glue can load it when needed.

To create the job, open the AWS Glue console. Choose Jobs in the left navigation pane, and then choose Add job. Create a name for the job, and specify a role with permissions to access the data. Choose An existing script that you provide, and choose Scala as the language.

For the Scala class name, type GlueApp to indicate the script’s entry point. Specify the Amazon S3 location of the script.

Choose Script libraries and job parameters. In the Dependent jars path field, enter the Amazon S3 locations of the Stanford CoreNLP libraries from earlier as a comma-separated list (without spaces). Then choose Next.

No connections are needed for this job, so choose Next again. Review the job properties, and choose Finish. Finally, choose Run job to execute the job.

You can simply edit the script’s input table and output path to run this job on any GitHub timeline dataset that you might have.
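
Alternatively, the GlueArgParser utility that the script already imports can resolve these values from job parameters at run time. A hedged sketch, assuming you supply dbname, tblname, and outpath as job parameters in the console:

// Sketch: resolve job parameters instead of hard-coding values in main().
val args = GlueArgParser.getResolvedOptions(sysArgs,
                                            Seq("dbname", "tblname", "outpath").toArray)
val dbname  = args("dbname")
val tblname = args("tblname")
val outpath = args("outpath")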

Conclusion

In this post, we showed how to write AWS Glue ETL scripts in Scala via notebooks and how to run them as jobs. Scala has the advantage that it is the native language for the Spark runtime. With Scala, it is easier to call Scala or Java functions and third-party libraries for analyses. Moreover, data processing is faster in Scala because there’s no need to convert records from one language runtime to another.

You can find more examples of Scala scripts in our GitHub samples repository: https://github.com/awslabs/aws-glue-samples. We encourage you to experiment with Scala scripts and to let us know about any interesting ETL flows that you want to share.

Happy Glue-ing!

Additional Reading

If you found this post useful, be sure to check out Simplify Querying Nested JSON with the AWS Glue Relationalize Transform and Genomic Analysis with Hail on Amazon EMR and Amazon Athena.

About the Authors

Mehul Shah is a senior software manager for AWS Glue. His passion is leveraging the cloud to build smarter, more efficient, and easier-to-use data systems. He has three girls, and, therefore, he has no spare time.

Ben Sowell is a software development engineer at AWS Glue.

Vinay Vivili is a software development engineer for AWS Glue.

I am Beemo, a little living boy: Adventure Time prop build

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/adventure-time-bmo/

Bob Herzberg, BMO builder and blogger at BYOBMO.com, fills us in on the whys and hows and even the Pen Wards of creating interactive Adventure Time BMO props with the Raspberry Pi.

A Conversation With BMO

A conversation with BMO showing off some voice-recognition capabilities. BMO’s responses are driven entirely by voice commands, with no other interaction. A small microphone sits inside BMO (right behind the blue dot), and the voice commands are processed by the Google Voice API over WiFi.

Finding BMO

My first BMO began as a cosplay prop for my daughter. She and her friends are huge fans of Adventure Time and made their costumes for Princess Bubblegum, Marceline, and Finn. It was my job to come up with a BMO.

Bob as Banana Guard, daughter Laura as Princess Bubblegum, and son Steven as Finn

I wanted something electronic, and also interactive if possible. And it had to run on battery power. There was only one option that I found that would work: the Raspberry Pi.

Building a living little boy

BMO’s basic internals consist of the Raspberry Pi, an 8” HDMI monitor, and a USB battery pack. The body is made from laser-cut MDF wood, which I sanded, sealed, and painted. I added 3D-printed arms and legs along with some vinyl lettering to complete the look. There is also a small wireless keyboard that works as a remote control.

To make the front-panel buttons function, I created a custom PCB, mounted laser-cut acrylic buttons on it, and connected it to the Pi’s IO header.

Custom-made PCBs control BMO’s gaming buttons and USB input.

The USB jack is extended with another custom PCB, which gives BMO USB ports on the front panel. His battery life is an impressive 8 hours of continuous use.

The main brain game frame

Most of BMO’s personality comes from custom animations that my daughter created and that were then turned into MP4 video files. The animations are triggered by the remote keyboard. Some versions of BMO have an internal microphone, and the Google Voice API is used to translate the user’s voice and map it to an appropriate response, so it’s possible to have a conversation with BMO.

The final components of BMO

The Raspberry Pi Camera Module was also put to use. Some BMOs have a servo that can pop up a camera, called GoMO, which takes pictures. Although some people mistake it for ghost detecting equipment, BMO just likes taking nice pictures.

Who wants to play video games?

Playing games on BMO is as simple as loading one of the emulators supported by Raspbian.

BMO connected to SNES controllers

I’m partial to the Atari 800 emulator, since I used to write games for that platform when I was just starting to learn programming. The front-panel USB ports are used for connecting gamepads, or his front-panel buttons and D-Pad can be used.

Adventure time

BMO has been a lot of fun to bring to conventions. He makes it to San Diego Comic-Con each year and has been as far away as Dragon Con in Atlanta, where he finally got to meet the voice of BMO, Niki Yang.

BMO's back panel - Raspberry Pi BMO Laura Herzberg Bob Herzberg

BMO’s back panel, autographed by Niki Yang

One day, I received an email from the producer of Adventure Time, Kelly Crews, with a very special request. Kelly was looking for a birthday present for the show’s creator, Pendleton Ward. It was either luck or coincidence that I was just finishing up the latest version of BMO. Niki Yang added some custom greetings just for Pen.

BMO Wishes Pendleton Ward a Happy Birthday!

Happy birthday to Pendleton Ward, the creator of, well, you know what. We were asked to build Pen his very own BMO and with help from Niki Yang and the Adventure Time crew here is the result.

We added a few more items inside, including a 3D-printed heart, a medal, and a certificate, all of which come from the famous Be More episode that explains BMO’s origins.

BMO was quite a challenge to create. Fabricating the enclosure required several different techniques and materials. Fortunately, bringing him to life was quite simple once he had a Raspberry Pi inside!

Find out more

Be sure to follow Bob’s adventures with BMO at the Build Your Own BMO blog. And if you’ve built your own prop from television or film using a Raspberry Pi, be sure to share it with us in the comments below or on our social media channels.

All images c/o Bob and Laura Herzberg

The post I am Beemo, a little living boy: Adventure Time prop build appeared first on Raspberry Pi.