
Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. The native integration between SNS and Amazon CloudWatch provides visibility into the number of messages delivered, as well as the number of messages filtered out.
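For illustration, attaching a filter policy is a single subscription-attribute call. Here is a minimal sketch using the AWS SDK for Python (boto3); the ARN and attribute values below are placeholders, not taken from this post:

    import json
    import boto3

    sns = boto3.client("sns")

    # Only deliver messages whose "store" attribute is "example_corp".
    sns.set_subscription_attributes(
        SubscriptionArn="arn:aws:sns:us-east-1:123456789012:my-topic:1234abcd-placeholder",
        AttributeName="FilterPolicy",
        AttributeValue=json.dumps({"store": ["example_corp"]}),
    )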

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”.

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.
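If you prefer to pull the same numbers programmatically, something along these lines works with the AWS CLI (the topic name and time range are placeholders):

    aws cloudwatch get-metric-statistics \
      --namespace AWS/SNS \
      --metric-name NumberOfNotificationsFilteredOut \
      --dimensions Name=TopicName,Value=my-topic \
      --start-time 2018-05-01T00:00:00Z \
      --end-time 2018-05-02T00:00:00Z \
      --period 3600 \
      --statistics Sum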


A filter policy determines which of the messages published to a topic each subscription receives. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and to troubleshoot more easily during development.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS-supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering are available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.


Project Floofball and more: Pi pet stuff

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/project-floofball-pi-pet-stuff/

It’s a public holiday here today (yes, again). So, while we indulge in the traditional pastime of barbecuing stuff (ourselves, mainly), here’s a little trove of Pi projects that cater for our various furry friends.

Project Floofball

Nicole Horward created Project Floofball for her hamster, Harold. It’s an IoT hamster wheel that uses a Raspberry Pi and a magnetic door sensor to log how far Harold runs.

Project Floofball: an IoT hamster wheel

An IoT Hamsterwheel using a Raspberry Pi and a magnetic door sensor, to see how far my hamster runs.

You can follow Harold’s runs in real time on his ThingSpeak channel, and you’ll find photos of the build on imgur. Nicole’s Python code, as well as her template for the laser-cut enclosure that houses the wiring and LCD display, are available on the hamster wheel’s GitHub repo.

A live-streaming pet feeder

JaganK3 used to work long hours, which meant he couldn’t be there to feed his dog on time. He found that he couldn’t buy an automated feeder in India without paying a lot to import one, so he made one himself. It uses a Raspberry Pi to control a motor that turns a dispensing valve in a hopper full of dry food, giving his dog a portion of food at set times.

A transparent cylindrical hopper of dry dog food, with a motor that can turn a dispensing valve at the lower end. The motor is connected to a Raspberry Pi in a plastic case. Hopper, motor, Pi, and wiring are all mounted on a board on the wall.

He also added a web cam for live video streaming, because he could. Find out more in JaganK3’s Instructable for his pet feeder.

Shark laser cat toy

Sam Storino, meanwhile, is using a Raspberry Pi to control a laser-pointer cat toy with a goshdarned SHARK (which is kind of what I’d expect from the guy who made the steampunk-looking cat feeder a few weeks ago). The idea is to keep his cats interested and active within the confines of a compact city apartment.

Raspberry Pi Automatic Cat Laser Pointer Toy


If I were a cat, I would definitely be entirely happy with this. Find out more on Sam’s website.

And there’s more

Michel Parreno has written a series of articles to help you monitor and feed your pet with Raspberry Pi.

All of these makers are generous in acknowledging the tutorials and build logs that helped them with their projects. It’s lovely to see the Raspberry Pi and maker community working like this, and I bet their projects will inspire others too.

Now, if you’ll excuse me. I’m late for a barbecue.

The post Project Floofball and more: Pi pet stuff appeared first on Raspberry Pi.

Amazon Sumerian – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-sumerian-now-generally-available/

We announced Amazon Sumerian at AWS re:Invent 2017. As you can see from Tara's blog post (Presenting Amazon Sumerian: An Easy Way to Create VR, AR, and 3D Experiences), Sumerian does not require any specialized programming or 3D graphics expertise. You can build VR, AR, and 3D experiences for a wide variety of popular hardware platforms including mobile devices, head-mounted displays, digital signs, and web browsers.

I’m happy to announce that Sumerian is now generally available. You can create realistic virtual environments and scenes without having to acquire or master specialized tools for 3D modeling, animation, lighting, audio editing, or programming. Once built, you can deploy your finished creation across multiple platforms without having to write custom code or deal with specialized deployment systems and processes.

Sumerian gives you a web-based editor that you can use to quickly and easily create realistic, professional-quality scenes. There’s a visual scripting tool that lets you build logic to control how objects and characters (Sumerian Hosts) respond to user actions. Sumerian also lets you create rich, natural interactions powered by AWS services such as Amazon Lex, Polly, AWS Lambda, AWS IoT, and Amazon DynamoDB.

Sumerian was designed to work on multiple platforms. The VR and AR apps that you create in Sumerian will run in browsers that support WebGL or WebVR and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android.

During the preview period, we have been working with a broad spectrum of customers to put Sumerian to the test and to create proof of concept (PoC) projects designed to highlight an equally broad spectrum of use cases, including employee education, training simulations, field service productivity, virtual concierge, design and creative, and brand engagement. Fidelity Labs (the internal R&D unit of Fidelity Investments) was the first to use a Sumerian host to create an engaging VR experience. Cora (the host) lives within a virtual chart room. She can display stock quotes, pull up company charts, and answer questions about a company’s performance. This PoC uses Amazon Polly to implement text to speech and Amazon Lex for conversational chatbot functionality. Read their blog post and watch the video inside to see Cora in action:

Now that Sumerian is generally available, you have the power to create engaging AR, VR, and 3D experiences of your own. To learn more, visit the Amazon Sumerian home page and then spend some quality time with our extensive collection of Sumerian Tutorials.



Brutus 2: the gaming PC case of your dreams

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/brutus-2-gaming-pc-case/

Attention, case modders: take a look at the Brutus 2, an extremely snazzy computer case with a partly transparent, animated side panel that’s powered by a Pi. Daniel Otto and Carsten Lehman have a current crowdfunder for the case; their video is in German, but the looks of the build speak for themselves. There are some truly gorgeous effects here.

der BRUTUS 2 by 3nb Gaming

Pre-orders are open now at https://www.startnext.com/brutus2. More about us at: https://3nb.de https://www.facebook.com/3nb.de https://www.instagram.com/3nb.de About 3nb: a GbR (partnership) from Leipzig, founded in 2017; our backgrounds are in electronics and computer science; our first product was the Brutus One, a gaming PC with a transparent display in its side. Brutus 2 in brief: a branded computer case for the gaming and case-modding scene; its special feature is an animated side window driven by a Raspberry Pi. Advantages of our case: the case is available on its own, not only as a complete PC; no graphics card performance is consumed, thanks to the integrated Raspberry Pi; and text and graphics render better against the blurred background.

What’s case modding?

Case modding just means modifying your computer or gaming console’s case, and it’s very popular in the gaming community. Some mods are functional, while others improve the way the case looks. Lots of dedicated gamers don’t only want a powerful computer, they also want it to look amazing — at home, or at LAN parties and games tournaments.

The Brutus 2 case

The Brutus 2 case is made by Daniel and Carsten’s startup, 3nb electronics, and it’s a product that is officially Powered by Raspberry Pi. Its standout feature is the semi-transparent TFT screen, which lets you play any video clip you choose while keeping your gaming hardware on display. It looks incredibly cool. All the graphics for the case’s screen are handled by a Raspberry Pi, so it doesn’t use any of your main PC’s GPU power and your gaming won’t suffer.

Brutus 2 PC case powered by Raspberry Pi

The software

To use Brutus 2, you just need to run a small desktop application on your PC to choose what you want to display on the case. A number of neat animations are included, and you can upload your own if you want.

So far, the app only runs on Windows, but 3nb electronics are planning to make the code open-source, so you can modify it for other operating systems, or to display other file types. This is true to the spirit of the case modding and Raspberry Pi communities, who love adapting, retrofitting, and overhauling projects and code to fit their needs.
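3nb’s code isn’t published yet, but the Pi-side display loop can be imagined as something quite simple. Here is a hypothetical sketch (the file path is invented; omxplayer is the Pi’s hardware-accelerated video player):

    import subprocess

    # Hypothetical sketch: loop whatever clip the desktop app last uploaded
    # onto the case's TFT. --no-osd hides on-screen text, --loop repeats.
    CLIP = "/home/pi/animations/current.mp4"

    subprocess.run(["omxplayer", "--no-osd", "--loop", CLIP])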

Brutus 2 PC case powered by Raspberry Pi

Daniel and Carsten say that one of their campaign’s stretch goals is to implement more functionality in the Brutus 2 app. So in the future, the case could also show things like CPU temperature, gaming stats, and in-game messages. Of course, there’s nothing stopping you from integrating features like that yourself.

If you have any questions about the case, you can post them directly to Daniel and Carsten here.

The crowdfunding campaign

The Brutus 2 campaign on Startnext is currently halfway to its first funding goal of €10,000, with over three weeks to go until it closes. If you’re quick, you may still be able to snatch one of the early-bird offers. And if your whole guild NEEDS this, that’s OK — there are discounts for bulk orders.

The post Brutus 2: the gaming PC case of your dreams appeared first on Raspberry Pi.

Some notes on eFail

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/some-notes-on-efail.html

I’ve been busy trying to replicate the “eFail” PGP/SMIME bug. I thought I’d write up some notes.

PGP and S/MIME encrypt emails, so that eavesdroppers can’t read them. The bugs potentially allow eavesdroppers to take the encrypted emails they’ve captured and resend them to you, reformatted in a way that allows them to decrypt the messages.

Disable remote/external content in email

The most important defense is to disable “external” or “remote” content from being loaded automatically. This content comes into play when HTML-formatted emails attempt to load images from remote websites. Senders do this legitimately when they want to display images without filling up the email with them. But most of the time it is illegitimate: they hide images on the webpage in order to track you with unique IDs and cookies. For example, this is the code at the end of an email from politician Bernie Sanders to his supporters. Notice the long random number assigned to track me, and that the width/height of this image is set to one pixel, so you don’t even see it:
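The screenshot is missing from this copy of the post, but such a tracking pixel looks roughly like this (a reconstruction; the domain and ID are made up):

    <img src="http://links.example-mailer.com/open/9f1c2e7a8b4d4c6e9a1b3f5d7e9c0a2b"
         width="1" height="1" border="0">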

Such trackers are so pernicious they are disabled by default in most email clients. This is an example of the settings in Thunderbird:

The problem is that as you read email messages, you often get frustrated by the error messages and missing content, so you keep adding exceptions:

The correct defense against this eFail bug is to make sure such remote content is disabled and that you have no exceptions, or at least, no HTTP exceptions. HTTPS exceptions (those using SSL) are okay as long as they aren’t to a website the attacker controls. Unencrypted exceptions, though, the hacker can eavesdrop on, so it doesn’t matter if they control the website the requests go to. If the attacker can eavesdrop on your emails, they can probably eavesdrop on your HTTP sessions as well.

Some have recommended disabling PGP and S/MIME completely. That’s probably overkill. As long as the attacker can’t use the “remote content” in emails, you are fine. Likewise, some have recommended disabling HTML completely. That’s not even an option in any email client I’ve used — you can disable sending HTML emails, but not receiving them. It’s sufficient to just disable grabbing remote content, not the rest of HTML email rendering.

I couldn’t replicate the direct exfiltration

There are two related bugs. One allows direct exfiltration, which appends the decrypted PGP email onto the end of an IMG tag (like one of those tracking tags), allowing the entire message to be decrypted.

An example of this is the following email. This is a standard HTML email message consisting of multiple parts. The trick is that the IMG tag in the first part starts the URL (blog.robertgraham.com/…) but doesn’t end it. It has the starting quotes in front of the URL but no ending quotes. The closing quote comes in a later chunk.
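The image of the email is missing here, but the structure is roughly the following (a reconstruction based on the description above, not the exact message from my tests):

    Content-Type: multipart/mixed; boundary="BOUNDARY"

    --BOUNDARY
    Content-Type: text/html

    <img src="http://blog.robertgraham.com/
    --BOUNDARY
    Content-Type: text/plain

    -----BEGIN PGP MESSAGE-----
    [captured ciphertext pasted here by the attacker]
    -----END PGP MESSAGE-----
    --BOUNDARY
    Content-Type: text/html

    ">
    --BOUNDARY--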

The next chunk isn’t HTML, though, it’s PGP. The PGP extension (in my case, Enigmail) will detect this and automatically decrypt it. In this case, it’s some previous email message of mine that the attacker captured by eavesdropping, and then pasted into this email message in order to get it decrypted.

What should happen at this point is that Thunderbird will generate a request (if “remote content” is enabled) to the blog.robertgraham.com server with the decrypted contents of the PGP email appended to it. But that’s not what happens. Instead, I get this:

I am indeed getting weird stuff in the URL (the bit after the GET /), but it’s not the decrypted PGP message. Instead, what’s going on is that when Thunderbird puts together a “multipart/mixed” message, it adds its own HTML tags consisting of lines between each part. In the email client it looks like this:

The HTML code it adds looks like:

That’s what you see in the above URL, all this code up to the first quotes. Those quotes terminate the quotes in the URL from the first multipart section, causing the rest of the content to be ignored (as far as being sent as part of the URL).

So at least for the latest version of Thunderbird, you are accidentally safe, even if you have “remote content” enabled. Though, this is only according to my tests; there may be a workaround that hackers could exploit.


In the old days, email was sent plaintext over the wire so that it could be passively eavesdropped on. Nowadays, most providers send it via “STARTTLS”, which sorta encrypts it. Attackers can still intercept such email, but they have to do so actively, using man-in-the-middle. Such active techniques can be detected if you are careful and look for them.
Some organizations don’t care. Apparently, some nation states are just blocking all STARTTLS and forcing email to be sent unencrypted. Others do care. The NSA will passively sniff all the email they can in nations like Iraq, but they won’t actively intercept STARTTLS messages, for fear of getting caught.
The consequence is that it’s much less likely that somebody has been eavesdropping on you, passively grabbing all your PGP/SMIME emails. If you fear they have been, you should look (e.g. send emails from GMail and see if they are intercepted by sniffing the wire).

You’ll know if you are getting hacked

If somebody attacks you using eFail, you’ll know. You’ll get an email message formatted this way, with multipart/mixed components, some with corrupt HTML, some encrypted via PGP. This means that for the most part, your risk is that you’ll be attacked only once — the hacker will only be able to get one message through and decrypt it before you notice that something is amiss. Though to be fair, they can probably include all the emails they want decrypted as attachments to the single email they sent you, so the risk isn’t necessarily that you’ll only get one decrypted.
As mentioned above, a lot of attackers (e.g. the NSA) won’t attack you if it’s so easy to get caught. Other attackers, though, like anonymous hackers, don’t care.
Somebody ought to write a plugin to Thunderbird to detect this.


In summary:

  • It only works if attackers have already captured your emails (though that’s why you use PGP/SMIME in the first place, to guard against that).
  • It only works if you’ve enabled your email client to automatically grab external/remote content.
  • It seems to not be easily reproducible in all cases.
  • Instead of disabling PGP/SMIME, you should make sure your email client has remote/external content disabled — that’s a huge privacy violation even without this bug.

Notes: The default email client on the Mac enables remote content by default, which is bad.

Augmented-reality projection lamp with Raspberry Pi and Android Things

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/augmented-reality-projector/

If your day has been a little fraught so far, watch this video. It opens with a tableau of methodically laid-out components and then shows them soldered, screwed, and slotted neatly into place. Everything fits perfectly; nothing needs percussive adjustment. Then it shows us glimpses of an AR future just like the one promised in the less dystopian comics and TV programmes of my 1980s childhood. It is all very soothing, and exactly what I needed.

Android Things – Lantern

Transform any surface into mixed-reality using Raspberry Pi, a laser projector, and Android Things. Android Experiments – http://experiments.withgoogle.com/android/lantern Lantern project site – http://nordprojects.co/lantern check below to make your own ↓↓↓ Get the code – https://github.com/nordprojects/lantern Build the lamp – https://www.hackster.io/nord-projects/lantern-9f0c28

Creating augmented reality with projection

We’ve seen plenty of Raspberry Pi IoT builds that are smart devices for the home; they add computing power to things like lights, door locks, or toasters to make these objects interact with humans and with their environment in new ways. Nord Projects’ Lantern takes a different approach. In their words, it:

imagines a future where projections are used to present ambient information, and relevant UI within everyday objects. Point it at a clock to show your appointments, or point it at a speaker to display the currently playing song. Unlike a screen, when Lantern’s projections are no longer needed, they simply fade away.

Lantern is set up so that you can connect your wireless device to it using Google Nearby. This means there’s no need to create an account before you can dive into augmented reality.

Lantern Raspberry Pi powered projector lamp

Your own open-source AR lamp

Nord Projects collaborated on Lantern with Google’s Android Things team. They’ve made it fully open-source, so you can find the code on GitHub and also download their parts list, which includes a Pi, an IKEA lamp, an accelerometer, and a laser projector. Build instructions are at hackster.io and on GitHub.

This is a particularly clear tutorial, very well illustrated with photos and GIFs, and once you’ve sourced and 3D-printed all of the components, you shouldn’t need a whole lot of experience to put everything together successfully. Since everything is open-source, though, if you want to adapt it — for example, if you’d like to source a less costly projector than the snazzy one used here — you can do that too.

components of Lantern Raspberry Pi powered augmented reality projector lamp

The instructions walk you through the mechanical build and the wiring, as well as installing Android Things and Nord Projects’ custom software on the Raspberry Pi. Once you’ve set everything up, an accelerometer connected to the Pi’s GPIO pins lets the lamp know which surface it is pointing at. A companion app on your mobile device lets you choose from the mini apps that work on that surface to select the projection you want.
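Lantern itself is an Android Things app, but the surface-detection idea is easy to sketch in a few lines; the thresholds below are invented for illustration:

    # Toy sketch of Lantern's surface detection: the accelerometer's gravity
    # reading along the lamp head's axis tells us where it is pointing.
    def surface_for(accel_z):
        if accel_z > 0.8:       # head pointing down at the desk
            return "desk"
        elif accel_z < -0.8:    # head pointing up at the ceiling
            return "ceiling"
        return "wall"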

The designers are making several mini apps available for Lantern, including the charmingly named Space Porthole: this uses Processing and your local longitude and latitude to project onto your ceiling the stars you’d see if you punched a hole through to the sky, if it were night time, and clear weather. Wouldn’t you rather look at that than deal with the ant problem in your kitchen or tackle your GitHub notifications?

What would you like to project onto your living environment? Let us know in the comments!

The post Augmented-reality projection lamp with Raspberry Pi and Android Things appeared first on Raspberry Pi.

This is a really lovely Raspberry Pi tricorder

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-tricorder-prop/

At the moment I’m spending my evenings watching all of Star Trek in order. Yes, I have watched it before (but with some really big gaps). Yes, including the animated series (I’m up to The Terratin Incident). So I’m gratified to find this beautiful The Original Series–style tricorder build.

Star Trek Tricorder with Working Display!

At this year’s Replica Prop Forum showcase, we meet up once again with Brian Mix, who brought his new Star Trek TOS Tricorder. This beautiful replica captures the weight and finish of the filming hand prop, and Brian has taken it one step further with some modern-day electronics!

A what now?

If you don’t know what a tricorder is, which I guess is faintly possible, the easiest way I can explain is to steal words that Liz wrote when Recantha made one back in 2013. It’s “a made-up thing used by the crew of the Enterprise to measure stuff, store data, and scout ahead remotely when exploring strange new worlds, seeking out new life and new civilisations, and all that jazz.”

A brief history of Picorders

We’ve seen other Raspberry Pi–based realisations of this iconic device. Recantha’s LEGO-cased tricorder delivered some authentic functionality, including temperature sensors, an ultrasonic distance sensor, a photosensor, and a magnetometer. Michael Hahn’s tricorder for element14’s Sci-Fi Your Pi competition in 2015 packed some similar functions, along with Original Series audio effects, into a neat (albeit non-canon) enclosure.

Brian Mix’s Original Series tricorder

Brian Mix’s tricorder, seen in the video above from Tested at this year’s Replica Prop Forum showcase, is based on a high-quality kit into which, he discovered, a Raspberry Pi just fits. He explains that the kit is the work of the late Steve Horch, a special effects professional who provided props for later Star Trek series, including the classic Deep Space Nine episode Trials and Tribble-ations.

A still from an episode of Star Trek: Deep Space Nine: Jadzia Dax, holding an Original Series-style tricorder, speaks with Benjamin Sisko

Dax, equipped for time travel

This episode’s plot required sets and props — including tricorders — replicating the USS Enterprise of The Original Series, and Steve Horch provided many of these. Thus, a tricorder kit from him is about as close to authentic as you can possibly find unless you can get your hands on a screen-used prop. The Pi allows Brian to drive a real display and a speaker: “Being the geek that I am,” he explains, “I set it up to run every single Original Series Star Trek episode.”

Even more wonderful hypothetical tricorders that I would like someone to make

This tricorder is beautiful, and it makes me think how amazing it would be to squeeze in some of the sensor functionality of the devices depicted in the show. Space in the case is tight, but it looks like there might be a little bit of depth to spare — enough for an IMU, maybe, or a temperature sensor. I’m certain the future will bring more Pi tricorder builds, and I, for one, can’t wait. Please tell us in the comments if you’re planning something along these lines, and, well, I suppose some other sci-fi franchises have decent Pi project potential too, so we could probably stand to hear about those.

If you’re commenting, no spoilers please past The Animated Series S1 E11. Thanks.

The post This is a really lovely Raspberry Pi tricorder appeared first on Raspberry Pi.

Battle for Wesnoth 1.14 released

Post Syndicated from corbet original https://lwn.net/Articles/753984/rss

Version 1.14 of the
Battle for Wesnoth role-playing game — the first release in over three
years — is available. “Along with the long-awaited debut on Steam,
this new release series brings forth a vast number of additions and changes
in all areas: a new single-player campaign, a visual and functional refresh
of the multiplayer lobby and add-ons manager, a refurbished display engine,
new unit graphics and animations, and much more.”

Own your own working Pokémon Pokédex!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/deep-learning-pokedex/

Squeal with delight as your inner Pokémon trainer witnesses the wonder of Adrian Rosebrock’s deep learning Pokédex.

Creating a real-life Pokedex with a Raspberry Pi, Python, and Deep Learning

This video demos a real-life Pokédex, complete with visual recognition, that I created using a Raspberry Pi, Python, and Deep Learning. You can find the entire blog post, including code, using this link: https://www.pyimagesearch.com/2018/04/30/a-fun-hands-on-deep-learning-project-for-beginners-students-and-hobbyists/ Music credit to YouTube user “No Copyright” for providing royalty free music: https://www.youtube.com/watch?v=PXpjqURczn8

The history of Pokémon in 30 seconds

The Pokémon franchise was created by video game designer Satoshi Tajiri in 1995. In the fictional world of Pokémon, Pokémon Trainers explore the vast landscape, catching and training small creatures called Pokémon. To date, there are 802 different types of Pokémon. They range from the ever recognisable Pikachu, a bright yellow electric Pokémon, to the highly sought-after Shiny Charizard, a metallic, playing-card-shaped Pokémon that your mate Alex claims she has in mint condition, but refuses to show you.

Pokemon GIF

In the world of Pokémon, children as young as ten-year-old protagonist and all-round annoyance Ash Ketchum are allowed to leave home and wander the wilderness. There, they hunt vicious, deadly creatures in the hope of becoming a Pokémon Master.

Adrian’s deep learning Pokédex

Adrian is a bit of a deep learning pro, as demonstrated by his Santa/Not Santa detector, which we wrote about last year. For that project, he also provided a great explanation of what deep learning actually is. In a nutshell:

…a subfield of machine learning, which is, in turn, a subfield of artificial intelligence (AI). While AI embodies a large, diverse set of techniques and algorithms related to automatic reasoning (inference, planning, heuristics, etc), the machine learning subfields are specifically interested in pattern recognition and learning from data.

As with his earlier Raspberry Pi project, Adrian uses the Keras deep learning model and the TensorFlow backend, plus a few other packages such as Adrian’s own imutils functions and OpenCV.

Adrian trained a Convolutional Neural Network using Keras on a dataset of 1191 Pokémon images, obtaining 96.84% accuracy. As Adrian explains, this model is able to identify Pokémon via still image and video. It’s perfect for creating a Pokédex – an interactive Pokémon catalogue that should, according to the franchise, be able to identify and read out information on any known Pokémon when captured by camera. More information on model training can be found on Adrian’s blog.
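Adrian’s actual network is a SmallerVGGNet variant, described in full on his blog; purely to illustrate the shape of the Keras workflow, a small CNN classifier looks something like this (the class count and input size here are assumptions, not his exact values):

    from tensorflow.keras import layers, models

    NUM_CLASSES = 5  # assumed number of Pokémon classes in the dataset

    # A deliberately small CNN sketch: stacked conv/pool blocks, then a
    # dense head with dropout and a softmax over the classes.
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), padding="same", activation="relu",
                      input_shape=(96, 96, 3)),
        layers.MaxPooling2D((3, 3)),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])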

Adrian Rosebrock deep learning Pokémon Pokédex

For the physical build, a Raspberry Pi 3 with camera module is paired with the Raspberry Pi 7″ touch display to create a portable Pokédex. And while Adrian comments that the same result can be achieved using your home computer and a webcam, that’s not how Adrian rolls as a Raspberry Pi fan.

Adrian Rosebrock deep learning Pokémon Pokédex

Plus, the smaller size of the Pi is perfect for one of you to incorporate this deep learning model into a 3D-printed Pokédex for ultimate Pokémon glory, pretty please, thank you.

Adrian Rosebrock deep learning Pokémon Pokédex

Adrian has gone into impressive detail about how the project works and how you can create your own on his blog, pyimagesearch. So if you’re interested in learning more about deep learning, and making your own Pokédex, be sure to visit.

The post Own your own working Pokémon Pokédex! appeared first on Raspberry Pi.

Secure Build with AWS CodeBuild and LayeredInsight

Post Syndicated from Asif Khan original https://aws.amazon.com/blogs/devops/secure-build-with-aws-codebuild-and-layeredinsight/

This post is written by Asif Awan, Chief Technology Officer of Layered Insight; Subin Mathew, Software Development Manager for AWS CodeBuild; and Asif Khan, Solutions Architect

Enterprises adopt containers because they recognize the benefits: speed, agility, portability, and high compute density. They understand how accelerating application delivery and deployment pipelines makes it possible to rapidly slipstream new features to customers. Although the benefits are indisputable, this acceleration raises concerns about security and corporate compliance with software governance. In this blog post, I provide a solution that shows how Layered Insight, the pioneer and global leader in container-native application protection, can be used with seamless application build and delivery pipelines like those available in AWS CodeBuild to address these concerns.

Layered Insight solutions

Layered Insight enables organizations to unify DevOps and SecOps by providing complete visibility and control of containerized applications. Using the industry’s first embedded security approach, Layered Insight solves the challenges of container performance and protection by providing accurate insight into container images, adaptive analysis of running containers, and automated enforcement of container behavior.


AWS CodeBuild

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.


Problem Definition

Security and compliance concerns span the lifecycle of application containers. Common concerns include:

Visibility into the container images. You need to verify the software composition information of the container image to determine whether known vulnerabilities associated with any of the software packages and libraries are included in the container image.

Governance of container images is critical because only certain open source packages/libraries, of specific versions, should be included in the container images. You need mechanisms for blacklisting all container images that include a certain version of a software package/library, or for allowing only open source software that comes with a specific type of license (such as Apache, MIT, GPL, and so on). You need to be able to address challenges such as:

  • Defining the process for image compliance policies at the enterprise, department, and group levels.
  • Preventing images that fail the compliance checks from being deployed in critical environments, such as staging, pre-prod, and production.

Visibility into running container instances is critical, including:

  • CPU and memory utilization.
  • Security of the build environment.
  • All activities (system, network, storage, and application layer) of the application code running in each container instance.

Protection of running container instances that is:

  • Zero-touch for the developers (not an SDK-based approach).
  • Zero-touch for the DevOps team, without limiting the portability of the containerized application.
  • Able to retain the option to switch to a different container stack or orchestration layer, or even to a different Container as a Service (CaaS).
  • Fully automated for SecOps, so that the SecOps team doesn’t have to manually analyze and define detailed blacklist and whitelist policies.


Solution Details

In AWS CodeCommit, we have three projects:
●     “Democode” is a simple Java application, with one buildspec to build the app into a Docker container (run by the build-demo-image CodeBuild project), and another to instrument said container (the instrument-image CodeBuild project). The resulting container is stored in the ECR repo javatest as javatest:20180415-layered. This instrumented container is running in the AWS Fargate cluster demo-java-app and can be seen in the Layered Insight runtime console as the javatest application in us-east-1.
●     aws-codebuild-docker-images is a clone of the official aws-codebuild-docker-images repo on GitHub. This CodeCommit project is used by the build-python-builder CodeBuild project to build the Python 3.3.6 CodeBuild image, which is stored in the codebuild-python ECR repo. We then manually instructed the Layered Insight console to instrument the image.
●     scan-java-image contains just a buildspec.yml file. This file is used by the scan-java-image CodeBuild project to instruct Layered Assessment to perform a vulnerability scan of the javatest container image built previously, and then run the scan results through a compliance policy that states there should be no medium vulnerabilities. This build fails — but in this case that is a success: the scan completes successfully, but compliance fails as there are medium-level issues found in the scan.

This build is performed using the instrumented version of the Python 3.3.6 CodeBuild image, so the activity of the processes running within the build are recorded each time within the LI console.

Build container image

Create or use a CodeCommit project containing your application. To build a container image and store it in Amazon Elastic Container Registry (Amazon ECR), add a buildspec file to the project and create a CodeBuild project that runs it.

Scan container image

Once the image is built, create a new buildspec, in the same project or a new one, that looks similar to the one below (update the ECR URL as necessary):

version: 0.2
phases:
  build:
    commands:
      - echo Pulling down LI Scan API client scripts
      - git clone https://github.com/LayeredInsight/scan-api-example-python.git
      - echo Setting up LI Scan API client
      - cd scan-api-example-python
      - pip install layint_scan_api
      - pip install -r requirements.txt
      - echo Scanning container started on `date`
      - IMAGEID=$(./li_add_image --name <aws-region>.amazonaws.com/javatest:20180415)
      - ./li_wait_for_scan -v --imageid $IMAGEID
      - ./li_run_image_compliance -v --imageid $IMAGEID --policyid PB15260f1acb6b2aa5b597e9d22feffb538256a01fbb4e5a95

Add the buildspec file to the git repo, push it, and then create a CodeBuild project using the instrumented Python 3.3.6 CodeBuild image at <aws-region>.amazonaws.com/codebuild-python:3.3.6-layered. Set the following environment variables in the CodeBuild project:
●     LI_APPLICATIONNAME – name of the build to display
●     LI_LOCATION – location of the build project to display
●     LI_API_KEY – ApiKey:<key-name>:<api-key>
●     LI_API_HOST – location of the Layered Insight API service

Instrument container image

Next, to instrument the new container image:

  1. In the Layered Insight runtime console, ensure that the ECR registry and credentials are defined (click the Setup icon and the ‘+’ sign on the top right of the screen to add a new container registry). Note the name given to the registry in the console, as this needs to be referenced in the li_add_image command in the script below.
  2. Next, add a new buildspec (with a new name) to the CodeCommit project, such as the one shown below. This code will download the Layered Insight runtime client, and use it to instruct the Layered Insight service to instrument the image that was just built:
    version: 0.2
    phases:
      build:
        commands:
          - echo Pulling down LI API Runtime client scripts
          - git clone https://github.com/LayeredInsight/runtime-api-example-python
          - echo Setting up LI API client
          - cd runtime-api-example-python
          - pip install layint-runtime-api
          - pip install -r requirements.txt
          - echo Instrumentation started on `date`
          - ./li_add_image --registry "Javatest ECR" --name IMAGE_NAME:TAG --description "IMAGE DESCRIPTION" --policy "Default Policy" --instrument --wait --verbose
  3. Commit and push the new buildspec file.
  4. Going back to CodeBuild, create a new project, with the same CodeCommit repo, but this time select the new buildspec file. Use a Python 3.3.6 builder – either the AWS or LI Instrumented version.
  5. Click Continue
  6. Click Save
  7. Run the build, again on the master branch.
  8. If everything runs successfully, a new image should appear in the ECR registry with a -layered suffix. This is the instrumented image.

Run instrumented container image

When the instrumented container is run — in ECS, Fargate, or elsewhere — it will log data back to the Layered Insight runtime console. Its appearance in the console can be modified by setting the LI_APPLICATIONNAME and LI_LOCATION environment variables when running the container.
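For example, a local test run might look like this (the image URI and values are placeholders):

    docker run -d \
      -e LI_APPLICATIONNAME=javatest \
      -e LI_LOCATION=us-east-1 \
      <account-id>.dkr.ecr.<aws-region>.amazonaws.com/javatest:20180415-layered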


In this post, we provided the steps needed to embed governance and runtime security in your build pipelines running on AWS CodeBuild using Layered Insight.




Grafana v5.1 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/04/26/grafana-v5.1-released/

v5.1 Stable Release

The recent 5.0 major release contained a lot of new features so the Grafana 5.1 release is focused on smoothing out the rough edges and iterating over some of the new features.

Download Grafana 5.1 Now

Release Highlights

There are two new features included, Heatmap Support for Prometheus and a new core data source for Microsoft SQL Server.

Another highlight is the revamp of the Grafana docker container that makes it easier to run and control, but be aware that there is a breaking change to file permissions that will affect existing containers with data volumes.

We got tons of useful improvement suggestions, bug reports and Pull Requests from our amazing community. Thank you all! See the full changelog for more details.

Improved Scrolling Experience

In Grafana v5.0 we introduced a new scrollbar component. Unfortunately this introduced a lot of issues and in some scenarios removed
the native scrolling functionality. Grafana v5.1 ships with a native scrollbar for all pages together with a scrollbar component for
the dashboard grid and panels that does not override the native scrolling functionality. We hope that these changes and improvements
make the Grafana user experience much better!

Improved Docker Image

Grafana v5.1 brings an improved official docker image which should make it easier to run and use, and at the same time give the user more control over how to run it.

We have switched the id of the grafana user running Grafana inside a docker container. Unfortunately this means that files created prior to 5.1 will not have the correct permissions for later versions, which introduces a breaking change. We made this change so that it would be easier for you to control what user Grafana is executed as.

Please read the updated documentation which includes migration instructions and more information.
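The documentation is the authoritative source for the migration steps, but the one-off permission fix amounts to something like the following, assuming your data lives in a named volume called grafana-storage:

    # Re-own existing Grafana files to the new grafana user id (sketch only;
    # follow the official migration instructions for the canonical steps).
    docker run --rm --user root \
      -v grafana-storage:/var/lib/grafana \
      --entrypoint bash grafana/grafana:5.1.0 \
      -c 'chown -R grafana:grafana /var/lib/grafana'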

Heatmap Support for Prometheus

The Prometheus datasource now supports transforming Prometheus histograms to the heatmap panel. The Prometheus histogram is a powerful feature, and we’re
really happy to finally allow our users to render those as heatmaps. The Heatmap panel documentation
contains more information on how to use it.

Another improvement is that the Prometheus query editor now supports autocomplete for template variables. More information in the Prometheus data source documentation.

Microsoft SQL Server

Grafana v5.1 now ships with a built-in Microsoft SQL Server (MSSQL) data source plugin that allows you to query and visualize data from any
Microsoft SQL Server 2005 or newer, including Microsoft Azure SQL Database. Do you have metric or log data in MSSQL? You can now visualize
that data and define alert rules on it as with any of Grafana’s other core datasources.

The Using Microsoft SQL Server in Grafana documentation has more detailed information on how to get started.
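As a taste of what queries look like, here is a hypothetical time series query using the data source’s macros; the macros are Grafana’s, but the table and column names are made up:

    SELECT
      $__timeEpoch(time_col),
      value_col AS value,
      host AS metric
    FROM metric_values
    WHERE $__timeFilter(time_col)
    ORDER BY time_col ASC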

Adding New Panels to Dashboards

The control for adding new panels to dashboards now includes panel search and it is also now possible to copy and paste panels between dashboards.

By copying a panel in a dashboard it will be displayed in the Paste tab. When you switch to a new dashboard you can paste the
copied panel.

Align Zero-Line for Right and Left Y-axes

The feature request to align the zero-line for right and left Y-axes on the Graph panel is more than 3 years old. It has finally been implemented – more information in the Graph panel documentation.

Other Highlights

  • Table Panel: New enhancements includes support for mapping a numeric value/range to text and additional units. More information in the Table panel documentation.
  • New variable interpolation syntax: We now support a new option for rendering variables that gives the user full control of how the value(s) should be rendered. More details in the Variables documentation.
  • Improved workflow for provisioned dashboards. More details here.


Check out the CHANGELOG.md file for a complete list
of new features, changes, and bug fixes.

Continued: the answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-2/

Last week, we shared the first half of our Q&A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. Today we follow up with all your other questions, including your expectations for a Raspberry Pi 4, Eben’s dream add-ons, and whether we really could go smaller than the Zero.

Live Q&A with Eben Upton, creator of the Raspberry Pi

Get your questions to us now using #AskRaspberryPi on Twitter

With internet security becoming more necessary, will there be automated versions of VPN on an SD card?

There are already third-party tools which turn your Raspberry Pi into a VPN endpoint. Would we do it ourselves? Like the power button, it’s one of those cases where there are a million things we could do and so it’s more efficient to let the community get on with it.

Just to give a counterexample, while we don’t generally invest in optimising for particular use cases, we did invest a bunch of money into optimising Kodi to run well on Raspberry Pi, because we found that very large numbers of people were using it. So, if we find that we get half a million people a year using a Raspberry Pi as a VPN endpoint, then we’ll probably invest money into optimising it and feature it on the website as we’ve done with Kodi. But I don’t think we’re there today.

Have you ever seen any Pis running and doing important jobs in the wild, and if so, how does it feel?

It’s amazing how often you see them driving displays, for example in radio and TV studios. Of course, it feels great. There’s something wonderful about the geographic spread as well. The Raspberry Pi desktop is quite distinctive, both in its previous incarnation with the grey background and logo, and the current one where we have Greg Annandale’s road picture.

The PIXEL desktop on Raspberry Pi

And so it’s funny when you see it in places. Somebody sent me a video of them teaching in a classroom in rural Pakistan and in the background was Greg’s picture.

Raspberry Pi 4!?!

There will be a Raspberry Pi 4, obviously. We get asked about it a lot. I’m sticking to the guidance that I gave people that they shouldn’t expect to see a Raspberry Pi 4 this year. To some extent, the opportunity to do the 3B+ was a surprise: we were surprised that we’ve been able to get 200MHz more clock speed, triple the wireless and wired throughput, and better thermals, and still stick to the $35 price point.

We’re up against the wall from a silicon perspective; we’re at the end of what you can do with the 40nm process. It’s not that you couldn’t clock the processor faster, or put a larger processor which can execute more instructions per clock in there, it’s simply about the energy consumption and the fact that you can’t dissipate the heat. So we’ve got to go to a smaller process node and that’s an order of magnitude more challenging from an engineering perspective. There’s more effort, more risk, more cost, and all of those things are challenging.

With 3B+ out of the way, we’re going to start looking at this now. For the first six months or so we’re going to be figuring out exactly what people want from a Raspberry Pi 4. We’re listening to people’s comments about what they’d like to see in a new Raspberry Pi, and I’m hoping by early autumn we should have an idea of what we want to put in it and a strategy for how we might achieve that.

Could you go smaller than the Zero?

The challenge with Zero is that we’re periphery-limited. If you run your hand around the unit, there is no edge of that board that doesn’t have something there. So the question is: “If you want to go smaller than Zero, what feature are you willing to throw out?”

It’s a single-sided board, so you could certainly halve the PCB area if you fold the circuitry and use both sides, though you’d have to lose something. You could give up some GPIO and go back to 26 pins like the first Raspberry Pi. You could give up the camera connector, you could go to micro HDMI from mini HDMI. You could remove the SD card and just do USB boot. I’m inventing a product live on air! But really, you could get down to two thirds and lose a bunch of GPIO – it’s hard to imagine you could get to half the size.

What’s the one feature that you wish you could outfit on the Raspberry Pi that isn’t cost effective at this time? Your dream feature.

Well, more memory. There are obviously technical reasons why we don’t have more memory on there, but there are also market reasons. People ask “why doesn’t the Raspberry Pi have more memory?”, and my response is typically “go and Google ‘DRAM price’”. We’re used to the price of memory going down. And currently, we’re going through a phase where this has turned around and memory is getting more expensive again.

Machine learning would be interesting. There are machine learning accelerators which would be interesting to put on a piece of hardware. But again, they are not going to be used by everyone, so according to our method of pricing what we might add to a board, machine learning gets treated like a $50 chip. But that would be lovely to do.

Which citizen science projects using the Pi have most caught your attention?

I like the wildlife camera projects. We live out in the countryside in a little village, and we’re conscious of being surrounded by nature but we don’t see a lot of it on a day-to-day basis. So I like the nature cam projects, though, to my everlasting shame, I haven’t set one up yet. There’s a range of them, from very professional products to people taking a Raspberry Pi and a camera and putting them in a plastic box. So those are good fun.

Raspberry Shake seismometer

The Raspberry Shake seismometer

And there’s Meteor Pi from the Cambridge Science Centre, that’s a lot of fun. And the seismometer Raspberry Shake – that sort of thing is really nice. We missed the recent South Wales earthquake; perhaps we should set one up at our Californian office.

How does it feel to go to bed every day knowing you’ve changed the world for the better in such a massive way?

What feels really good is that when we started this in 2006 nobody else was talking about it, but now we’re part of a very broad movement.

We were in a really bad way: we’d seen a collapse in the number of applicants applying to study Computer Science at Cambridge and elsewhere. In our view, this reflected a move away from seeing technology as ‘a thing you do’ to seeing it as a ‘thing that you have done to you’. It is problematic from the point of view of the economy, industry, and academia, but most importantly it damages the life prospects of individual children, particularly those from disadvantaged backgrounds. The great thing about STEM subjects is that you can’t fake being good at them. There are a lot of industries where your Dad can get you a job based on who he knows and then you can kind of muddle along. But if your dad gets you a job building bridges and you suck at it, after the first or second bridge falls down, then you probably aren’t going to be building bridges anymore. So access to STEM education can be a great driver of social mobility.

By the time we were launching the Raspberry Pi in 2012, there was this wonderful movement going on. Code Club, for example, and CoderDojo came along. Lots of different ways of trying to solve the same problem. What feels really, really good is that we’ve been able to do this as part of an enormous community. And some parts of that community became part of the Raspberry Pi Foundation – we merged with Code Club, we merged with CoderDojo, and we continue to work alongside a lot of these other organisations. So in the two seconds it takes me to fall asleep after my face hits the pillow, that’s what I think about.

We’re currently advertising a Programme Manager role in New Delhi, India. Did you ever think that Raspberry Pi would be advertising a role like this when you were bringing together the Foundation?

No, I didn’t.

But if you told me we were going to be hiring somewhere, India probably would have been top of my list because there’s a massive IT industry in India. When we think about our interaction with emerging markets, India, in a lot of ways, is the poster child for how we would like it to work. There have already been some wonderful deployments of Raspberry Pi, for example in Kerala, without our direct involvement. And we think we’ve got something that’s useful for the Indian market. We have a product, we have clubs, we have teacher training. And we have a body of experience in how to teach people, so we have a physical commercial product as well as a charitable offering that we think are a good fit.

It’s going to be massive.

What is your favourite BBC type-in listing?

There was a game called Codename: Druid. There is a famous game called Codename: Droid which was the sequel to Stryker’s Run, which was an awesome, awesome game. And there was a type-in game called Codename: Druid, which was at the bottom end of what you would consider a commercial game.

codename druid

And I remember typing that in. And what was really cool about it was that the next month, the guy who wrote it did another article that talks about the memory map and which operating system functions used which bits of memory. So if you weren’t going to do disc access, which bits of memory could you trample on and know the operating system would survive.

babbage versus bugs Raspberry Pi annual

See the full listing for Babbage versus Bugs in the Raspberry Pi 2018 Annual

I still like type-in listings. The Raspberry Pi 2018 Annual has a type-in listing that I wrote for a Babbage versus Bugs game. I will say that’s not the last type-in listing you will see from me in the next twelve months. And if you download the PDF, you could probably copy and paste it into your favourite text editor to save yourself some time.

The post Continued: the answers to your questions for Eben Upton appeared first on Raspberry Pi.

Invent new sounds with Google’s NSynth Super

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/google-nsynth-super/

Discover new sounds and explore the role of machine learning in music production and sound research with the NSynth Super, an ongoing project from Google’s Magenta research team that you can build at home.

Google Open NSynth Super Testing

Uploaded by AB Open on 2018-04-17.

What is the NSynth Super?

Part of the ongoing Magenta research project within Google, NSynth Super explores the ways in which machine learning tools help artists and musicians be creative.

Google Nsynth Super Raspberry Pi

“Technology has always played a role in creating new types of sounds that inspire musicians — from the sounds of distortion to the electronic sounds of synths,” explains the team behind the NSynth Super. “Today, advances in machine learning and neural networks have opened up new possibilities for sound generation.”

Using TensorFlow, the Magenta team builds tools and interfaces that let artists and musicians use machine learning in their work. The NSynth Super algorithm uses deep neural networks to investigate the character of sounds. It then builds new sounds based on these characteristics, instead of simply mixing sounds together.

Using an autoencoder, it extracts 16 defining temporal features from each input. These features are then interpolated linearly to create new embeddings (mathematical representations of each sound). These new embeddings are then decoded into new sounds, which have the acoustic qualities of both inputs.
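That interpolation step is simple enough to sketch in a few lines of Python. The shapes and names below are purely illustrative, not Magenta’s actual code:

import numpy as np

# Hypothetical 16-dimensional feature embeddings for two source sounds
embedding_a = np.random.rand(16)
embedding_b = np.random.rand(16)

def interpolate(z_a, z_b, alpha):
    # alpha = 0 returns sound A's embedding, alpha = 1 returns sound B's
    return (1.0 - alpha) * z_a + alpha * z_b

# A halfway embedding that the decoder would turn into a brand-new sound
z_new = interpolate(embedding_a, embedding_b, 0.5)
print(z_new)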

The team publishes all hardware designs and software that are part of their ongoing research under open-source licences, allowing you to build your own synth.

Build your own NSynth Super

Using these open-source tools, Andrew Black has produced his own NSynth Super, demoed in the video above. Andrew’s list of build materials includes a Raspberry Pi 3, potentiometers, rotary encoders, and the Adafruit 1.3″ OLED display. Magenta also provides Gerber files for you to fabricate your own PCB.


Once fabricated, the PCB includes a table of contents for adding components.

The build isn’t easy — it requires soldering skills or access to someone who can assemble PCBs. Take a look at Andrew’s blog post and the official NSynth GitHub repo to see whether you’re up to the challenge.


Music and Raspberry Pi

The Raspberry Pi has been widely used for music production and music builds. Be it retrofitting a boombox, distributing music atop Table Mountain, or coding tracks with Sonic Pi, the Pi offers endless opportunities for musicians and music lovers to expand their repertoire of builds and instruments.

If you’d like to try more music-based projects using the Raspberry Pi, you can check out our free resources. And if you’ve used a Raspberry Pi in your own musical project, please share it with us in the comments or via our social network accounts.

The post Invent new sounds with Google’s NSynth Super appeared first on Raspberry Pi.

New – Registry of Open Data on AWS (RODA)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-registry-of-open-data-on-aws-roda/

Almost a decade ago, my colleague Deepak Singh introduced the AWS Public Datasets in his post Paging Researchers, Analysts, and Developers. I’m happy to report that Deepak is still an important part of the AWS team and that the Public Datasets program is still going strong!

Today we are announcing a new take on open and public data, the Registry of Open Data on AWS, or RODA. This registry includes existing Public Datasets and allows anyone to add their own datasets so that they can be accessed and analyzed on AWS.

Inside the Registry
The home page lists all of the datasets in the registry:

Entering a search term shrinks the list so that only the matching datasets are displayed:

Each dataset has an associated detail page, including usage examples, license info, and the information needed to locate and access the dataset on AWS:

In this case, I can access the data with a simple CLI command:

I could also access it programmatically, or download data to my EC2 instance.
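As a sketch of the programmatic route, you might list a public dataset’s objects from Python with boto3. The bucket name below is just an example; each dataset’s detail page gives the real location:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Public datasets don't require credentials, so send unsigned requests
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Example bucket name only; take the real one from the dataset's detail page
response = s3.list_objects_v2(Bucket="landsat-pds", MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])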

Adding to the Repository
If you have a dataset that is publicly available and would like to add it to RODA, you can simply send us a pull request. Head over to the open-data-registry repo, read the CONTRIBUTING document, and create a YAML file that describes your dataset, using one of the existing files in the datasets directory as a model:
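As a rough illustration, a dataset file looks something like the sketch below; the authoritative field list is in the CONTRIBUTING document, so treat these names and values as assumptions:

Name: Example Weather Observations
Description: Hourly weather observations, shown here as a made-up example.
Documentation: https://example.com/docs
Contact: datasets@example.com
UpdateFrequency: Daily
License: CC BY 4.0
Tags:
  - weather
Resources:
  - Description: Raw observations in CSV format
    ARN: arn:aws:s3:::example-weather-bucket
    Region: us-east-1
    Type: S3 Bucket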

We’ll review pull requests regularly; you can “star” or watch the repo in order to track additions and changes.

Impress Me
I am looking forward to an inrush of new datasets, along with some blog posts and apps that show how to use the data in powerful and interesting ways. Let me know what you come up with.



Colour sensing with a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/colour-sensing-raspberry-pi/

In their latest video and tutorial, Electronics Hub shows you how to detect colour using a Raspberry Pi and a TCS3200 colour sensor.

Raspberry Pi Color Sensor (TCS3200) Interface | Color Detector

A simple Raspberry Pi based project using TCS3200 Color Sensor. The project demonstrates how to interface a Color Sensor (like TCS3200) with Raspberry Pi and implement a simple Color Detector using Raspberry Pi.

What is a TCS3200 colour sensor?

Colour sensors sense reflected light from nearby objects. The bright light of the TCS3200’s on-board white LEDs hits an object’s surface and is reflected back. The sensor has an 8×8 array of photodiodes, which are covered by either a red, blue, green, or clear filter. The type of filter determines what colour a diode can detect. Then the overall colour of an object is determined by how much light of each colour it reflects. (For example, a red object reflects mostly red light.)


As Electronics Hub explains:

TCS3200 is one of the easily available colour sensors that students and hobbyists can work on. It is basically a light-to-frequency converter, i.e. based on colour and intensity of the light falling on it, the frequency of its output signal varies.

I’ll save you a physics lesson here, but you can find a detailed explanation of colour sensing and the TCS3200 on the Electronics Hub blog.

Raspberry Pi colour sensor

The TCS3200 colour sensor is connected to several of the onboard General Purpose Input Output (GPIO) pins on the Raspberry Pi.


These connections allow the Raspberry Pi 3 to run one of two Python scripts that Electronics Hub has written for the project. The first displays the raw RGB values read by the sensor. The second detects the primary colours red, green, and blue, and it can be expanded to recognise more colours with the help of the first script.
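Electronics Hub’s scripts are available from their blog; as a rough sketch of the approach, a reading loop might look like the following, where the pin assignments and timings are assumptions for illustration rather than the project’s actual code:

import time
import RPi.GPIO as GPIO

S2, S3, OUT = 23, 24, 25  # assumed BCM pin numbers; match your own wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup([S2, S3], GPIO.OUT)
GPIO.setup(OUT, GPIO.IN)

# S2/S3 levels that select each colour filter on the TCS3200
FILTERS = {"red": (0, 0), "blue": (0, 1), "green": (1, 1)}

def read_frequency(colour, pulses=100):
    # Select a filter, then time a burst of output pulses; the output
    # frequency rises with the intensity of that colour
    s2_level, s3_level = FILTERS[colour]
    GPIO.output(S2, s2_level)
    GPIO.output(S3, s3_level)
    time.sleep(0.1)  # let the filter selection settle
    start = time.time()
    for _ in range(pulses):
        GPIO.wait_for_edge(OUT, GPIO.FALLING)
    return pulses / (time.time() - start)  # pulses per second

try:
    while True:
        print({colour: round(read_frequency(colour)) for colour in FILTERS})
        time.sleep(1)
finally:
    GPIO.cleanup()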


Electronics Hub’s complete build uses a breadboard for simple prototyping

Use it in your projects

This colour sensing setup is a simple means of adding a new dimension to your builds. Why not build a candy-sorting robot that organises your favourite sweets by colour? Or add colour sensing to your line-following buggy to allow for multiple path options!

If your Raspberry Pi project uses colour sensing, we’d love to see it, so be sure to share it in the comments!

The post Colour sensing with a Raspberry Pi appeared first on Raspberry Pi.

The answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-1/

Before Easter, we asked you to tell us your questions for a live Q & A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. The variety of questions and comments you sent was wonderful, and while we couldn’t get to them all, we picked a handful of the most common to grill him on.

You can watch the video below — though due to this being the first pancake of our live Q&A videos, the sound is a bit iffy — or read Eben’s answers to the first five questions today. We’ll follow up with the rest in the next few weeks!

Live Q&A with Eben Upton, creator of the Raspberry Pi

Get your questions to us now using #AskRaspberryPi on Twitter

Any plans for 64-bit Raspbian?

Raspbian is effectively 32-bit Debian built for the ARMv6 instruction-set architecture supported by the ARM11 processor in the first-generation Raspberry Pi. So maybe the question should be: “Would we release a version of our operating environment that was built on top of 64-bit ARM Debian?”

And the answer is: “Not yet.”

When we released the Raspberry Pi 3 Model B+, we released an operating system image on the same day; the wonderful thing about that image is that it runs on every Raspberry Pi ever made. It even runs on the alpha boards from way back in 2011.

That deep backwards compatibility is really important for us, in large part because we don’t want to orphan our customers. If someone spent $35 on an older-model Raspberry Pi five or six years ago, they still spent $35, so it would be wrong for us to throw them under the bus.

So, if we were going to do a 64-bit version, we’d want to keep doing the 32-bit version, and then that would mean our efforts would be split across the two versions; and remember, we’re still a very small engineering team. Never say never, but it would be a big step for us.

For people wanting a 64-bit operating system, there are plenty of good third-party images out there, including SUSE Linux Enterprise Server.

Given that the 3B+ includes 5GHz wireless and Power over Ethernet (PoE) support, why would manufacturers continue to use the Compute Module?

It’s a form-factor thing.

Very large numbers of people are using the bigger product in an industrial context, and it’s well engineered for that: it has module certification, wireless on board, and now PoE support. But there are use cases that can’t accommodate this form factor. For example, NEC displays: we’ve had this great relationship with NEC for a couple of years now where a lot of their displays have a socket in the back that you can put a Compute Module into. That wouldn’t work with the 3B+ form factor.


An NEC display with a Raspberry Pi Compute Module

What are some industrial uses/products Raspberry Pi is used with?

The NEC displays are a good example of the broader trend of using Raspberry Pi in digital signage.

A Raspberry Pi running the wait time signage at The Wizarding World of Harry Potter, Universal Studios.
Image c/o thelonelyredditor1

If you see a monitor at a station, or an airport, or a recording studio, and you look behind it, it’s amazing how often you’ll find a Raspberry Pi sitting there. The original Raspberry Pi was particularly strong for multimedia use cases, so we saw uptake in signage very early on.


Los Alamos Raspberry Pi supercomputer

Another great example is the Los Alamos National Laboratory building supercomputers out of Raspberry Pis. Many high-end supercomputers now are built using white-box hardware — just regular PCs connected together using some networking fabric — and a collection of Raspberry Pi units can serve as a scale model of that. The Raspberry Pi has less processing power, less memory, and less networking bandwidth than the PC, but it has a balanced amount of each. So if you don’t want to let your apprentice supercomputer engineers loose on your expensive supercomputer, a cluster of Raspberry Pis is a good alternative.

Why is there no power button on the Raspberry Pi?

“Once you start, where do you stop?” is a question we ask ourselves a lot.

There are a whole bunch of useful things that we haven’t included in the Raspberry Pi by default. We don’t have a power button, we don’t have a real-time clock, and we don’t have an analogue-to-digital converter — those are probably the three most common requests. And the issue with them is that they each cost a bit of money, they’re each only useful to a minority of users, and even that minority often can’t agree on exactly what they want. Some people would like a power button that is literally a physical analogue switch between the 5V input and the rest of the board, while others would like something a bit more like a PC power button, which is partway between a physical switch and a ‘shutdown’ button. There’s no consensus about what sort of power button we should add.

So the answer is: accessories. By leaving a feature off the board, we’re not taxing the majority of people who don’t want the feature. And of course, we create an opportunity for other companies in the ecosystem to create and sell accessories to those people who do want them.


The Adafruit Push-button Power Switch Breakout is one of many accessories that fill in the gaps for makers.

We have this neat way of figuring out what features to include by default: we divide a component’s cost by the fraction of people who want it. If you have a 20-cent component that’s going to be used by a fifth of people, we treat it as if it were a $1 component. And it has to fight its way against the $1 components that will be used by almost everybody.

Do you think that Raspberry Pi is the future of the Internet of Things?

Absolutely, Raspberry Pi is the future of the Internet of Things!

In practice, most of the viable early IoT use cases are in the commercial and industrial spaces rather than the consumer space. Maybe in ten years’ time, IoT will be about putting 10-cent chips into light switches, but right now there’s so much money to be saved by putting automation into factories that you don’t need 10-cent components to address the market. Last year, roughly 2 million $35 Raspberry Pi units went into commercial and industrial applications, and many of those are what you’d call IoT applications.

So I think we’re the future of a particular slice of IoT. And we have ten years to get our price point down to 10 cents 🙂

The post The answers to your questions for Eben Upton appeared first on Raspberry Pi.

Build a house in Minecraft using Python

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/build-minecraft-house-using-python/

In this tutorial from The MagPi issue 68, Steve Martin takes us through the process of house-building in Minecraft Pi. Get your copy of The MagPi in stores now, or download it as a free PDF here.

Minecraft Pi is provided for free as part of the Raspbian operating system. To start your Minecraft: Pi Edition adventures, try our free tutorial Getting started with Minecraft.


Writing programs that create things in Minecraft is not only a great way to learn how to code, but it also means that you have a program that you can run again and again to make as many copies of your Minecraft design as you want. You never need to worry about your creation being destroyed by your brother or sister ever again — simply rerun your program and get it back! Whilst it might take a little longer to write the program than to build one house, once it’s finished you can build as many houses as you want.

Co-ordinates in Minecraft

Let’s start with a review of the coordinate system that Minecraft uses to know where to place blocks. If you are already familiar with this, you can skip to the next section. Otherwise, read on.


Plan view of our house design

Minecraft shows us a three-dimensional (3D) view of the world. Imagine that the room you are in is the Minecraft world and you want to describe your location within that room. You can do so with three numbers, as follows:

  • How far across the room are you? As you move from side to side, you change this number. We can consider this value to be our X coordinate.
  • How high off the ground are you? If you are upstairs, or if you jump, this value increases. We can consider this value to be our Y coordinate.
  • How far into the room are you? As you walk forwards or backwards, you change this number. We can consider this value to be our Z coordinate.

You might have done graphs in school with X going across the page and Y going up the page. Coordinates in Minecraft are very similar, except that we have an extra value, Z, for our third dimension. Don’t worry if this still seems a little confusing: once we start to build our house, you will see how these three dimensions work in Minecraft.

Designing our house

It is a good idea to start with a rough design for our house. This will help us to work out the values for the coordinates when we are adding doors and windows to our house. You don’t have to plan every detail of your house right away. It is always fun to enhance it once you have got the basic design written. The image above shows the plan view of the house design that we will be creating in this tutorial. Note that because this is a plan view, it only shows the X and Z co-ordinates; we can’t see how high anything is. Hopefully, you can imagine the house extending up from the screen.

We will build our house close to where the Minecraft player is standing. This is a good idea when creating something in Minecraft with Python, as it saves us from having to walk around the Minecraft world to find our creation.

Starting our program

Type in the code as you work through this tutorial. You can use any editor you like; we would suggest either Python 3 (IDLE) or Thonny Python IDE, both of which you can find on the Raspberry Pi menu under Programming. Start by selecting the File menu and creating a new file. Save the file with a name of your choice; it must end with .py so that the Raspberry Pi knows that it is a Python program.

It is important to enter the code exactly as it is shown in the listing. Pay particular attention to both the spelling and capitalisation (upper- or lower-case letters) used. You may find that when you run your program the first time, it doesn’t work. This is very common and just means there’s a small error somewhere. The error message will give you a clue about where the error is.

It is good practice to start all of your Python programs with the first line shown in our listing. All other lines that start with a # are comments. These are ignored by Python, but they are a good way to remind us what the program is doing.

The two lines starting with from tell Python about the Minecraft API; this is a code library that our program will be using to talk to Minecraft. The line starting mc = creates a connection between our Python program and the game. Then we get the player’s location broken down into three variables: x, y, and z.
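The magazine listing isn’t reproduced here, but using the mcpi library included with Raspbian, the opening lines would look something like this sketch:

#!/usr/bin/env python3
# Build a house in Minecraft Pi -- a sketch of the opening lines
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()       # connect to the running Minecraft game

pos = mc.player.getTilePos()  # the player's current location
x, y, z = pos.x, pos.y, pos.z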

Building the shell of our house

To help us build our house, we define three variables that specify its width, height, and depth. Defining these variables makes it easy for us to change the size of our house later; it also makes the code easier to understand when we are setting the co-ordinates of the Minecraft bricks. For now, we suggest that you use the same values that we have; you can go back and change them once the house is complete and you want to alter its design.

It’s now time to start placing some bricks. We create the shell of our house with just two lines of code! These lines of code each use the setBlocks command to create a complete block of bricks. This function takes the following arguments:

setBlocks(x1, y1, z1, x2, y2, z2, block-id, data)

x1, y1, and z1 are the coordinates of one corner of the block of bricks that we want to create; x2, y2, and z2 are the coordinates of the opposite corner. The block-id is the type of block that we want to use. Some blocks require another value called data; we will see this being used later, but you can ignore it for now.

We have to work out the values that we need to use in place of x1, y1, z1, x2, y2, z2 for our walls. Note that what we want is a larger outer block made of bricks, filled with a slightly smaller block of air blocks. Yes, in Minecraft even air is actually just another type of block.
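As an illustrative sketch rather than the magazine’s exact listing, those two lines might look like this, together with the three size variables described above:

width, height, depth = 10, 5, 8  # house dimensions; change these later

# Outer block of bricks...
mc.setBlocks(x + 1, y, z + 1,
             x + width - 1, y + height, z + depth - 1,
             block.BRICK_BLOCK.id)
# ...hollowed out with a slightly smaller block of air
mc.setBlocks(x + 2, y, z + 2,
             x + width - 2, y + height - 1, z + depth - 2,
             block.AIR.id)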

Once you have typed in the two lines that create the shell of your house, you are almost ready to run your program. Before doing so, you must have Minecraft running and displaying the contents of your world. Do not use a world loaded with things that you have created, as they may get destroyed by the house that we are building. Go to a clear area in the Minecraft world before running the program. When you run your program, check for any errors in the ‘console’ window, fix them, and run the code again until you’ve corrected them all.

You should see a block of bricks now, as shown above. You may have to turn the player around in the Minecraft world before you can see your house.

Adding the floor and door

Now, let’s make our house a bit more interesting! Add the lines for the floor and door. Note that the floor extends beyond the boundary of the wall of the house; can you see how we achieve this?

Hint: look closely at how we calculate the x and z attributes as compared to when we created the house shell above. Also note that we use a value of y-1 to create the floor below our feet.

Minecraft doors are two blocks high, so we have to create them in two parts. This is where we have to use the data argument. A value of 0 is used for the lower half of the door, and a value of 8 is used for the upper half (the part with the windows in it). These values will create an open door. If we add 4 to each of these values, a closed door will be created.
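Again as a sketch rather than the exact listing, with the door position assumed to be the middle of the front wall:

# Floor: one block below our feet, extending a block beyond the walls
mc.setBlocks(x, y - 1, z,
             x + width, y - 1, z + depth,
             block.WOOD_PLANKS.id)

# Door: two blocks high, so it takes two setBlock calls
door_x = x + width // 2  # assumed position on the front wall
mc.setBlock(door_x, y, z + 1, block.DOOR_WOOD.id, 0)      # lower half, open
mc.setBlock(door_x, y + 1, z + 1, block.DOOR_WOOD.id, 8)  # upper half, open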

Before you run your program again, move to a new location in Minecraft to build the house away from the previous one. Then run it to check that the floor and door are created; you will need to fix any errors again. Even if your program runs without errors, check that the floor and door are positioned correctly. If they aren’t, then you will need to check that the arguments to setBlock and setBlocks are exactly as shown in the listing.

Adding windows

Hopefully you will agree that your house is beginning to take shape! Now let’s add some windows. Looking at the plan for our house, we can see that there is a window on each side; see if you can follow along. Add the four lines of code, one for each window.
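Here is a sketch of the idea, with the window positions assumed rather than taken from the listing:

# One glass block per wall; positions are assumptions, so adjust to taste
mc.setBlock(x + 2, y + 2, z + 1, block.GLASS.id)                   # front
mc.setBlock(x + width // 2, y + 2, z + depth - 1, block.GLASS.id)  # back
mc.setBlock(x + 1, y + 2, z + depth // 2, block.GLASS.id)          # left
mc.setBlock(x + width - 1, y + 2, z + depth // 2, block.GLASS.id)  # right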

Now you can move to yet another location and run the program again; you should have a window on each side of the house. Our house is starting to look pretty good!

Adding a roof

The final stage is to add a roof to the house. To do this we are going to use wooden stairs. We will do this inside a loop so that if you change the width of your house, more layers are added to the roof. Enter the rest of the code. Be careful with the indentation: I recommend using spaces and avoiding the use of tabs. After the if statement, you need to indent the code even further. Each indentation level needs four spaces, so below the line with if on it, you will need eight spaces.

Since some of these code lines are lengthy and indented a lot, you may well find that the text wraps around as you reach the right-hand side of your editor window — don’t worry about this. You will have to be careful to get those indents right, however.
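As a final sketch, a roof loop in this spirit might read as follows; the stair data values that set which way each stair faces are assumptions to check against your own build:

# Roof: rows of wooden stairs stepping inwards and upwards from each side
for i in range(width // 2 + 1):
    # data 0 = ascending east, 1 = ascending west (assumed orientation values)
    mc.setBlocks(x + i, y + height + i, z,
                 x + i, y + height + i, z + depth,
                 block.STAIRS_WOOD.id, 0)
    if x + i == x + width - i:
        break  # the two sides have met at the ridge
    mc.setBlocks(x + width - i, y + height + i, z,
                 x + width - i, y + height + i, z + depth,
                 block.STAIRS_WOOD.id, 1)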

Now move somewhere new in your world and run the complete program. Iron out any last bugs, then admire your house! Does it look how you expect? Can you make it better?

Customising your house

Now you can start to customise your house. It is a good idea to use Save As in the menu to save a new version of your program. Then you can keep different designs, or refer back to your previous program if you get to a point where you don’t understand why your new one doesn’t work.

Consider these changes:

  • Change the size of your house. Can you also move the door and windows so they stay in proportion?
  • Change the materials used for the house. An ice house placed in an area of snow would look really cool!
  • Add a back door to your house. Or make the front door a double-width door!

We hope that you have enjoyed writing this program to build a house. Now you can easily add a house to your Minecraft world whenever you want to by simply running this program.

Get the complete code for this project here.

Continue your Minecraft journey

Minecraft Pi’s programmable interface is an ideal platform for learning Python. If you’d like to try more of our free tutorials, check out:

You may also enjoy Martin O’Hanlon’s and David Whale’s Adventures in Minecraft, and the Hacking and Making in Minecraft MagPi Essentials guide, which you can download for free or buy in print here.

The post Build a house in Minecraft using Python appeared first on Raspberry Pi.