Tag Archives: LED

What the blink is my IP address?

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/what-the-blink-is-my-ip-address/

Picture the scene: you have a Raspberry Pi configured to run on your network, you power it up headless (without a monitor), and now you need to know which IP address it was assigned.

Matthias came up with this solution, which makes your Raspberry Pi blink out its IP address. He uses a headless Raspberry Pi Zero W for most of his projects, and he got tired of having to look the address up on his DHCP server or hunt for it by pinging different IP addresses.

How does it work?

A script runs when you start your Raspberry Pi and indicates which IP address is assigned to it by blinking it out on the device’s LED. The script comprises about 100 lines of Python, and you can get it on GitHub.


The power/status LED on the edge of the Raspberry Pi blinks numbers in a Roman numeral-like scheme. You can tell which number it’s blinking based on the length of the blink and the gaps between each blink, rather than, for example, having to count nine blinks for a number nine.

Blinking in Roman numerals

Short, fast blinks represent the numbers one to four, depending on how many you count, while a longer blink represents the number five. A gap between blinks means the LED is about to blink the next digit of the IP address. So reading the combination of short and long blinks will give you your device’s IP address.

You can see this in action at this exact point in the video. You’ll see the LED blink fast once, then leave a gap, blink fast once again, then leave a gap, then blink fast twice. That means the device’s IP address ends in 112.
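The real script drives the onboard status LED, but the encoding itself is easy to illustrate in a few lines. Below is a minimal sketch that blinks an ordinary LED on GPIO 17 with gpiozero; the pin choice, the timings, and the digit handling are illustrative assumptions, not values taken from Matthias’ script.

from time import sleep
from gpiozero import LED

led = LED(17)                     # assumed pin for an external LED
SHORT, LONG, GAP = 0.2, 0.8, 1.0  # assumed blink and gap durations

def blink(duration):
    led.on()
    sleep(duration)
    led.off()
    sleep(0.3)                    # short pause between blinks within a digit

def blink_digit(d):
    # one long blink counts as five, each short blink as one
    if d >= 5:
        blink(LONG)
        d -= 5
    for _ in range(d):
        blink(SHORT)

for digit in "112":               # the last octet, digit by digit
    blink_digit(int(digit))
    sleep(GAP)                    # a longer gap separates digits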

What are octets?

Luckily, you usually only need to know the final group of digits in the IP address (the last octet), as the preceding octets will almost always be the same for every other computer on the LAN.

The script blinks out the last octet ten times, to give you plenty of chances to read it. Then it returns the LED to its default functionality.

Which LED on which Raspberry Pi?

On a Raspberry Pi Zero W, the script uses the green status/power LED, and on other Raspberry Pis it uses the green LED next to the red power LED.

The green LED blinking the IP address (the red power LED is slightly hidden by Matthias’ thumb)

Once you get the hang of the Morse code-like blinking style, this is a really nice quick solution to find your device’s IP address and get on with your project.

The post What the blink is my IP address? appeared first on Raspberry Pi.

Raspberry Pi calls out your custom workout routine

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-calls-out-your-custom-workout-routine/

If you don’t want to be tied to a video screen during home workouts, Llum Acosta, Samreen Islam, and Alfred Gonzalez shared this great Raspberry Pi–powered alternative on hackster.io: their voice-activated project announces each move of your workout routine and how long you need to do it for.

This LED-lit, compact solution means you don’t need to squeeze yourself in front of a TV or crane your neck to see what your video instructor is doing next. Instead you can be out in the garden or at a local park and complete your own, personalised workout on your own terms.

Kit list:

Raspberry Pi and MATRIX Device

The makers shared these setup guides to get MATRIX working with your Raspberry Pi. Our tiny computer doesn’t have a built-in microphone, which is why the two need to work together here.

MATRIX, meet Raspberry Pi

Once that’s set up, ensure you enable SSH on your Raspberry Pi.

Click, click. Simple

The three sweet Hackster angels shared a four-step guide to running the software of your own customisable workout routine buddy in their original post. Happy hacking!

1. Install MATRIX Libraries and Rhasspy

Follow the steps below in order for Rhasspy to work on your Raspberry Pi.

2. Creating an intent

Access Rhasspy’s web interface by opening a browser and navigating to http://YOUR_PI_IP_HERE:12101. Then click on the Sentences tab. All intents and sentences are defined here.

By default, there are a few example sentences in the text box. Remove the default intents and add the following:

[Workout]
start [my] workout

Once created, click on Save Sentences and wait for Rhasspy to finish training.

Here, Workout is an intent. You can change the wording to anything that works for you as long as you keep [Workout] the same, because this intent name will be used in the code.

3. Catching the intent

Install git on your Raspberry Pi.

sudo apt install git

Download the repository.

git clone https://github.com/matrix-io/rhasspy-workout-timer

Navigate to the folder and install the project dependencies.

cd rhasspy-workout-timer
npm install

Run the program.

node index.js

4. Using and customizing the project

To change the workout to your desired routine, head into the project folder and open workout.txt. There, you’ll see:

jumping jacks 12,plank 15, test 14

To make your own workout routine, type an exercise name followed by the number of seconds to do it for. Repeat that for each exercise you want to do, separating each combo using a comma.
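For a sense of how little structure the file needs, here is a rough parsing sketch. It is written in Python for brevity; the project itself handles this in its Node.js code, so treat it purely as an illustration of the format.

def parse_workout(text):
    routine = []
    for combo in text.split(","):                 # each "exercise seconds" pair
        name, seconds = combo.strip().rsplit(" ", 1)
        routine.append((name, int(seconds)))
    return routine

print(parse_workout("jumping jacks 12,plank 15, test 14"))
# [('jumping jacks', 12), ('plank', 15), ('test', 14)]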

Whenever you want to use the Rhasspy Assistant, run the file and say “Start my workout” (or whatever it is you have it set to).

And now you’re all done — happy working out. Make sure to visit the makers’ original post on hackster.io and give it a like.

The post Raspberry Pi calls out your custom workout routine appeared first on Raspberry Pi.

How to use an LED with Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/how-to-use-an-led-with-raspberry-pi/

Learn how to use an LED with Raspberry Pi in our latest How to use video on YouTube.

HOW TO USE an LED with Raspberry Pi


Using LEDs

LEDs (light-emitting diodes) are incredibly useful in digital making projects. You can use one to indicate whether a script is running or when an action can take place, or as decoration, and for so much more besides.

Blinking an LED with the help of Raspberry Pi has become a rite of passage for new digital makers: it’s the physical equivalent of the ‘hello world’ program! Therefore, it’s the first thing that the participants in our Picademy training, and many young people in physical computing sessions at coding clubs in our networks, learn how to do.

Follow the steps in our latest How to use video to learn how to control an LED with your Raspberry Pi, and go get making.
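If you want the gist in code form before watching, a minimal sketch of the classic first program looks like this, assuming an LED wired to GPIO 17 through a current-limiting resistor:

from gpiozero import LED
from signal import pause

led = LED(17)                      # assumed wiring: GPIO 17 via a resistor
led.blink(on_time=1, off_time=1)   # toggles the LED once per second in the background
pause()                            # keep the script running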

More Raspberry Pi videos

You can find the How to use YouTube playlist here, and you can subscribe to our channel and never miss a video!

And, while you’re in a subscribe-y mood, also subscribe to the Raspberry Pi Press YouTube channel, the home of all content from The MagPi, HackSpace magazine, WireFrame, Custom PC, and more.

The post How to use an LED with Raspberry Pi appeared first on Raspberry Pi.

Really, really awesome Raspberry Pi NeoPixel LED mirror

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/awesome-neopixel-led-mirror/

Check out Super Make Something’s awesome NeoPixel LED mirror: a 576 RGB LED display that converts images via the Raspberry Pi Camera Module and Raspberry Pi 3B+ into a pixelated light show.

Neopixel LED Mirror (Python, Raspberry Pi, Arduino, 3D Printing, Laser Cutting!) DIY How To

Time to pull out all the stops for the biggest Super Make Something project to date! Using 3D printing, laser cutting, a Raspberry Pi, computer vision, Python, and nearly 600 Neopixel LEDs, I build a low resolution LED mirror that displays your reflection on a massive 3 foot by 3 foot grid made from an array of 24 by 24 RGB LEDs!

Mechanical mirrors

If you’re into cool uses of tech, you may be aware of Daniel Rozin, the creative artist building mechanical mirrors out of wooden panels, trash, and…penguins, to name but a few of his wonderful builds.

A woman standing in front of a mechanical mirror made of toy penguins

Yup, this is a mechanical mirror made of toy penguins.

A digital mechanical mirror?

Inspired by Daniel Rozin’s work, Alex, the person behind Super Make Something, put an RGB LED spin on the concept, producing this stunning mirror that thoroughly impressed visitors at Cleveland Maker Faire last month.

“Inspired by Danny Rozin’s mechanical mirrors, this 3 foot by 3 foot mirror is powered by a Raspberry Pi, and uses Python and OpenCV computer vision libraries to process captured images in real time to light up 576 individual RGB LEDs!” Alex explains on Instagram. “Also onboard are nearly 600 3D-printed squares to diffuse the light from each NeoPixel, as well as 16 laser-cut panels to hold everything in place!”
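To make that pipeline concrete, here is a heavily simplified sketch of the idea, not Alex’s actual code: grab a camera frame with OpenCV, shrink it to 24×24, and push the colours to a NeoPixel chain. The GPIO pin, the serpentine wiring of the matrix, and the use of the CircuitPython neopixel library are all assumptions.

import cv2
import board
import neopixel

WIDTH = HEIGHT = 24
pixels = neopixel.NeoPixel(board.D18, WIDTH * HEIGHT, auto_write=False)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    small = cv2.resize(frame, (WIDTH, HEIGHT), interpolation=cv2.INTER_AREA)
    small = cv2.cvtColor(small, cv2.COLOR_BGR2RGB)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # even rows often run right to left in serpentine LED matrices
            col = WIDTH - 1 - x if y % 2 else x
            pixels[y * WIDTH + col] = tuple(int(c) for c in small[y, x])
    pixels.show()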

The video above gives a brilliantly detailed explanation of how Alex made the mirror, so we highly recommend giving it a watch if you’re feeling inspired to make your own.

Seriously, we really want to make one of these for Raspberry Pi Towers!

As always, be sure to subscribe to Super Make Something on YouTube and leave a comment on the video if, like us, you love the project. Most online makers are producing content such as this with very little return on their investment, so every like and subscriber really does make a difference.

The post Really, really awesome Raspberry Pi NeoPixel LED mirror appeared first on Raspberry Pi.

Rousseau-inspired Raspberry Pi Zero LED piano visualiser

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rousseau-raspberry-pi-zero-led-piano-visualiser/

Unlock your inner Rousseau with this gorgeous Raspberry Pi Zero LED piano visualiser.

Piano LED Visualizer

Inspired by Rousseau’s videos, I tried to build my own piano visualizer. It is made with a Raspberry Pi and a WS2812B LED strip. Screen and buttons: Waveshare 1.44″ LCD TFT, 128×128 px.

Pianist Rousseau

Fans of the popular YouTube pianist Rousseau would be forgiven for thinking the thumbnail above is of one of his videos. It’s actually of a Raspberry Pi build by Aleksander Evening, who posted this project on Reddit last week as an homage to Rousseau, who is one of his favourite YouTubers.

Building an LED piano visualiser

After connecting the LED strip to the Raspberry Pi Zero W, and setting up the Pi as a Bluetooth MIDI host, Aleksander was almost good to go. There was just one thing standing in his way…



He wanted to use the Synthesia software for visualisations, but, unmodified, Synthesia doesn’t support the Bluetooth MIDI connection Aleksander planned to use. Luckily, he found a workaround:

As of today Synthesia doesn’t support MIDI via Bluetooth, it should be added in next update. There is official workaround: you have to replace dll file. You also have to enable light support in Synthesia. In Visualizer settings you have to change “input” to RPI Bluetooth. After that when learning new song next-to-play keys will be illuminated in corresponding colors, blue for left hand and green for right hand.

Phew!

Homemade Rousseau

The final piece is a gorgeous mix of LEDs, sound, and animation — worthy of the project’s inspiration.

Find more information, including parts, links to the code, and build instructions, on Aleksander’s GitHub repo. And as always, if you build your own, or if you’ve created a Raspberry Pi project in honour of your favourite musician, artist, or YouTuber, we’d love to see it in the comments below.

And now, a little something from Rousseau:

Ludovico Einaudi – Nuvole Bianche


The post Rousseau-inspired Raspberry Pi Zero LED piano visualiser appeared first on Raspberry Pi.

Bind MIDI inputs to LED lights using a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/midi-controlled-led-lights-raspberry-pi/

Blinky lights and music created using a Raspberry Pi? Count us in! When Aaron Chambers shared his latest project, Py-Lights, on Reddit, we were quick to ask for more information. And here it is:

Controlling lights with MIDI commands

Tentatively titled Py-Lights, Aaron’s project allows users to assign light patterns to MIDI actions, creating a rather lovely blinky light display.

For his example, Aaron connected a MIDI keyboard to a strip of RGB LEDs via a Raspberry Pi that ran his custom Python code.

Aaron explains on Reddit:

The program I made lets me bind “actions” (strobe white, flash blue, disable all colors, etc.) to any input and any input type (hold, knob, trigger, etc.). And each action type has a set of parameters that I bind to the input. For example, I have a knob that changes a strobe’s intensity, and another knob that changes its speed.

The program updates each action, pulls its resulting color, and adds them together, then sends that to the LEDs. I’m using rtmidi for reading the midi device and pigpio for handling the LED output.
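As a hedged illustration of that pairing, and not Aaron’s actual code, the skeleton below reads MIDI events with python-rtmidi and drives PWM on a GPIO pin with pigpio. The pin numbers and the note-to-colour mapping are arbitrary assumptions.

import time
import rtmidi
import pigpio

RED, BLUE = 17, 22                  # assumed GPIO pins driving LED channels
pi = pigpio.pi()
midi_in = rtmidi.MidiIn()
midi_in.open_port(0)                # first available MIDI input device

while True:
    event = midi_in.get_message()   # returns (message, delta_time) or None
    if event:
        message, _delta = event
        if message[0] & 0xF0 == 0x90:       # note-on
            _status, note, velocity = message
            # map velocity (0-127) onto an 8-bit PWM duty cycle
            pi.set_PWM_dutycycle(RED if note % 2 else BLUE, velocity * 2)
    time.sleep(0.001)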

Aaron has updated the Py-Lights GitHub repo for the project to include a handy readme file and a more stable build.

The post Bind MIDI inputs to LED lights using a Raspberry Pi appeared first on Raspberry Pi.

Make your own custom LEDs using hot glue!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/homemade-custom-leds-using-hot-glue/

Tired of using the same old plastic LEDs in your projects? It’s time to grab a hot glue gun and some confectionary moulds to create your own custom LEDs!

make your own custom LEDs for Raspberry Pi

Blinky LEDs!

Lighting up an LED is the standard first step into the world of digital making with a Raspberry Pi. For example, at our two-day Picademy training events, budding Raspberry Pi Certified Educators are shown the ropes of classroom digital making by learning how to connect an LED to a Pi and use code to make it blink.

Anastasia Hanneken on Twitter

Blinking LED Light @Raspberry_Pi #picademy! https://t.co/zhTODYsBxp

And while LEDs come in various sizes, they’re all pretty much the same shape: small, coloured domes of plastic with pointy legs that always manage to draw blood when I grab them from the depths of my maker drawer.

So why not do away with the boring and make some new LEDs based on your favourite characters and shapes?

Making custom LEDs with a whole lotta hot glue

The process of creating your own custom LEDs is pretty simple, but it’s not without its risk — namely, burnt fingertips and sizzled LEDs! So be careful when making these, and supervise young children throughout the process.

The moulds

I used flexible ice cube trays, but you could also use chocolate moulds. As long as the mould is flexible, this should work — I haven’t tried hard plastic moulds, so I can’t make any promises for those. Also be sure to test whether your mould will withstand the heat of the hot glue!

Check your LEDs

Before you submerge your LEDs in hot glue, check to make sure they work. The easiest way to do this is to set up a testing station using a Pi, a breadboard, some jumper wires, and a resistor. To save having to write code, I used the 3V3 pin and a ground pin.


Remember, the shorter of the two legs connects to the ground pin, while the longer goes to 3V3. If you mix this up, you may end up with a fried LED like this poor LEGO man.


Everything isn’t awesome.

Once you’ve confirmed that your LED works, bend its legs to make it easier to insert it into the glue.

Glue

Next, grab a hot glue gun and fill a mould. The glue will take a while to cool, so you have some time to make sure that all nooks and crannies are filled before you insert an LED.


Tip: test a corner of your mould with the tip of your glue gun to check how heat-resistant it is. One of my moulds didn’t enjoy heat and began to bubble.

Once your mould is properly filled, push an LED into the glue, holding on to the legs to keep your fingertips safe. Have a wiggle around to find the bottom and sides of your mould and ensure that your LED is in the centre.


Pick a colour best suited to your mould. You could try using multiple LEDs on larger moulds to introduce more colours!

You may notice that the LED tries to sink a little and the legs begin to drop. Keep an eye out and adjust them if you need to. They’ll stop moving once the glue begins to set.


These took about ten minutes to cool down.

Be patient

Don’t rush. The hot glue will take time to cool down, especially if you’re using a larger mould like the one for this Stormtrooper helmet.


Here I used a gumdrop LED, which is larger than your standard maker kit LED.

You’ll know that the glue has set when the shape pulls away easily from the mould. It should just pop out when it’s ready.


Pop!

Light it up

Test your new custom LED one more time on your testing rig to ensure you haven’t damaged the connections.


As with all LEDs, they look better in the dark (and terrible when you try to take a photo of them), so try testing them in a dim room or at night. You could also use a box to create a small testing lab if you’re planning to make a lot of these.


Now it’s your turn

What custom LED would you want to make? How would you use it in your next project? And what other fun hacks have you used to augment tech for your builds?

The post Make your own custom LEDs using hot glue! appeared first on Raspberry Pi.

AWS Resources Addressing Argentina’s Personal Data Protection Law and Disposition No. 11/2006

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/aws-and-resources-addressing-argentinas-personal-data-protection-law-and-disposition-no-112006/

We have two new resources to help customers address their data protection requirements in Argentina. These resources specifically address the needs outlined under the Personal Data Protection Law No. 25.326, as supplemented by Regulatory Decree No. 1558/2001 (“PDPL”), including Disposition No. 11/2006. For context, the PDPL is an Argentine federal law that applies to the protection of personal data, including during transfer and processing.

A new webpage focused on data privacy in Argentina features FAQs, helpful links, and whitepapers that provide an overview of PDPL considerations, as well as our security assurance frameworks and international certifications, including ISO 27001, ISO 27017, and ISO 27018. You’ll also find details about our Information Request Report and the high bar of security at AWS data centers.

Additionally, we’ve released a new workbook that offers a detailed mapping as to how customers can operate securely under the Shared Responsibility Model while also aligning with Disposition No. 11/2006. The AWS Disposition 11/2006 Workbook can be downloaded from the Argentina Data Privacy page or directly from this link. Both resources are also available in Spanish from the Privacidad de los datos en Argentina page.

Want more AWS Security news? Follow us on Twitter.

 

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Build Your Own weather station kit assembled

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools around the world that had applied to be part of our weather station programme. The original kit included a special HAT that allows the Pi to collect weather data with a set of sensors.


The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, but we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Build Your Own Raspberry Pi weather station

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.
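The database step is simpler than it sounds. As a hedged sketch of the pattern the guide describes, the snippet below logs one reading a minute to SQLite; read_temperature() is a hypothetical placeholder for whichever sensor driver your build uses.

import sqlite3
import time

def read_temperature():
    # Hypothetical stand-in for your real sensor code (BME280, DS18B20, ...)
    return 21.5

conn = sqlite3.connect("weather.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, temp_c REAL)")

while True:
    reading = (time.time(), read_temperature())
    conn.execute("INSERT INTO readings VALUES (?, ?)", reading)
    conn.commit()
    time.sleep(60)                 # one reading per minute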

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced both in terms of electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can build a functioning weather station without soldering with our guide, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.

Kernel 4.17 released

Post Syndicated from corbet original https://lwn.net/Articles/756373/rss

Linus has released the 4.17 kernel, which will indeed be called “4.17”:

No, I didn’t call it 5.0, even though all the git object count numerology was in place for that. It will happen in the not _too_ distant future, and I’m told all the release scripts on kernel.org are ready for it, but I didn’t feel there was any real reason for it.

Headline features in this release include improved load estimation in the CPU scheduler, raw BPF tracepoints, lazytime support in the XFS filesystem, full in-kernel TLS protocol support, histogram triggers for tracing, mitigations for the latest Spectre variants, and, of course, the removal of support for eight unloved processor architectures.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for third-party services like payment providers or social networks. There doesn’t seem to be an agreed-upon solution.

I’ve previously argued against the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And while you can set environment variables via a bash script, you’d have to store that script somewhere. In fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials with a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do that is via git-crypt. Its encryption is “transparent”: it supports diff and encrypts and decrypts on the fly, so once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS Path), which generates a key. Then you specify your .gitattributes, e.g. like that:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and backup the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to backup your secret key (that’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.

The git-crypt authors say it shines when it comes to encrypting just a few files in an otherwise public repo, and they recommend looking at git-remote-gcrypt otherwise. But since there are often non-sensitive parts of environment-specific configurations, you may not want to encrypt everything, and I think it’s perfectly fine to use git-crypt even in a separate-repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to keep the environment configurations in the same repo, especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is: how do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that could vary across environments but can have default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Friday Squid Blogging: Do Cephalopods Contain Alien DNA?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/friday_squid_bl_627.html

Maybe not DNA, but biological somethings.

“Cause of Cambrian Explosion — Terrestrial or Cosmic?”:

Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion — life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.

Two commentaries.

This is almost certainly not true.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks, which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native Python constructs and tools.

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, as you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will be passed to the script as command-line arguments, and the environment variables above will be auto-populated. When we call fit, the input channels we pass are exposed through the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers repository on GitHub. One of my favorite features of the new container is built-in ChainerMN for easy multi-node distribution of your Chainer training jobs.

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS Greengrass ML with Chainer

AWS Greengrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So now, Greengrass ML provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any Greengrass-enabled device using Greengrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

1834: The First Cyberattack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/1834_the_first_.html

Tom Standage has a great story of the first cyberattack against a telegraph network.

The Blanc brothers traded government bonds at the exchange in the city of Bordeaux, where information about market movements took several days to arrive from Paris by mail coach. Accordingly, traders who could get the information more quickly could make money by anticipating these movements. Some tried using messengers and carrier pigeons, but the Blanc brothers found a way to use the telegraph line instead. They bribed the telegraph operator in the city of Tours to introduce deliberate errors into routine government messages being sent over the network.

The telegraph’s encoding system included a “backspace” symbol that instructed the transcriber to ignore the previous character. The addition of a spurious character indicating the direction of the previous day’s market movement, followed by a backspace, meant the text of the message being sent was unaffected when it was written out for delivery at the end of the line. But this extra character could be seen by another accomplice: a former telegraph operator who observed the telegraph tower outside Bordeaux with a telescope, and then passed on the news to the Blancs. The scam was only uncovered in 1836, when the crooked operator in Tours fell ill and revealed all to a friend, who he hoped would take his place. The Blanc brothers were put on trial, though they could not be convicted because there was no law against misuse of data networks. But the Blancs’ pioneering misuse of the French network qualifies as the world’s first cyber-attack.

Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience in scaling a sales team and who has an excellent track record for exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, as well as able to create and develop a highly-motivated, success oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years later and our world looks quite different. You’ll have some fantastic assets to work with:

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
  • Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While being a newer product on the market, it already has over 100,000 IT and developers signed up as well as an ecosystem building up around it.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

  • We have a great team, but we are in the process of expanding and we need to develop a structure that will easily scale and provide the most success to drive revenue.
  • We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
  • We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
  • We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
  • We want someone to manage our Customer Success program.

So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact of selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as build out our customer success program.

Leadership: We have an experienced team who’s brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader and compassionate about developing and supporting your team.

Data driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

  • Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
  • Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
  • Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
  • Acquire and develop the appropriate sales tools to make the team efficient in their daily work flow.
  • Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
  • Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
  • This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.

Requirements:

  • 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
  • Experience in developing skill sets and providing career growth and opportunities through advancement of team members.
  • Background in selling SaaS technologies with a strong track record of success.
  • Strong presentation and communication skills.
  • Must be able to travel occasionally nationwide.
  • BA/BS degree required

Think you want to join us on this adventure?
Send an email to jobscontact@backblaze.com with the subject “Director of Sales.” (Recruiters and agencies, please don’t email us.) Include a resume and answer these two questions:

  1. How would you approach evaluating the current sales team and what is your process for developing a growth strategy to scale the team?
  2. What are the goals you would set for yourself in the 3 month and 1-year timeframes?

Thank you for taking the time to read this and I hope that this sounds like the opportunity for which you’ve been waiting.

Backblaze is an Equal Opportunity Employer.

The post Hiring a Director of Sales appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon Neptune Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-neptune-generally-available/

Amazon Neptune is now Generally Available in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. At the core of Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latencies. Neptune supports two popular graph models, Property Graph and RDF, through Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune can be used to power everything from recommendation engines and knowledge graphs to drug discovery and network security. Neptune is fully-managed with automatic minor version upgrades, backups, encryption, and fail-over. I wrote about Neptune in detail for AWS re:Invent last year and customers have been using the preview and providing great feedback that the team has used to prepare the service for GA.

Now that Amazon Neptune is generally available, there are a few changes from the preview:

Launching an Amazon Neptune Cluster

Launching a Neptune cluster is as easy as navigating to the AWS Management Console and clicking create cluster. Of course you can also launch with CloudFormation, the CLI, or the SDKs.
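If you prefer to script it, a minimal sketch with boto3 might look like the following; the identifiers and instance class are placeholders, and Neptune reuses the RDS-style management API.

import boto3

neptune = boto3.client("neptune")

# Create the cluster, then add a primary instance to it
neptune.create_db_cluster(DBClusterIdentifier="my-graph-cluster", Engine="neptune")
neptune.create_db_instance(
    DBInstanceIdentifier="my-graph-instance",
    DBInstanceClass="db.r4.large",
    Engine="neptune",
    DBClusterIdentifier="my-graph-cluster",
)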

You can monitor your cluster health and the health of individual instances through Amazon CloudWatch and the console.

Additional Resources

We’ve created two repos with some additional tools and examples here. You can expect continuous development on these repos as we add additional tools and examples.

  • Amazon Neptune Tools Repo
    This repo has a useful tool for converting GraphML files into Neptune compatible CSVs for bulk loading from S3.
  • Amazon Neptune Samples Repo
    This repo has a really cool example of building a collaborative filtering recommendation engine for video game preferences.

Purpose Built Databases

There’s an industry trend where we’re moving more and more onto purpose-built databases. Developers and businesses want to access their data in the format that makes the most sense for their applications. As cloud resources make transforming large datasets easier with tools like AWS Glue, we have a lot more options than we used to for accessing our data. With tools like Amazon Redshift, Amazon Athena, Amazon Aurora, Amazon DynamoDB, and more, we get to choose the best database for the job, or even enable entirely new use-cases. Amazon Neptune is perfect for workloads where the data is highly connected across data-rich edges.

I’m really excited about graph databases and I see a huge number of applications. Looking for ideas of cool things to build? I’d love to build a web crawler in AWS Lambda that uses Neptune as the backing store. You could further enrich it by running Amazon Comprehend or Amazon Rekognition on the text and images found and creating a search engine on top of Neptune.

As always, feel free to reach out in the comments or on twitter to provide any feedback!

Randall

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. In this way, the native integration between SNS and Amazon CloudWatch provides visibility into the number of messages delivered, as well as the number of messages filtered out.

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.
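Attaching a filter policy is a single API call. As a brief illustration with boto3 (the subscription ARN and the policy itself are placeholder values):

import json
import boto3

sns = boto3.client("sns")
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:my-topic:subscription-id",
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"event_type": ["order_placed"]}),
)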

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.
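To retrieve any of these metrics programmatically rather than through the console, a short boto3 sketch like this works (the topic name is a placeholder):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SNS",
    MetricName="NumberOfNotificationsFilteredOut",
    Dimensions=[{"Name": "TopicName", "Value": "my-topic"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                    # 5-minute buckets
    Statistics=["Sum"],
)
print(stats["Datapoints"])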

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.

Summary

SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering are available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.

For more information, see: