Tag Archives: temperature

timeShift(GrafanaBuzz, 1w) Issue 14

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/22/timeshiftgrafanabuzz-1w-issue-14/

Summer is officially in the rear-view mirror, but we at Grafana Labs are excited. Next week, the team will gather in Stockholm, Sweden, where we’ll be discussing Grafana 5.0 and GrafanaCon EU, and setting other goals. If you’re attending Percona Live Europe 2017 in Dublin, be sure to catch Grafana developer Daniel Lee on Tuesday, September 26. He’ll be showing off the new MySQL data source and a sneak peek of Grafana 5.0.

And with that – we hope you enjoy this issue of TimeShift!


Latest Release

Grafana 4.5.2 is now available! Various fixes to the Graphite data source, HTTP API, and templating.

To see details on what’s been fixed in the newest version, please see the release notes.

Download Grafana 4.5.2 Now


From the Blogosphere

A Monitoring Solution for Docker Hosts, Containers and Containerized Services: Stefan was searching for an open source, self-hosted monitoring solution. With an ever-growing number of open source TSDBs, Stefan outlines why he chose Prometheus and provides a rundown of how he’s monitoring his Docker hosts, containers and services.

Real-time API Performance Monitoring with ES, Beats, Logstash and Grafana: As APIs become a centerpiece for businesses, monitoring API performance is extremely important. Hiren recently configured real time API response time monitoring for a project and shares his implementation plan and configurations.

Monitoring SSL Certificate Expiry in GCP and Kubernetes: This article discusses how to use Prometheus and Grafana to automatically monitor SSL certificates in use by load balancers across GCP projects.

Node.js Performance Monitoring with Prometheus: This is a good primer for monitoring in general. It discusses what monitoring is, important signals to know, instrumentation, and things to consider when selecting a monitoring tool.

DIY Dashboard with Grafana and MariaDB: Mark was interested in testing out the new beta MySQL support in Grafana, so he wrote a short article on how he is using Grafana with MariaDB.

Collecting Temperature Data with Raspberry Pi Computers: Many of us use monitoring for tracking mission-critical systems, but setting up environment monitoring can be a fun way to improve your programming skills as well.


GrafanaCon EU CFP is Open

Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes. The proposals are rolling in, but we are happy to save a speaking slot for you!

I’d Like to Speak at GrafanaCon


Grafana Plugins

There were a lot of plugin updates to highlight this week, many of which were due to changes in Grafana 4.5. It’s important to keep your plugins up to date, since bug fixes and new features are added frequently. We’ve made the process of installing and updating plugins simple. On an on-prem instance, use the grafana-cli tool; on Hosted Grafana, install and update with one click.

NEW PLUGIN

LinkSmart HDS Data Source – The LinkSmart Historical Data Store is a new Grafana data source plugin. LinkSmart is an open source IoT platform for developing IoT applications. IoT applications need to deal with large amounts of data produced by a growing number of sensors and other devices. The Historical Data Store is for storing, querying, and aggregating (time-series) sensor data.

Install Now

UPDATED PLUGIN

Simple JSON Data Source – This plugin received a bug fix for the query editor.

Update Now

UPDATED PLUGIN

Stagemonitor Elasticsearch App – Numerous small updates, and the plugin version was updated to match the Stagemonitor version number.

Update Now

UPDATED PLUGIN

Discrete Panel – Update to fix breaking change in Grafana 4.5.

Update Now

UPDATED PLUGIN

Status Dot Panel – Minor HTML Update in this version.

Update Now

UPDATED PLUGIN

Alarm Box Panel – This panel was updated to fix breaking changes in Grafana 4.5.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Sven Klemm opened a PR for adding a new Postgres data source and has been very quick at implementing proposed changes. The Postgres data source is on our roadmap for Grafana 5.0 so this PR really helps. Thanks Sven!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Glad you’re finding Grafana useful! Curious about that annotation just before midnight 🙂

We Need Your Help

Last week we announced an experiment we’re conducting, and we need your help! Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

I Want to Help


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


What do you think?

What would you like to see here? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

The Weather Station and the eclipse

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/weather-station-eclipse/

As everyone knows, one of the problems with the weather is that it can be difficult to predict a long time in advance. In the UK we’ve had stormy conditions for weeks but, of course, now that I’ve finished my lightning detector, everything has calmed down. If you’re planning to make scientific measurements of a particular phenomenon, patience is often required.

Oracle Weather Station

Wake STEM ECH get ready to safely observe the eclipse

In the path of the eclipse

Fortunately, this wasn’t a problem for Mr Burgess and his students at Wake STEM Early College High School in Raleigh, North Carolina, USA. They knew exactly when the event they were interested in studying was going to occur: they were going to use their Raspberry Pi Oracle Weather Station to monitor the progress of the 2017 solar eclipse.

Wake STEM EC HS on Twitter

Through the @Celestron telescope #Eclipse2017 @WCPSS via @stemburgess

Measuring the temperature drop

The Raspberry Pi Oracle Weather Stations are always active and recording data, so all the students needed to do was check that everything was connected and working. That left them free to enjoy the eclipse, and take some amazing pictures like the one above.

You can see from the data how the changes in temperature lag behind the solar events – this makes sense, as it takes a while for the air to cool down. When the sun starts to return, the temperature rise continues on its pre-eclipse trajectory.
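
The weather station software stores its readings in a local MySQL database, so you can pull the temperature series around the eclipse window yourself and look for that lag. The sketch below is illustrative only: the database, table, and column names (weather, WEATHER_MEASUREMENT, AIR_TEMPERATURE, CREATED) and the login details are assumptions, so check them against your own installation.

    # Illustrative sketch only: pull air-temperature readings around the eclipse
    # window from the weather station's local MySQL database. The database, table,
    # and column names below are assumptions; check your own installation.
    import pymysql

    conn = pymysql.connect(host="localhost", user="pi", password="raspberry",
                           database="weather")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT CREATED, AIR_TEMPERATURE "
                "FROM WEATHER_MEASUREMENT "
                "WHERE CREATED BETWEEN %s AND %s "
                "ORDER BY CREATED",
                ("2017-08-21 13:00:00", "2017-08-21 16:30:00"),
            )
            for created, temp in cur.fetchall():
                print(created, temp)
    finally:
        conn.close()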

Oracle Weather Station

Weather station data 21st Aug: the yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Reading Mr Burgess’ description, I’m feeling rather jealous. Being in the path of the eclipse sounds amazing: “In North Carolina we experienced 93% coverage, so a lot of sunlight was still shining, but the landscape took on an eerie look. And there was a cool wind like you’d experience at dusk, not at 2:30 pm on a hot summer day. I was amazed at the significant drop in temperature that occurred in a small time frame.”

Temperature drop during Eclipse Oracle Weather Station.

Close up of data showing temperature drop as recorded by the Raspberry Pi Oracle Weather Station. The yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Weather Station in the classroom

“I’ve been preparing for the solar eclipse for almost two years, with the weather station arriving early last school year. I did not think about temperature data until I read about citizen scientists on a NASA website,” explains Mr Burgess, who is now in his second year of working with the Raspberry Pi Oracle Weather Station. Around 120 ninth-grade students (ages 14-15) have been involved with the project so far. “I’ve found that students who don’t have a strong interest in meteorology find it interesting to look at real data and figure out trends.”

Wake STEM EC Raspberry Pi Oracle Weather Station installation


As many schools have discovered, Mr Burgess found that the biggest challenge with the Weather Station project “was finding a suitable place to install the weather station in a place that could get power and Ethernet”. To help with this problem, we’ve recently added two new guides to help with installing the wind sensors outside and using WiFi to connect the kit to the Internet.

Raspberry Pi Oracle Weather Station

If you want to keep up to date with all the latest Raspberry Pi Oracle Weather Station activities undertaken by our network of schools around the world, make sure you regularly check our weather station forum. Meanwhile, everyone at Wake STEM ECH is already starting to plan for their next eclipse on Monday, April 8, 2024. I wonder if they’d like some help with their Weather Station?

The post The Weather Station and the eclipse appeared first on Raspberry Pi.

Hunting for life on Mars assisted by high-altitude balloons

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eclipse-high-altitude-balloons/

Will bacteria-laden high-altitude balloons help us find life on Mars? Today’s eclipse should bring us closer to an answer.

NASA Bacteria Balloons Raspberry Pi HAB Life on Mars

image c/o NASA / Ames Research Center / Tristan Caro

The Eclipse Ballooning Project

Having learned of the Eclipse Ballooning Project set to take place today across the USA, a team at NASA couldn’t miss the opportunity to harness the high-flying project for their own experiments.

NASA Bacteria Balloons Raspberry Pi HAB Life on Mars

The Eclipse Ballooning Project invited students across the USA to aid in the launch of 50+ high-altitude balloons during today’s eclipse. Each balloon is equipped with its own Raspberry Pi and camera for data collection and live video-streaming.

High-altitude ballooning, or HAB as it’s often referred to, has become a popular activity within the Raspberry Pi community. The lightweight nature of the device allows for high ascent, and its Camera Module enables instant visual content collection.

Life on Mars

image c/o Montana State University

The Eclipse Ballooning Project team, headed by Angela Des Jardins of Montana State University, was contacted by Jim Green, Director of Planetary Science at NASA, who hoped to piggyback on the project to run tests on bacteria in the Mars-like conditions the balloons would encounter near space.

Into the stratosphere

At around -35 degrees Fahrenheit, with thinner air and harsher ultraviolet radiation, the conditions in the upper part of the Earth’s stratosphere are comparable to those on the surface of Mars. And during the eclipse, the moon will block some UV rays, making the environment in our stratosphere even more similar to the Martian one – ideal for NASA’s experiment.

So that the students taking part in the Eclipse Ballooning Project could help the scientists out, NASA sent them some small metal tags.

NASA Bacteria Balloons Raspberry Pi HAB Life on Mars

These tags contain samples of a kind of bacterium known as Paenibacillus xerothermodurans. Upon their return to ground, the bacteria will be tested to see whether and how the high-altitude conditions affected them.

Life on Mars

Paenibacillus xerothermodurans is one of the most resilient bacterial species we know. The team at NASA wants to discover how the bacteria react to their flight in order to learn more about whether life on Mars could possibly exist. If the low temperature, UV rays, and air conditions cause the bacteria to mutate or indeed die, we can be pretty sure that the existence of living organisms on the surface of Mars is very unlikely.

Life on Mars

What happens to the bacteria on the spacecraft and rovers we send to space? This experiment should provide some answers.

The eclipse

If you’re in the US, you might have a chance to witness the full solar eclipse today. And if you’re planning to watch, please make sure to take all precautionary measures. In a nutshell, don’t look directly at the sun. Not today, not ever.

If you’re in the UK, you can observe a partial eclipse, if the clouds decide to vanish. And again, take note of safety measures so you don’t damage your eyes.

Life on Mars

You can also watch a live-stream of the eclipse via the NASA website.

If you’ve created an eclipse-viewing Raspberry Pi project, make sure to share it with us. And while we’re talking about eclipses and balloons, check here for our coverage of the 2015 balloon launches coinciding with the UK’s partial eclipse.

The post Hunting for life on Mars assisted by high-altitude balloons appeared first on Raspberry Pi.

TV Box Seller Emails Sky TV Bosses With ‘Pirate’ Offer, Gets Sued for $1m

Post Syndicated from Andy original https://torrentfreak.com/tv-box-seller-emails-sky-tv-bosses-with-pirate-offer-gets-sued-for-1m-170804/

After relatively quiet treatment in the media, last year the press in New Zealand began reporting on the booming ‘pirate’ set-top box business sweeping the world.

Often based around legal Kodi software boosted with third-party addons, the devices are known for providing free movies, TV shows, and sports.

Last November, ‘My Box NZ’ owner Krish Reddy, who said he would take on Sky in its own backyard with his custom streaming boxes, hit the headlines. The 27-year-old told NZHerald that “it seemed like a great idea so we decided to do it ourselves.”

The boxes offered some local free-to-air channels but also the all-important premium offerings from Sky, including Sky Movies and Sky Sports, an expensive proposition for an official subscriber.

“Why pay $80 minimum per month for Sky when for one payment you can have it free for good?” Reddy’s advertising said.

Reddy was confident in the abilities of his product but was also confident he wasn’t breaking the law.

“I don’t see why [Sky] would contact me but if they do contact me and … if there’s something of theirs that they feel I’ve unlawfully taken then yeah … but as it stands I don’t [have any concerns],” he told the Herald.

As things moved on, Reddy’s business really took off. He admitted to having sold 8,000 of the devices, and then in April this year, Sky appeared to run out of patience. In a letter from its lawyers, the pay TV company said Reddy’s devices breached copyright law and the Fair Trading Act. Reddy responded by calling the TV giant “a playground bully” and denied again that he was breaking the law.

“From a legal perspective, what we do is completely within the law. We advertise Sky television channels being available through our website and social media platforms as these are available via streams which you can find through My Box,” he said.

“The content is already available, I’m not going out there and bringing the content so how am I infringing the copyright… the content is already there, if someone uses the box to search for the content, that’s what it is.”

Stuff reports that the initial compensation demand from Sky against Reddy’s company My Box runs to NZD$1.4m (US$1m), an amount that could “rise by millions” by the time a judgment is reached.

“They have given us until September 24 to respond. We are not going to sit and take it,” Reddy told the publication. “How many people can say they went up against a multimillion dollar giant like Sky?”

And it seems that Reddy is absolutely determined to fight back. Earlier this year he said that his father always encouraged him as a child to seek out the big guy for a fight, something that is now playing out with one of the world’s biggest broadcasters.

“[Sky’s] point of view is they own copyright and I’m destroying the market by giving people content for free. To me it is business; I have got something that is new … that’s competition,” he said.

In Europe, where these kinds of cases have already been tested at the highest level, comments like these would be extremely ill-advised and enough to give any defending lawyer a high temperature, but Reddy really doesn’t seem to care.

In fact, a bulk email he sent out to 50,000 people advertising his product as “being better than Sky” actually found the inboxes of 50 Sky TV staff and directors. He believes this triggered the legal action from the company.

While Reddy was on Sky’s radar long before the mailshot, the blatancy of his advertising and its targets won’t have helped his case one bit. Sky, for its part, is determined to get a ruling against a large player and Reddy seems the perfect catch.

“Anyone selling these boxes are within our sights. You have got to go after the big fish first,” said Sky spokeswoman Kirsty Way.

No case like this has ever gone to court in New Zealand so it could be important for setting the ground rules on several aspects of copyright law, including the making available right.

In addition to prosecutions, Way told Stuff that it could also be possible to introduce site-blocking laws such as those already in place in Australia and the UK. These would aim to render Kodi-powered devices less effective at providing copyrighted content from unauthorized sources.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How To Get Your First 1,000 Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-1000-customers/

PR for getting your first 1000 customers

If you launch your startup and no one knows, did you actually launch? As mentioned in my last post, our initial launch target was to get 1,000 people to use our service. But how do you get even 1,000 people to sign up for your service when no one knows who you are?

There are a variety of methods to attract your first 1,000 customers, but launching with the press is my favorite. I’ll explain why and how to do it below.

Paths to Attract Your First 1,000 Customers

Social following: If you have a massive social following, those people are a reasonable target for what you’re offering. In particular if your relationship with them is one where they would buy something you recommend, this can be one of the easiest ways to get your initial customers. However, building this type of following is non-trivial and often is done over several years.


Paid advertising: The advantage of paid ads is you have control over when they are presented and what they say. The primary disadvantage is they tend to be expensive, especially before you have your positioning, messaging, and funnel nailed.

Viral: There are certainly examples of companies that launched with a hugely viral video, blog post, or promotion. While fantastic if it happens, even if you do everything right, the likelihood of massive virality is minuscule and the conversion rate is often low.

Press: As I said, this is my favorite. You don’t need to pay a PR agency and can go from nothing to launched in a couple weeks. Press not only provides awareness and customers, but credibility and SEO benefits as well.

How to Pitch the Press

It’s easy: have a compelling story, find the right journalists, make their life easy, pitch, and follow up. Of course, each one of those has some nuance, so let’s dig in.

Have a compelling story

When you’ve been working for months on your startup, it’s easy to get lost in the minutiae when talking to others. Stories that a journalist will write about need to be something their readers will care about. Knowing what story to tell and how to tell it is part science and part art. Here’s how you can get there:

The basics of your story

Ask yourself the following questions, and write down the answers:

  • What are we doing? What product or service are we offering?
  • Why? What problem are we solving?
  • What is interesting or unique? Either about what we’re doing, how we’re doing it, or for whom we’re doing it.

“But my story isn’t that exciting”

Neither was announcing a data backup company, believe me. Look for angles that make it compelling. Here are some:

  • Did someone on your team do something major before? (build a successful company/product, create some innovation, market something we all know, etc.)
  • Do you have an interesting investor or board member?
  • Is there a personal story that drove you to start this company?
  • Are you starting it in a unique place?
  • Did you come upon the idea in a unique way?
  • Can you share something people want to know that’s not usually shared?
  • Are you partnered with a well-known company?
  • …is there something interesting/entertaining/odd/shocking/touching/etc.?

It doesn’t get much less exciting than “We’re launching a company that will back up your data.” But there were still a lot of compelling stories:

  • Founded by serial entrepreneurs, bootstrapped a capital-intensive company, committed to each other for a year without salary.
  • Challenging the way that every backup company before was set up by not asking customers to pick and choose files to back up.
  • Designing our own storage system.
  • Etc. etc.

For the initial launch, we focused on “unlimited for $5/month” and statistics from a survey we ran with Harris Interactive that said that 94% of people did not regularly back up their data.

It’s an old adage that “Everyone has a story.” Regardless of what you’re doing, there is always something interesting to share. Dig for that.

The headline

Once you’ve captured what you think the interesting story is, you’ve got to boil it down. Yes, you need the elevator pitch, but this is shorter…it’s the headline pitch. Write the headline that you would love to see a journalist write.


Now comes the part where you have to be really honest with yourself: if you weren’t involved, would you care?

The “Techmeme Test”

One way I try to ground myself is what I call the “Techmeme Test”. Techmeme lists the top tech articles. Read the headlines. Imagine the headline you wrote in the middle of the page. If you weren’t involved, would you click on it? Is it more or less compelling than the others? Much of tech news is dominated by the largest companies. If you want to get written about, your story should be more compelling. If not, go back above and explore your story some more.

Embargoes, exclusives and calls-to-action

Journalists write about news. Thus, if you’ve already announced something and are then pitching a journalist to cover it, unless you’re giving her something significant that hasn’t been said, it’s no longer news. As a result, there are ‘embargoes’ and ‘exclusives’.

Embargoes: An embargo simply means that you are sharing news with a journalist that they need to keep private until a certain date and time.

If you’re Apple, this may be a formal and legal document. In our case, it’s as simple as saying, “Please keep embargoed until 4/13/17 at 8am California time.” in the pitch. Some sites explicitly will not keep embargoes; for example The Information will only break news. If you want to launch something later, do not share information with journalists at these sites. If you are only working with a single journalist for a story, and your announcement time is flexible, you can jointly work out a date and time to announce. However, if you have a fixed launch time or are working with a few journalists, embargoes are key.

Exclusives: An exclusive means you’re giving something specifically to that journalist. Most journalists love an exclusive as it means readers have to come to them for the story. One option is to give a journalist an exclusive on the entire story. If it is your dream journalist, this may make sense. Another option, however, is to give exclusivity on certain pieces. For example, for your launch you could give an exclusive on funding detail & a VC interview to a more finance-focused journalist and insight into the tech & a CTO interview to a more tech-focused journalist.

Call-to-Action: With our launch we gave TechCrunch, Ars Technica, and SimplyHelp URLs that gave the first few hundred of their readers access to the private beta. Once those first few hundred users from each site downloaded, the beta would be turned off.

Thus, we used a combination of embargoes, exclusives, and a call-to-action during our initial launch to be able to brief journalists on the news before it went live, give them something they could announce as exclusive, and provide a time-sensitive call-to-action to the readers so that they would actually sign up and not just read and go away.

How to Find the Most Authoritative Sites / Authors

“If a press release is published and no one sees it, was it published?” Perhaps the time existed when sending a press release out over the wire meant journalists would read it and write about it. That time has long been forgotten. Over 1,000 unread press releases are published every day. If you want your compelling story to be covered, you need to find the handful of journalists that will care.

Determine the publications

Find the publications that cover the type of story you want to share. If you’re in tech, Techmeme has a leaderboard of publications ranked by leadership and presence. This list will tell you which publications are likely to have influence. Visit the sites and see if your type of story appears on their site. But once you’ve determined the publication, do NOT send a pitch to their “[email protected]” or “[email protected]” email addresses. In all the times I’ve done that, I have never had a single response. Those email addresses are likely on every PR, press release, and spam list and unlikely to get read. Instead…

Determine the journalists

Once you’ve determined which publications cover your area, check which journalists are doing the writing. Skim the articles and search for keywords and competitor names.


Identify one primary journalist at the publication that you would love to have cover you, and secondary ones if there are a few good options. If you’re not sure which one should be the primary, consider a few tests:

  • Do they truly seem to care about the space?
  • Do they write interesting/compelling stories that ‘get it’?
  • Do they appear on the Techmeme leaderboard?
  • Do their articles get liked/tweeted/shared and commented on?
  • Do they have a significant social presence?

Leveraging Google

Google author search by date

In addition to Techmeme, or if you aren’t in the tech space, Google will become a must-have tool for finding the right journalists to pitch. Below the search box you will find a number of tabs. Click on Tools and change the Any time setting to Custom range. I like to use the past six months to ensure I find authors that are actively writing about my market. I start with All results. This will return a combination of product sites and articles, depending upon your search term.

Scan for articles and click on the link to see if the article is on topic. If it is, find the author’s name. Often, clicking on the author name will take you to a bio page that includes their Twitter, LinkedIn, and/or Facebook profile. Many times you will find their email address in the bio. You should collect all the information and add it to your outreach spreadsheet. Click here to get a copy. It’s always a good idea to comment on the article to start building awareness of your name. Another good idea is to Tweet or Like the article.

Next click on the News tab and set the same search parameters. You will get a different set of results. Repeat the same steps. Between the two searches you will have a list of authors that actively write for the websites that Google considers the most authoritative on your market.

How to find the most socially shared authors

Buzzsumo search for most shared by date

Your next step is to find the writers whose articles get shared the most socially. Go to Buzzsumo and click on the Most Shared tab. Enter search terms for your market as well as competitor names. Again, I like to use the past six months as the time range. You will get a list of articles that have been shared the most across Facebook, LinkedIn, Twitter, Pinterest, and Google+. In addition to finding the most shared articles and their authors, you can also see some of the Twitter users that shared the article. Many of those Twitter users are big influencers in your market, so it’s smart to start following and interacting with them as well as the authors.

How to Find Author Email Addresses

Some journalists publish their contact info right on the stories. For those that don’t, a bit of googling will often get you the email. For example, TechCrunch wrote a story a few years ago where they published all of their email addresses, which was in response to this new service that charges a small fee to provide journalist email addresses. Sometimes visiting their Twitter pages will link to a personal site, upon which they will share an email address.

Of course, all is not lost if you don’t find an email in the bio. There are two good services for finding emails: https://app.voilanorbert.com/ and https://hunter.io/. For Voila Norbert, enter the author’s name and the website you found their article on. The majority of the time you search for an author at a major publication, Norbert will return an accurate email address. If it doesn’t, try Hunter.io.

On Hunter.io, enter the domain name and click on Personal Only. Then scroll through the results to find the author’s email. I’ve found Norbert to be more accurate overall, but between the two you will find most major authors’ email addresses.

Email, by the way, is not necessarily the best way to engage a journalist. Many are avid Twitter users. Follow them and engage – that means read/retweet/favorite their tweets; reply to their questions, and generally be helpful BEFORE you pitch them. Later when you email them, you won’t be just a random email address.

Don’t spam

Now that you have all these email addresses (possibly thousands if you purchased a list) – do NOT spam. It is incredibly tempting to think “I could try to figure out which of these folks would be interested, but if I just email all of them, I’ll save myself time and be more likely to get some of them to respond.” Don’t do it.


First, you’ll want to tailor your pitch to the individual. Second, it’s a small world and you’ll be known as someone who spams – reputation is golden. Also, don’t call journalists. Unless you know them or they’ve said they’re open to calls, you’re most likely to just annoy them.

Build a relationship

Play the long game. You may be focusing just on the launch and hoping to get this one story covered, but if you don’t quickly flame out, you will have many more opportunities to tell interesting stories that you’ll want the press to cover. Be honest and don’t exaggerate.
When you have 500 users it’s tempting to say, “We’ve got thousands!” Don’t. The good journalists will see through it and it’ll likely come back to bite you later. If you don’t know something, say “I don’t know but let me find out for you.” Most journalists want to write interesting stories that their readers will appreciate. Help them do that. Build deeper relationships with 5 – 10 journalists, rather than spamming thousands.

Stay organized

It doesn’t need to be complicated, but keep a spreadsheet that includes the name, publication, and contact info of the journalists you care about. Then, use it to keep track of who you’ve pitched, who’s responded, whether you’ve sent them the materials they need, and whether they intend to write/have written.

Make their life easy

Journalists have a million PR people emailing them, are actively engaging with readers on Twitter and in the comments, are tracking their metrics, are working their sources…and all the while needing to publish new articles. They’re busy. Make their life easy and they’re more likely to engage with yours.

Get to know them

Before sending them a pitch, know what they’ve written in the space. If you tell them how your story relates to ones they’ve written, it’ll help them put the story in context, and enable them to possibly link back to a story they wrote before.

Prepare your materials

Journalists will need somewhere to get more info (prepare a fact sheet), a URL to link to, and at least one image (ideally a few to choose from.) A fact sheet gives bite-sized snippets of information they may need about your startup or product: what it is, how big the market is, what’s the pricing, who’s on the team, etc. The URL is where their reader will get the product or more information from you. It doesn’t have to be live when you’re pitching, but you should be able to tell what the URL will be. The images are ones that they could embed in the article: a product screenshot, a CEO or team photo, an infographic. Scan the types of images included in their articles. Don’t send any of these in your pitch, but have them ready. Studies, stats, customer/partner/investor quotes are also good to have.

Pitch

A pitch has to be short and compelling.

Subject Line

Think back to the headline you want. Is it really compelling? Can you shorten it to a subject line? Include what’s happening and when. For Mike Arrington at TechCrunch, our first subject line was “Startup doing an ‘online time machine’”. Later I would include “launching June 6th”.

For John Timmer at Ars Technica, it was “Demographics data re: your 4/17 article”. Why? Because he wrote an article titled “WiFi popular with the young people; backups, not so much”. Since we had run a demographics survey on backups, I figured as a science editor he’d be interested in this additional data.

Body

A few key things about the body of the email. It should be short and to the point, no more than a few sentences. Here was my actual, original pitch email to John:

Hey John,

We’re launching Backblaze next week which provides a Time Machine-online type of service. As part of doing some research I read your article about backups not being popular with young people and that you had wished Accenture would have given you demographics. In prep for our invite-only launch I sponsored Harris Interactive to get demographic data on who’s doing backups and if all goes well, I should have that data on Friday.

Next week starts Backup Awareness Month (and yes, probably Clean Your House Month and Brush Your Teeth Month)…but nonetheless…good time to remind readers to backup with a bit of data?

Would you be interested in seeing/talking about the data when I get it?

Would you be interested in getting a sneak peek at Backblaze? (I could give you some invite codes for your readers as well.)

Gleb Budman        

CEO and Co-Founder

Backblaze, Inc.

Automatic, Secure, High-Performance Online Backup

Cell: XXX-XXX-XXXX

The Good: It said what we’re doing, why this relates to him and his readers, provides him information he had asked for in an article, ties to something timely, is clearly tailored for him, is pitched by the CEO and Co-Founder, and provides my cell.

The Bad: It’s too long.

I got better later. Here’s an example:

Subject: Does temperature affect hard drive life?

Hi Peter, there has been much debate about whether temperature affects how long a hard drive lasts. Following up on the Backblaze analyses of how long do drives last & which drives last the longest (that you wrote about) we’ve now analyzed the impact of heat on the nearly 40,000 hard drives we have and found that…

We’re going to publish the results this Monday, 5/12 at 5am California-time. Want a sneak peek of the analysis?

Timing

A common question is “When should I launch?” What day, what time? I prefer to launch on Tuesday at 8am California-time. Launching earlier in the week gives breathing room for the news to live longer. While your launch may be a single article posted and that’s that, if it ends up a larger success, earlier in the week allows other journalists (including ones who are in other countries) to build on the story. Monday announcements can be tough because the journalists generally need to have their stories finished by Friday, and while ideally everything is buttoned up beforehand, startups sometimes use the weekend as overflow before a launch.

The 8am California-time is because it allows articles to be published at the beginning of the day on the West Coast and around lunchtime on the East Coast. Any later and you risk it being past publishing time for the day. We used to launch at 5am in order to catch the morning on the East Coast, but it did not seem to have a significant benefit in coverage or impact, and it meant that the entire internal team needed to be up at 3am or 4am. Sometimes that’s critical, but I prefer not to burn the team out when it’s not.

Finally, try to stay clear of holidays, major announcements and large conferences. If Apple is coming out with their next iPhone, many of the tech journalists will be busy at least a couple days prior and possibly a week after. Not always obvious, but if you can, find times that are otherwise going to be slow for news.

Follow-up

There is a fine line between persistence and annoyance. I once had a journalist write me after we had an announcement that was covered by the press, “Why didn’t you let me know?! I would have written about that!” I had sent him three emails about the upcoming announcement to which he never responded.


Ugh. However, my takeaway from this isn’t that I should send 10 emails to every journalist. It’s that sometimes these things happen.

My general rule is 3 emails. If I’ve identified a specific journalist that I think would be interested and have a pitch crafted for her, I’ll send her the email ideally 2 weeks prior to the announcement. I’ll follow up a week later, and one more time 2 days prior. If she ever says, “I’m not interested in this topic,” I note it and don’t email her on that topic again.

If a journalist writes about us, I read the article and engage in the comments (or someone on our team, such as our social guy, @YevP, does). We’ll often promote the story through our social channels and email our employees, who may choose to share the story as well. This helps us, but it also helps the journalist get their story broader reach. Again, the goal is to build a relationship with the journalists in your space. If there’s something relevant to your customers that the journalist wrote, you’re providing a service to your customers AND helping the journalist get the word out about the article.

At times the stories also end up shared on sites such as Hacker News, Reddit, Slashdot, or become active conversations on Twitter. Again, we try to engage there and respond to questions (when we do, we are always clear that we’re from Backblaze.)

And finally, I’ll often send a short thank you to the journalist.

Getting Your First 1,000 Customers With Press

As I mentioned at the beginning, there is more than one way to get your first 1,000 customers. My favorite is working with the press to share your story. If you figure out your compelling story, find the right journalists, make their life easy, pitch, and follow up, you stand a high likelihood of getting coverage and customers. Better yet, that coverage will provide credibility for your company and, if done right, will establish you as a resource for the press in the future.

Like any muscle, this process takes working out. The first time may feel a bit daunting, but just take the steps one at a time. As you do this a few times, the process will get easier, and you’ll know who to reach out to and quickly determine which stories will be compelling.

The post How To Get Your First 1,000 Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A homebrew Pi kit for home brewing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/homebrew-beer-brewing-pi/

While the rest of us are forced to leave the house to obtain a tasty brew, beer master Christopher Aedo has incorporated a Raspberry Pi into his home brewing system for the ultimate ‘sit-back-and-relax’ homebrew home brew.

homebrew home brew Raspberry Pi

KEG! KEG! KEG! KEG!

I drink and I know things

Having brewed his own beer for several years, Christopher was no novice in the pursuit of creating the perfect pint*. He was already brewing 10 gallons at a time when he decided to go all electric with a Raspberry Pi. Inspiration struck when he stumbled upon the StrangeBrew Elsinore Java server, and he went to work planning the best setup for the job:

Before I could talk myself out of the project, I decided to start buying parts. My basic design was a Hot Liquor Tank (HLT) and boil kettle with 5500W heating elements in them, plus a mash tun with a false bottom. I would use a pump to recirculate the mash through a 50 foot stainless coil in the HLT (a “heat exchanger recirculating mash system”, known as HERMS). I would need a second pump to circulate the water in the HLT, and to help with transferring water to the mash tun. All of the electrical components would be controlled with a Raspberry Pi.

Homebrew hardware setup

First, he set up the electrical side of his homebrew system using The Electric Brewing Company’s walkthrough, swapping out the 12V solid-state relays for ones that can be switched by the Pi’s 3.3V GPIO. Aedo then implemented the temperature sensors and the controls for these relays. He used Hilitchi DS18B20 Waterproof Temperature Sensors connected to a 1-Wire bus and learned how to manage the relays in this tutorial.
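
As a rough illustration of how those two pieces fit together (this is not Christopher’s code), the sketch below reads a DS18B20 over the kernel’s 1-Wire interface and switches a relay from a GPIO pin; the pin number and temperature setpoint are placeholders.

    # Rough sketch, not the author's code: read a DS18B20 over 1-Wire and drive a
    # heating-element relay from a GPIO pin. Pin number and setpoint are placeholders.
    import glob
    import time
    import RPi.GPIO as GPIO

    RELAY_PIN = 17        # placeholder BCM pin wired to the relay input
    SETPOINT_C = 66.0     # placeholder mash temperature target

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

    def read_temp_c():
        # The w1-gpio/w1-therm kernel modules expose each sensor as a file.
        device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
        with open(device) as f:
            lines = f.read().splitlines()
        if not lines[0].strip().endswith("YES"):
            return None                      # bad CRC, skip this reading
        return int(lines[1].split("t=")[-1]) / 1000.0

    try:
        while True:
            temp = read_temp_c()
            if temp is not None:
                # Simple on/off control: heat while below the setpoint.
                GPIO.output(RELAY_PIN, GPIO.HIGH if temp < SETPOINT_C else GPIO.LOW)
                print("%.2f C" % temp)
            time.sleep(5)
    finally:
        GPIO.cleanup()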

Christopher wanted to be able to move his system around his property. Therefore, he squeezed all the electrical components of the build into a waterproof project box. For cooling purposes, he integrated copper shims and heat sinks.

homebrew home brew raspberry pi

Among the wires, wires, and more wires sits a Raspberry Pi, bottom left.

A brew-tiful build

With the hardware sorted, he took on the project’s software next. Although he had been inspired by it, Christopher decided to move away from the StrangeBrew Elsinore project in favour of the Python-based CraftBeerPi by active repo maintainer Manuel Fritsch.

homebrew home brew raspberry pi

The CraftBeerPi dashboard

This package allowed him to configure his chosen GPIO pins and set up the appropriate sensors. In fact, the setup process was so easy that Christopher also converted a secondhand fridge into a fermentation chamber.

Duff Beer for me, Duff Beer for you…

In his recently released article on opensource.com, Aedo goes into far more detail, so if you want to create your own brewing kit, it offers all the info you need to get going.

Christopher attributes a lot of his build to the Hosehead, Electric Brewery, and CraftBeerPi projects. Using their resources and those of StrangeBrew Elsinore, any home brewer can control at least part of their system via a Raspberry Pi. Moreover, they can also keep track of their brewery stock levels via the wonderfully named Kegerface display.

We love seeing projects like this that take inspiration from others and build on them. We also love beer.

How about you? Have you created any sort of beer brewing system, from scratch or with the help of an existing project? If so, make sure to share it with us in the comments below.

Duff man homebrew

 

*Did you know the British pint is larger than the American pint?

The post A homebrew Pi kit for home brewing appeared first on Raspberry Pi.

Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service

Post Syndicated from Tristan Li original https://aws.amazon.com/blogs/big-data/perform-near-real-time-analytics-on-streaming-data-with-amazon-kinesis-and-amazon-elasticsearch-service/

Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.

In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomalies data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the result.

Solution overview

The following diagram depicts a high-level overview of this solution.

Amazon Kinesis Streams

You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.

Amazon Kinesis Analytics

Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.

Amazon Kinesis Firehose

Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.

Amazon Elasticsearch Service

Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.

Solution summary

The following is a quick walkthrough of the solution that’s presented in the diagram:

  1. IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
  2. By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
  3. The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
  4. By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.

Implementation steps

The following sections walk through the implementation steps in detail.

Creating the delivery stream

  1. Open the Amazon Kinesis Streams console.
  2. Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
  3. The Python code provided below simulates a streaming application, such as an IoT device, and writes random data and anomalies into a Kinesis stream. It generates two temperature ranges, where the first range is the hypothetical sensor’s normal operating temperature range (10–20), and the second is the anomaly temperature range (100–120). Make sure to change the stream name in the two put_record calls and the Region in the connect_to_region call to match your configuration. (The script uses the legacy boto library; a boto3-based sketch of the same producer appears after this list.) Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data.
    import json
    import random
    from boto import kinesis

    # Connect to Kinesis in your Region (change "us-east-1" to match your setup)
    kinesis = kinesis.connect_to_region("us-east-1")

    def getData(iotName, lowVal, highVal):
        data = {}
        data["iotName"] = iotName
        data["iotValue"] = random.randint(lowVal, highVal)
        return data

    while 1:
        rnd = random.random()
        if rnd < 0.01:
            # Roughly 1% of records are anomalies in the 100-120 range
            data = json.dumps(getData("DemoSensor", 100, 120))
            kinesis.put_record("RawStreamData", data, "DemoSensor")
            print('***************************** anomaly ************************* ' + data)
        else:
            # Normal readings in the 10-20 range
            data = json.dumps(getData("DemoSensor", 10, 20))
            kinesis.put_record("RawStreamData", data, "DemoSensor")
            print(data)

  4. Open the Amazon Elasticsearch Service console and create a new domain.
    1. Give the domain a unique name. In the Configure cluster screen, use the default settings.
    2. In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
    3. Enter the public IP address of your computer.
      Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
  5. After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES:
    1. Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
    2. In the Destination dropdown list, choose Amazon Elasticsearch Service.
    3. Type a stream name, and choose the Amazon ES domain that you created in Step 4.
    4. Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
    5. In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
    6. Under IAM Role, choose Create/Update existing IAM role.
      The best practice is to create a new role every time. Otherwise, the console keeps adding policy documents to the same role. Eventually the size of the attached policies causes IAM to reject the role, but it does it in a non-obvious way, where the console basically quits functioning.
    7. Choose Next to move to the Review page.
  6. Review the configuration, and then choose Create Delivery Stream.
  7. Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.
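
If you would rather script steps 2 and 3 than click through the console, the sketch below does the same thing with the current boto3 SDK instead of the legacy boto library used above. It assumes your AWS credentials are already configured and reuses the stream name and Region from the walkthrough.

    # Hedged boto3 alternative to steps 2-3: create the stream, then send normal
    # readings (10-20) with occasional anomalies (100-120), as in the script above.
    import json
    import random
    import time
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    try:
        kinesis.create_stream(StreamName="RawStreamData", ShardCount=1)
        kinesis.get_waiter("stream_exists").wait(StreamName="RawStreamData")
    except kinesis.exceptions.ResourceInUseException:
        pass  # the stream already exists

    while True:
        anomaly = random.random() < 0.01
        value = random.randint(100, 120) if anomaly else random.randint(10, 20)
        record = {"iotName": "DemoSensor", "iotValue": value}
        kinesis.put_record(StreamName="RawStreamData",
                           Data=json.dumps(record).encode("utf-8"),
                           PartitionKey="DemoSensor")
        print(("anomaly " if anomaly else "") + json.dumps(record))
        time.sleep(0.1)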

Analyzing the data

Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.

  1. Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
  2. On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the in-application stream name SOURCE_SQL_STREAM_001 because you will need it later.)
  3. Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
  4. Next, choose Go to SQL editor. When prompted, choose Yes, start application.
  5. Copy the following SQL code and paste it into the SQL editor window.
    CREATE OR REPLACE STREAM "TEMP_STREAM" (
       "iotName"        varchar (40),
       "iotValue"   integer,
       "ANOMALY_SCORE"  DOUBLE);
    -- Creates an output stream and defines a schema
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
       "iotName"       varchar(40),
       "iotValue"       integer,
       "ANOMALY_SCORE"  DOUBLE,
       "created" TimeStamp);
     
    -- Compute an anomaly score for each record in the source stream
    -- using Random Cut Forest
    CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
      TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
      )
    );
    
    -- Sort records by descending anomaly score, insert into output stream
    CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
    ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;

 

  1. Choose Save and run SQL.
    As the application is running, it displays the results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot. Note that you are selecting data from the source stream name SOURCE_SQL_STREAM_001 that you created previously. Also note the ANOMALY_SCORE column. This is the value that the RANDOM_CUT_FOREST function calculates based on the temperature ranges provided by the Python script. Higher (anomaly) temperature ranges have a higher score. Looking at the SQL code, note that the first two blocks of code create two new streams to store temporary data and the final result. The third block of code (the STREAM_PUMP_1 pump) analyzes the raw source data using the RANDOM_CUT_FOREST function. It calculates an anomaly score (ANOMALY_SCORE) and inserts it into the TEMP_STREAM stream. The final code block loads the result stored in TEMP_STREAM into DESTINATION_SQL_STREAM.
  2. Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.

Load processed data into the Kinesis Firehose delivery stream

Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.

  1. On the application configuration page, choose Connect to a destination.
  2. Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
  3. On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
  4. Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
  5. Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
  6. As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
  7. Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow. This is because, by default, it needs two instances to deploy redundant copies of the index. To fix this, you can deploy two instances instead of one.

Visualize the data using Kibana

Now it’s time to launch Kibana and visualize the data.

  1. Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot.

    If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
  2. In the Kibana dashboard, choose the Discover tab to perform a query.
  3. You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.
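
If you want to check the results outside Kibana, you can also query the index directly over the Elasticsearch REST API. The sketch below is an assumption-laden example: the domain endpoint and index name ("anomalies") are placeholders for whatever you chose in the Firehose setup, and it relies on the IP-based access policy configured earlier.

    # Minimal sketch: fetch the ten highest-scoring records straight from the
    # Amazon ES domain. Endpoint, index name, and score threshold are placeholders.
    import json
    import requests

    ES_ENDPOINT = "https://search-your-domain.us-east-1.es.amazonaws.com"
    INDEX = "anomalies"

    query = {
        "size": 10,
        "sort": [{"ANOMALY_SCORE": {"order": "desc"}}],
        "query": {"range": {"ANOMALY_SCORE": {"gte": 1.0}}},
    }

    resp = requests.get("%s/%s/_search" % (ES_ENDPOINT, INDEX),
                        headers={"Content-Type": "application/json"},
                        data=json.dumps(query))
    resp.raise_for_status()
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_source"])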


Conclusion

In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.


Next Steps

Take your skills to the next level. Learn real-time clickstream anomaly detection with Amazon Kinesis Analytics.

 


About the Author

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.

Bicrophonic Research Institute and the Sonic Bike

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sonic-bike/

The Bicrophonic Sonic Bike, created by British sound artist Kaffe Matthews, utilises a Raspberry Pi and GPS signals to map location data and plays music and sound in response to the places you take it on your cycling adventures.

What is Bicrophonics?

Bicrophonics is about the mobility of sound, experienced and shared within a moving space, free of headphones and free of the internet. Music made by the journey you take, played with the space that you move through. The Bicrophonic Research Institute (BRI) http://sonicbikes.net

Cycling and music

I’m sure I wasn’t the only teen to go for bike rides with a group of friends and a radio. Spurred on by our favourite movie, the mid-nineties classic Now and Then, we’d hook up a pair of cheap portable speakers to our handlebars, crank up the volume, and sing our hearts out as we cycled aimlessly down country lanes in the cool light evenings of the British summer.

While Sonic Bikes don’t belt out the same classics that my precariously attached speakers provided, they do give you the same sense of connection to your travelling companions via sound. Linked to GPS locations on the same preset map of zones, each bike can produce the same music, creating a cloud of sound as you cycle.

Sonic Bikes

The Sonic Bike uses five physical components: a Raspberry Pi, power source, USB GPS receiver, rechargeable speakers, and subwoofer. On the Raspberry Pi, the build utilises mapping software to divide a map into zones and connect each zone with a specific music track.

Sonic Bikes Raspberry Pi

Custom software enables the Raspberry Pi to locate itself among the zones using the USB GPS receiver. Then it plays back the appropriate track until it registers a new zone.
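
The project’s own code is linked later in this post, but the core loop is easy to picture. The sketch below is a simplified illustration rather than the BRI software: the zone coordinates, track names, read_gps() stub, and play_track() helper are all made up for the example, and a real build would read positions from the USB GPS receiver (for instance via gpsd) and hand files to a proper audio player.

import time

# Illustrative zones only: bounding boxes (min_lat, min_lon, max_lat, max_lon) mapped to tracks.
ZONES = {
    "park": ((51.500, -0.130, 51.505, -0.120), "park_theme.wav"),
    "canal": ((51.505, -0.120, 51.510, -0.110), "canal_drones.wav"),
}

def read_gps():
    """Stub for the USB GPS receiver; a real build would poll gpsd or a serial NMEA stream."""
    return 51.502, -0.125

def zone_for(lat, lon):
    """Return the (name, track) of the zone containing this position, or (None, None)."""
    for name, ((lat0, lon0, lat1, lon1), track) in ZONES.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name, track
    return None, None

def play_track(track):
    print("playing", track)  # stand-in for handing the file to an audio player

current_zone = None
while True:
    lat, lon = read_gps()
    name, track = zone_for(lat, lon)
    if name and name != current_zone:  # switch playback only when a new zone is entered
        current_zone = name
        play_track(track)
    time.sleep(1)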

Bicrophonic Research Institute

The Bicrophonic Research Institute is a collective of artists and coders with the shared goal of creating sound directed by people and places via Sonic Bikes. In their own words:

Bicrophonics is about the mobility of sound, experienced and shared within a moving space, free of headphones and free of the internet. Music made by the journey you take, played with the space that you move through.

Their technology has potential beyond the aims of the BRI. The Sonic Bike software could be useful for navigation, logging data and playing beats to indicate when to alter speed or direction. You could even use it to create a guided cycle tour, including automatically reproduced information about specific places on the route.

For the creators of Sonic Bike, the project is ever-evolving, and “continues to be researched and developed to expand the compositional potentials and unique listening experiences it creates.”

Sensory Bike

A good example of this evolution is the Sensory Bike. This offshoot of the Sonic Bike idea plays sounds guided by the cyclist’s own movements – it acts like a two-wheeled musical instrument!

lean to go up, slow to go loud,

a work for Sensory Bikes, the Berlin wall and audience to ride it. ‘ lean to go up, slow to go loud ‘ explores freedom and celebrates escape. Celebrating human energy to find solutions, hot air balloons take off, train lines sing, people cheer and nature continues to grow.

Sensors on the wheels, handlebars, and brakes, together with a Sense HAT at the rear, register the unique way in which the rider navigates their location. The bike produces output based on these variables. Its creators at BRI say:

The Sensory Bike becomes a performative instrument – with riders choosing to go slow, go fast, to hop, zigzag, or circle, creating their own unique sound piece that speeds, reverses, and changes pitch while they dance on their bicycle.

Build your own Sonic Bike

As for many wonderful Raspberry Pi-based builds, the project’s code is available on GitHub, enabling makers to recreate it. All the BRI team ask is that you contact them so they can learn more of your plans and help in any way possible. They even provide code to create your own Sonic Kayak using GPS zones, temperature sensors, and an underwater microphone!

Sonic Kayaks explained

Sonic Kayaks are musical instruments for expanding our senses and scientific instruments for gathering marine micro-climate data. Made by foAm_Kernow with the Bicrophonic Research Institute (BRI), two were first launched at the British Science Festival in Swansea Bay September 6th 2016 and used by the public for 2 days.

The post Bicrophonic Research Institute and the Sonic Bike appeared first on Raspberry Pi.

The Cost of Cloud Storage

Post Syndicated from Tim Nufire original https://www.backblaze.com/blog/cost-of-cloud-storage/

the cost of the cloud as a percentage of revenue

This week, we’re celebrating the one year anniversary of the launch of Backblaze B2 Cloud Storage. Today’s post is focused on giving you a peek behind the curtain about the costs of providing cloud storage. Why? Over the last 10 years, the most common question we get is still “how do you do it?” In this multi-billion dollar, global industry exhibiting exponential growth, none of the other major players seem to be willing to discuss the underlying costs. By exposing a chunk of the Backblaze financials, we hope to provide a better understanding of what it costs to run “the cloud,” and continue our tradition of sharing information for the betterment of the larger community.

Context
Backblaze built one of the industry’s largest cloud storage systems and we’re proud of that accomplishment. We bootstrapped the business and funded our growth through a combination of our own business operations and just $5.3M in equity financing ($2.8M of which was invested into the business – the other $2.5M was a tender offer to shareholders). To do this, we had to build our storage system efficiently and run as a real, self-sustaining, business. After over a decade in the data storage business, we have developed a deep understanding of cloud storage economics.

Definitions
I promise we’ll get into the costs of cloud storage soon, but some quick definitions first:

    Revenue: Money we collect from customers.
    Cost of Goods Sold (“COGS”): The costs associated with providing the service.
    Operating Expenses (“OpEx”): The costs associated with developing and selling the service.
    Income/Loss: What is left after subtracting COGS and OpEx from Revenue.

I’m going to focus today’s discussion on the Cost of Goods Sold (“COGS”): What goes into it, how it breaks down, and what percent of revenue it makes up. Backblaze is a roughly break-even business with COGS accounting for 47% of our revenue and the remaining 53% spent on our Operating Expenses (“OpEx”) like developing new features, marketing, sales, office rent, and other administrative costs that are required for us to be a functional company.

This post’s focus on COGS should let us answer the commonly asked question of “how do you provide cloud storage for such a low cost?”

Breaking Down Cloud COGS

Providing a cloud storage service requires the following components (COGS and OpEX – below we break out COGS):
cloud infrastructure costs as a percentage of revenue

  • Hardware: 23% of Revenue
  • Backblaze stores data on hard drives. Those hard drives are “wrapped” with servers so they can connect to the public Internet and store data. We’ve discussed our approach to how this works with our Vaults and Storage Pods. Our infrastructure is purpose built for data storage. That is, we thought about how data storage ought to work, and then built it from the ground up. Other companies may use different storage media like Flash, SSD, or even tape. But it all serves the same function of being the thing that data actually is stored on. For today, we’ll think of all this as “hardware.”

    We buy storage hardware that, on average, will last 5 years (60 months) before needing to be replaced. To account for hardware costs in a way that can be compared to our monthly expenses, we amortize them and recognize 1/60th of the purchase price each month.

    Storage Pods and hard drives are not the only hardware in our environment. We also have to buy the cabinets and rails that hold the servers, core servers that manage accounts/billing/etc., switches, routers, power strips, cables, and more. (Our post on bringing up a data center goes into some of this detail.) However, Storage Pods and the drives inside them make up about 90% of all the hardware cost.

  • Data Center (Space & Power): 8% of Revenue
  • “The cloud” is a great marketing term and one that has caught on for our industry. That said, all “clouds” store data on something physical like hard drives. Those hard drives (and servers) are actual, tangible things that take up actual space on earth, not in the clouds.

    At Backblaze, we lease space in colocation facilities which offer a secure, temperature controlled, reliable home for our equipment. Other companies build their own data centers. It’s the classic rent vs buy decision; but it always ends with hardware in racks in a data center.

    Hardware also needs power to function. Not everyone realizes it, but electricity is a significant cost of running cloud storage. In fact, some data center space is billed simply as a function of an electricity bill.

    Every hard drive storing data adds incremental space and power need. This is a cost that scales with storage growth.

    I also want to make a comment on taxes. We pay sales and property tax on hardware, and it is amortized as part of the hardware section above. However, it’s valuable to think about taxes when considering the data center, since the location of the data center actually drives the amount of taxes on the hardware that gets placed inside of it.

  • People: 7% of Revenue
  • Running a data center requires humans to make sure things go smoothly. The more data we store, the more human hands we need in the data center. All drives will fail eventually. When they fail, “stuff” needs to happen to get a replacement drive physically mounted inside the data center and filled with the customer data (all customer data is redundantly stored across multiple drives). The individuals that are associated specifically with managing the data center operations are included in COGS since, as you deploy more hard drives and servers, you need more of these people.

    Customer Support is the other group of people that are part of COGS. As customers use our services, questions invariably arise. To service our customers and get questions answered expediently, we staff customer support from our headquarters in San Mateo, CA. They do an amazing job! Staffing models, internally, are a function of the number of customers and the rate of acquiring new customers.

  • Bandwidth: 3% of Revenue
  • We have over 350 PB of customer data being stored across our data centers. The bulk of that has been uploaded by customers over the Internet (the other option, our Fireball service, is 6 months old and is seeing great adoption). Uploading data over the Internet requires bandwidth – basically, an Internet connection similar to the one running to your home or office. But, for a data center, instead of contracting with Time Warner or Comcast, we go “upstream.” Effectively, we’re buying wholesale.

    Understanding how that dynamic plays out with your customer base is a significant driver of how a cloud provider sets its pricing. Being in business for a decade has explicit advantages here. Because we understand our customer behavior, and have reached a certain scale, we are able to buy bandwidth in sufficient bulk to offer the industry’s best download pricing at $0.02 / Gigabyte (compared to $0.05 from Amazon, Google, and Microsoft).

    Why does optimizing download bandwidth charges matter for customers of a data storage business? Because it has a direct relationship to you being able to retrieve and use your data, which is important.

  • Other Fees: 6% of Revenue
  • We have grouped the remaining costs inside of “Other Fees.” This includes fees we pay to our payment processor as well as the costs of running our Restore Return Refund program.

    A payment processor is required for businesses like ours that need to accept credit cards securely over the Internet. The bulk of the money we pay to the payment processor is actually passed through to pay the credit card companies like AmEx, Visa, and Mastercard.

    The Restore Return Refund program is a unique program for our consumer and business backup business. Customers can download any and all of their files directly from our website. We also offer customers the ability to order a hard drive with some or all of their data on it, we then FedEx it to the customer wherever in the world she is. If the customer chooses, she can return the drive to us for a full refund. Customers love the program, but it does cost Backblaze money. We choose to subsidize the cost associated with this service in an effort to provide the best customer experience we can.

The Big Picture

At the beginning of the post, I mentioned that Backblaze is, effectively, a break even business. The reality is that our products drive a profitable business but those profits are invested back into the business to fund product development and growth. That means growing our team as the size and complexity of the business expands; it also means being fortunate enough to have the cash on hand to fund “reserves” of extra hardware, bandwidth, data center space, etc. In our first few years as a bootstrapped business, having sufficient buffer was a challenge. Having weathered that storm, we are particularly proud of being in a financial place where we can afford to make things a bit more predictable.

All this adds up to answer the question of how Backblaze has managed to carve out its slice of the cloud market – a market that is a key focus for some of the largest companies of our time. We have innovated a novel, purpose built storage infrastructure with our Vaults and Pods. That infrastructure allows us to keep costs very, very low. Low costs enable us to offer the world’s most affordable, reliable cloud storage.

Does reliable, affordable storage matter? For a company like Vintage Aerial, it enables them to digitize 50 years’ worth of aerial photography of rural America and share that national treasure with the world. Having the best download pricing in the storage industry means Austin City Limits, a PBS show out of Austin, can digitize and preserve over 550 concerts.

We think offering purpose built, affordable storage is important. It empowers our customers to monetize existing assets, make sure data is backed up (and not lost), and focus on their core business because we can handle their data storage needs.

The post The Cost of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Getting started with soldering

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/getting-started-soldering/

In our newest resource video, Content and Curriculum Manager Laura Sach introduces viewers to the basics of soldering.

Getting started with soldering

Learn the basics of how to solder components together, and the safety precautions you need to take. Find a transcript of this video in our accompanying learning resource: raspberrypi.org/learning/getting-started-with-soldering/

So sit down, grab your Raspberry Pi Zero, and prepare to be schooled in the best (and warned about the worst) practices in the realm of soldering.

Do I have to?!

Yes. Yes, you do.

If you are planning to use a Raspberry Pi Zero or Zero W, or to build something magnificent using wires, buttons, lights, and more, you’ll want to practice your soldering technique. Those of us inexperienced in soldering have been jumping for joy since the release of the Pimoroni solderless header. However, if you want your project to progress from the ‘prototyping with a breadboard’ stage to a durable final build, soldering is the best option for connecting all its components together.

soldering raspberry pi gif

Hot glue just won’t cut it this time. Sorry.

I promise it’s not hard to do, and the final result will give you a warm feeling of accomplishment…made warmer still if, like me, you burn yourself due to your inability to pay attention to instructions. (Please pay attention to the instructions.)

Soldering 101

As Laura explains in the video, there are two types of solder to choose from for your project: the lead-free kind that requires a slightly higher temperature to melt, and the lead-containing kind that – surprise, surprise – has lead in it. Although you’ll find other types of solder, one of these two is what you want for tinkering.

soldering raspberry pi

The decision…is yours.

In order to heat your solder and apply it to your project, you’ll need either Kryptonian heat vision* or, on this planet at least, a soldering iron. There is a variety of soldering irons available on the market, and as your making skills improve you will probably upgrade. But for now, try not to break the bank and choose an iron that’s within your budget. You may also want to ask around, as someone you know might be able to lend you theirs and help you out with your first soldering attempt.

Safety first!

Make sure you always solder in a well-ventilated area. Before you start, remove any small people, four-legged friends, and other trip hazards from the space and check you have everything you need close at hand.

soldering raspberry pi

The lab at Pi Towers is well ventilated thanks to this handy ventilation pipe…thingy.

And never forget, things get hot when you heat them! Always allow a moment for cooling before you handle your wonderful soldering efforts. I remember the first time I tried soldering a button to a Raspberry Pi and…let’s just say that I still bear the scars incurred because I didn’t follow my own safety advice.

Let’s do this!

Now you’re geared up and ready to solder, follow along with Laura and fit a header to your Raspberry Pi Zero! You can also read a complete transcript of the video in our free Getting started with soldering  resource.

If you use Laura’s video to help you complete a soldering project, make sure to share your final piece with us via social media using the hashtag #ThanksLauraSach.

 

 

*spoiler alert!

The post Getting started with soldering appeared first on Raspberry Pi.

Build a Visualization and Monitoring Dashboard for IoT Data with Amazon Kinesis Analytics and Amazon QuickSight

Post Syndicated from Karan Desai original https://aws.amazon.com/blogs/big-data/build-a-visualization-and-monitoring-dashboard-for-iot-data-with-amazon-kinesis-analytics-and-amazon-quicksight/

Customers across the world are increasingly building innovative Internet of Things (IoT) workloads on AWS. With AWS, they can handle the constant stream of data coming from millions of new, internet-connected devices. This data can be a valuable source of information if it can be processed, analyzed, and visualized quickly in a scalable, cost-efficient manner. Engineers and developers can monitor performance and troubleshoot issues, while sales and marketing can track usage patterns and statistics on which to base business decisions.

In this post, I demonstrate a sample solution to build a quick and easy monitoring and visualization dashboard for your IoT data using AWS serverless and managed services. There’s no need for purchasing any additional software or hardware. If you are already using AWS IoT, you can build this dashboard to tap into your existing device data. If you are new to AWS IoT, you can be up and running in minutes using sample data. Later, you can customize it to your needs, as your business grows to millions of devices and messages.

Architecture

The following is a high-level architecture diagram showing the serverless setup to configure.

 

AWS service overview

AWS IoT is a managed cloud platform that lets connected devices interact easily and securely with cloud applications and other devices. AWS IoT can process and route billions of messages to AWS endpoints and to other devices reliably and securely.

Amazon Kinesis Firehose is the easiest way to capture, transform, and load streaming data continuously into AWS from thousands of data sources, such as IoT devices. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.

Amazon Kinesis Analytics allows you to process streaming data coming from IoT devices in real time with standard SQL, without having to learn new programming languages or processing frameworks, providing actionable insights promptly.

The processed data is fed into Amazon QuickSight, which is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from the data.

The most popular way for Internet-connected devices to send data is using MQTT messages. The AWS IoT gateway receives these messages from registered IoT devices. The solution in this post uses device data from AWS Simple Beer Service (SBS), a series of internet-connected kegerators sending sensor outputs such as temperature, humidity, and sound levels in a JSON payload. You can use any existing IoT data source that you may have.

The AWS IoT rules engine allows selecting data from message payloads, processing it, and sending it to other services. You forward the data to a Firehose delivery stream to consolidate the continuous data stream into batches for further processing. The batched data is also stored temporarily in an Amazon S3 bucket for later retrieval and can be set for deletion after a specified time using S3 Lifecycle Management rules.

The incoming data from the Firehose delivery stream is fed into an Analytics application that provides an easy way to process the data in real time using standard SQL queries. Analytics allows writing standard SQL queries to extract specific components from the incoming data stream and perform real-time ETL on it. In this post, you use this feature to aggregate minimum and maximum temperature values from the sensors per minute. You load it in Amazon QuickSight to create a monitoring dashboard and check if the devices are over-heating or cooling down during use. You also extract every device’s location, parameters such as temperature, sound levels, humidity, and the time stamp in Analytics to use on the visualization dashboard.

The processed data from the two queries is fed into two Firehose delivery streams, both of which batch the data into CSV files every minute and store it in S3. The batching time interval is configurable between 1 and 15 minutes in 1-second intervals.

Finally, you use Amazon QuickSight to ingest the processed CSV files from S3 as a data source to build visualizations. Amazon QuickSight’s super-fast, parallel, in-memory, calculation engine (SPICE) parses the ingested data and allows you to create a variety of visualizations with different graph types. You can also use the Amazon QuickSight built-in Story feature to combine visualizations into business dashboards that can be shared in a secure manner.

Implementation

AWS IoT, Amazon Kinesis, and Amazon QuickSight are all fully managed services, which means you can complete the entire setup in just a few steps using the AWS Management Console. Don’t worry about setting up any underlying hardware or installing any additional software. So, get started.

Step 1. Set up your AWS IoT data source

Do you currently use AWS IoT? If you have an existing IoT thing set up and running on AWS IoT, you can skip to Step 2.

If you have an AWS IoT button or other IoT devices that can publish MQTT messages and would like to use that for the setup, follow the Getting Started with AWS IoT topic to connect your thing to AWS IoT. Continue to Step 2.

If you do not have an existing IoT device, you can generate simulated device data using a script on your local machine and have it publish to AWS IoT. The following script lets you set up your AWS IoT environment and publish simulated data that mimics device data from Simple Beer Service.

Generate sample Data

Running the sbs.py Python script generates fictitious AWS IoT messages from multiple SBS devices. The IoT rule sends the message to Firehose for further processing.

The script requires access to AWS CLI credentials and boto3 installation on the machine running the script. Download and run the following Python script:

https://github.com/awslabs/sbs-iot-data-generator/blob/master/sbs.py

The script generates random data that looks like the following:

{"deviceParameter": "Temperature", "deviceValue": 33, "deviceId": "SBS01", "dateTime": "2017-02-03 11:29:37"}
{"deviceParameter": "Sound", "deviceValue": 140, "deviceId": "SBS03", "dateTime": "2017-02-03 11:29:38"}
{"deviceParameter": "Humidity", "deviceValue": 63, "deviceId": "SBS01", "dateTime": "2017-02-03 11:29:39"}
{"deviceParameter": "Flow", "deviceValue": 80, "deviceId": "SBS04", "dateTime": "2017-02-03 11:29:41"}

Run the script and keep it running for the duration of the project to generate sufficient data.

Tip: If you encounter any issues running the script from your local machine, launch an EC2 instance and run the script there as a root user. Remember to assign an appropriate IAM role to your instance at launch time that allows it to access AWS IoT.
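
If you just want to see the shape of what the script does, the snippet below publishes a single simulated reading to AWS IoT with boto3. It is a simplified sketch rather than the sbs.py code itself; it assumes your default AWS credentials allow iot:Publish, and it uses a topic that matches the /sbs/devicedata/# filter configured in Step 3.

import json
import random
from datetime import datetime

import boto3

iot = boto3.client("iot-data")  # data-plane client; uses your default credentials and region

message = {
    "deviceParameter": "Temperature",
    "deviceValue": random.randint(10, 150),
    "deviceId": "SBS01",
    "dateTime": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
}

# Publish to a topic covered by the /sbs/devicedata/# topic filter used by the IoT rule.
iot.publish(topic="/sbs/devicedata/temperature", qos=0, payload=json.dumps(message))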

Step 2. Create three Firehose delivery streams

For this post, you require three Firehose delivery streams:  one to batch raw data from AWS IoT, and two to batch output device data and aggregated data from Analytics.

  1. In the console, choose Firehose.
  2. Create all three Firehose delivery streams using the following field values (if you prefer to script this step, see the boto3 sketch after the stream definitions).

Delivery stream 1:

Name: IoT-Source-Stream
S3 bucket: <your unique name>-kinesis
S3 prefix: source/

Delivery stream 2:

Name: IoT-Destination-Data-Stream
S3 bucket: <your unique name>-kinesis
S3 prefix: data/

Delivery stream 3:

Name: IoT-Destination-Aggregate-Stream
S3 bucket: <your unique name>-kinesis
S3 prefix: aggregate/
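
If you prefer to script this step instead of using the console, the boto3 sketch below creates the first of the three streams; repeat it with the other names and prefixes. The bucket and role ARNs are placeholders, and the role must grant Firehose permission to write to the bucket.

import boto3

firehose = boto3.client("firehose")

# Placeholder ARNs -- substitute your own bucket and an IAM role Firehose can assume.
BUCKET_ARN = "arn:aws:s3:::<your unique name>-kinesis"
ROLE_ARN = "arn:aws:iam::<account-id>:role/firehose-delivery-role"

firehose.create_delivery_stream(
    DeliveryStreamName="IoT-Source-Stream",
    S3DestinationConfiguration={
        "RoleARN": ROLE_ARN,
        "BucketARN": BUCKET_ARN,
        "Prefix": "source/",
    },
)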

Step 3. Set up AWS IoT to receive and forward incoming data

  1. In the console, choose IoT.
  2. Create a new AWS IoT rule with the following field values (a boto3 sketch of the same rule follows this listing).
Name: IoT_to_Firehose
Attribute: *
Topic Filter: /sbs/devicedata/#
Add Action: Send messages to an Amazon Kinesis Firehose stream (select IoT-Source-Stream from the dropdown)
Select Separator: “\n (newline)”
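
For reference, this is roughly what the same rule looks like when created with boto3 instead of the console. The role ARN is a placeholder; that role must allow AWS IoT to put records into IoT-Source-Stream.

import boto3

iot = boto3.client("iot")

# Placeholder role -- it must allow AWS IoT to call firehose:PutRecord on IoT-Source-Stream.
ROLE_ARN = "arn:aws:iam::<account-id>:role/iot-to-firehose-role"

iot.create_topic_rule(
    ruleName="IoT_to_Firehose",
    topicRulePayload={
        "sql": "SELECT * FROM '/sbs/devicedata/#'",
        "actions": [
            {
                "firehose": {
                    "deliveryStreamName": "IoT-Source-Stream",
                    "roleArn": ROLE_ARN,
                    "separator": "\n",
                }
            }
        ],
    },
)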

A quick check before proceeding further: make sure that you have run the script to generate simulated IoT data or that your IoT Thing is running and delivering data. If not, set it up now. The Amazon Kinesis Analytics application you set up in the next step needs the data to process it further.

Step 4: Create an Analytics application to process data

  1. In the console, choose Kinesis.
  2. Create a new application.
  3. Enter a name of your choice, for example, SBS-IoT-Data.
  4. For the source, choose IoT-Source-Stream.

Analytics auto-discovers the schema on the data by sampling records from the input stream. It also includes an in-built SQL editor that allows you to write standard SQL queries to transform incoming data.

Tip: If Analytics is unable to discover your incoming data, it may be missing the appropriate IAM permissions. In the IAM console, select the role that you assigned to your IoT rule in Step 3. Make sure that it has the ARN of the IoT-Source-Stream Firehose delivery stream listed in the firehose:putRecord section.

Here is a sample SQL query that generates two output streams:

  • DESTINATION_SQL_BASIC_STREAM contains the device ID, device parameter, its value, and the time stamp from the incoming stream.
  • DESTINATION_SQL_AGGREGATE_STREAM aggregates the maximum and minimum values of temperatures from the sensors over a one-minute period from the incoming data.
-- Create an output stream with four columns, which is used to send IoT data to the destination
CREATE OR REPLACE STREAM "DESTINATION_SQL_BASIC_STREAM" (dateTime TIMESTAMP, deviceId VARCHAR(8), deviceParameter VARCHAR(16), deviceValue INTEGER);

-- Create a pump that continuously selects from the source stream and inserts it into the output data stream
CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "DESTINATION_SQL_BASIC_STREAM"

-- Filter specific columns from the source stream
SELECT STREAM "dateTime", "deviceId", "deviceParameter", "deviceValue" FROM "SOURCE_SQL_STREAM_001";

-- Create a second output stream with three columns, which is used to send aggregated min/max data to the destination
CREATE OR REPLACE STREAM "DESTINATION_SQL_AGGREGATE_STREAM" (dateTime TIMESTAMP, highestTemp SMALLINT, lowestTemp SMALLINT);

-- Create a pump that continuously selects from a source stream 
CREATE OR REPLACE PUMP "STREAM_PUMP_2" AS INSERT INTO "DESTINATION_SQL_AGGREGATE_STREAM"

-- Extract time in minutes, plus the highest and lowest value of device temperature in that minute, into the destination aggregate stream, aggregated per minute
SELECT STREAM FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE) AS "dateTime", MAX("deviceValue") AS "highestTemp", MIN("deviceValue") AS "lowestTemp" FROM "SOURCE_SQL_STREAM_001" WHERE "deviceParameter"='Temperature' GROUP BY FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE);

Real-time analytics shows the results of the SQL query. If everything is working correctly, you see three streams listed, similar to the following screenshot.

Step 5: Connect the Analytics application to output Firehose delivery streams

You create two destinations for the two delivery streams that you created in the previous step. A single Analytics application can have multiple destinations defined; however, this needs to be set up using the AWS CLI, not from the console. If you do not already have it, install the AWS CLI on your local machine and configure it with your credentials.

Tip: If you are running the IoT script from an EC2 instance, it comes pre-installed with the AWS CLI.

Create the first destination delivery stream 

The AWS CLI command to create a new output Firehose delivery stream is as follows:

aws kinesisanalytics add-application-output --application-name <Name of Analytics Application> --current-application-version-id <number> --application-output 'Name=DESTINATION_SQL_BASIC_STREAM,KinesisFirehoseOutput={ResourceARN=<ARN of IoT-Destination-Data-Stream>,RoleARN=<Role ARN of Analytics application>},DestinationSchema={RecordFormatType=CSV}'

Do not copy this into the CLI just yet! Before entering this command, make the following four changes to personalize it:

  • For Name of Analytics Application, enter the value from Step 4, or from the Analytics console.
  • For current-application-version-ID, run the following command:
aws kinesisanalytics describe-application --application-name <application name from above> | grep ApplicationVersionId
  • For ResourceARN, run the following command:
aws firehose describe-delivery-stream --delivery-stream-name IoT-Destination-Data-Stream | grep DeliveryStreamARN
  • For RoleARN, run the following command:
aws kinesisanalytics describe-application --application-name <application name from above> | grep RoleARN

Now, paste the complete command in the AWS CLI and press Enter. If there are any errors, the response provides details. If everything goes well, a new destination delivery stream is created to send the first query (DESTINATION_SQL_BASIC_STREAM) to IoT-Destination-Data-Stream.

Create the second destination delivery stream

Following similar steps as above, create a second destination Firehose delivery stream with the following changes:

  • For Name of Analytics Application, enter the same name as the first delivery stream.
  • For current-application-version-ID, increment by 1 from the previous value (unless you made other changes in between these steps). If unsure, run the same command as above to get it again.
  • For ResourceARN, get the value by running the following CLI command:
aws firehose describe-delivery-stream --delivery-stream-name IoT-Destination-Aggregate-Stream | grep DeliveryStreamARN
  • For RoleArn, enter the same value as the first stream.

Run the aws kinesisanalytics CLI command, similar to the previous step but with the new parameters substituted. This creates the second output Firehose destination delivery stream.

Update the IAM role for Analytics to allow writing to both output streams.

  1. In the console, choose IAM, Roles.
  2. Select the role that you created with Analytics in Step 4.
  3. Choose Policy, JSON, and Edit.
  4. Find “Sid”: “WriteOutputFirehose” in the JSON document, go to the “Resource” section and make sure that it includes Resource ARNs of both streams that you found in the previous step.
  5. If it has only one ARN, add the second ARN and choose Save.

This completes the Amazon Kinesis setup. The incoming IoT data is processed by Analytics and delivered, using two output delivery streams, to two separate folders in your S3 bucket.

Step 6: Set up Amazon QuickSight to analyze the data

To build the visualization dashboard, ingest the processed CSV files from the S3 bucket into Amazon QuickSight.

  1. In the console, choose QuickSight.
  2. If this is your first time using Amazon QuickSight, you are asked to create a new account. Follow the prompts.
  3. When you are logged in to your account, choose New Analysis and enter a name of your choice.
  4. Choose New data set for the analysis or, if you have previously imported your data set, select one from the available data sets.
  5. You import two data sets: one with general device parameters information, and the other with aggregates of maximum and minimum temperatures for monitoring. For the first data set, choose S3 from the list of available data sources and enter a name, for example, IoT Device Data.
  6. The location of the S3 bucket and the objects to use are provided to Amazon QuickSight as a manifest file. Create a new manifest file following the supported formats for Amazon S3 manifest files.
  7. In the URIPrefixes section, provide your appropriate S3 bucket and folder location for the general device data. Hint: it should include <your unique name>-kinesis/data/.

Your manifest file should look similar to the following:

{
    "fileLocations": [
        {"URIPrefixes": ["https://s3.amazonaws.com/<YOUR_BUCKET_NAME>/data/<YEAR>/<MONTH>/<DATE>/<HOUR>/"]}
    ],
    "globalUploadSettings": {
        "format": "CSV",
        "delimiter": ","
    }
}

Amazon QuickSight imports and parses the data set, and provides available data fields that can be used for making graphs. The Edit/Preview data button allows you to format and transform the data, change data types, and filter or join your data. Make sure that the columns have the correct titles. If not, you can edit them and then save.

Tip: choose the downward arrow on the top right and unselect Files include headers to give each column appropriate headers. Choose Save. This takes you back to the data sets page.

Follow the same steps as above to import the second data set. This time, your manifest should include your aggregate data set folder on S3, which is named <your unique name>-kinesis/aggregate/. Update headers if necessary and choose Save & visualize.

Build an analysis

The visualization screen shows the data set that you last imported, which in this case is the aggregate data. To include the general device data as well, for Fields on the top left, choose Edit analysis data sets. Choose Add data set and select the other data set that you saved earlier.

Now both data sets are available on the analysis screen. For Visual Types at bottom left, select the type of graph to make. For Fields, select the fields to visualize. For example, drag Device ID, Device Parameter, and Value to Field wells, as shown in the screenshot below, to generate a visualization of average parameter values compared across devices.

You can create another visual by choosing +Add. This time, select a line graph to show monitoring of the maximum temperature values of the sensors in any minute, from the aggregate data set.

If you would like to create an interactive story to present to your team or organization, you can choose the Story option on the left panel. Create a dashboard with multiple visualizations, to save and share securely with the intended audience. An example of a story is shown below.

Conclusion

Any data is valuable only when it can be actually put to use. In this post, you’ve seen how it’s possible to quickly build a simple Analytics application to ingest, process, and visualize IoT data in near real time entirely using AWS managed services. This solution is scalable and reliable, and costs a fraction of other business intelligence solutions. It is easy enough that anyone with an AWS account can build and use it without any special training.

If you have any questions or suggestions, please comment below.


About the Author

Karan Desai is a Solutions Architect with Amazon Web Services. He works with startups and small businesses in the US, helping them adopt cloud technology to build scalable and secure solutions using AWS. In his spare time, he likes to build personal IoT projects, travel to offbeat places and write about it.

 

 


Related

Visualize Big Data with Amazon QuickSight, Presto, and Apache Spark on Amazon EMR

Open source energy monitoring using Raspberry Pi

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/open-source-energy-monitoring-raspberry-pi/

OpenEnergyMonitor, who make open-source tools for energy monitoring, have been using Raspberry Pi since we launched in 2012. Like Raspberry Pi, they manufacture their hardware in Wales and send it to people all over the world. We invited co-founder Glyn Hudson to tell us why they do what they do, and how Raspberry Pi helps.

Hi, I’m Glyn from OpenEnergyMonitor. The OpenEnergyMonitor project was founded out of a desire for open-source tools to help people understand and relate to their use of energy, their energy systems, and the challenge of sustainable energy.

Photo: an emonPi energy monitoring unit in an aluminium case with an aerial and an LCD display, a mobile phone showing daily energy use as a histogram, and a bunch of daffodils in a glass bottle

The next 20 years will see a revolution in our energy systems, as we switch away from fossil fuels towards a zero-carbon energy supply.

By using energy monitoring, modelling, and assessment tools, we can take an informed approach to determine the best energy-saving measures to apply. We can then check to ensure solutions achieve their expected performance over time.

We started the OpenEnergyMonitor project in 2009, and the first versions of our energy monitoring system used an Arduino with Ethernet Shield, and later a Nanode RF with an embedded Ethernet controller. These early versions were limited by a very basic TCP/IP stack; running any sort of web application locally was totally out of the question!

I can remember my excitement at getting hold of the very first version of the Raspberry Pi in early 2012. Within a few hours of tearing open the padded envelope, we had Emoncms (our open-source web logging, graphing, and visualisation application) up and running locally on the Raspberry Pi. The Pi quickly became our web-connected base station of choice (emonBase). The following year, 2013, we launched the RFM12Pi receiver board (now updated to RFM69Pi). This allowed the Raspberry Pi to receive data via low-power RF 433Mhz from our emonTx energy monitoring unit, and later from our emonTH remote temperature and humidity monitoring node.

Diagram: communication between OpenEnergyMonitor monitoring units, base station and web interface

In 2015 we went all-in with Raspberry Pi when we launched the emonPi, an all-in-one Raspberry Pi energy monitoring unit, via Kickstarter. Thanks to the hard work of the Raspberry Pi Foundation, the emonPi has enjoyed several upgrades: extra processing power from the Raspberry Pi 2, then even more power and integrated wireless LAN thanks to the Raspberry Pi 3. With all this extra processing power, we have been able to build an open software stack including Emoncms, MQTT, Node-RED, and openHAB, allowing the emonPi to function as a powerful home automation hub.
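
Because the emonPi exposes its readings over MQTT, other software on your network can tap into the same feed. The sketch below is a minimal example using the paho-mqtt client; the broker address and topic pattern are assumptions, so check the MQTT settings on your own emonPi (including any username and password its broker requires).

import paho.mqtt.client as mqtt  # third-party client: pip install paho-mqtt

# Assumed values -- confirm them against your emonPi's MQTT configuration.
BROKER = "emonpi.local"
TOPIC = "emon/#"

def on_message(client, userdata, msg):
    # Each message carries one reading (for example power or temperature) as a plain payload.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
# client.username_pw_set("user", "password")  # uncomment if your broker requires credentials
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()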

Screenshot: Emoncms Apps interface to emonPi home automation hub, with histogram of daily electricity use

Emoncms Apps interface to emonPi home automation hub

Inspired by the Raspberry Pi Foundation, we manufacture and assemble our hardware in Wales, UK, and ship worldwide via our online store.

All of our work is fully open source. We believe this is a better way of doing things: we can learn from and build upon each other’s work, creating better solutions to the challenges we face. Using Raspberry Pi has allowed us to draw on the expertise and work of many other projects. With lots of help from our fantastic community, we have built an online learning resource section of our website to help others get started: it covers things like basic AC power theory, Arduino, and the bigger picture of sustainable energy.

To learn more about OpenEnergyMonitor systems, take a look at our Getting Started User Guide. We hope you’ll join our community.

The post Open source energy monitoring using Raspberry Pi appeared first on Raspberry Pi.

Test Your Streaming Data Solution with the New Amazon Kinesis Data Generator

Post Syndicated from Allan MacInnis original https://aws.amazon.com/blogs/big-data/test-your-streaming-data-solution-with-the-new-amazon-kinesis-data-generator/

When building a streaming data solution, most customers want to test it with data that is similar to their production data. Creating this data and streaming it to your solution can often be the most tedious task in testing the solution.

Amazon Kinesis Streams and Amazon Kinesis Firehose enable you to continuously capture and store terabytes of data per hour from hundreds of thousands of sources. Amazon Kinesis Analytics gives you the ability to use standard SQL to analyze and aggregate this data in real-time. It’s easy to create an Amazon Kinesis stream or Firehose delivery stream with just a few clicks in the AWS Management Console (or a few commands using the AWS CLI or Amazon Kinesis API). However, to generate a continuous stream of test data, you must write a custom process or script that runs continuously, using the AWS SDK or CLI to send test records to Amazon Kinesis. Although this task is necessary to adequately test your solution, it means more complexity and longer development and testing times.

Wouldn’t it be great if there were a user-friendly tool to generate test data and send it to Amazon Kinesis? Well, now there is—the Amazon Kinesis Data Generator (KDG).

KDG overview

The KDG simplifies the task of generating data and sending it to Amazon Kinesis. The tool provides a user-friendly UI that runs directly in your browser. With the KDG, you can do the following:

  • Create templates that represent records for your specific use cases
  • Populate the templates with fixed data or random data
  • Save the templates for future use
  • Continuously send thousands of records per second to your Amazon Kinesis stream or Firehose delivery stream

The KDG is open source, and you can find the source code on the Amazon Kinesis Data Generator repo in GitHub. Because the tool is a collection of static HTML and JavaScript files that run directly in your browser, you can start using it immediately without downloading or cloning the project. It is enabled as a static site in GitHub, and we created a short URL to access it.

To get started immediately, check it out at http://amzn.to/datagen.

Using the KDG

Getting started with the KDG requires only three short steps:

  1. Create an Amazon Cognito user in your AWS account (first-time only).
  2. Use this user’s credentials to log in to the KDG.
  3. Create a record template for your data.

When you’ve completed these steps, you can then send data to Streams or Firehose.

Create an Amazon Cognito user

The KDG is a great example of a mobile application that uses Amazon Cognito for a user repository and user authentication, and the AWS JavaScript SDK to communicate with AWS services directly from your browser. For information about how to build your own JavaScript application that uses Amazon Cognito, see Use Amazon Cognito in your website for simple AWS authentication on the AWS Mobile Blog.

Before you can start sending data to your Amazon Kinesis stream, you must create an Amazon Cognito user in your account who can write to Streams and Firehose. When you create the user, you create a username and password for that user. You use those credentials to sign in to the KDG. To simplify creating the Amazon Cognito user in your account, we created a Lambda function and a CloudFormation template. For more information about creating the Amazon Cognito user in your AWS account, see Configure Your AWS Account.

Note:  It’s important that you use the URL provided by the output of the CloudFormation stack the first time that you access the KDG. This URL contains parameters needed by the KDG. The KDG stores the values of these parameters locally, so you can then access the tool using the short URL, http://amzn.to/datagen.

Log in to the KDG

After you create an Amazon Cognito user in your account, the next step is to log in to the KDG. To do this, provide the username and password that you created earlier.

On the main page, you can configure your data templates and send data to an Amazon Kinesis stream or Firehose delivery stream.

The basic configuration is simple enough. All fields on the page are required:

  • Region: Choose the AWS Region that contains the Amazon Kinesis stream or Firehose delivery stream to receive your streaming data.
  • Stream/firehose name: Choose the name of the stream or delivery stream to receive your streaming data.
  • Records per second: Enter the number of records to send to your stream or delivery stream each second.
  • Record template: Enter the raw data, or a template that represents your data structure, to be used for each record sent by the KDG. For information about creating templates for your data, see the “Creating Record Templates” section, later in this post.

When you set the Records per second value, consider that the KDG isn’t intended to be a data producer for load-testing your application. However, it can easily send several thousand records per second from a single tab in your browser, which is plenty of data for most applications. In testing, the KDG has produced 80,000 records per second to a single Amazon Kinesis stream, but your mileage may vary. The maximum rate at which it produces records depends on your computer’s specs and the complexity of your record template.

Ensure that your stream or delivery stream is scaled appropriately:

  • 1,000 records/second or 1 MB/second to an Amazon Kinesis stream
  • 5,000 records/second or 5 MB/second to a Firehose delivery stream

Otherwise, Amazon Kinesis may reject records, and you won’t achieve your desired throughput. For more information about adding capacity to a stream by adding more shards, see Resharding a Stream. For information about increasing the capacity of a delivery stream, see Amazon Kinesis Firehose Limits.
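
Those per-shard figures make it straightforward to estimate the capacity you need before starting a test. The helper below is plain arithmetic based on the Amazon Kinesis stream limits quoted above; it is not part of the KDG.

import math

def shards_needed(records_per_second, avg_record_bytes):
    """Estimate shards from the 1,000 records/s and 1 MB/s per-shard stream limits."""
    by_count = math.ceil(records_per_second / 1000)
    by_bytes = math.ceil(records_per_second * avg_record_bytes / 1_000_000)
    return max(by_count, by_bytes, 1)

# Example: 5,000 records per second of roughly 350-byte records needs 5 shards.
print(shards_needed(5000, 350))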

Create record templates

The Record Template field is a free-text field where you can enter any text that represents a single streaming data record. You can create a single line of static data, so that each record sent to Amazon Kinesis is identical. Or, you can format the text as a template.

In this case, the KDG substitutes portions of the template with fake or random data before sending the record. This lets you introduce randomness or variability in each record that is sent in your data stream. The KDG uses Faker.js, an open source library, to generate fake data. For more information, see the faker.js project page in GitHub. The easiest way to see how this works is to review an example.

To simulate records being sent from a weather sensor Internet of Things (IoT) device, you want each record to be formatted in JSON. The following is an example of what a final record must look like:

{
	"sensorId": 40,
	"currentTemperature": 76,
	"status": "OK"
} 

For this use case, you want to simulate sending data from one of 50 sensors, so the sensorID field can be an integer between 1 and 50. The temperature value can range between 10 and 150, so the currentTemperature field should contain a value in this range. Finally, the status value can be one of three possible values: OK, FAIL, and WARN. The KDG template format uses moustache syntax (double curly-braces) to enclose items that should be replaced before the record is sent to Amazon Kinesis. To model the record, the template looks like this:

{
    "sensorId": {{random.number(50)}},
    "currentTemperature": {{random.number(
        {
            "min":10,
            "max":150
        }
    )}},
    "status": "{{random.arrayElement(
        ["OK","FAIL","WARN"]
    )}}"
}

Take a look at one more example, simulating a stream of records that represent rows from an Apache access log. A single Apache access log entry might look like this:

76.0.56.179 - - [29/Apr/2017:16:32:11 -05:00] "GET /wp-admin" 200 8233 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0 rv:6.0; CY) AppleWebKit/535.0.0 (KHTML, like Gecko) Version/4.0.3 Safari/535.0.0"

The following example shows how to create a template for the Apache access log:

{{internet.ip}} - - [{{date.now("DD/MMM/YYYY:HH:mm:ss Z")}}] "{{random.weightedArrayElement({"weights":[0.6,0.1,0.1,0.2],"data":["GET","POST","DELETE","PUT"]})}} {{random.arrayElement(["/list","/wp-content","/wp-admin","/explore","/search/tag/list","/app/main/posts","/posts/posts/explore"])}}" {{random.weightedArrayElement({"weights": [0.9,0.04,0.02,0.04], "data":["200","404","500","301"]})}} {{random.number(10000)}} "-" "{{internet.userAgent}}"

For more information about creating your own templates, see the Record Template section of the KDG documentation.

The KDG saves the templates that you create in your local browser storage. As long as you use the same browser on the same computer, you can reuse up to five templates.

Summary

Testing your streaming data solution has never been easier. Get started today by visiting the KDG hosted UI or its Amazon Kinesis Data Generator page in GitHub. The project is licensed under the Apache 2.0 license, so feel free to clone and modify it for your own use as necessary. And of course, please submit any issues or pull requests via GitHub.

If you have any questions or suggestions, please add them below.

 


About the Author

Allan MacInnis is a Solutions Architect at Amazon Web Services. He works with our customers to help them build streaming data solutions using Amazon Kinesis. In his spare time, he enjoys mountain biking and spending time with his family.

 

 


Related

Scale Your Amazon Kinesis Stream Capacity with UpdateShardCount

Sense HAT Emulator Upgrade

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/sense-hat-emulator-upgrade/

Last year, we partnered with Trinket to develop a web-based emulator for the Sense HAT, the multipurpose add-on board for the Raspberry Pi. Today, we are proud to announce an exciting new upgrade to the emulator. We hope this will make it even easier for you to design amazing experiments with the Sense HAT!

What’s new?

The original release of the emulator didn’t fully support all of the Sense HAT features. Specifically, the movement sensors were not emulated. Thanks to funding from the UK Space Agency, we are delighted to announce that a new round of development has just been completed. From today, the movement sensors are fully supported. The emulator also comes with a shiny new 3D interface, Astro Pi skin mode, and Pygame event handling. Click the ▶︎ button below to see what’s new!

Upgraded sensors

On a physical Sense HAT, real sensors react to changes in environmental conditions like fluctuations in temperature or humidity. The emulator has sliders which are designed to simulate this. However, emulating the movement sensor is a bit more complicated. The upgrade introduces a 3D slider, which is essentially a model of the Sense HAT that you can move with your mouse. Moving the model affects the readings provided by the accelerometer, gyroscope, and magnetometer sensors.

Code written in this emulator is directly portable to a physical Raspberry Pi and Sense HAT without modification. This means you can now develop and test programs using the movement sensors from any internet-connected computer, anywhere in the world.
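
For example, a few lines written against the sense_hat library behave the same way in the emulator as on real hardware; only the source of the readings differs. This is a minimal sketch for trying out the movement sensors, not part of the emulator itself.

from time import sleep

from sense_hat import SenseHat

sense = SenseHat()

# Poll the movement sensors; in the emulator, drag the 3D model to change these values.
while True:
    orientation = sense.get_orientation()         # pitch, roll, yaw in degrees
    acceleration = sense.get_accelerometer_raw()  # x, y, z in Gs
    print("pitch {pitch:.1f} roll {roll:.1f} yaw {yaw:.1f}".format(**orientation))
    print("x {x:.2f} y {y:.2f} z {z:.2f}".format(**acceleration))
    sleep(0.5)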

Astro Pi mode

Astro Pi is our series of competitions offering students the chance to have their code run in space! The code is run on two space-hardened Raspberry Pi units, with attached Sense HATs, on the International Space Station.

Image of Astro Pi unit Sense HAT emulator upgrade

Astro Pi skin mode

There are a number of practical things that can catch you out when you are porting your Sense HAT code to an Astro Pi unit, though, such as the orientation of the screen and joystick. Just as having a 3D-printed Astro Pi case enables you to discover and overcome these, so does the Astro Pi skin mode in this emulator. In the bottom right-hand panel, there is an Astro Pi button which enables the mode: click it again to go back to the Sense HAT.

The joystick and push buttons are operated by pressing your keyboard keys: use the cursor keys and Enter for the joystick, and U, D, L, R, A, and B for the buttons.

Sense Hat resources for Code Clubs

Image of gallery of Code Club Sense HAT projects Sense HAT emulator upgrade

Click the image to visit the Code Club projects page

We also have a new range of Code Club resources which are based on the emulator. Of these, three use the environmental sensors and two use the movement sensors. The resources are an ideal way for any Code Club to get into physical computing.

The technology

The 3D models in the emulator are represented entirely with HTML and CSS. “This project pushed the Trinket team, and the 3D web, to its limit,” says Elliott Hauser, CEO of Trinket. “Our first step was to test whether pure 3D HTML/CSS was feasible, using Julian Garnier’s Tridiv.”

Sense HAT 3D image mockup Sense HAT emulator upgrade

The Trinket team’s preliminary 3D model of the Sense HAT

“We added JavaScript rotation logic and the proof of concept worked!” Elliott continues. “Countless iterations, SVG textures, and pixel-pushing tweaks later, the finished emulator is far more than the sum of its parts.”

Sense HAT emulator 3d image final version Sense HAT emulator upgrade

The finished Sense HAT model: doesn’t it look amazing?

Check out this blog post from Trinket for more on the technology and mathematics behind the models.

One of the compromises we’ve had to make is browser support. Unfortunately, browsers like Firefox and Microsoft Edge don’t fully support this technology yet. Instead, we recommend that you use Chrome, Safari, or Opera to access the emulator.

Where do I start?

If you’re new to the Sense HAT, you can simply copy and paste many of the code examples from our educational resources, like this one. Alternatively, you can check out our Sense HAT Essentials e-book. For a complete list of all the functions you can use, have a look at the Sense HAT API reference here.

The post Sense HAT Emulator Upgrade appeared first on Raspberry Pi.

Tableau: a generative music album based on the Sense HAT

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tableau-generative-music-album/

Multi-talented maker Giorgio Sancristoforo has used a Raspberry Pi and Sense HAT to create Tableau, a generative music album. It’s an innovative idea: the music constantly evolves as it reacts to environmental stimuli like atmospheric pressure, humidity, and temperature.

Tableau Generative Album

“There is no doubt that, as music is removed by the phonograph record from the realm of live production and from the imperative of artistic activity and becomes petrified, it absorbs into itself, in this process of petrification, the very life that would otherwise vanish.”

Creating generative music

“I’ve been dreaming about using portable microcomputers to create a generative music album,” explains Giorgio. “Now my dream is finally a reality: this is my first portable generative LP (PGLP)”. Tableau uses both a Raspberry Pi 2 and a Sense HAT: the HAT provides the data for the album’s musical evolution via its range of onboard sensors.
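
Giorgio hasn't published his code, so the following is only a sketch of the general idea: sample the Sense HAT's environmental sensors and map the readings onto made-up musical parameters (tempo, density, brightness) that a generative engine could consume.

    # A sketch of the idea only, not Giorgio's actual software.
    from sense_hat import SenseHat

    sense = SenseHat()

    def musical_parameters():
        """Map raw sensor readings onto hypothetical musical controls."""
        pressure = sense.get_pressure()        # hPa, typically ~950-1050
        humidity = sense.get_humidity()        # percent
        temperature = sense.get_temperature()  # degrees Celsius
        tempo = 60 + (pressure - 950)                      # beats per minute
        density = min(1.0, max(0.0, humidity / 100.0))     # how many layers play
        brightness = min(1.0, max(0.0, (temperature - 10) / 25))
        return tempo, density, brightness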

Image of Tableau generative music device with Sense HAT illuminated

Photo credit: Giorgio Sancristoforo

The Sense HAT was originally designed for use aboard the International Space Station (ISS) as part of the ongoing Astro Pi challenge. It has, however, become a staple within the Raspberry Pi maker community. This is partly thanks to the myriad of possibilities offered by its five onboard sensors, five-button joystick, and 8 × 8 LED matrix.

Limited edition

The final release of Tableau consists of a limited edition of fifty PGLPs: each is set up to begin playing as soon as power is connected, and the music will continue to evolve indefinitely. “Instead of being reproduced as on a CD or in an MP3 file, the music is spontaneously generated and arranged while you are listening to it,” Giorgio explains on his website. “It never sounds the same. Tableau creates an almost endless number of mixes of the LP (4 × 12 factorial). Each time you will listen, the music will be different, and it will keep on evolving until you switch the power off.”
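
Assuming “4 × 12 factorial” means four variations of a twelve-piece arrangement, the arithmetic works out to roughly 1.9 billion distinct mixes:

    # Checking the "4 x 12 factorial" figure quoted above.
    import math

    print(4 * math.factorial(12))   # 1916006400 possible mixes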

Experiment with the Sense HAT

What really interests us is how the sound of Tableau might alter in different locations. Would it sound different in Cambridge as opposed to the deserts of Mexico? What about Antarctica versus the ISS?

If Giorgio’s project has piqued your interest, why not try using our free data logging resource for the Sense HAT? You can use it to collect information from the HAT’s onboard sensors and create your own projects. How about collecting data over a year, and transforming this into your own works of art?
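
If you want to see the shape of such a logger before diving into the resource, a minimal sketch (the file name and one-minute interval are arbitrary choices) simply appends timestamped readings to a CSV file:

    # Append one row of timestamped sensor readings per minute to a CSV file.
    import csv
    import time
    from datetime import datetime

    from sense_hat import SenseHat   # or sense_emu on the desktop emulator

    sense = SenseHat()

    with open("sense_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            writer.writerow([
                datetime.now().isoformat(),
                round(sense.get_temperature(), 2),
                round(sense.get_humidity(), 2),
                round(sense.get_pressure(), 2),
            ])
            f.flush()
            time.sleep(60)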

Even if you don’t have access to the Sense HAT, you can experience it via the Sense HAT desktop emulator. This is a great solution if you want to work on Sense HAT-based projects in the classroom, as it reduces the amount of hardware you need.

If you’ve already built a project using the Sense HAT, make sure to share it in the comments below. We would love to see what you have been making!

 

The post Tableau: a generative music album based on the Sense HAT appeared first on Raspberry Pi.

A Day in the Life of a Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/day-life-datacenter-part/

Editor’s note: We’ve reposted this very popular 2016 blog entry because we often get questions about how Backblaze stores data, and we wanted to give you a look inside!

A data center is part of the “cloud”; as in cloud backup, cloud storage, cloud computing, and so on. It is often where your data goes or goes through, once it leaves your home, office, mobile phone, tablet, etc. While many of you have never been inside a data center, chances are you’ve seen one. Cleverly disguised to fit in, data centers are often nondescript buildings with few if any windows and little if any signage. They can be easy to miss. There are exceptions of course, but most data centers are happy to go completely unnoticed.

We’re going to take a look at a typical day in the life of a data center.

Getting Inside A Data Center

As you approach a data center, you’ll notice there isn’t much to notice. There’s no “here’s my datacenter” signage, and the parking lot is nearly empty. You might wonder, “is this the right place?” While larger, more prominent data centers will have armed guards and gates, most data centers have a call-box outside of a locked door. In either case, data centers don’t like drop-in visitors, so unless you’ve already made prior arrangements, you’re going to be turned away. In short, regardless of whether it is a call-box or an armed guard, a primary line of defense is to know everyone whom you let in the door.

Once inside the building, you’re still a long way from being in the real data center. You’ll start by presenting the proper identification to the guard and filling out some paperwork. Depending on the facility and your level of access, you may have to provide a fingerprint for biometric access/exit confirmation. Eventually, you get a badge or other form of visual identification that shows your level of access. For example, you could have free range of the place (highly doubtful), or be allowed in certain defined areas (doubtful), or need an escort wherever you go (likely). For this post, we’ll give you access to the Backblaze areas in the data center, accompanied of course.

We’re ready to go inside, so attach your badge with your picture on it, get your finger ready to be scanned, and remember to smile for the cameras as you pass through the “box.” While not the only method, the “box” is a widely used security technique that allows one person at a time to pass through a room where they are recorded on video and visually approved before they can leave. Speaking of being on camera, by the time you get to this point, you will have passed dozens of cameras – hidden, visible, behind one-way glass, and so on.

Once past the “box,” you’re in the data center, right? Probably not. Data centers can be divided into areas or blocks each with different access codes and doors. Once out of the box, you still might only be able to access the snack room and the bathrooms. These “rooms” are always located outside of the data center floor. Let’s step inside, “badge in please.”

Inside the Data Center

While every data center is different, there are three things that most people find common in their experience; how clean it is, the noise level and the temperature.

Data Centers are Clean

From the moment you walk into a typical data center, you’ll notice that it is clean. While most data centers are not cleanrooms by definition, they do ensure the environment is suitable for the equipment housed there.

Data center Entry Mats

Cleanliness starts at the door. Mats like this one capture the dirt from the bottom of your shoes. These mats get replaced regularly. As you look around, you might notice that there are no trashcans on the data center floor. As a consequence, the data center staff follows the “whatever you bring in, you bring out” philosophy, sort of like hiking in the woods. Most data centers won’t allow food or drink on the data center floor, either. Instead, one has to leave the datacenter floor to have a snack or use the restroom.

Besides being visually clean, the air in a data center is also amazingly clean: Filtration systems filter particulates to the sub-micron level. Data center filters have a 99.97% (or higher) efficiency rating in removing 0.3-micron particles. In comparison, your typical home filter provides a 70% sub-micron efficiency level. That might explain the dust bunnies behind your gaming tower.

Data Centers Are Noisy

Data center Noise Levels

The decibel level in a given data center can vary considerably. As you can see, the Backblaze datacenter is between 76 and 78 decibels. This is the level when you are near the racks of Storage Pods. How loud is 78dB? Normal conversation is 60dB, a barking dog is 70dB, and a screaming child is only 80dB. In the US, OSHA has established 85dB as the lower threshold for potential noise damage. Still, 78dB is loud enough that we insist our data center staff wear ear protection on the floor. Their favorite earphones are Bose’s noise reduction models. They are a bit costly but well worth it.

The noise comes from a combination of the systems needed to operate the data center: air filtration, heating, cooling, electrical, and other systems, plus the 6,000 spinning 3-inch fans in the Storage Pods.

Data Centers Are Hot and Cold

As noted, part of the noise comes from heating and air-conditioning systems, mostly air-conditioning. As you walk through the racks and racks of equipment in many data centers, you’ll alternate between warm aisles and cold aisles. In a typical raised floor data center, cold air rises from vents in the floor in front of each rack. Fans inside the servers in the racks, in our case Storage Pods, pull the air in from the cold aisle and through the server. By the time the air reaches the other side, the warm aisle, it is warmer and is sucked away by vents in the ceiling or above the racks.

There was a time when data centers were like meat lockers with some kept as cold as 55°F (12.8°C). Warmer heads prevailed, and over the years the average temperature has risen to over 80°F (26.7°C) with some companies pushing that even higher. That works for us, but in our case, we are more interested in the temperature inside our Storage Pods and more precisely the hard drives within. Previously we looked at the correlation between hard disk temperature and failure rate. The conclusion: As long as you run drives well within their allowed range of operating temperatures, there is no problem operating a data center at 80°F (26.7°C) or even higher. As for the employees, if they get hot they can always work in the cold aisle for a while and vice-versa.

Getting Out of a Data Center

When you’re finished visiting the data center, remember to leave yourself a few extra minutes to get out. The first challenge is to find your way back to the entrance. If an escort accompanies you, there’s no issue, but if you’re on your own, I hope you paid attention to the way inside. It’s amazing how all the walls and doors look alike as you’re wandering around looking for the exit, and with data centers getting larger and larger the task won’t get any easier. For example, the Switch SUPERNAP datacenter complex in Reno, Nevada will be over 6.4 million square feet, roughly the size of the Pentagon. Having worked in the Pentagon, I can say that finding your way around a facility that large can be daunting. Of course, a friendly security guard is likely to show up to help if you get lost or curious.

On your way back out you’ll pass through the “box” once again for your exit cameo. Also, if you are trying to leave with more than you came in with you will need a fair bit of paperwork before you can turn in your credentials and exit the building. Don’t forget to wave at the cameras in the parking lot.

The post A Day in the Life of a Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Zelda-inspired ocarina-controlled home automation

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zelda-home-automation/

Allen Pan has wired up his home automation system to be controlled by memorable tunes from the classic Zelda franchise.

Zelda Ocarina Controlled Home Automation – Zelda: Ocarina of Time | Sufficiently Advanced

With Zelda: Breath of the Wild out on the Nintendo Switch, I made a home automation system based off the Zelda series using the ocarina from The Legend of Zelda: Ocarina of Time.

Listen!

Released in 1998, The Legend of Zelda: Ocarina of Time is still an iconic entry in the retro gaming history books (and, many would argue, simply the best game ever).

Very few games have stuck with me in the same way Ocarina has, and I think it’s fair to say that, with the continued success of the Zelda franchise, I’m not the only one who has a special place in their heart for Link, particularly in this musical outing.

Legend of Zelda: Ocarina of Time screenshot

Thanks to Cynosure Gaming‘s Ocarina of Time review for the image.

Allen, or Sufficiently Advanced, as his YouTube subscribers know him, has used a Raspberry Pi to detect and recognise key tunes from the game, with each tune being linked (geddit?) to a specific task. By playing Zelda’s Lullaby (E, G, D, E, G, D), for instance, Allen can lock or unlock the door to his house. Other tunes have different functions: Epona’s Song unlocks the car (for Ocarina noobs, Epona is Link’s horse sidekick throughout most of the game), and Minuet of Forest waters the plants.

So how does it work?

It’s a fairly simple setup based around note recognition. When certain notes are played in a specific sequence, the Raspberry Pi detects the tune via a microphone within the Amazon Echo-inspired body of the build, and triggers the action related to the specific task. The small speaker you can see in the video plays a confirmation tune, again taken from the video game, to show that the task has been completed.
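
Allen's exact implementation isn't public, but the general technique can be sketched in a few lines of Python: estimate the dominant pitch of short audio frames with an FFT, convert it to a note name, and compare the most recent notes against known tunes. Everything below, including the song table, is illustrative rather than his code.

    # Sketch: FFT -> dominant frequency -> note name -> sequence matching.
    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    SONGS = {
        "zeldas_lullaby": ["E", "G", "D", "E", "G", "D"],   # notes given in the post
    }

    def dominant_note(frame, sample_rate=44100):
        """Return the note name of the strongest frequency in one audio frame."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freq = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)[np.argmax(spectrum)]
        if freq <= 0:
            return None
        midi = int(round(69 + 12 * np.log2(freq / 440.0)))   # A440 = MIDI note 69
        return NOTE_NAMES[midi % 12]

    def match_song(recent_notes):
        """Return the name of any song whose pattern ends the recent note list."""
        for name, pattern in SONGS.items():
            if recent_notes[-len(pattern):] == pattern:
                return name
        return None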

Image of the Raspberry Pi home automation system setup

As for the tasks themselves, Allen has built a small controller for each action, whether it be a piece of wood that presses down on his car key, a servomotor that adjusts the ambient temperature, or a water pump to hydrate his plants. Each controller has its own small ESP8266 wireless connectivity module that links back to the wireless-enabled Raspberry Pi, cutting down on the need for a ton of wires about the home.
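
The video doesn't spell out the protocol between the Pi and the ESP8266 modules, so treat the following as just one plausible pattern: each controller exposes a tiny HTTP endpoint on the home network, and the Pi fires a request when a tune matches. The addresses and paths here are invented.

    # Hypothetical pattern: the Pi posts to a small HTTP endpoint on each ESP8266.
    import requests

    CONTROLLERS = {
        "zeldas_lullaby":   "http://192.168.1.40/toggle-lock",    # front door
        "eponas_song":      "http://192.168.1.41/unlock-car",
        "minuet_of_forest": "http://192.168.1.42/water-plants",
    }

    def trigger(song_name):
        """Fire the action linked to a recognised song; return True on success."""
        url = CONTROLLERS.get(song_name)
        if url is None:
            return False
        try:
            return requests.post(url, timeout=2).ok
        except requests.RequestException:
            return False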

And yes, before anybody says it, we’re sure that Allen is aware that using tone recognition is not the safest means of locking and unlocking your home. This is just for fun.

Do-it-yourself home automation

While we don’t necessarily expect everyone to brush up on their ocarina skills and build their own Zelda-inspired home automation system, the idea of using something other than voice or text commands to control home appliances is a fun one.

You could use facial recognition at the door to start the kettle boiling, or the detection of certain gases to – ahem! – spray an air freshener.

We love to see what you all get up to with the Raspberry Pi. Have you built your own home automation system controlled by something other than your voice? Share it in the comments below.

 

The post Zelda-inspired ocarina-controlled home automation appeared first on Raspberry Pi.

NeoPixel Temperature Stair Lights

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/neopixel-temperature-stair-lights/

Following a post-Christmas decision to keep illuminated decorations on her stairway bannister throughout the year, Lorraine Underwood found a new purpose for a strip of NeoPixels she had lying around.

Lorraine Underwood on Twitter

Changed the stair lights from a string to a strip & they look awesome! #neopixel #raspberrypi https://t.co/dksLwy1SE1

Simply running the lights up the stairs, blinking and flashing to a random code, wasn’t enough for her. By using an API to check the outdoor weather, Lorraine’s lights went from decorative to informative: they now give an indication of outside weather conditions through their colour and the quantity illuminated.

“The idea is that more lights will light up as it gets warmer,” Lorraine explains. “The temperature is checked every five minutes (I think that may even be a little too often). I am looking forward to walking downstairs to a nice warm yellow light instead of the current blue!”

In total, Lorraine had 240 lights in the strip; she created a chart indicating a range of outside temperatures and the quantity of lights to illuminate for each value, as well as specifying the colour of those lights, running from chilly blue through to scorching red.
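
Lorraine's real code is on her blog; the sketch below only illustrates the mapping idea, with an assumed temperature range and a simple blue-to-red blend rather than her actual chart:

    # Map an outdoor temperature onto a count of lit LEDs and a colour.
    # The range and colours are assumptions, not Lorraine's chart.
    NUM_LEDS = 240
    T_MIN, T_MAX = -5.0, 30.0   # assumed outdoor range in degrees Celsius

    def leds_for_temperature(temp_c):
        """Return how many LEDs to light and an (R, G, B) colour for them."""
        frac = max(0.0, min(1.0, (temp_c - T_MIN) / (T_MAX - T_MIN)))
        count = int(round(frac * NUM_LEDS))
        colour = (int(255 * frac), 0, int(255 * (1 - frac)))   # blue when cold, red when hot
        return count, colour

    print(leds_for_temperature(16.0))   # a typical British summer day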

Lorraine Underwood Neopixel stair way lights

Oh, Lorraine! We love your optimistic dreams of the British summer being more than its usual rainy 16 Celsius…

The lights are controlled by a Raspberry Pi Zero running code that can be found on Lorraine’s blog. The code dictates which lights are lit and when.

“Do I need a coat today? I’ll check the stairs.”

Lorraine is planning some future additions to the build, including a toddler-proof 3D housing, powering the Zero from the lights’ power supply, and gathering her own temperature data instead of relying on a third-party API.

While gathering the temperature data from outside her house, she may also want to look into building an entire weather station, collecting extra data on rain, humidity, and wind conditions. After all, this is the UK: just because it’s hot outside, it doesn’t mean it’s not also raining.

The post NeoPixel Temperature Stair Lights appeared first on Raspberry Pi.

Being Smart

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/being-smart.html

Last weekend I set myself the task to write an ATA S.M.A.R.T. (i.e. hard
disk health monitoring) reader and parser. After spending some time reading
all kinds of T13 and T10 docs and a bit of hacking I now present
you the following new software:

  • libatasmart: a lean, small and clean implementation of an ATA S.M.A.R.T.
    reading and parsing library. It’s fairly comprehensive; however, I only support
    a subset of the full S.M.A.R.T. set of functions: those parts which made sense to
    me, not the esoteric stuff. Here’s the API and here’s the README.
  • skdump: a little tool that produces a similar output to smartctl -a, but uses libatasmart.
  • sktest: a little tool for starting/aborting S.M.A.R.T. self-tests, based on libatasmart
  • gnome-disk-health-service: a little wrapper around
    libatasmart that exports its entire functionality via D-Bus, so that
    unprivileged processes can introspect a drive’s health records, including
    temperature, number of bad sectors and suchlike. This is written in Vala, which
    BTW is awesome for doing D-Bus services. Actually after having done this once now I
    really hope I will never have to write a D-Bus server without Vala again. I
    also wrote a Vala .vapi file for libatasmart which is shipped in
    its tarball.
  • gnome-disk-health: a little tool that reads the S.M.A.R.T.
    data from g-d-h-s and presents it in a pretty dialog. Includes support for
    viewing attributes and starting self-tests and stuff. Also written with
    Vala.

Why? You might ask what the point of all this stuff is when
smartmontools already
exists. What I’d like to see on future GNOME desktops is that as soon as a
disk starts to fail a notification bubble pops up warning the user about this
fact, and suggesting that he makes backups and replaces the disk. For a tight
integration into the desktop, a S.M.A.R.T. implementation that is small, and not
C++, and a library (i.e. embeddable into other software with a sane interface)
is highly preferable. Also, stuff like distribution installers should link
against libatasmart to warn the user about old and defective disks
before he even starts the installation on them. (Hey, anaconda developers! That means you! It’s a tiny library, and all you need to do is a single call: int sk_disk_smart_status(SkDisk *d, SkBool *good);)
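
If you would rather poke at the library from a quick script than from C, here is a hedged ctypes sketch; it assumes the library is installed with the soname libatasmart.so.4, that SkBool can be treated as a plain int, and that the script has permission to read the raw device.

    # Hedged sketch: ask libatasmart for the overall S.M.A.R.T. verdict via ctypes.
    import ctypes

    lib = ctypes.CDLL("libatasmart.so.4")   # assumed soname

    disk = ctypes.c_void_p()                # opaque SkDisk* handle
    if lib.sk_disk_open(b"/dev/sda", ctypes.byref(disk)) < 0:
        raise OSError("could not open disk")

    lib.sk_disk_smart_read_data(disk)       # refresh the S.M.A.R.T. data
    good = ctypes.c_int(0)                  # SkBool treated as int here
    lib.sk_disk_smart_status(disk, ctypes.byref(good))
    print("disk looks healthy" if good.value else "disk is failing")

    lib.sk_disk_free(disk)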

Please note that I certainly don’t plan to replace smartmontools.
libatasmart will always implement only a subset of S.M.A.R.T. If you want
the full set of functionality then please refer to smartmontools.

Where’s this going? I plan to fully maintain libatasmart
(including skdump and sktest) for the future. However
g-d-h and g-d-h-s will probably just bitrot in my repository
— unless someone else wants to pick this up and maintain it. The reason my
further interest in those tools is rather limited is that for the long run we
will hopefully see davidz’s DeviceKit-disks (screenshot)
changed to use this library for health monitoring. Then DK-d will export the
S.M.A.R.T. info on the bus, and a separate daemon would not be necessary anymore.
DK-d provides a single interface for all kinds of health parameters for
storage, including RAID health and suchlike. I thus think this is the way
forward and not g-d-h-s. (That should, of course, not hinder anyone from stepping
up and taking up maintainership of g-d-h/g-d-h-s if he wants to. There might be good
reasons for doing so. Maybe because you need something to do, or because you
want a S.M.A.R.T. solution for the desktop now, and not wait until DeviceKit gets
pushed into all the distros).

So, here’s where you can get this stuff:

git://git.0pointer.de/libatasmart.git

git://git.0pointer.de/gnome-disk-health.git

Browse the GIT repos.

I will roll a 0.1 tarball of libatasmart soon. I’d be thankful if people could run
skdump on their disks and check if its output is basically the same as
smartctl -a’s. Especially people with BE machines.

Of course the most important part of a software announcement is always the screenshot:

Smart-Ass!

return -ETOOMANYDOTS;