
Pi 3B+: 48 hours later

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/3b-plus-aftermath/

Unless you’ve been AFK for the last two days, you’ll no doubt be aware of the release of the brand-spanking-new Raspberry Pi 3 Model B+. With faster connectivity, more computing power, Power over Ethernet (PoE) pins, and the same $35 price point, the new board has been a hit across all our social media accounts! So while we wind down from launch week, let’s all pull up a chair, make yet another cup of coffee, and look through some of our favourite reactions from the last 48 hours.

Twitter

Our Twitter mentions were refreshing at hyperspeed on Wednesday, as you all began to hear the news and spread the word about the newest member of the Raspberry Pi family.

Tanya Fish on Twitter

Happy Pi Day, people! New @Raspberry_Pi 3B+ is out.

News outlets, maker sites, and hobbyists published posts and articles about the new Pi’s spec upgrades and their plans for the device.

Hackster.io on Twitter

This sort of attention to detail work is exactly what I love about being involved with @Raspberry_Pi. “We’re squeezing the last drops of performance out of the 40nm process node, and perfecting Pi 3 in the same way that the original B+ perfected Pi 1.” https://t.co/hEj7JZOGeZ

And I think we counted about 150 uses of this GIF on Twitter alone:

YouTube

Andy Warburton 👾 on Twitter

Is something going on with the @Raspberry_Pi today? You’d never guess from my YouTube subscriptions page… 😀

A few members of our community were lucky enough to get their hands on a 3B+ early, and sat eagerly by the YouTube publish button, waiting to release their impressions of our new board to the world. Others, with no new Pi in hand yet, posted reaction vids to the launch, discussing their plans for the upgraded Pi and comparing statistics against its predecessors.

New Raspberry Pi 3 B+ (2018) Review and Speed Tests

Happy Pi Day World! There is a new Raspberry Pi 3, the B+! In this video I will review the new Pi 3 B+ and do some speed tests. Let me know in the comments if you are getting one and what you are planning on making with it!

Long-standing community members such as The Raspberry Pi Guy, Alex “RasPi.TV” Eames, and Michael Horne joined Adafruit, element14, and RS Components (whose team produced the most epic 3B+ video we’ve seen so far), and makers Tinkernut and Estefannie Explains It All in sharing their thoughts, performance tests, and baked goods on the big day.

What’s new on the Raspberry Pi 3 B+

It’s Pi Day! Sorry, wondrous mathematical constant, this day is no longer about you. The Raspberry Pi Foundation just released a new version of the Raspberry Pi called the Raspberry Pi 3 B+.

If you have a YouTube or Vimeo channel, or if you create videos for other social media channels, and have published your impressions of the new Raspberry Pi, be sure to share a link with us so we can see what you think!

Instagram

We shared a few photos and videos on Instagram, and over 30,000 of you checked out our Instagram Story on the day.

Some glamour shots of the latest member of the #RaspberryPi family – the Raspberry Pi 3 Model B+ . Will you be getting one? What are your plans for our newest Pi?


As hot-off-the-press (out of the oven? out of the solder bath?) Pi 3B+ boards start to make their way to eager makers’ homes, their new owners are broadcasting their excitement, and we love seeing what they plan to get up to with them.

The new #raspberrypi 3B+ suits the industrial setting. Check out my website for #RPI3B Vs RPI3BPlus network speed test. #NotEnoughTECH #network #test #internet


The new Raspberry Pi 3 Model B+ is here and will be used for our Python staging server for our APIs #raspberrypi #pythoncode #googleadwords #shopify #datalayer


In the news

Eben made an appearance on ITV Anglia on Wednesday, talking live on Facebook about the new Raspberry Pi.

ITV Anglia

As the latest version of the Raspberry Pi computer is launched in Cambridge, Dr Eben Upton talks about the inspiration of Professor Stephen Hawking and his legacy to science. Add your questions in…

He was also fortunate enough to spend the morning with some Sixth Form students from the local area.

Sascha Williams on Twitter

On a day where science is making the headlines, lovely to see the scientists of the future in our office – getting tips from fab @Raspberry_Pi founder @EbenUpton #scientists #RaspberryPi #PiDay2018 @sirissac6thform

Principal Hardware Engineer Roger Thornton will also make a live appearance online this week: he is co-hosting Hack Chat later today. And of course, you can see more of Roger and Eben in the video where they discuss the new 3B+.

Introducing the Raspberry Pi 3 Model B+

Raspberry Pi 3 Model B+ is on sale now for $35.

It’s been a supremely busy week here at Pi Towers and across the globe in the offices of our Approved Resellers, and seeing your wonderful comments and sharing in your excitement has made it all worth it. Please keep it up, and be sure to share the arrival of your 3B+ as well as the projects into which you’ll be integrating it.

If you’d like to order a Raspberry Pi 3 Model B+, you can do so via our product page. And if you have any questions at all regarding the 3B+, the conversation is still taking place in the comments of Wednesday’s launch post, so head on over.

The post Pi 3B+: 48 hours later appeared first on Raspberry Pi.

The Challenges of Opening a Data Center — Part 1

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/choosing-data-center/

Backblaze storage pod in new data center

This is part one of a series. The second part will be posted later this week. Use the Join button above to receive notification of future posts in this series.

Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud we are interacting with a data center.

In this series, The Challenges of Opening a Data Center, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom).

Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in this series, but we hope our brief overview proves useful.

What is a Data Center?

A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data.

While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.

Depending on how you define the term, there are anywhere from half a million to many millions of data centers in the world. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term data center to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.

A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.

Types of Data Centers

There are a number of ways to classify data centers: by how they will be used; by whether they are owned or used by one or multiple organizations; by whether and how they fit into a topology of other data centers; by which technologies and management approaches they use for computing, storage, cooling, power, and operations; and, increasingly visible these days, by how green they are.

Data centers can be loosely classified into three types according to who owns them and who uses them.

Exclusive Data Centers are facilities wholly built, maintained, operated and managed by the business for the optimal operation of its IT equipment. Some of these centers belong to well-known companies such as Facebook, Google, or Microsoft, while others belong to less public-facing big telecoms, insurance companies, or other service providers.

Managed Hosting Providers are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, the business rents the IT equipment and infrastructure it needs instead of investing in the outright purchase of what it needs.

Colocation Data Centers are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator.

Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.

Availability is Key

When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly failover (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.

Organizations that can’t afford any downtime at all will typically operate data centers with a mirrored site that can take over if something happens to the first site, or they operate a second site in parallel to the first one. These data center topologies are called Active/Passive and Active/Active, respectively. Should disaster or an outage occur, disaster mode would dictate immediately moving all of the primary data center’s processing to the second data center.

While some data center topologies are spread throughout a single country or continent, others extend around the world. In practice, data transmission speeds put a cap on how far apart centers can be while still operating in parallel with the appearance of simultaneous operation. Linking two data centers located relatively close to each other — say no more than 60 miles apart, to limit data latency issues — with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed.

This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.

Active/Passive Data Centers

Active/Active Data Centers

LEED Certification

Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design.

Green certification has become increasingly important in data center design and operation, as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more palatable to environmentally conscious communities.

The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.

ACT Data Center exterior


ACT Data Center interior


Factors to Consider When Selecting a Data Center

There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.

The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment takes the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers (i.e. one “U” or “RU” rack unit, measuring 44.50 millimeters or 1.75 inches), to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.

Location

Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.

With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate.

Websites listing available colocation space, such as upstack.io, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.

Business and Customer Proximity

The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.

Climate

Local climate is a major factor in data center design because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can total as much as 50% or more of a center’s power costs. The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.

Geographic Stability and Extreme Weather Events

A major obvious factor in locating a data center is the stability of the actual site as regards weather, seismic activity, and the likelihood of weather events such as hurricanes, as well as fire or flooding.

Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains.

Sacramento Data Center

Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.


Equinix “NAP of the Americas” Data Center in Miami

Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.

Bahnhof Data Center under White Mountain in Stockholm

Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location. The Ideal quadrant below is obviously favored when making this compromise.

Cost vs Risk in selecting a data center

Cost = Construction/lease, power, bandwidth, cooling, labor, taxes
Risk = Environmental (seismic, weather, water, fire), political, economic

Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations must make note of the data center’s certification level on the basis of regulatory requirements in the industry. These certifications can ensure that an organization is meeting necessary compliance requirements.

Power

Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size, and the rate concessions, if any, offered by the utility. At higher availability tiers, batteries, generators, and redundant power grids are a required part of the picture.

Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation. Parallel redundancy is a safeguard to ensure that an uninterruptible power supply (UPS) system is in place to provide electrical power if necessary. The UPS system can be based on batteries, stored kinetic energy, or some type of generator using diesel or another fuel. The center will operate on one UPS system, with another UPS system or generator acting as backup. If a power outage occurs, the backup system is available to take over.

Many data centers require the use of independent power grids, with service provided by different utility companies or services, to protect against loss of electrical service no matter what the cause. Some data centers have intentionally located themselves near national borders so that they can obtain redundant power not just from separate grids, but from separate geopolitical sources.

Higher redundancy levels required by a company will invariably lead to higher prices. If one requires high availability backed by a service-level agreement (SLA), one can expect to pay more than another company with less demanding redundancy requirements.

Stay Tuned for Part 2 of The Challenges of Opening a Data Center

That’s it for part 1 of this post. In subsequent posts, we’ll take a look at some other factors to consider when moving into a data center such as network bandwidth, cooling, and security. We’ll take a look at what is involved in moving into a new data center (including stories from Backblaze’s experiences). We’ll also investigate what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. You can discover all posts on our blog tagged with “Data Center” by following the link https://www.backblaze.com/blog/tag/data-center/.

The second part of this series on The Challenges of Opening a Data Center will be posted later this week. Use the Join button above to receive notification of future posts in this series.

The post The Challenges of Opening a Data Center — Part 1 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backing Up Linux to Backblaze B2 with Duplicity and Restic

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

In this post, we’ll discuss two popular command line and open-source programs: one older, Duplicity, and a new player, Restic.

Old School vs. New School

We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Old School (Duplicity)

In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you undoubtedly are familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.
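
As a quick, minimal illustration of that append-only model using GNU tar (the archive name and paths below are placeholders), appending to an archive never rewrites what is already there, so superseded copies keep taking up space until the archive is recreated:

# Create an archive of a directory (paths are placeholders)
tar -cvf backup.tar ~/documents

# Later, append a new or changed file; the earlier copy stays in the
# archive, and its space is not reclaimed until the archive is recreated
tar -rvf backup.tar ~/documents/new-report.txt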

It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each incremental backup then contains only what has changed since the previous backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.

Duplicity is available on many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions, including Ubuntu, Debian, and Fedora. It can also be used with Windows under Cygwin.

We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.
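
As a minimal sketch of what that looks like in practice (the key ID, application key, bucket, folder, and paths below are placeholders; the KB article has the authoritative steps):

export PASSPHRASE="a-strong-gpg-passphrase"   # used by Duplicity to encrypt the backup volumes

# The first run produces a full backup; subsequent runs are incremental
duplicity ~/documents "b2://keyID:applicationKey@my-bucket/laptop"

# Force a new full backup to start a fresh chain
duplicity full ~/documents "b2://keyID:applicationKey@my-bucket/laptop"

# Restore the most recent backup to another directory
duplicity restore "b2://keyID:applicationKey@my-bucket/laptop" ~/restored-documents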

New School (Restic)

With the arrival of non-sequential storage media, such as disk drives, and new ideas such as deduplication, comes the new school approach, which is used by Restic. Data can be written and changed anywhere on the storage medium. This efficiency comes largely through the use of deduplication, a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, greatly increasing storage efficiency and flexibility.

Restic is a recently released, multi-platform, command-line backup program designed to be fast, efficient, and secure. Restic supports a variety of backends for storing backups, including a local server, SFTP server, HTTP REST server, and a number of cloud storage providers, including Backblaze B2.

Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up. On each backup run, a snapshot is created enabling restores to a specific date or time.

Restic assumes that the storage location for a repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security provided by the storage provider.

Restic is free, open-source software, licensed under the BSD 2-Clause License and actively developed on GitHub.

There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

Coincidentally with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.
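
For a sense of the Restic workflow (again, the credentials, bucket name, and repository path below are placeholders; see the KB article for the full walkthrough):

export B2_ACCOUNT_ID="keyID"
export B2_ACCOUNT_KEY="applicationKey"
export RESTIC_REPOSITORY="b2:my-bucket:linux-backup"
export RESTIC_PASSWORD="a-strong-repository-password"

restic init                                   # create the encrypted repository (first run only)
restic backup ~/                              # back up the home directory; only changed data is uploaded
restic snapshots                              # list snapshots, one per backup run
restic restore latest --target /tmp/restore   # restore the most recent snapshot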

Which is Right for You?

While Duplicity is a popular, widely-available, and useful program, many users of cloud storage solutions such as B2 are moving to new-school solutions like Restic that take better advantage of the non-sequential access capabilities and speed of modern storage media used by cloud storage providers.

Tell us how you’re backing up Linux

Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.

The post Backing Up Linux to Backblaze B2 with Duplicity and Restic appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Thomas and Ed become a RealLifeDoodle on the ISS

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/astro-pi-reallifedoodle/

Thanks to the very talented sooperdavid, creator of some of the wonderful animations known as RealLifeDoodles, Thomas Pesquet and Astro Pi Ed have been turned into one of the cutest videos on the internet.

space pi – Create, Discover and Share Awesome GIFs on Gfycat


And RealLifeDoodles aaaaare?

Thanks to the power of viral video, many will be aware of the ongoing Real Life Doodle phenomenon. Wait, you’re not aware?

Oh. Well, let me explain it to you.

Taking often comical video clips, those with know-how and skill levels that outweigh my own in spades add faces and emotions to inanimate objects, creating what the social media world refers to as a Real Life Doodle. From disappointed exercise balls to cannibalistic piles of leaves, these video clips are cute and sometimes, though thankfully not always, a little heartbreaking.

letmegofree – Create, Discover and Share Awesome GIFs on Gfycat


Our own RealLifeDoodle

A few months back, when Programme Manager Dave Honess, better known to many as SpaceDave, sent me these Astro Pi videos to upload to YouTube, a small plan hatched in my brain. For in the midst of the video, pointed out to me by SpaceDave – “I kind of love the way he just lets the unit drop out of shot” – was the most adorable sight: poor Ed drifting off into the great unknown of the ISS. Finding that I have this odd ability to consider many inanimate objects ‘cute’, I wanted to see whether we could turn poor Ed into a RealLifeDoodle.

Heading to the Reddit RealLifeDoodle subreddit, I sent moderator sooperdavid a private message, asking if he’d be so kind as to bring our beloved Ed to life.

Yesterday, our dream came true!

Astro Pi

Unless you’re new to the world of the Raspberry Pi blog (in which case, welcome!), you’ll probably know about the Astro Pi Challenge. But for those who are unaware, let me break it down for you.

Raspberry Pi RealLifeDoodle

In 2015, two weeks before British ESA Astronaut Tim Peake journeyed to the International Space Station, two Raspberry Pis were sent up to await his arrival. Clad in 6063-grade aluminium flight cases and fitted with their own Sense HATs and camera modules, the Astro Pis Ed and Izzy were ready to receive the winning codes from school children in the UK. The following year, this time maintained by French ESA Astronaut Thomas Pesquet, children from every ESA member country got involved to send even more code to the ISS.

Get involved

Will there be another Astro Pi Challenge? Well, I just asked SpaceDave and he didn’t say no! So why not get yourself into training now and try out some of our space-themed free resources, including our 3D-print your own Astro Pi case tutorial? You can also follow the adventures of Ed and Izzy in our brilliant Story of Astro Pi cartoons.

Raspberry Pi RealLifeDoodle

And if you’re quick, there’s still time to take part in tomorrow’s Moonhack! Check out their website for more information and help the team at Code Club Australia beat their own world record!

The post Thomas and Ed become a RealLifeDoodle on the ISS appeared first on Raspberry Pi.

Pimoroni is 5 now!

Post Syndicated from guru original https://www.raspberrypi.org/blog/pimoroni-is-5-now/

Long read written by Pimoroni’s Paul Beech, best enjoyed over a cup o’ grog.

Every couple of years, I’ve done a “State of the Fleet” update here on the Raspberry Pi blog to tell everyone how the Sheffield Pirates are doing. Half a decade has gone by in a blink, but reading back over the previous posts shows that a lot has happened in that time!

TL;DR We’re an increasingly medium-sized design/manufacturing/e-commerce business with workshops in Sheffield, UK, and Essen, Germany, and we employ almost 40 people. We’re totally lovely. Thanks for supporting us!

 

We’ve come a long way, baby

I’m sitting looking out the window at Sheffield-on-Sea and feeling pretty lucky about how things are going. In the morning, I’ll be flying east for Maker Faire Tokyo with Niko (more on him later), and to say hi to some amazing people in Shenzhen (and to visit Huaqiangbei, of course). This is after I’ve already visited this year’s Maker Faires in New York, San Francisco, and Berlin.

Pimoroni started out small, but we’ve grown like weeds, and we’re steadily sauntering towards becoming a medium-sized business. That’s thanks to fantastic support from the people who buy our stuff and spread the word. In return, we try to be nice, friendly, and human in everything we do, and to make exciting things, ideally with our own hands here in Sheffield.

Pimoroni soldering

Handmade with love

We’ve made it onto a few ‘fastest-growing’ lists, and we’re in the top 500 of the Inc. 5000 Europe list. Adafruit did it first a few years back, and we’ve never gone wrong when we’ve followed in their footsteps.

The slightly weird nature of Pimoroni means we get listed as either a manufacturing or e-commerce business. In reality, we’re about four or five companies in one shell, which is very much against the conventions of “how business is done”. However, having seen what Adafruit, SparkFun, and Seeed do, we’re more than happy to design, manufacture, and sell our stuff in-house, as well as stocking the best stuff from across the maker community.

Pimoroni stocks

Product and process

The whole process of expansion has not been without its growing pains. We’re just under 40 people strong now, and have an outpost in Germany (also hilariously far from the sea for piratical activities). This means we’ve had to change things quickly to improve and automate processes, so that the wheels won’t fall off as things get bigger. Process optimization is incredibly interesting to a geek, especially making sure that things are done well, that mistakes are easy to spot and to fix, and that nothing is missed.

At the end of 2015, we had a step change in how busy we were, and our post room and support started to suffer. As a consequence, we implemented measures to become more efficient, including small but important things like checking in parcels with a barcode scanner attached to a Raspberry Pi. That Pi has been happily running on the same SD card for a couple of years now without problems 😀

Pimoroni post room

Going postal?

We also hired a full-time support ninja, Matt, to keep the experience of getting stuff from us light and breezy, and to ensure that any problems are sorted. He’s already had a hugely positive impact by making the emails and replies you see more friendly. Of course, he’s also started using the laser cutters for tinkering projects. It’d be a shame to work at Pimoroni and not get to use all the wonderful toys, right?

Employing all the people

You can see some of the motley crew we employ here and there on the Pimoroni website. And if you drop by at the Raspberry Pi Birthday Party, Pi Wars, Maker Faires, Deer Shed Festival, or New Scientist Live in September, you’ll be seeing new Pimoroni faces as we start to engage with people more about what we do. On top of that, we’re starting to make proper videos (like Sandy’s soldering guide), as opposed to the 101 episodes of Bilge Tank we recorded in a rather off-the-cuff and haphazard fashion. Although that’s the beauty of Bilge Tank, right?

Pimoroni soldering

Such soldering setup

As Emma, Sandy, Lydia, and Tanya gel as a super creative team, we’re starting to create more formal educational resources, and to make kits that are suitable for a wider audience. Things like our Pi Zero W kits are products of their talents.

Emma is our new Head of Marketing. She’s really ‘The Only Marketing Person Who Would Ever Fit In At Pimoroni’, having been a core part of the Sheffield maker scene since we hung around with one Ben Nuttall, in the dark days before Raspberry Pi was a thing.

Through a series of fortunate coincidences, Niko and his equally talented wife Mena were there when we cut the first Pibow in 2012. They immediately pitched in to help us buy our second laser cutter so we could keep up with demand. They have been supporting Pimoroni with sourcing in East Asia, and now Niko has become a member of the Pirates’ Council and the Head of Engineering as we’re increasing the sophistication and scale of the things we do. The Unicorn HAT HD is one of his masterpieces.

Pimoroni devices

ALL the HATs!

We see ourselves as a wonderful island of misfit toys, and it feels good to have the best toy shop ever, and to support so many lovely people. Business is about more than just profits.

Where do we go to, me hearties?

So what are our plans? At the moment we’re still working absolutely flat-out as demand from wholesalers, retailers, and customers increases. We thought Raspberry Pi was big, but it turns out it’s just getting started. Near the end of 2016, it seemed to reach a whole new level of popularity, and still we continue to meet people to whom we have to explain what a Pi is. It’s a good problem to have.

We need a bigger space, but it’s been hard to find somewhere suitable in Sheffield that won’t mean we’re stuck on an industrial estate miles from civilisation. That would be bad for the crew: we like having world-class burritos on our doorstep.

The good news is, it looks like our search is at an end! Just in time for the arrival of our ‘Super-Turbo-Death-Star’ new production line, which will enable us to make devices in a bigger, better, faster, more ‘Now now now!’ fashion \o/

Pimoroni warehouse

Spacious, but not spacious enough!

We’ve got lots of treasure in the pipeline, but we want to pick up the pace of development even more and create many new HATs, pHATs, and SHIMs, e.g. for environmental sensing and audio applications. Picade will also be getting some love to make it slicker and more hackable.

We’re also starting to flirt with adding more engineering and production capabilities in-house. The plan is to try our hand at anodising, powder-coating, and maybe even injection-moulding if we get the space and find the right machine. Learning how to do things is amazing, and we love having an idea and being able to bring it to life in almost no time at all.

Pimoroni production

This is where the magic happens

Fanks!

There are so many people involved in supporting our success, and some people we love for just existing and doing wonderful things that make us want to do better. The biggest shout-outs go to Liz, Eben, Gordon, James, all the Raspberry Pi crew, and Limor and pt from Adafruit, for being the most supportive guiding lights a young maker company could ever need.

A note from us

It is amazing for us to witness the growth of businesses within the Raspberry Pi ecosystem. Pimoroni is a wonderful example of an organisation that is creating opportunities for makers within its local community, and the company is helping to reinvigorate Sheffield as the heart of making in the UK.

If you’d like to take advantage of the great products built by the Pirates, Monkeys, Robots, and Ninjas of Sheffield, you should do it soon: Pimoroni are giving everyone 20% off their homemade tech until 6 August.

Pimoroni, from all of us here at Pi Towers (both in the UK and USA), have a wonderful birthday, and many a grog on us!

The post Pimoroni is 5 now! appeared first on Raspberry Pi.

NYC Train Sign: real-time train tracking in New York City

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/nyc-train-sign/

Raspberry Pis, blinking lights, and APIs – what’s not to love? It’s really not surprising that the NYC Train Sign caught our attention – and it doesn’t hurt that its creators’ Instagram game is 👌 on point.

NYC Train Sign


Another transport sign?

Yes, yes, I know. Janina wrote about a bus timetable display only the other day. But hear me out, I have a totally legitimate reason why we’re covering this project as well…

…it’s just a really pretty-looking build, alright?

Public transport: a brief explanation

If you’ve been to New York City, or indeed have visited any busy metropolis, you’ll probably have braved the dread conveyor belt of empty-eyed masses that is…dundunduuun…public transport. Whenever you use it, unless you manage to hit that off-peak sweet spot (somewhere between 14.30 and 14.34) where the flow of human traffic is minimal, you are exposed to a hellish amalgam of rushing bodies, yells to ‘hold the door’, and the general funk of tight-packed public situations. Delicious.

NYC Train Sign Raspberry Pi

To be fair, Kramer has bad train etiquette

As APIs for public transport websites are becoming increasingly common and user-friendly, we’re seeing a rise in the number of transport-related builds. From Dr Lucy Rogers’ #WhereIsMyBus 3D-printed London icon to the VästtraPi bus departure screen mentioned above, projects using these APIs allow us respite from the throng and save us from waiting for delayed buses at drab and dreary stations.

Lucy Rogers WhereIsMyBus Raspberry Pi

image c/o Dr Lucy Rogers

We’ve seen a lot of bus builds, but have we seen train builds yet? Anyone? I’ll check: ‘Train your rat’, ‘Picademy teacher training’, ‘How to train your…’ Nope, I think this is the first. Maybe I’m wrong though, in which case please let me know in the comments.

NYC Train Sign

Let me see if I can get this right: the NYC Train Sign-building team at NYC Train Sign has created a real-time NYC train sign using a Raspberry Pi, LED matrix, and locally 3D-printed parts at their base in Brooklyn, NYC (…train sign – shoot!)

NYC Train Sign Raspberry Pi

The NYC Train Sign…so so pretty

The team, headed by creator Timothy Wu, uses the official MTA server API to fetch real-time arrival, departure, and delay information to display on their signs. They also handcraft the signs to fit your specifications (click here to buy your own). How very artisanal!

Do the BART(man)

As a result of the success of the NYC Train Sign, the team is now experimenting with signs for other transport services, including the San Francisco BART, Chicago CTA, and Boston MBTA. APIs are also available for services in other cities around the world, for example London and Los Angeles. We could probably do with a display like this in our London office! In fact, if you commute on public transport and can find the right API, I think one of these devices would be perfect for your workplace no matter where it is.

Using APIs

Given our free resources for a Tweeting Babbage and a…location marker poo (?!), it’s clear that at the Raspberry Pi Foundation we’re huge fans of using APIs in digital making projects. Therefore, it’s really no surprise that we like sharing them as well! So if you’ve created a project using an API, we’d love to see it. Pop a link into the comments below, or tag us on social media.

Now back to their Instagram game

Honestly, their photos are so aesthetically pleasing that I’m becoming a little jealous.

making of real-time nyc mta signs with raspberry pi in bushwick . as seen @kcbcbeer @fathersbk @houdinikitchenlab @dreammachinecreative @hihellobk . 3d-printing @3dbrooklyn vectors @virilemonarch . . #nyc #mta #subwaysystem #nycsubway #subway #metro #nycsubway #train #subwaysigns #3dprinting #3dmodel #3dprinter #3dprinting #3dprints #3d #newyorkcity #manhattan #brooklyn #bushwick #bronx #raspberrypi #code #javascript #php #sql #python #subwayart #subwaygraffiti


The post NYC Train Sign: real-time train tracking in New York City appeared first on Raspberry Pi.

Under the Hood of Server-Side Encryption for Amazon Kinesis Streams

Post Syndicated from Damian Wylie original https://aws.amazon.com/blogs/big-data/under-the-hood-of-server-side-encryption-for-amazon-kinesis-streams/

Customers are using Amazon Kinesis Streams to ingest, process, and deliver data in real time from millions of devices or applications. Use cases for Kinesis Streams vary, but a few common ones include IoT data ingestion and analytics, log processing, clickstream analytics, and enterprise data bus architectures.

Within milliseconds of data arrival, applications (KCL, Apache Spark, AWS Lambda, Amazon Kinesis Analytics) attached to a stream are continuously mining value or delivering data to downstream destinations. Customers are then scaling their streams elastically to match demand. They pay incrementally for the resources that they need, while taking advantage of a fully managed, serverless streaming data service that allows them to focus on adding value closer to their customers.

These benefits are great; however, AWS learned that many customers could not take advantage of Kinesis Streams unless their data-at-rest within a stream was encrypted. Many customers did not want to manage encryption on their own, so they asked for a fully managed, automatic, server-side encryption mechanism leveraging centralized AWS Key Management Service (AWS KMS) customer master keys (CMK).

Motivated by this feedback, AWS added another fully managed, low cost aspect to Kinesis Streams by delivering server-side encryption via KMS managed encryption keys (SSE-KMS) in the following regions:

  • US East (N. Virginia)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)

In this post, I cover the mechanics of the Kinesis Streams server-side encryption feature. I also share a few best practices and considerations so that you can get started quickly.

Understanding the mechanics

The following section walks you through how Kinesis Streams uses CMKs to encrypt a message in the PutRecord or PutRecords path before it is propagated to the Kinesis Streams storage layer, and then decrypt it in the GetRecords path after it has been retrieved from the storage layer.

When server-side encryption is enabled—which takes just a few clicks in the console—the partition key and payload for every incoming record are encrypted automatically as they flow into Kinesis Streams, using the selected CMK. When data is at rest within a stream, it’s encrypted.

When records are retrieved through a GetRecords request from the encrypted stream, they are decrypted automatically as they are flowing out of the service. That means your Kinesis Streams producers and consumers do not need to be aware of encryption. You have a fully managed data encryption feature at your fingertips, which can be enabled within seconds.

AWS also makes it easy to audit the application of server-side encryption. You can use the AWS Management Console for instant stream-level verification; the responses from PutRecord, PutRecords, and GetRecords; or AWS CloudTrail.

Calling PutRecord or PutRecords

When server-side encryption is enabled for a particular stream, Kinesis Streams and KMS perform the following actions when your applications call PutRecord or PutRecords on that stream. The Amazon Kinesis Producer Library (KPL) uses PutRecords.

 

  1. Data is sent from a customer’s producer (client) to a Kinesis stream using TLS via HTTPS. Data in transit to a stream is encrypted by default.
  2. After data is received, it is momentarily stored in RAM within a front-end proxy layer.
  3. Kinesis Streams authenticates the producer, then impersonates the producer to request input keying material from KMS.
  4. KMS creates key material, encrypts it by using the CMK, and sends both the plaintext and encrypted key material to the service, encrypted with TLS.
  5. The client uses the plaintext key material to derive data encryption keys (data keys) that are unique per-record.
  6. The client encrypts the payload and partition key using the data key in RAM within the front-end proxy layer and removes the plaintext data key from memory.
  7. The client appends the encrypted key material to the encrypted data.
  8. The plaintext key material is securely cached in memory within the front-end layer for reuse, until it expires after 5 minutes.
  9. The client delivers the encrypted message to a back-end store, where it is stored at rest and is fetchable by an authorized consumer through a GetRecords request. The Amazon Kinesis Client Library (KCL) calls GetRecords to retrieve records from a stream.

Calling GetRecords

Kinesis Streams and KMS perform the following actions when your applications call GetRecords on a server-side encrypted stream.

 

  1. When a GetRecords call is made, the front-end proxy layer retrieves the encrypted record from its back-end store.
  2. The consumer (client) makes a request to KMS using a token generated by the customer’s request. KMS authorizes it.
  3. The client requests that KMS decrypt the encrypted key material.
  4. KMS decrypts the encrypted key material and sends the plaintext key material to the client.
  5. Kinesis Streams derives the per-record data keys from the decrypted key material.
  6. If the calling application is authorized, the client decrypts the payload and removes the plaintext data key from memory.
  7. The client delivers the payload over TLS and HTTPS to the consumer, requesting the records. Data in transit to a consumer is encrypted by default.

Verifying server-side encryption

Auditors or administrators often ask for proof that server-side encryption was or is enabled. Here are a few ways to do this.

To check if encryption is enabled now for your streams:

  • Use the AWS Management Console or the DescribeStream API operation. You can also see what CMK is being used for encryption.
  • See encryption in action by looking at responses from PutRecord, PutRecords, or GetRecords. When encryption is enabled, the EncryptionType parameter is set to “KMS”. If encryption is not enabled, EncryptionType is not included in the response.
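
For example, putting a record into an encrypted stream with the AWS CLI (the stream name my-stream is a placeholder) produces a response like the first sample below:

# AWS CLI v2 may also require: --cli-binary-format raw-in-base64-out
aws kinesis put-record \
    --stream-name my-stream \
    --partition-key test \
    --data "hello world"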

Sample PutRecord response

{
    "SequenceNumber": "49573959617140871741560010162505906306417380215064887298",
    "ShardId": "shardId-000000000000",
    "EncryptionType": "KMS"
}

Sample GetRecords response

{
    "Records": [
        {
            "Data": "aGVsbG8gd29ybGQ=", 
            "PartitionKey": "test", 
            "ApproximateArrivalTimestamp": 1498292565.825, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "495735762417140871741560010162505906306417380215064887298"
        }, 
        {
            "Data": "ZnJvZG8gbGl2ZXMK", 
            "PartitionKey": "3d0d9301-3c30-4c48-a9a8-e485b2982b28", 
            "ApproximateArrivalTimestamp": 1498292801.747, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "49573959617140871741560010162507115232237011062036103170"
        }
    ], 
    "NextShardIterator": "AAAAAAAAAAEvFypHZDx/4bJVAS34puwdiNcwssKqbh/XhRK7HSYRq3RS+YXJnVKJ8j0gQUt94bONdqQYHk9X9JHgefMUDKzDzndy5WbZWO4CS3hRdMdrbmJ/9KoR4lOfZvqTLt6JWQjDqXv0IaKs06/LHYcEA3oPcyQLOTJHdJl2EzplCTZnn/U295ovxvqF9g9DY8y2nVoMkdFLmdcEMVXjhCDKiRIt", 
    "MillisBehindLatest": 0
}

To check if encryption was enabled, use CloudTrail, which logs the StartStreamEncryption() and StopStreamEncryption() API calls made against a particular stream.
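
As a quick sketch, assuming CloudTrail is enabled in the region, you could search recent events for those calls from the command line:

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=StartStreamEncryption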

Getting started

It’s very easy to enable, disable, or modify server-side encryption for a particular stream.

  1. In the Kinesis Streams console, select a stream and choose Details.
  2. Select a CMK and select Enabled.
  3. Choose Save.

You can enable encryption only for a live stream, not upon stream creation. Follow the same process to disable encryption for a stream. To use a different CMK, select it and choose Save.

Each of these tasks can also be accomplished using the StartStreamEncryption and StopStreamEncryption API operations.
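
For instance, assuming a stream named my-stream and the default service key, the equivalent CLI calls would look roughly like this:

# Enable server-side encryption with the default service-managed CMK
aws kinesis start-stream-encryption \
    --stream-name my-stream \
    --encryption-type KMS \
    --key-id alias/aws/kinesis

# Disable server-side encryption again
aws kinesis stop-stream-encryption \
    --stream-name my-stream \
    --encryption-type KMS \
    --key-id alias/aws/kinesis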

Considerations

There are a few considerations you should be aware of when using server-side encryption for Kinesis Streams:

  • Permissions
  • Costs
  • Performance

Permissions

One benefit of using the “(Default) aws/kinesis” AWS managed key is that every producer and consumer with permissions to call PutRecord, PutRecords, or GetRecords inherits the right permissions over the “(Default) aws/kinesis” key automatically.

However, this is not necessarily the case for a custom CMK. Kinesis Streams producers and consumers do not need to be aware of encryption; however, if you enable encryption using a custom master key and a producer or consumer doesn’t have IAM permissions to use it, PutRecord, PutRecords, or GetRecords requests fail.

This is a great security feature. On the other hand, it can effectively lead to data loss if you inadvertently apply a custom master key that restricts producers and consumers from interacting with the Kinesis stream. Take precautions when applying a custom master key. For more information about the minimum IAM permissions required for producers and consumers interacting with an encrypted stream, see Using Server-Side Encryption.

Costs

When you apply server-side encryption, you are subject to KMS API usage and key costs. Unlike custom KMS master keys, the “(Default) aws/kinesis” CMK is offered free of charge. However, you still need to pay for the API usage costs that Kinesis Streams incurs on your behalf.

API usage costs apply for every CMK, including custom ones. Kinesis Streams calls KMS approximately every 5 minutes when it is rotating the data key. In a 30-day month, the total cost of KMS API calls initiated by a Kinesis stream should be less than a few dollars.
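
For a rough sense of scale (assuming KMS API pricing of roughly $0.03 per 10,000 requests): one data-key rotation every 5 minutes is about 12 calls per hour, or roughly 12 × 24 × 30 ≈ 8,640 calls in a 30-day month, which works out to only a few cents at that price. Actual costs depend on shard count and usage, hence the more conservative estimate above.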

Performance

During testing, AWS discovered that there was a slight increase (typically 0.2 millisecond or less per record) in put and get record latencies due to the additional overhead of encryption.

If you have questions or suggestions, please comment below.

AWS and the General Data Protection Regulation (GDPR)

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-and-the-general-data-protection-regulation/

European Union image

Just over a year ago, the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR is the biggest change in data protection laws in Europe since the 1995 introduction of the European Union (EU) Data Protection Directive, also known as Directive 95/46/EC. The GDPR aims to strengthen the security and protection of personal data in the EU and will replace the Directive and all local laws relating to it.

AWS welcomes the arrival of the GDPR. The new, robust requirements raise the bar for data protection, security, and compliance, and will push the industry to follow the most stringent controls, helping to make everyone more secure. I am happy to announce today that all AWS services will comply with the GDPR when it becomes enforceable on May 25, 2018.

In this blog post, I explain the work AWS is doing to help customers with the GDPR as part of our continued commitment to help ensure they can comply with EU Data Protection requirements.

What has AWS been doing?

AWS continually maintains a high bar for security and compliance across all of our regions around the world. This has always been our highest priority—truly “job zero.” The AWS Cloud infrastructure has been architected to offer customers the most powerful, flexible, and secure cloud-computing environment available today. AWS also gives you a number of services and tools to enable you to build GDPR-compliant infrastructure on top of AWS.

One tool we give you is a Data Processing Agreement (DPA). I’m happy to announce today that we have a DPA that will meet the requirements of the GDPR. This GDPR DPA is available now to all AWS customers to help you prepare for May 25, 2018, when the GDPR becomes enforceable. For additional information about the new GDPR DPA or to obtain a copy, contact your AWS account manager.

In addition to account managers, we have teams of compliance experts, data protection specialists, and security experts working with customers across Europe to answer their questions and help them prepare for running workloads in the AWS Cloud after the GDPR comes into force. To further answer customers’ questions, we have updated our EU Data Protection website. This website includes information about what the GDPR is, the changes it brings to organizations operating in the EU, the services AWS offers to help you comply with the GDPR, and advice about how you can prepare.

Another topic we cover on the EU Data Protection website is AWS’s compliance with the CISPE Code of Conduct. The CISPE Code of Conduct helps cloud customers ensure that their cloud infrastructure provider is using appropriate data protection standards to protect their data in a manner consistent with the GDPR. AWS has declared that Amazon EC2, Amazon S3, Amazon RDS, AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon Elastic Block Storage (Amazon EBS) are fully compliant with the CISPE Code of Conduct. This declaration provides customers with assurances that they fully control their data in a safe, secure, and compliant environment when they use AWS. For more information about AWS’s compliance with the CISPE Code of Conduct, go to the CISPE website.

As well as giving customers a number of tools and services to build GDPR-compliant environments, AWS has achieved a number of internationally recognized certifications and accreditations. In the process, AWS has demonstrated compliance with third-party assurance frameworks such as ISO 27017 for cloud security, ISO 27018 for cloud privacy, PCI DSS Level 1, and SOC 1, SOC 2, and SOC 3. AWS also helps customers meet local security standards such as BSI’s Common Cloud Computing Controls Catalogue (C5) that is important in Germany. We will continue to pursue certifications and accreditations that are important to AWS customers.

What can you do?

Although the GDPR will not be enforceable until May 25, 2018, we are encouraging our customers and partners to start preparing now. If you have already implemented a high bar for compliance, security, and data privacy, the move to GDPR should be simple. However, if you have yet to start your journey to GDPR compliance, we urge you to start reviewing your security, compliance, and data protection processes now to ensure a smooth transition in May 2018.

You should consider the following key points in preparation for GDPR compliance:

  • Territorial reach – Determining whether the GDPR applies to your organization’s activities is essential to ensuring your organization’s ability to satisfy its compliance obligations.
  • Data subject rights – The GDPR enhances the rights of data subjects in a number of ways. You will need to make sure you can accommodate the rights of data subjects if you are processing their personal data.
  • Data breach notifications – If you are a data controller, you must report data breaches to the data protection authorities without undue delay and in any event within 72 hours of you becoming aware of a data breach.
  • Data protection officer (DPO) – You may need to appoint a DPO who will manage data security and other issues related to the processing of personal data.
  • Data protection impact assessment (DPIA) – You may need to conduct and, in some circumstances, you might be required to file with the supervisory authority a DPIA for your processing activities.
  • Data processing agreement (DPA) – You may need a DPA that will meet the requirements of the GDPR, particularly if personal data is transferred outside the European Economic Area.

AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption. For more information about these services and features, see EU Data Protection.
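
As an illustration only (the bucket and trail names below are placeholders, and you should confirm current CLI syntax and service availability in your Region), turning on default encryption at rest for new objects and recording API activity for audit purposes might look like this:

$ aws s3api put-bucket-encryption --bucket example-personal-data --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'

$ aws cloudtrail create-trail --name gdpr-audit-trail --s3-bucket-name example-audit-logs
$ aws cloudtrail start-logging --name gdpr-audit-trail

Note that the audit log bucket needs a bucket policy that allows CloudTrail to write to it; see the CloudTrail documentation for the exact policy.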

At AWS, security, data protection, and compliance are our top priorities, and we will continue to work vigilantly to ensure that our customers are able to enjoy the benefits of AWS securely, compliantly, and without disruption in Europe and around the world. As we head toward May 2018, we will share more news and resources with you to help you comply with the GDPR.

– Steve

AWS Hot Startups – March 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-march-2017/

As the madness of March wraps up, take a break from all the basketball and check out the cool startups Tina Barr brings you this month!

-Ana


The arrival of spring brings five new startups this month:

  • Amino Apps – providing social networks for hundreds of thousands of communities.
  • Appboy – empowering brands to strengthen customer relationships.
  • Arterys – revolutionizing the medical imaging industry.
  • Protenus – protecting patient data for healthcare organizations.
  • Syapse – improving targeted cancer care with shared data from across the country.

In case you missed them, check out February’s hot startups here.

Amino Apps (New York, NY)
Amino Logo
Amino Apps was founded on the belief that interest-based communities were underdeveloped and outdated, particularly when it came to mobile. CEO Ben Anderson and CTO Yin Wang created the app to give users access to hundreds of thousands of communities, each of them a complete social network dedicated to a single topic. Some of the largest communities have over 1 million members and are built around topics like popular TV shows, video games, sports, and an endless number of hobbies and other interests. Amino hosts communities from around the world and is currently available in six languages with many more on the way.

Navigating the Amino app is easy. Simply download the app (iOS or Android), sign up with a valid email address, choose a profile picture, and start exploring. Users can search for communities and join any that fit their interests. Each community has chatrooms, multimedia content, quizzes, and a seamless commenting system. If a community doesn’t exist yet, users can create it in minutes using the Amino Creator and Manager app (ACM). The largest user-generated communities are turned into their own apps, which gives communities their own piece of real estate on members’ phones, as well as in app stores.

Amino’s vast global network of hundreds of thousands of communities is run on AWS services. Every day users generate, share, and engage with an enormous amount of content across hundreds of mobile applications. By leveraging AWS services including Amazon EC2, Amazon RDS, Amazon S3, Amazon SQS, and Amazon CloudFront, Amino can continue to provide new features to their users while scaling their service capacity to keep up with user growth.

Interested in joining Amino? Check out their jobs page here.

Appboy (New York, NY)
In 2011, Bill Magnuson, Jon Hyman, and Mark Ghermezian saw a unique opportunity to strengthen and humanize relationships between brands and their customers through technology. The trio created Appboy to empower brands to build long-term relationships with their customers and today they are the leading lifecycle engagement platform for marketing, growth, and engagement teams. The team recognized that as rapid mobile growth became undeniable, many brands were becoming frustrated with the lack of compelling and seamless cross-channel experiences offered by existing marketing clouds. Many of today’s top mobile apps and enterprise companies trust Appboy to take their marketing to the next level. Appboy manages user profiles for nearly 700 million monthly active users, and is used to power more than 10 billion personalized messages monthly across a multitude of channels and devices.

Appboy creates a holistic user profile that offers a single view of each customer. That user profile in turn powers contextual cross-channel messaging, lifecycle engagement automation, and robust campaign insights and optimization opportunities. Appboy offers solutions that allow brands to create push notifications, targeted emails, in-app and in-browser messages, news feed cards, and webhooks to enhance the user experience and increase customer engagement. The company prides itself on its interoperability, connecting to a variety of complementary marketing tools and technologies so brands can build the perfect stack to enable their strategies and experiments in real time.

AWS makes it easy for Appboy to dynamically size all of their service components and automatically scale up and down as needed. They use an array of services including Elastic Load Balancing, AWS Lambda, Amazon CloudWatch, Auto Scaling groups, and Amazon S3 to help scale capacity and better deal with unpredictable customer loads.

To keep up with the latest marketing trends and tactics, visit the Appboy digital magazine, Relate. Appboy was also recently featured in the #StartupsOnAir video series where they gave insight into their AWS usage.

Arterys (San Francisco, CA)
Getting test results back from a physician can often be a time consuming and tedious process. Clinicians typically employ a variety of techniques to manually measure medical images and then make their assessments. Arterys founders Fabien Beckers, John Axerio-Cilies, Albert Hsiao, and Shreyas Vasanawala realized that much more computation and advanced analytics were needed to harness all of the valuable information in medical images, especially those generated by MRI and CT scanners. Clinicians were often skipping measurements and making assessments based mostly on qualitative data. Their solution was to start a cloud/AI software company focused on accelerating data-driven medicine with advanced software products for post-processing of medical images.

Arterys’ products provide timely, accurate, and consistent quantification of images, improve speed to results, and improve the quality of the information offered to the treating physician. This allows for much better tracking of a patient’s condition, and thus better decisions about their care. Advanced analytics, such as deep learning and distributed cloud computing, are used to process images. The first Arterys product can contour cardiac anatomy as accurately as experts, but takes only 15-20 seconds instead of the 45-60 minutes required to do it manually. Their computing cloud platform is also fully HIPAA compliant.

Arterys relies on a variety of AWS services to process their medical images. Using deep learning and other advanced analytic tools, Arterys is able to render images without latency over a web browser using AWS G2 instances. They use Amazon EC2 extensively for all of their compute needs, including inference and rendering, and Amazon S3 is used to archive images that aren’t needed immediately, as well as manage costs. Arterys also employs Amazon Route 53, AWS CloudTrail, and Amazon EC2 Container Service.

Check out this quick video about the technology that Arterys is creating. They were also recently featured in the #StartupsOnAir video series and offered a quick demo of their product.

Protenus (Baltimore, MD)
Protenus Logo
Protenus founders Nick Culbertson and Robert Lord were medical students at Johns Hopkins Medical School when they saw first-hand how Electronic Health Record (EHR) systems could be used to improve patient care and share clinical data more efficiently. With increased efficiency came a huge issue – an onslaught of serious security and privacy concerns. Over the past two years, 140 million medical records have been breached, meaning that approximately 1 in 3 Americans have had their health data compromised. Health records contain a repository of sensitive information and a breach of that data can cause major havoc in a patient’s life – namely identity theft, prescription fraud, Medicare/Medicaid fraud, and improper performance of medical procedures. Using their experience and knowledge from former careers in the intelligence community and involvement in a leading hedge fund, Nick and Robert developed the prototype and algorithms that launched Protenus.

Today, Protenus offers a number of solutions that detect breaches and misuse of patient data for healthcare organizations nationwide. Using advanced analytics and AI, Protenus’ health data insights platform understands appropriate vs. inappropriate use of patient data in the EHR. It also protects privacy, aids compliance with HIPAA regulations, and ensures trust for patients and providers alike.

Protenus built and operates its SaaS offering atop Amazon EC2, where Dedicated Hosts and encrypted Amazon EBS volumes are used to ensure compliance with HIPAA regulations for the storage of Protected Health Information. They use Elastic Load Balancing and Amazon Route 53 for DNS, enabling unique, secure, client-specific access points to their Protenus instance.
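
As a rough sketch of those building blocks (the instance type, Availability Zone, and volume size below are illustrative placeholders, not Protenus’s actual configuration), a Dedicated Host and an encrypted EBS volume can be provisioned from the AWS CLI like this:

$ aws ec2 allocate-hosts --instance-type m4.xlarge --availability-zone us-east-1a --quantity 1

$ aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp2 --encrypted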

To learn more about threats to patient data, read Hospitals’ Biggest Threat to Patient Data is Hiding in Plain Sight on the Protenus blog. Also be sure to check out their recent video in the #StartupsOnAir series for more insight into their product.

Syapse (Palo Alto, CA)
Syapse provides a comprehensive software solution that enables clinicians to treat patients with precision medicine for targeted cancer therapies — treatments that are designed and chosen using genetic or molecular profiling. Existing hospital IT doesn’t support the robust infrastructure and clinical workflows required to treat patients with precision medicine at scale, but Syapse centralizes and organizes patient data and delivers it to clinicians at the point of care. Syapse offers a variety of solutions for oncologists that allow them to access the full scope of patient data longitudinally, view recommended treatments or clinical trials for similar patients, and track outcomes over time. These solutions are helping health systems across the country to improve patient outcomes by offering the most innovative care to cancer patients.

Leading health systems such as Stanford Health Care, Providence St. Joseph Health, and Intermountain Healthcare are using Syapse to improve patient outcomes, streamline clinical workflows, and scale their precision medicine programs. A group of experts known as the Molecular Tumor Board (MTB) reviews complex cases and evaluates patient data, documents notes, and disseminates treatment recommendations to the treating physician. Syapse also provides reports that give health system staff insight into their institution’s oncology care, which can be used toward quality improvement, business goals, and understanding variables in the oncology service line.

Syapse uses Amazon Virtual Private Cloud, Amazon EC2 Dedicated Instances, and Amazon Elastic Block Store to build a high-performance, scalable, and HIPAA-compliant data platform that enables health systems to make precision medicine part of routine cancer care for patients throughout the country.

Be sure to check out the Syapse blog to learn more and also their recent video on the #StartupsOnAir video series where they discuss their product, HIPAA compliance, and more about how they are using AWS.

Thank you for checking out another month of awesome hot startups!

-Tina Barr

 

What’s on at the Raspberry Pi Big Birthday Weekend 2017

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/whats-on-raspberry-pi-birthday-2017/

Today, we’re announcing the exciting schedule for our Big Birthday Weekend. It’s Code Club’s fifth birthday too, making it even more special: a double celebration!

Birthday Weekend GIF

We have a packed programme of activities this year. As usual, there’ll be talks and workshops catering to all interests and levels of skill, along with newly introduced drop-in sessions that run alongside the pre-registered workshops, offering everyone the opportunity to get hands-on experience of digital making. Once you’ve got your ticket for the Big Birthday Weekend, all the events are free, but some have limited places: make sure you take a look at the schedule and click the links to reserve your place at the workshops you want to attend.

Birthday party schedule image

Click the image for the full weekend schedule

On Saturday, Dr Sam Aaron will close the show in the auditorium, live-coding music through Sonic Pi, and on Sunday, we have CBBC presenter Fran Scott who will be live-coding explosions!




On arrival, be sure to collect your programme lanyard as this will be your pass for the day. Then grab a free goody bag and get exploring: the venue will be full to the brim with fabulous creations that members of our community have built, from talking heads to full-sized traffic lights. As always, you’ll be able to chat with our suppliers and gain information on the latest kits in the vendor marketplace.

You can even sign up to our Raspberry Pi Quiz, ‘Trivial Pi-suit’, to test your knowledge against other teams and win prizes!

Since it’s not a party without cake, we’ll be handing out cupcakes each afternoon. And those over 18 can use their complimentary token to grab a pint in the Market Bar from 3pm, thanks to Fuzzy Duck Brewery.

Don’t have a ticket yet? There are still a few available via the venue website, Cambridge Junction, but be quick – they won’t be available for long!

An added extra for Saturday: Live Retro Comedy Night at the Centre for Computing History.

A one-off night of mirth!! Join us for geeky comedy featuring video games, VHS covers, 80s nostalgia, the supernatural, computing history, and movies.

This is a fundraising event for Matthew Dons, a good friend of Raspberry Pi, with proceeds going toward his immunotherapy treatment for bowel cancer. Eben and Liz Upton are set to be in attendance, and would love to see members of the Raspberry Pi community filling the seats. Grab your ticket here.

The post What’s on at the Raspberry Pi Big Birthday Weekend 2017 appeared first on Raspberry Pi.

Create Tables in Amazon Athena from Nested JSON and Mappings Using JSONSerDe

Post Syndicated from Rick Wiggins original https://aws.amazon.com/blogs/big-data/create-tables-in-amazon-athena-from-nested-json-and-mappings-using-jsonserde/

Most systems use JavaScript Object Notation (JSON) to log event information. Although it’s efficient and flexible, deriving information from JSON is difficult.

In this post, you will use the tightly coupled integration of Amazon Kinesis Firehose for log delivery, Amazon S3 for log storage, and Amazon Athena with JSONSerDe to run SQL queries against these logs without the need for data transformation or insertion into a database. It’s done in a completely serverless way. There’s no need to provision any compute.

Amazon SES provides highly detailed logs for every message that travels through the service and, with SES event publishing, makes them available through Firehose. However, parsing detailed logs for trends or compliance data would require a significant investment in infrastructure and development time. Athena is a boon to these data seekers because it can query this dataset at rest, in its native format, with zero code or architecture. On top of that, it uses largely native SQL queries and syntax.

Walkthrough: Establishing a dataset

We start with a dataset of an SES send event that looks like this:

{
	"eventType": "Send",
	"mail": {
		"timestamp": "2017-01-18T18:08:44.830Z",
		"source": "[email protected]",
		"sourceArn": "arn:aws:ses:us-west-2:111222333:identity/[email protected]",
		"sendingAccountId": "111222333",
		"messageId": "01010159b2c4471e-fc6e26e2-af14-4f28-b814-69e488740023-000000",
		"destination": ["[email protected]"],
		"headersTruncated": false,
		"headers": [{
				"name": "From",
				"value": "[email protected]"
			}, {
				"name": "To",
				"value": "[email protected]"
			}, {
				"name": "Subject",
				"value": "Bounced Like a Bad Check"
			}, {
				"name": "MIME-Version",
				"value": "1.0"
			}, {
				"name": "Content-Type",
				"value": "text/plain; charset=UTF-8"
			}, {
				"name": "Content-Transfer-Encoding",
				"value": "7bit"
			}
		],
		"commonHeaders": {
			"from": ["[email protected]"],
			"to": ["[email protected]"],
			"messageId": "01010159b2c4471e-fc6e26e2-af14-4f28-b814-69e488740023-000000",
			"subject": "Test"
		},
		"tags": {
			"ses:configuration-set": ["Firehose"],
			"ses:source-ip": ["54.55.55.55"],
			"ses:from-domain": ["amazon.com"],
			"ses:caller-identity": ["root"]
		}
	},
	"send": {}
}

This dataset contains a lot of valuable information about this SES interaction. There are thousands of datasets in the same format to parse for insights. Getting this data is straightforward.

1. Create a configuration set in the SES console or CLI that uses a Firehose delivery stream to send and store logs in S3 in near real-time.
NestedJson_1

2. Use SES to send a few test emails. Be sure to define your new configuration set during the send.

To do this, when you create your message in the SES console, choose More options. This will display more fields, including one for Configuration Set.
NestedJson_2
You can also use your SES verified identity and the AWS CLI to send messages to the mailbox simulator addresses.

$ aws ses send-email --to [email protected] --from [email protected] --subject "Bounced Like a Bad Check" --text "This should bounce" --configuration-set-name Firehose

3. Select your S3 bucket to see that logs are being created. (A short CLI sketch of steps 1 and 3 follows this list.)
NestedJson_3
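
If you prefer to script steps 1 and 3 rather than use the console, here is a sketch with the AWS CLI. The role and delivery stream ARNs are placeholders and must point to your own Firehose delivery stream and an IAM role that allows SES to write to it.

$ aws ses create-configuration-set --configuration-set Name=Firehose

$ aws ses create-configuration-set-event-destination --configuration-set-name Firehose --event-destination file://event-destination.json

Where event-destination.json looks something like:

{
  "Name": "FirehoseDestination",
  "Enabled": true,
  "MatchingEventTypes": ["send", "bounce", "complaint", "delivery"],
  "KinesisFirehoseDestination": {
    "IAMRoleARN": "arn:aws:iam::111222333:role/ses-firehose-role",
    "DeliveryStreamARN": "arn:aws:firehose:us-west-2:111222333:deliverystream/ses-logs"
  }
}

After sending a few test messages, you can confirm from the command line that log objects are arriving:

$ aws s3 ls s3://<YOUR BUCKET HERE>/FH2017/ --recursive --human-readable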

Walkthrough: Querying with Athena

Amazon Athena is an interactive query service that makes it easy to use standard SQL to analyze data resting in Amazon S3. Athena requires no servers, so there is no infrastructure to manage. You pay only for the queries you run. This makes it perfect for a variety of standard data formats, including CSV, JSON, ORC, and Parquet.

You now need to supply Athena with information about your data and define the schema for your logs with a Hive-compliant DDL statement. Athena uses Presto, a distributed SQL engine, to run queries. It also uses Apache Hive DDL syntax to create, drop, and alter tables and partitions. Athena uses an approach known as schema-on-read, which allows you to use this schema at the time you execute the query. Essentially, you are going to be creating a mapping for each field in the log to a corresponding column in your results.

If you are familiar with Apache Hive, you might find creating tables on Athena to be pretty similar. You can create tables by writing the DDL statement in the query editor or by using the wizard or JDBC driver. An important part of this table creation is the SerDe, a short name for “Serializer and Deserializer.” Because your data is in JSON format, you will be using org.openx.data.jsonserde.JsonSerDe, natively supported by Athena, to help you parse the data. Along the way, you will address two common problems with Hive/Presto and JSON datasets:

  • Nested or multi-level JSON.
  • Forbidden characters (handled with mappings).

In the Athena Query Editor, use the following DDL statement to create your first Athena table. For  LOCATION, use the path to the S3 bucket for your logs:

CREATE EXTERNAL TABLE sesblog (
  eventType string,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:boolean,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>
              > 
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/' 

In this DDL statement, you are declaring each of the fields in the JSON dataset along with its Presto data type. You are using Hive collection data types like Array and Struct to set up groups of objects.

Walkthrough: Nested JSON

Defining the mail key is interesting because the JSON inside is nested three levels deep. In the example, you are creating a top-level struct called mail which has several other keys nested inside. This includes fields like messageId and destination at the second level. You can also see that the field timestamp is surrounded by the backtick (`) character. timestamp is also a reserved Presto data type so you should use backticks here to allow the creation of a column of the same name without confusing the table creation command. On the third level is the data for headers. It contains a group of entries in name:value pairs. You define this as an array with the structure of <name:string,value:string> defining your schema expectations here. You must enclose `from` in the commonHeaders struct with backticks to allow this reserved word column creation.

Now that you have created your table, you can fire off some queries!

SELECT * FROM sesblog limit 10;

This output shows your two top-level columns (eventType and mail) but this isn’t useful except to tell you there is data being queried. You can use some nested notation to build more relevant queries to target data you care about.

“Which messages did I bounce from Monday’s campaign?”

SELECT eventtype as Event,
       mail.destination as Destination, 
       mail.messageId as MessageID,
       mail.timestamp as Timestamp
FROM sesblog
WHERE eventType = 'Bounce' and mail.timestamp like '2017-01-09%'

“How many messages have I bounced to a specific domain?”

SELECT COUNT(*) as Bounces 
FROM sesblog
WHERE eventType = 'Bounce' and mail.destination like '%amazonses.com%'

“Which messages did I bounce to the domain amazonses.com?”

SELECT eventtype as Event,
       mail.destination as Destination, 
       mail.messageId as MessageID 
FROM sesblog
WHERE eventType = 'Bounce' and mail.destination like '%amazonses.com%'
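
As an aside, these queries don’t have to be run from the console. Assuming the Athena API and CLI are available to you (the database name and results location below are placeholders), the same query can be kicked off from the command line:

$ aws athena start-query-execution --query-string "SELECT COUNT(*) as Bounces FROM sesblog WHERE eventType = 'Bounce'" --query-execution-context Database=default --result-configuration OutputLocation=s3://<YOUR BUCKET HERE>/athena-results/

$ aws athena get-query-execution --query-execution-id <returned id>
$ aws athena get-query-results --query-execution-id <returned id>

start-query-execution returns a QueryExecutionId; poll get-query-execution until the state is SUCCEEDED, then fetch the rows with get-query-results.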

There are much deeper queries that can be written from this dataset to find the data relevant to your use case. You might have noticed that your table creation did not specify a schema for the tags section of the JSON event. You’ll do that next.

Walkthrough: Handling forbidden characters with mappings

Here is a major roadblock you might encounter during the initial creation of the DDL to handle this dataset: you have little control over the data format provided in the logs and Hive uses the colon (:) character for the very important job of defining data types. You need to give the JSONSerDe a way to parse these key fields in the tags section of your event. This is some of the most crucial data in an auditing and security use case because it can help you determine who was responsible for a message creation.

In the Athena query editor, use the following DDL statement to create your second Athena table. For LOCATION, use the path to the S3 bucket for your logs:

CREATE EXTERNAL TABLE sesblog2 (
  eventType string,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:boolean,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>,
              tags:struct<ses_configurationset:string,ses_source_ip:string,ses_from_domain:string,ses_caller_identity:string>
              > 
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  "mapping.ses_configurationset"="ses:configuration-set",
  "mapping.ses_source_ip"="ses:source-ip", 
  "mapping.ses_from_domain"="ses:from-domain", 
  "mapping.ses_caller_identity"="ses:caller-identity"
  )
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/' 

In your new table creation, you have added a section for SERDEPROPERTIES. This allows you to give the SerDe some additional information about your dataset. For your dataset, you are using the mapping property to work around your data containing a column name with a colon smack in the middle of it. ses:configuration-set would be interpreted as a column named ses with the datatype of configuration-set. Unlike your earlier implementation, you can’t surround an operator like that with backticks. The JSON SERDEPROPERTIES mapping section allows you to account for any illegal characters in your data by remapping the fields during the table’s creation.

For example, you have simply defined that the column in the ses data known as ses:configuration-set will now be known to Athena and your queries as ses_configurationset. This mapping doesn’t do anything to the source data in S3. This is a Hive concept only. It won’t alter your existing data. You have set up mappings in the Properties section for the four fields in your dataset (changing all instances of colon to the better-supported underscore) and in your table creation you have used those new mapping names in the creation of the tags struct.

Now that you have access to these additional authentication and auditing fields, your queries can answer some more questions.

“Who is creating all of these bounced messages?”

SELECT eventtype as Event,
         mail.timestamp as Timestamp,
         mail.tags.ses_source_ip as SourceIP,
         mail.tags.ses_caller_identity as AuthenticatedBy,
         mail.commonHeaders."from" as FromAddress,
         mail.commonHeaders.to as ToAddress
FROM sesblog2
WHERE eventtype = 'Bounce'

Of special note here is the handling of the column mail.commonHeaders.”from”. Because from is a reserved operational word in Presto, surround it in quotation marks (“) to keep it from being interpreted as an action.

Walkthrough: Querying using SES custom tagging

What makes this mail.tags section so special is that SES will let you add your own custom tags to your outbound messages. Now you can label messages with tags that are important to you, and use Athena to report on those tags. For example, if you wanted to add a Campaign tag to track a marketing campaign, you could use the –tags flag to send a message from the SES CLI:

$ aws ses send-email --to [email protected] --from [email protected] --subject "Perfume Campaign Test" --text "Buy our Smells" --configuration-set-name Firehose --tags Name=Campaign,Value=Perfume

This results in a new entry in your dataset that includes your custom tag.

…
		"tags": {
			"ses:configuration-set": ["Firehose"],
			"Campaign": ["Perfume"],
			"ses:source-ip": ["54.55.55.55"],
			"ses:from-domain": ["amazon.com"],
			"ses:caller-identity": ["root"],
			"ses:outgoing-ip": ["54.240.27.11"]
		}
…

You can then create a third table to account for the Campaign tagging.

CREATE EXTERNAL TABLE sesblog3 (
  eventType string,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:string,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>,
              tags:struct<ses_configurationset:string,Campaign:string,ses_source_ip:string,ses_from_domain:string,ses_caller_identity:string>
              > 
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  "mapping.ses_configurationset"="ses:configuration-set",
  "mapping.ses_source_ip"="ses:source-ip", 
  "mapping.ses_from_domain"="ses:from-domain", 
  "mapping.ses_caller_identity"="ses:caller-identity"
  )
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/' 

Then you can query on this custom tag value, which you can define for each outbound email.

SELECT eventtype as Event,
       mail.destination as Destination, 
       mail.messageId as MessageID,
       mail.tags.Campaign as Campaign
FROM sesblog3
where mail.tags.Campaign like '%Perfume%'

NestedJson_4

Walkthrough: Building your own DDL programmatically with hive-json-schema

In all of these examples, your table creation statements were based on a single SES interaction type, send. SES has other interaction types like delivery, complaint, and bounce, all of which have some additional fields. I’ll leave you with this, a master DDL that can parse all the different SES eventTypes and can create one table where you can begin querying your data.

Building a properly working JSONSerDe DDL by hand is tedious and a bit error-prone, so this time around you’ll be using an open source tool commonly used by AWS Support. All you have to do manually is set up your mappings for the unsupported SES columns that contain colons.

This sample JSON file contains all possible fields from across the SES eventTypes. It has been run through hive-json-schema, which is a great starting point to build nested JSON DDLs.

Here is the resulting “master” DDL to query all types of SES logs:

CREATE EXTERNAL TABLE sesmaster (
  eventType string,
  complaint struct<arrivaldate:string, 
                   complainedrecipients:array<struct<emailaddress:string>>,
                   complaintfeedbacktype:string, 
                   feedbackid:string, 
                   `timestamp`:string, 
                   useragent:string>,
  bounce struct<bouncedrecipients:array<struct<action:string, diagnosticcode:string, emailaddress:string, status:string>>,
                bouncesubtype:string, 
                bouncetype:string, 
                feedbackid:string,
                reportingmta:string, 
                `timestamp`:string>,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:boolean,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>,
              tags:struct<ses_configurationset:string,ses_source_ip:string,ses_outgoing_ip:string,ses_from_domain:string,ses_caller_identity:string>
              >,
  send string,
  delivery struct<processingtimemillis:int,
                  recipients:array<string>, 
                  reportingmta:string, 
                  smtpresponse:string, 
                  `timestamp`:string>
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  "mapping.ses_configurationset"="ses:configuration-set",
  "mapping.ses_source_ip"="ses:source-ip", 
  "mapping.ses_from_domain"="ses:from-domain", 
  "mapping.ses_caller_identity"="ses:caller-identity",
  "mapping.ses_outgoing_ip"="ses:outgoing-ip"
  )
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/'

Conclusion

In this post, you’ve seen how to use Amazon Athena in real-world use cases to query the JSON used in AWS service logs. Some of these use cases can be operational like bounce and complaint handling. Others report on trends and marketing data like querying deliveries from a campaign. Still others provide audit and security like answering the question, which machine or user is sending all of these messages? You’ve also seen how to handle both nested JSON and SerDe mappings so that you can use your dataset in its native format without making changes to the data to get your queries running.

With the new Amazon QuickSight suite of tools, you also now have a data source that can be used to build dashboards. This makes reporting on this data even easier. For information about using Athena as a QuickSight data source, see this blog post.

There are also optimizations you can make to these tables to increase query performance or to set up partitions to query only the data you need and restrict the amount of data scanned. If you only need to report on data for a finite amount of time, you could optionally set up S3 lifecycle configuration to transition old data to Amazon Glacier or to delete it altogether.
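
For example, a lifecycle rule that archives older log objects to Amazon Glacier and eventually expires them might look like the following sketch (the bucket, prefix, and retention periods are placeholders to adjust for your own needs):

$ aws s3api put-bucket-lifecycle-configuration --bucket <YOUR BUCKET HERE> --lifecycle-configuration '{
  "Rules": [{
    "ID": "ArchiveOldSesLogs",
    "Filter": {"Prefix": "FH2017/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 365}
  }]
}'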

Feel free to leave questions or suggestions in the comments.


About the Author

Rick Wiggins is a Cloud Support Engineer for AWS Premium Support. He works with our customers to build solutions for Email, Storage and Content Delivery, helping them spend more time on their business and less time on infrastructure. In his spare time, he enjoys traveling the world with his family and volunteering at his children’s school teaching lessons in Computer Science and STEM.


AWS Reinvent 2016: Embiggen your business with Amazon Web Services

Post Syndicated from Brian Huang original https://www.anchor.com.au/blog/2016/12/aws-reinvent-2016/

Three weeks ago, Amazon Web Services ran their annual love-fest in Las Vegas and it was quite a remarkable week. On arrival, attendees (all 32,000+ of them) were given a shiny new Amazon Echo Dot, Amazon’s latest entrant into the growing market for voice-controlled, AI-based smart assistants, a segment that includes Apple’s Siri, Google Assistant and Microsoft’s Cortana.

Amazon have now made it clear that they’re taking Artificial Intelligence (AI) and Machine Learning very seriously, with four brand-new, developer-focussed, AI-related services (Polly, Rekognition, Lex and MXNet) announced during the week. The free Echo Dots are yet another incentive for developers to start building apps that make use of (and ultimately contribute to) Amazon’s efforts in this space.

The week was brought to a close with a spectacular party, headlined by Martin Garrix, named the world’s top DJ in 2016 by djmag.com. Goes to show that some of the world’s biggest geeks and code cutters are also capable of cutting some serious rug:

While the potential of artificial intelligence and machine learning technologies is both exciting and somewhat scary, there was plenty more to consider over the course of the week, with a bevy of announcements such as new server instance types, enhanced support and orchestration for containers (Blox), low-cost, simple-to-launch virtual servers from $5 per month (Lightsail), free-of-charge DDoS protection (AWS Shield), application performance monitoring and debugging (X-Ray), a new "Internet of Things" (IoT) play to help developers build and manage smart, connected devices (Greengrass), and a fully managed continuous integration (CI) service (CodeBuild) that neatly rounds off Amazon’s DevOps-friendly suite of CI/CD services — and that’s just scratching the surface.

There’s a summary of the announcements here: https://aws.amazon.com/new/reinvent/

With videos of the sessions here: https://www.youtube.com/user/AmazonWebServices

For me, the main takeaway was that the pace of technology-enabled change is continuing to accelerate and Amazon Web Services is very likely to be at the heart of it.

AWS pace of innovation 2016

AWS is a sales and innovation machine, continuing to put distance between themselves and their competitors — their sheer pace of innovation would appear almost impossible to compete with. The public clouds of Microsoft, IBM and Google would need years to catch up and that’s assuming AWS were sporting enough to stand still for that long.

In 2016, AWS announced around 1000 new services and updates — simply incredible if you’re a company whose product and development teams are making use of the platform, and quite simply terrifying if you’re just about anyone else. As AWS continue their march up the value chain, those in the business of infrastructure services, monitoring, BI, data analytics, CI/CD developer tools, network security and even artificial intelligence (AI) all have very good reason to be concerned.

Interestingly, AWS reported an annual revenue run rate of nearly $13 billion with an incredible growth rate of 55% this past year, while the traditional big IT vendors (VMware, HP, Oracle, Cisco, Dell, EMC and IBM) have gone backwards, dropping from a collective $221 billion in revenue in 2012 to $206 billion in 2016.

Momentum for the public cloud keeps growing, and it’s easy to see why.

AWS is without doubt the leader in the field, and according to Andy Jassy (AWS CEO and pleasingly the very same guy who first presented Jeff Bezos with the AWS business plan) they are the fastest growing, US$1 billion-plus technology company ever, with Gartner estimating in 2015 that AWS is more than ten times the size of the next 14 competitors in the public cloud space combined — Microsoft, Google and IBM included.

Just look at these revenue and YOY growth numbers:
AWS revenue and year-on-year growth charts
Source — https://www.statista.com/statistics/250520/forecast-of-amazon-web-services-revenue/

If you’re an application developer looking to win in your market, you would be remiss not to give careful consideration to building your application on top of AWS. Legacy IT infrastructure still has its place, but if your business is looking to the future then the cloud with all its automation and as-a-service goodness is where it’s at.

AWS’ API-driven infrastructure services enable you to take your development processes and application smarts to the next level. Adopting continuous delivery allows your product and development teams to move many orders of magnitude faster than they do today, reducing outages and improving software quality and security. And once your applications are infrastructure-aware (aka “cloud native”), they’ll auto-scale seamlessly with the peaks and troughs of customer demand, self-heal when things go wrong and deliver a great experience to your customers — no matter where they are in the world.

If you’re serious about embiggening your business, you need to embiggen your product and software development capabilities, and you need to do it quickly. Wondering where you’ll get the biggest bang for your buck? Where you’ll find the most efficiency gains? AWS looks like a pretty safe bet to me.

The post AWS Reinvent 2016: Embiggen your business with Amazon Web Services appeared first on AWS Managed Services by Anchor.

Whiskey and Snow: Grafana Labs Converges in NYC

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2016/01/23/whiskey-and-snow-grafana-labs-converges-in-nyc/

The snow has arrived in full force here in New York. Our subway authority, the MTA, has stored 31 miles of trains underground. The roads have been closed. The city that never sleeps is in full hibernation.
Beating the storm’s arrival by only a few hours, our team from Sweden landed last night.
From all corners of the world, almost the entire Grafana Labs team is converging for a week of face-to-face strategizing and development in NYC.

Yet Another Kit

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/rtkit.html

A while back I was celebrating the arrival of secure realtime scheduling for the desktop. As it appears, this was a bit premature, since (mis-)using cgroups for this turned out to be more problematic and messy than I anticipated.

As a followup, I’d now like to point you to this announcement I posted to LAD yesterday, introducing RealtimeKit, which should fix the problem for good. It has now entered Rawhide, becoming part of the default install (by means of being a dependency of PulseAudio), and I assume the other distros are going to adopt it pretty soon, too.

Read the full announcement.