Tag Archives: new product

Raspberry Pi 400: the $70 desktop PC

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-400-the-70-desktop-pc/

Raspberry Pi has always been a PC company. Inspired by the home computers of the 1980s, our mission is to put affordable, high-performance, programmable computers into the hands of people all over the world. And inspired by these classic PCs, here is Raspberry Pi 400: a complete personal computer, built into a compact keyboard.

Raspberry Pi 4, which we launched in June last year, is roughly forty times as powerful as the original Raspberry Pi, and offers an experience that is indistinguishable from a legacy PC for the majority of users. Particularly since the start of the COVID-19 pandemic, we’ve seen a rapid increase in the use of Raspberry Pi 4 for home working and studying.

A front view of the Raspberry Pi keyboard

But user friendliness is about more than performance: it can also be about form factor. In particular, having fewer objects on your desk makes for a simpler set-up experience. Classic home computers – BBC Micros, ZX Spectrums, Commodore Amigas, and the rest – integrated the motherboard directly into the keyboard. No separate system unit and case; no keyboard cable. Just a computer, a power supply, a monitor cable, and (sometimes) a mouse.

Raspberry Pi 400

We’ve never been shy about borrowing a good idea. Which brings us to Raspberry Pi 400: it’s a faster, cooler 4GB Raspberry Pi 4, integrated into a compact keyboard. Priced at just $70 for the computer on its own, or $100 for a ready-to-go kit, if you’re looking for an affordable PC for day-to-day use this is the Raspberry Pi for you.

Buy the kit

The Raspberry Pi 400 Personal Computer Kit is the “Christmas morning” product, with the best possible out-of-box experience: a complete PC which plugs into your TV or monitor. The kit comprises:

  • A Raspberry Pi 400 computer
  • Our official USB mouse
  • Our official USB-C power supply
  • An SD card with Raspberry Pi OS pre-installed
  • A micro HDMI to HDMI cable
  • The official Raspberry Pi Beginner’s Guide

At launch, we are supporting English (UK and US), French, Italian, German, and Spanish keyboard layouts, with (for the first time) translated versions of the Beginner’s Guide. In the near future, we plan to support the same set of languages as our official keyboard.

Buy the computer

Saving money by bringing your own peripherals has always been part of the Raspberry Pi ethos. If you already have the other bits of the kit, you can buy a Raspberry Pi 400 computer on its own for just $70.

A close up of the left-hand keys of the Raspberry Pi 400

Buy the book

To accompany Raspberry Pi 400, we’ve released a fourth edition of our popular Raspberry Pi Beginner’s Guide, packed with updated material to help you get the most out of your new PC.

You can buy a copy of the Beginner’s Guide today from the Raspberry Pi Press store, or download a free PDF.

Where to buy Raspberry Pi 400

UK, US, and French Raspberry Pi 400 kits and computers are available to buy right now. Italian, German, and Spanish units are on their way to Raspberry Pi Approved Resellers, who should have them in stock in the next week.

We expect that Approved Resellers in India, Australia, and New Zealand will have kits and computers in stock by the end of the year. We’re rapidly rolling out compliance certification for other territories too, so that Raspberry Pi 400 will be available around the world in the first few months of 2021.

Of course, if you’re anywhere near Cambridge, you can head over to the Raspberry Pi Store to pick up your Raspberry Pi 400 today.

What does everyone else think?

We let a handful of people take an early look at Raspberry Pi 400 so they could try it out and pull together their thoughts to share with you. Here’s what some of them made of it.

Simon Martin, who has spent the last couple of years bringing Raspberry Pi 400 to life, will be here tomorrow to share some of the interesting technical challenges that he encountered along the way. In the meantime, start thinking about what you’ll do with your Raspberry Pi PC.

The post Raspberry Pi 400: the $70 desktop PC appeared first on Raspberry Pi.

Raspberry Pi Compute Module 4 on sale now from $25

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-compute-module-4/

It’s become a tradition that we follow each Raspberry Pi model with a system-on-module variant based on the same core silicon. Raspberry Pi 1 gave rise to the original Compute Module in 2014; Raspberry Pi 3 and 3+ were followed by Compute Module 3 and 3+ in 2017 and 2019 respectively. Only Raspberry Pi 2, our shortest-lived flagship product at just thirteen months, escaped the Compute Module treatment.

It’s been sixteen months since we unleashed Raspberry Pi 4 on the world, and today we’re announcing the launch of Compute Module 4, starting from $25.

Over half of the seven million Raspberry Pi units we sell each year go into industrial and commercial applications, from digital signage to thin clients to process automation. Many of these applications use the familiar single-board Raspberry Pi, but for users who want a more compact or custom form factor, or on-board eMMC storage, Compute Module products provide a simple way to move from a Raspberry Pi-based prototype to volume production.

A step change in performance

Built on the same 64-bit quad-core BCM2711 application processor as Raspberry Pi 4, our Compute Module 4 delivers a step change in performance over its predecessors: faster CPU cores, better multimedia, more interfacing capabilities, and, for the first time, a choice of RAM densities and a wireless connectivity option.

Raspberry Pi Compute Module 4

You can find detailed specs here, but let’s run through the highlights:

  • 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU
  • VideoCore VI graphics, supporting OpenGL ES 3.x
  • 4Kp60 hardware decode of H.265 (HEVC) video
  • 1080p60 hardware decode, and 1080p30 hardware encode of H.264 (AVC) video
  • Dual HDMI interfaces, at resolutions up to 4K
  • Single-lane PCI Express 2.0 interface
  • Dual MIPI DSI display, and dual MIPI CSI-2 camera interfaces
  • 1GB, 2GB, 4GB or 8GB LPDDR4-3200 SDRAM
  • Optional 8GB, 16GB or 32GB eMMC Flash storage
  • Optional 2.4GHz and 5GHz IEEE 802.11b/g/n/ac wireless LAN and Bluetooth 5.0
  • Gigabit Ethernet PHY with IEEE 1588 support
  • 28 GPIO pins, with up to 6 × UART, 6 × I2C and 5 × SPI
Compute Module 4 Lite (without eMMC Flash memory)
Compute Module 4 Lite, our variant without eMMC Flash memory

New, more compact form factor

Compute Module 4 introduces a brand new form factor, and a compatibility break with earlier Compute Modules. Where previous modules adopted the JEDEC DDR2 SODIMM mechanical standard, with I/O signals on an edge connector, we now bring I/O signals to two high-density perpendicular connectors (one for power and low-speed interfaces, and one for high-speed interfaces).

This significantly reduces the overall footprint of the module on its carrier board, letting you achieve smaller form factors for your products.

High-density connector on board underside

32 variants

With four RAM options, four Flash options, and optional wireless connectivity, we have a total of 32 variants, with prices ranging from $25 (for the 1GB RAM, Lite, no wireless variant) to $90 (for the 8GB RAM, 32GB Flash, wireless variant).

We’re very pleased that the four variants with 1GB RAM and no wireless keep the same price points ($25, $30, $35, and $40) as their Compute Module 3+ equivalents: once again, we’ve managed to pack a lot more performance into the platform without increasing the price.

You can find the full price list in the Compute Module 4 product brief.

Compute Module 4 IO Board

To help you get started with Compute Module 4, we are also launching an updated IO Board. Like the IO boards for earlier Compute Module products, this breaks out all the interfaces from the Compute Module to standard connectors, providing a ready-made development platform and a starting point for your own designs.

Compute Module 4 IO Board

The IO board provides:

  • Two full-size HDMI ports
  • Gigabit Ethernet jack
  • Two USB 2.0 ports
  • MicroSD card socket (only for use with Lite, no-eMMC Compute Module 4 variants)
  • PCI Express Gen 2 x1 socket
  • HAT footprint with 40-pin GPIO connector and PoE header
  • 12V input via barrel jack (supports up to 26V if PCIe unused)
  • Camera and display FPC connectors
  • Real-time clock with battery backup

CAD for the IO board is available in KiCad format. You may recall that a few years ago we made a donation to support improvements to KiCad’s differential pair routing and track length control features; now you can use this feature-rich, open-source PCB layout package to design your own Compute Module carrier board.

Compute Module 4 mounted on the IO Board

In addition to serving as a development platform and reference design, we expect the IO board to be a finished product in its own right: if you require a Raspberry Pi that supports a wider range of input voltages, has all its major connectors in a single plane, or allows you to attach your own PCI Express devices, then Compute Module 4 with the IO Board does what you need.

We’ve set the price of the bare IO board at just $35, so a complete package including a Compute Module starts from $60.

Compute Module 4 Antenna Kit

We expect that most users of wireless Compute Module variants will be happy with the on-board PCB antenna. However, in some circumstances – for example, where the product is in a metal case, or where it is not possible to provide the necessary ground plane cut-out under the module – an external antenna will be required. The Compute Module 4 Antenna Kit comprises a whip antenna, with a bulkhead screw fixture and U.FL connector to attach to the socket on the module.

Antenna Kit and Compute Module 4

When using either the Antenna Kit or the on-board antenna, you can take advantage of our modular certification to reduce the conformance testing costs for your finished product. And remember, the Raspberry Pi Integrator Programme is there to help you get your Compute Module-based product to market.

Our most powerful Compute Module

This is our best Compute Module yet. It’s also our first product designed by Dominic Plunkett, who joined us almost exactly a year ago.

I sat down with Dominic last week to discuss Compute Module 4 in greater detail, and you can find the video of our conversation here. Dominic will also be sharing more technical detail in the blog tomorrow.

In the meantime, check out the Compute Module 4 page for the datasheet and other details, and start thinking about what you’ll build with Compute Module 4.

The post Raspberry Pi Compute Module 4 on sale now from $25 appeared first on Raspberry Pi.

New book: The Official Raspberry Pi Camera Guide

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/new-book-the-official-raspberry-pi-camera-guide/

To coincide with yesterday’s launch of the Raspberry Pi High Quality Camera, Raspberry Pi Press has created a new Official Camera Guide to help you get started and inspire your future projects.

The Raspberry Pi High Quality Camera

Connecting a High Quality Camera turns your Raspberry Pi into a powerful digital camera. This 132-page book tells you everything you need to know to set up the camera, attach a lens, and start capturing high-resolution photos and video footage.

Make those photos snazzy

The book tells you everything you need to know in order to use the camera by issuing commands in a terminal window or via SSH. It also demonstrates how to control the camera with Python using the excellent picamera library.

You’ll discover the many image modes and effects available – our favourite is ‘posterise’.
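If you'd like a taste before the book arrives, here is a minimal Python sketch using the picamera library. It assumes picamera is installed and the camera is enabled; the file name and the two-second warm-up delay are illustrative choices rather than an excerpt from the book.

#!/usr/bin/env python3
# Minimal sketch: capture a still with the 'posterise' effect using picamera.
# Assumes the picamera library is installed and the camera is enabled;
# the file name and warm-up delay are illustrative choices.
from time import sleep
from picamera import PiCamera

camera = PiCamera()
try:
    camera.image_effect = 'posterise'   # one of the built-in image effects
    camera.start_preview()
    sleep(2)                            # let the sensor settle its exposure
    camera.capture('posterised.jpg')    # save the still to disk
finally:
    camera.stop_preview()
    camera.close()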

Build some amazing camera-based projects

Once you’ve got the basics down, you can start using your camera for a variety of exciting Raspberry Pi projects showcased across the book’s 17 packed chapters. Want to make a camera trap to monitor the wildlife in your garden? Build a smart door with a video doorbell? Try out high-speed and time-lapse photography? Or even find out which car is parked in your driveway using automatic number-plate recognition? The book has all this covered, and a whole lot more.

Don’t have a High Quality Camera yet? No problem. All the commands in the book are exactly the same for the standard Raspberry Pi Camera Module, so you can also use this model with the help of our Official Camera Guide.

Snap it up!

The Official Raspberry Pi Camera Guide is available now from the Raspberry Pi Press online store for £10. And, as always, we have also released the book as a free PDF. But the physical book feels so good to hold and looks so handsome on your bookshelf, we don’t think you’ll regret getting your hands on the print edition.

Whichever format you choose, have fun shooting amazing photos and videos with the new High Quality Camera. And do share what you capture with us on social media using #ShotOnRaspberryPi.

The post New book: The Official Raspberry Pi Camera Guide appeared first on Raspberry Pi.

New product: Raspberry Pi High Quality Camera on sale now at $50

Post Syndicated from Simon Martin original https://www.raspberrypi.org/blog/new-product-raspberry-pi-high-quality-camera-on-sale-now-at-50/

We’re pleased to announce a new member of the Raspberry Pi camera family: the 12.3-megapixel High Quality Camera, available today for just $50, alongside a range of interchangeable lenses starting at $25.

NEW Raspberry Pi High Quality Camera


It’s really rather good, as you can see from this shot of Cambridge’s finest bit of perpendicular architecture.

At 69 years, King’s College Chapel took only slightly longer to finish than the High Quality Camera.

And this similarly pleasing bit of chip architecture.

Ready for your closeup.

Raspberry Pi and the camera community

There has always been a big overlap between Raspberry Pi hackers and camera hackers. Even back in 2012, people (okay, substantially Dave Hunt) were finding interesting ways to squeeze more functionality out of DSLR cameras using their Raspberry Pi computers.

Dave’s water droplet photography. Still, beautiful.

The OG Raspberry Pi camera module

In 2013, we launched our first camera board, built around the OmniVision OV5647 5‑megapixel sensor, followed rapidly by the original Pi NoIR board, with infrared sensitivity and a little magic square of blue plastic. Before long, people were attaching them to telescopes and using them to monitor plant health from drones (using the aforementioned little square of plastic).

TJ EMSLEY Moon Photography

We like the Moon.

Sadly, OV5647 went end-of-life in 2015, and the 5-megapixel camera has the distinction of being one of only three products (along with the original Raspberry Pi 1 and the official WiFi dongle) that we’ve ever discontinued. Its replacement, built around the 8-megapixel Sony IMX219 sensor, launched in April 2016; it has found a home in all sorts of cool projects, from line-followers to cucumber sorters, ever since. Going through our sales figures while writing this post, we were amazed to discover we’ve sold over 1.7 million of these to date.

The limitations of fixed-focus

Versatile though they are, there are limitations to mobile phone-type fixed-focus modules. The sensors themselves are relatively small, which translates into a lower signal-to-noise ratio and poorer low-light performance; and of course there is no option to replace the lens assembly with a more expensive one, or one with different optical properties. These are the shortcomings that the High Quality Camera is designed to address.

Photograph of a Raspberry Pi 4 captured by the Raspberry Pi Camera Module v2
Photograph of a Raspberry Pi 4 captured by the Raspberry Pi High Quality Camera

Raspberry Pi High Quality Camera

Raspberry Pi High Quality Camera, without a lens attached

Features include:

  • 12.3 megapixel Sony IMX477 sensor
  • 1.55μm × 1.55μm pixel size – double the pixel area of IMX219
  • Back-illuminated sensor architecture for improved sensitivity
  • Support for off-the-shelf C- and CS-mount lenses
  • Integrated back-focus adjustment ring and tripod mount

We expect that over time people will use quite a wide variety of lenses, but for starters our Approved Resellers will be offering a couple of options: a 6 mm CS‑mount lens at $25, and a very shiny 16 mm C-mount lens priced at $50.

Our launch-day lens selection.

Read all about it

Also out today is our new Official Raspberry Pi Camera Guide, covering both the familiar Raspberry Pi Camera Module and the new Raspberry Pi High Quality Camera.

We’ll never not be in love with Jack’s amazing design work.

Our new guide, published by Raspberry Pi Press, walks you through setting up and using your camera with your Raspberry Pi computer. You’ll also learn how to use filters and effects to enhance your photos and videos, and how to set up creative projects such as stop-motion animation stations, wildlife cameras, smart doorbells, and much more.

Aardman ain’t got nothing on you.

You can purchase the book in print today from the Raspberry Pi Press store for £10, or download the PDF for free from The MagPi magazine website.

Credits

As with every product we build, the High Quality Camera has taught us interesting new things, in this case about producing precision-machined aluminium components at scale (and to think we thought injection moulding was hard!). Getting this right has been something of a labour of love for me over the past three years, designing the hardware and getting it to production. Naush Patuck tuned the VideoCore IV ISP for this sensor; David Plowman helped with lens evaluation; Phil King produced the book; Austin Su provided manufacturing support.

We’d like to acknowledge Phil Holden at Sony in San Jose, the manufacturing team at Sony UK Tec in Pencoed for their camera test and assembly expertise, and Shenzhen O-HN Optoelectronic for solving our precision engineering challenges.

FAQs

Which Raspberry Pi models support the High Quality Camera?

The High Quality Camera is compatible with almost all Raspberry Pi models, from the original Raspberry Pi 1 Model B onward. Some very early Raspberry Pi Zero boards from the start of 2016 lack a camera connector, and other Zero users will need the same adapter FPC that is used with Camera Module v2.

What about Camera Module v2?

The regular and infrared versions of Camera Module v2 will still be available. The High Quality Camera does not supersede it. Instead, it provides a different tradeoff between price, performance, and size.

What lenses can I use with the High Quality Camera?

You can use C- and CS-mount lenses out of the box (C-mount lenses use the included C-CS adapter). Third-party adapters are available from a wide variety of lens standards to CS-mount, so it is possible to connect any lens that meets the back‑focus requirements.

We’re looking forward to seeing the oldest and/or weirdest lenses anyone can get working, but here’s one for starters, courtesy of Fiacre.

Do not try this at home. Or do: fine either way.

The post New product: Raspberry Pi High Quality Camera on sale now at $50 appeared first on Raspberry Pi.

Raspberry Pi 4 on sale now from $35

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-4-on-sale-now-from-35/

We have a surprise for you today: Raspberry Pi 4 is now on sale, starting at $35. This is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while retaining the interfacing capabilities and hackability of the classic Raspberry Pi line.

Raspberry Pi 4: your new $35 computer


Get yours today from our Approved Resellers, or from the Raspberry Pi Store in Cambridge, open today 8am–8pm!

Raspberry Pi 4 Model B

Here are the highlights:

  • A 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU (~3× performance)
  • 1GB, 2GB, or 4GB of LPDDR4 SDRAM
  • Full-throughput Gigabit Ethernet
  • Dual-band 802.11ac wireless networking
  • Bluetooth 5.0
  • Two USB 3.0 and two USB 2.0 ports
  • Dual monitor support, at resolutions up to 4K
  • VideoCore VI graphics, supporting OpenGL ES 3.x
  • 4Kp60 hardware decode of HEVC video
  • Complete compatibility with earlier Raspberry Pi products

And here it is in the flesh:

Still a handsome devil

Raspberry Pi 4 memory options

This is the first time we’re offering a choice of memory capacities. We’ve gone for the following price structure, retaining our signature $35 price for the entry-level model:

RAM     Retail price
1GB     $35
2GB     $45
4GB     $55

As always these prices exclude sales tax, import duty (where appropriate), and shipping. All three variants are launching today: we have initially built more of the 2GB variant than of the others, and will adjust the mix over time as we discover which one is most popular.

New Raspberry Pi 4, new features

At first glance, the Raspberry Pi 4 board looks very similar to our previous $35 products, all the way back to 2014’s Raspberry Pi 1B+. James worked hard to keep it this way, but for the first time he has made a small number of essential tweaks to the form factor to accommodate new features.

Power

We’ve moved from USB micro-B to USB-C for our power connector. This supports an extra 500mA of current, ensuring we have a full 1.2A for downstream USB devices, even under heavy CPU load.

An extra half amp, and USB OTG to boot

Video

To accommodate dual display output within the existing board footprint, we’ve replaced the type-A (full-size) HDMI connector with a pair of type-D (micro) HDMI connectors.

Seeing double

Ethernet and USB

Our Gigabit Ethernet magjack has moved to the top right of the board, from the bottom right, greatly simplifying PCB routing. The 4-pin Power-over-Ethernet (PoE) connector remains in the same location, so Raspberry Pi 4 remains compatible with the PoE HAT.

Through the looking glass

The Ethernet controller on the main SoC is connected to an external Broadcom PHY over a dedicated RGMII link, providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane, and providing a total of 4Gbps of bandwidth, shared between the four ports.

All three connectors on the right-hand side of the board overhang the edge by an additional millimetre, with the aim of simplifying case design. In all other respects, the connector and mounting hole layout remains the same, ensuring compatibility with existing HATs and other accessories.

New Raspbian software

To support Raspberry Pi 4, we are shipping a radically overhauled operating system, based on the forthcoming Debian 10 Buster release. This brings numerous behind-the-scenes technical improvements, along with an extensively modernised user interface, and updated applications including the Chromium 74 web browser. Simon will take an in-depth look at the changes in tomorrow’s blog post, but for now, here’s a screenshot of it in action.

Raspbian Buster desktop

Some advice for those who are keen to get going with Raspbian Buster right away: we strongly recommend you download a new image, rather than upgrading an existing card. This ensures that you’re starting with a clean, working Buster system. If you really, really want to try upgrading, make a backup first.

One notable step forward is that for Raspberry Pi 4, we are retiring the legacy graphics driver stack used on previous models. Instead, we’re using the Mesa “V3D” driver developed by Eric Anholt at Broadcom over the last five years. This offers many benefits, including OpenGL-accelerated web browsing and desktop composition, and the ability to run 3D applications in a window under X. It also eliminates roughly half of the lines of closed-source code in the platform.

New Raspberry Pi 4 accessories

Connector and form-factor changes bring with them a requirement for new accessories. We’re sensitive to the fact that we’re requiring people to buy these: Mike and Austin have worked hard to source good-quality, cost-effective products for our reseller and licensee partners, and to find low-cost alternatives where possible.

Raspberry Pi 4 Case

Gordon has been working with our design partners Kinneir Dufort and manufacturers T-Zero to develop an all-new two-part case, priced at $5.

New toy, new toy box

We’re very pleased with how this has turned out, but if you’d like to re-use one of our existing cases, you can simply cut away the plastic fins on the right-hand side and omit one of the side panels as shown below.

Quick work with a Dremel

Raspberry Pi 4 Power Supply

Good, low-cost USB-C power supplies (and USB-C cables) are surprisingly hard to find, as we discovered when sending out prototype units to alpha testers. So we worked with Ktec to develop a suitable 5V/3A power supply; this is priced at $8, and is available in UK (type G), European (type C), North American (type A) and Australian (type I) plug formats.

Behold the marvel that is BS 1363

If you’d like to re-use a Raspberry Pi 3 Official Power Supply, our resellers are offering a $1 adapter which converts from USB micro-B to USB-C. The thick wires and good load-step response of the old official supply make this a surprisingly competitive solution if you don’t need a full 3 amps.

Somewhat less marvellous, but still good

Raspberry Pi 4 micro HDMI Cables

Again, low-cost micro HDMI cables which reliably support the 6Gbps data rate needed for 4Kp60 video can be hard to find. We like the Amazon Basics cable, but we’ve also sourced a 1m cable, which will be available from our resellers for $5.

Official micro HDMI to HDMI cable

Updated Raspberry Pi Beginner’s Guide

At the end of last year, Raspberry Pi Press released the Official Raspberry Pi Beginner’s Guide. Gareth Halfacree has produced an updated version, covering the new features of Raspberry Pi 4 and our updated operating system.

Little computer people

Raspberry Pi 4 Desktop Kit

Bringing all of this together, we’re offering a complete Desktop Kit. This is priced at $120, and comprises:

  • A 4GB Raspberry Pi 4
  • An official case
  • An official PSU
  • An official mouse and keyboard
  • A pair of HDMI cables
  • A copy of the updated Beginner’s Guide
  • A pre-installed 32GB microSD card

Raspberry Pi Desktop Kit

Raspberry Pi Store

This is the first product launch following the opening of our store in Cambridge, UK. For the first time, you can come and buy Raspberry Pi 4 directly from us, today. We’ll be open from 8am to 8pm, with units set up for you to play with and a couple of thousand on hand for you to buy. We even have some exclusive launch-day swag.

The Raspberry Pi Store sign

Form an orderly line

If you’re in the bottom right-hand corner of the UK, come on over and check it out!

New Raspberry Pi silicon

Since we launched the original Raspberry Pi in 2012, all our products have been based on 40nm silicon, with performance improvements delivered by adding progressively larger in-order cores (Cortex-A7, Cortex-A53) to the original ARM11-based BCM2835 design. With BCM2837B0 for Raspberry Pi 3B+ we reached the end of that particular road: we could no longer afford to toggle more transistors within our power budget.

Raspberry Pi 4 is built around BCM2711, a complete re-implementation of BCM283X on 28nm. The power savings delivered by the smaller process geometry have allowed us to replace Cortex-A53 with the much more powerful, out-of-order, Cortex-A72 core; this can execute more instructions per clock, yielding performance increases over Raspberry Pi 3B+ of between two and four times, depending on the benchmark.

We’ve taken advantage of the process change to overhaul many other elements of the design. We moved to a more modern memory technology, LPDDR4, tripling available bandwidth; we upgraded the entire display pipeline, including video decode, 3D graphics and display output to support 4Kp60 (or dual 4Kp30) throughput; and we addressed the non-multimedia I/O limitations of previous devices by adding on-board Gigabit Ethernet and PCI Express controllers.

Raspberry Pi 4 FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Wait, is it 2020 yet?

In the past, we’ve indicated 2020 as a likely introduction date for Raspberry Pi 4. We budgeted time for four silicon revisions of BCM2711 (A0, B0, C0, and C1); in comparison, we ship BCM2835C2 (the fifth revision of that design) on Raspberry Pi 1 and Zero.

Fortunately, 2711B0 has turned out to be production-ready, which has taken roughly 9–12 months out of the schedule.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with the existing products for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1B+, 2B, 3B, and 3B+ will continue to sell for $25, $35, $35, and $35 respectively.

What about a Model A version?

Historically, we’ve produced cut-down, lower-cost, versions of some of our $35 products, including Model 1A+ in 2014, and Model 3A+ at the end of last year. At present we haven’t identified a sensible set of changes to allow us to do a “Model 4A” product at significantly less than $35. We’ll keep looking though.

What about the Compute Module?

CM1, CM3, and CM3+ will continue to be available. We are evaluating options for producing a Compute Module product based on the Raspberry Pi 4 chipset.

Are you still using VideoCore?

Yes. VideoCore 3D is the only publicly documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.

Credits

A project like Raspberry Pi 4 is the work of many hundreds of people, and we always try to acknowledge some of those people here.

This time round, particular credit is due to James Adams, who designed the board itself (you’ll find his signature under the USB 3.0 socket); to Mike Buffham, who ran the commercial operation, working with suppliers, licensees, and resellers to bring our most complicated product yet to market; and to all those at Raspberry Pi and Broadcom who have worked tirelessly to make this product a reality over the last few years.

A partial list of others who made major direct contributions to the BCM2711 chip program, CYW43455, VL805, and MxL7704 integrations, DRAM qualification, and Raspberry Pi 4 itself follows:

James Adams, Cyrus Afghahi, Snehil Agrawal, Sam Alder, Kiarash Amiri, Andrew Anderson, Eng Lim Ang, Eric Anholt, Greg Annandale, Satheesh Appukuttan, Amy Au, Ben Avison, Matt Bace, Neil Bailey, Jock Baird, Scott Baker, Alix Ball, Giles Ballard, Paul Barnes, Russell Barnes, Fiona Batchelor, Alex Bate, Kris Baxter, Paul Beech, Michael Belhazy, Jonathan Bell, John Bellairs, Oguz Benderli, Doug Berger, Ron Berthiaume, Raj Bharadwaj, Geoff Blackman, Ed Bleich, Debbie Brandenburg, David Brewer, Daniel Brierton, Adam Brown, Mike Buffham, Dan Caley, Mark Calleja, Rob Canaway, Cindy Cao, Victor Carmon, Ian Carter, Alex Carter, Amy Carter, Mark Castruita, KK Chan, Louis Chan, Nick Chase, Sherman Chen, Henry Chen, Yuliang Cheng, Chun Fai Cheung, Ravi Chhabra, Scott Clark, Tim Clifford, Nigel Clift, Dom Cobley, Steve Cole, Philip Colligan, Stephen Cook, Sheena Coote, Sherry Coutu, John Cowan-Hughes, John Cox, Peter Coyle, Jon Cronk, Darryl Cross, Steve Dalton, Neil Davies, Russell Davis, Tom De Vall, Jason Demas, Todd DeRego, Ellie Dobson, David Doyle, Alex Eames, Nicola Early, Jeff Echtenkamp, Andrew Edwards, Kevin Edwards, Phil Elwell, Dave Emett, Jiin Taur Eng, Gabrielle England, YG Eom, Peggy Escobedo, Andy Evans, Mark Evans, Florian Fainelli, David Ferguson, Ilan Finkelstein, Nick Francis, Liam Fraser, Ian Furlong, David Gammon, Jan Gaterman, Eric Gavami, Doug Giles, Andrew Goros, Tim Gover, Trevor Gowen, Peter Green, Simon Greening, Tracey Gregory, Efim Gukovsky, Gareth Halfacree, Mark Harris, Lucy Hattersley, James Hay, Richard Hayler, Gordon Henderson, Leon Hesch, Albert Hickey, Kevin Hill, Stefan Ho, Andrew Hoare, Lewis Hodder, William Hollingworth, Gordon Hollingworth, Michael Horne, Wanchen Hsu, David Hsu, Kevin YC Huang, Pei Huang, Peter Huang, Scofield Huang, James Hughes, Andy Hulbert, Carl Hunt, Rami Husni, Steven Hwang, Incognitum, Bruno Izern, Olivier Jacquemart, Mini Jain, Anurag Jain, Anand Jain, Geraint James, Dinesh Jayabharathi, Vinit Jayaraj, Nick Jeffery, Mengjie Jiang, David John, Alison Johnston, Lily Jones, Richard Jones, Tony Jones, Gareth Jones, Gary Kao, Gary Keall, Gerald Kelly, Ian Kersley, Gerard Khoo, Dani Kidouchim, Phil King, Andreas Knobloch, Bahar Kordi-Borojeni, Claire Kuo, Nicole Kuo, Wayne Kusumo, Koen Lampaert, Wyn Landon, Trever Latham, William Lee, Joon Lee, William Lee, Dave Lee, Simon Lewis, David Lewsey, Sherman Li, Xizhe Li, Jay Li, John CH Lin, Johan Lin, Jonic Linley, Chris Liou, Lestin Liu, Simon Long, Roy Longbottom, Patrick Loo, James Lougheed, Janice Lu, Fu Luo-Larson, Jeff Lussier, Helen Lynn, Terence Mackown, Neil MacLeod, Kevin Malone, Shahin Maloyan, Tim Mamtora, Stuart Martin, Simon Martin, Daniel Mason, Karen Matulis, Andrea Mauri, Scott McGregor, Steven Mcninch, Ben Mercer, Kamal Merchant, James Mills, Vassil Mitov, Brendan Moran, Alan Morgan, Giorgia Muirhead, Fiacre Muller, Aram Nahidipour, Siew Ling Ng, Thinh Nguyen, Lee Nguyen, Steve Noh, Paul Noonan, Keri Norris, Rhian Norris, Ben Nuttall, Brian O’Halloran, Martin O’Hanlon, Yong Oh, Simon Oliver, Mandy Oliver, Emma Ormond, Shiji Pan, Christopher Pasqualino, Max Passell, Naush Patuck, Eric Phiri, Dominic Plunkett, Karthik Rajendran, Ashwin Rao, Nick Raptopoulos, Chaitanya Ray, Justin Rees, Hias Reichl, Lorraine Richards, David Richardson, Tim Richardson, Dan Riiff, Peter de Rivaz, Josh Rix, Alwyn Roberts, Andrew Robinson, Kevin Robinson, Paul Rolfe, Marcelo Romero, Jonathan Rosenfeld, Sarah Roth, Matt Rowley, Matthew Rowley, Dave Saarinen, Ali Salem, Suzie Sanders, Graham Sanderson, 
Aniruddha Sane, Marion Scheuermann, Serge Schneider, Graham Scott, Marc Scott, Saran Kumar Seethapathi, Shawn Shadburn, Abdul Shaik, Mark Skala, Graham Smith, Michael Smith, Martin Sperl, Ajay Srivastava, Nick Steele, Ben Stephens, Dave Stevenson, Mike Stimson, Chee Siong Su, Austin Su, Prem Swaroop, Grant Taylor, Daniel Thompsett, Stuart Thomson, Eddie Thorn, Roger Thornton, Chris Tomlinson, Stephen Toomey, Mohamed Toubella, Frankie Tsai, Richard Tuck, Mike Unwin, Liz Upton, Manoj Vajhallya, Sandeep Venkatadas, Divya Vittal, John Wadsworth, Stefan Wahren, Irene Wang, Jeremy Wang, Rich Wells, Simon West, Joe Whaley, Craig Wightman, Oli Wilkin, Richard Wilkins, Sarah Williams, Jack Willis, Rob Wilson, Luke Wren, Romona Wu, Zheng Xu, Paul Yang, Pawel Zackiewicz, Ling Zhang, Jean Zhou, Ulf Ziemann, Rob Zwetsloot.

If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 4 on sale now from $35 appeared first on Raspberry Pi.

Create wearable tech with Sophy Wong and our new book | HackSpace magazine issue 18

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/create-wearable-tech-projects-with-sophy-wong/

Forget Apple Watch and Fitbit — if we’re going to wear something electronic, we want to make it ourselves!

Wearable Tech Projects, from the makers of HackSpace magazine, is a 164-page book for the fashionable electronics enthusiast, packed with more than 30 projects that will blink, flash, and spark joy in your life.

Sophy Wong HackSpace Wearable Tech Projects book

Make a wearable game controller

Fans of Sophy Wong will already know about the amazing wearable tech that she develops. We wanted to make sure that more people discovered her work and the incredible world of wearable technology. You’ll start simple with sewable circuits and LEDs, and work all the way up to building your own wearable controller (complete with feathers) for an interactive, fully immersive game of Flappy Bird.

Sophy Wong HackSpace Wearable Tech Projects book

Pick up the tricks of the trade

Along the way, you’ll embed NFC data in a pair of cufflinks, laser cut jewellery, 3D print LED diffusers onto fabric for a cyberpunk leather jacket, and lots more.

 

Sophy Wong HackSpace Wearable Tech Projects book

Learn new techniques from Sophy Wong

You’ll discover new techniques for working with fabric, find out about the best microcontrollers for your projects, and learn the basics of CircuitPython, the language developed at Adafruit for physical computing. There’s no ‘Hello, World!’ or computer theory here; this is all about practical results and making unique, fascinating things to wear.

Get your copy today

Wearable Tech Projects is available to buy online for £10 with free delivery. You can also get it from WHSmith and all the usual high street retail suspects.


And that’s not all. There is also a new issue of HackSpace magazine out now, with an awesome special feature on space! You can find your copy at the same retailers as above. You can also download both Issue 18 and the Wearables book for free from the HackSpace website.

 

The post Create wearable tech with Sophy Wong and our new book | HackSpace magazine issue 18 appeared first on Raspberry Pi.

Introducing the Raspberry Pi TV HAT

Post Syndicated from Roger Thornton original https://www.raspberrypi.org/blog/raspberry-pi-tv-hat/

Today we are excited to launch a new add-on board for your Raspberry Pi: the Raspberry Pi TV HAT.

A photograph of a Raspberry Pi TV HAT with aerial lead connected Oct 2018

The TV HAT connects to the 40-pin GPIO header and to a suitable antenna, allowing your Raspberry Pi to receive DVB-T2 television broadcasts.

A photograph of a Raspberry Pi Zero W with TV HAT connected Oct 2018

Watch TV with your Raspberry Pi

With the board, you can receive and view television on a Raspberry Pi, or you can use your Pi as a server to stream television over a network to other devices. The TV HAT works with all 40-pin GPIO Raspberry Pi boards when running as a server. If you want to watch TV on the Pi itself, we recommend using a Pi 2, 3, or 3B+, as you may need more processing power for this.

A photograph of a Raspberry Pi 3 Model B+ with TV HAT connected Oct 2018

Stream television over your network

Viewing television is not restricted to Raspberry Pi computers: with a TV HAT connected to your network, you can view streams on any network-connected device. That includes other computers, mobile phones, and tablets. You can find instructions for setting up your TV HAT in our step-by-step guide.


New HAT form factor

The Raspberry Pi TV HAT follows a new form factor of HAT (Hardware Attached on Top), which we are also announcing today. The TV HAT is a half-size HAT that matches the outline of Raspberry Pi Zero boards. A new HAT spec is available now. No features have changed electrically – this is a purely mechanical change.

Raspberry Pi TV HAT mechanical drawing Oct 2018

A mechanical drawing of a Raspberry Pi TV HAT, exemplifying the spec of the new HAT form factor.

The TV HAT has three bolt holes; we omitted the fourth so that the HAT can be placed on a large-size Pi without obstructing the display connector.

The board comes with a set of mechanical spacers, a 40-way header, and an aerial adaptor.

A photograph of a Raspberry Pi TV HAT Oct 2018

Licences

Digital Video Broadcast (DVB) is a widely adopted standard for transmitting broadcast television; see countries that have adopted the DVB standard here.

Initially, we will be offering the TV HAT in Europe only. Compliance work is already underway to open up other DVB-T2 regions. If you purchase a TV HAT, you must have the appropriate licence or approval to receive broadcast television. You can find a list of licences for Europe here. If in doubt, please contact your local licensing body.

The Raspberry Pi TV HAT opens up some fantastic opportunities for people looking to embed a TV receiver into their networks. Head over to the TV HAT product page to find out where to get hold of yours. We can’t wait to see what you use it for!

The post Introducing the Raspberry Pi TV HAT appeared first on Raspberry Pi.

Analyze data in Amazon DynamoDB using Amazon SageMaker for real-time prediction

Post Syndicated from YongSeong Lee original https://aws.amazon.com/blogs/big-data/analyze-data-in-amazon-dynamodb-using-amazon-sagemaker-for-real-time-prediction/

Many companies across the globe use Amazon DynamoDB to store and query historical user-interaction data. DynamoDB is a fast NoSQL database used by applications that need consistent, single-digit millisecond latency.

Often, customers want to turn their valuable data in DynamoDB into insights by analyzing a copy of their table stored in Amazon S3. Doing this separates their analytical queries from their low-latency critical paths. This data can be the primary source for understanding customers’ past behavior, predicting future behavior, and generating downstream business value. Customers often turn to DynamoDB because of its great scalability and high availability. After a successful launch, many customers want to use the data in DynamoDB to predict future behaviors or provide personalized recommendations.

DynamoDB is a good fit for low-latency reads and writes, but it’s not practical to scan all data in a DynamoDB database to train a model. In this post, I demonstrate how you can use DynamoDB table data copied to Amazon S3 by AWS Data Pipeline to predict customer behavior. I also demonstrate how you can use this data to provide personalized recommendations for customers using Amazon SageMaker. You can also run ad hoc queries using Amazon Athena against the data. DynamoDB recently released on-demand backups to create full table backups with no performance impact. However, it’s not suitable for our purposes in this post, so I chose AWS Data Pipeline instead to create managed backups that are accessible from other services.

To do this, I describe how to read the DynamoDB backup file format in Data Pipeline. I also describe how to convert the objects in S3 to a CSV format that Amazon SageMaker can read. In addition, I show how to schedule regular exports and transformations using Data Pipeline. The sample data used in this post is from the UCI Bank Marketing Data Set.

The solution that I describe provides the following benefits:

  • Separates analytical queries from production traffic on your DynamoDB table, preserving your DynamoDB read capacity units (RCUs) for important production requests
  • Automatically updates your model to get real-time predictions
  • Optimizes for performance (so it doesn’t compete with DynamoDB RCUs after the export) and for cost (using data you already have)
  • Makes it easier for developers of all skill levels to use Amazon SageMaker

All of the code and the data set used in this post are available in this .zip file.

Solution architecture

The following diagram shows the overall architecture of the solution.

The steps that data follows through the architecture are as follows:

  1. Data Pipeline regularly copies the full contents of a DynamoDB table as JSON into an S3 bucket.
  2. Exported JSON files are converted to comma-separated value (CSV) format to use as a data source for Amazon SageMaker.
  3. Amazon SageMaker renews the model artifact and updates the endpoint.
  4. The converted CSV is available for ad hoc queries with Amazon Athena.
  5. Data Pipeline controls this flow and repeats the cycle based on the schedule defined by customer requirements.

Building the auto-updating model

This section discusses details about how to read the DynamoDB exported data in Data Pipeline and build automated workflows for real-time prediction with a regularly updated model.

Download sample scripts and data

Before you begin, take the following steps:

  1. Download sample scripts in this .zip file.
  2. Unzip the src.zip file.
  3. Find the automation_script.sh file and edit it for your environment. For example, you need to replace 's3://<your bucket>/<datasource path>/' with your own S3 path to the data source for Amazon SageMaker. In the script, the text enclosed by angle brackets—< and >—should be replaced with your own path.
  4. Upload the json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar file to your S3 path so that the ADD jar command in Apache Hive can refer to it.

For this solution, the banking.csv file should be imported into a DynamoDB table.

Export a DynamoDB table

To export the DynamoDB table to S3, open the Data Pipeline console and choose the Export DynamoDB table to S3 template. In this template, Data Pipeline creates an Amazon EMR cluster and performs an export in the EMRActivity activity. Set proper intervals for backups according to your business requirements.

One core node (m3.xlarge) provides the default capacity for the EMR cluster and should be suitable for the solution in this post. Leave the option to resize the cluster before running enabled in the TableBackupActivity activity, so that Data Pipeline can scale the cluster to match the table size. The process of converting to CSV format and renewing models happens in this EMR cluster.

For a more in-depth look at how to export data from DynamoDB, see Export Data from DynamoDB in the Data Pipeline documentation.

Add the script to an existing pipeline

After you export your DynamoDB table, you add an additional EMR step to EMRActivity by following these steps:

  1. Open the Data Pipeline console and choose the ID for the pipeline that you want to add the script to.
  2. For Actions, choose Edit.
  3. In the editing console, choose the Activities category and add an EMR step using the custom script downloaded in the previous section, as shown below.

Paste the following command into the new step after the data upload step:

s3://#{myDDBRegion}.elasticmapreduce/libs/script-runner/script-runner.jar,s3://<your bucket name>/automation_script.sh,#{output.directoryPath},#{myDDBRegion}

The element #{output.directoryPath} references the S3 path where the data pipeline exports DynamoDB data as JSON. The path should be passed to the script as an argument.

The bash script has two goals, converting data formats and renewing the Amazon SageMaker model. Subsequent sections discuss the contents of the automation script.

Automation script: Convert JSON data to CSV with Hive

We use Apache Hive to transform the data into a new format. The Hive QL script to create an external table and transform the data is included in the custom script that you added to the Data Pipeline definition.

When you run the Hive scripts, do so with the -e option. Also, define the Hive table with the 'org.openx.data.jsonserde.JsonSerDe' row format to parse and read JSON format. The SQL creates a Hive EXTERNAL table, and it reads the DynamoDB backup data on the S3 path passed to it by Data Pipeline.

Note: You should create the table with the “EXTERNAL” keyword to avoid the backup data being accidentally deleted from S3 if you drop the table.

The full automation script for the conversion follows. Add your own bucket name and data source path in place of the <your bucket name> and <datasource path> placeholders.

#!/bin/bash
hive -e "
ADD jar s3://<your bucket name>/json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar ; 
DROP TABLE IF EXISTS blog_backup_data ;
CREATE EXTERNAL TABLE blog_backup_data (
 customer_id map<string,string>,
 age map<string,string>, job map<string,string>, 
 marital map<string,string>,education map<string,string>, 
 default map<string,string>, housing map<string,string>,
 loan map<string,string>, contact map<string,string>, 
 month map<string,string>, day_of_week map<string,string>, 
 duration map<string,string>, campaign map<string,string>,
 pdays map<string,string>, previous map<string,string>, 
 poutcome map<string,string>, emp_var_rate map<string,string>, 
 cons_price_idx map<string,string>, cons_conf_idx map<string,string>,
 euribor3m map<string,string>, nr_employed map<string,string>, 
 y map<string,string> ) 
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' 
LOCATION '$1/';

INSERT OVERWRITE DIRECTORY 's3://<your bucket name>/<datasource path>/' 
SELECT concat( customer_id['s'],',', 
 age['n'],',', job['s'],',', 
 marital['s'],',', education['s'],',', default['s'],',', 
 housing['s'],',', loan['s'],',', contact['s'],',', 
 month['s'],',', day_of_week['s'],',', duration['n'],',', 
 campaign['n'],',',pdays['n'],',',previous['n'],',', 
 poutcome['s'],',', emp_var_rate['n'],',', cons_price_idx['n'],',',
 cons_conf_idx['n'],',', euribor3m['n'],',', nr_employed['n'],',', y['n'] ) 
FROM blog_backup_data
WHERE customer_id['s'] > 0 ;"

After creating the external table, you need to read its data and write it back out: the INSERT OVERWRITE DIRECTORY … SELECT command writes CSV data to the S3 path that you designated as the data source for Amazon SageMaker.

Depending on your requirements, you can eliminate or process columns in the SELECT clause in this step to optimize the data for analysis. For example, you might remove columns that have no meaningful correlation with the target value, because keeping the wrong columns can expose your model to “overfitting” during training. In this post, the customer_id column is removed. Overfitting can weaken your predictions. More information about overfitting can be found in the topic Model Fit: Underfitting vs. Overfitting in the Amazon ML documentation.

Automation script: Renew the Amazon SageMaker model

After the CSV data is in place and ready to use, create a new model artifact for Amazon SageMaker with the updated dataset on S3. To renew the model artifact, you must create a new training job. Training jobs can be run using the AWS SDK (for example, the Amazon SageMaker boto3 client), the Amazon SageMaker Python SDK (installable with the “pip install sagemaker” command), or the AWS CLI for Amazon SageMaker, which is what the script in this post uses.
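As a rough illustration of the SDK route, here is a minimal boto3 sketch that starts the same kind of linear-learner training job as the CLI call in the script below and waits for it to finish. The S3 paths and role ARN are placeholders to replace with your own values; the us-east-1 image URI is taken from the script.

#!/usr/bin/env python
# Minimal sketch: start a linear-learner training job with boto3 and wait for it.
# The S3 paths and role ARN are placeholders; the image URI matches the
# us-east-1 entry in the bash script below.
import time
import boto3

REGION = 'us-east-1'
IMAGE = '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest'
ROLE = '<your AmazonSageMaker-ExecutionRole ARN>'

sm = boto3.client('sagemaker', region_name=REGION)
job_name = 'TRAIN-' + time.strftime('%Y-%m-%d-%H-%M-%S')

sm.create_training_job(
    TrainingJobName=job_name,
    AlgorithmSpecification={'TrainingImage': IMAGE, 'TrainingInputMode': 'File'},
    RoleArn=ROLE,
    InputDataConfig=[{
        'ChannelName': 'train',
        'DataSource': {'S3DataSource': {
            'S3DataType': 'S3Prefix',
            'S3Uri': 's3://<your bucket name>/<datasource path>/',
            'S3DataDistributionType': 'FullyReplicated'}},
        'ContentType': 'text/csv',
        'CompressionType': 'None'}],
    OutputDataConfig={'S3OutputPath': 's3://<your bucket name>/model/'},
    ResourceConfig={'InstanceType': 'ml.m4.xlarge',
                    'InstanceCount': 1,
                    'VolumeSizeInGB': 5},
    StoppingCondition={'MaxRuntimeInSeconds': 120},
    HyperParameters={'feature_dim': '20', 'predictor_type': 'binary_classifier'})

# Block until the job completes or is stopped, as the CLI 'wait' command does.
sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)
print(sm.describe_training_job(
    TrainingJobName=job_name)['ModelArtifacts']['S3ModelArtifacts'])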

In addition, consider how to renew your existing model smoothly, without service impact, because your model is called by applications in real time. To do this, first create a new endpoint configuration, and then update the current endpoint with the endpoint configuration you just created.

#!/bin/bash
## Define variable 
REGION=$2
DTTIME=`date +%Y-%m-%d-%H-%M-%S`
ROLE="<your AmazonSageMaker-ExecutionRole>" 


# Select containers image based on region.  
case "$REGION" in
"us-west-2" )
    IMAGE="174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest"
    ;;
"us-east-1" )
    IMAGE="382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest" 
    ;;
"us-east-2" )
    IMAGE="404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest" 
    ;;
"eu-west-1" )
    IMAGE="438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest" 
    ;;
 *)
    echo "Invalid Region Name"
    exit 1 ;  
esac

# Start training job and creating model artifact 
TRAINING_JOB_NAME=TRAIN-${DTTIME} 
S3OUTPUT="s3://<your bucket name>/model/" 
INSTANCETYPE="ml.m4.xlarge"
INSTANCECOUNT=1
VOLUMESIZE=5 
aws sagemaker create-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --algorithm-specification TrainingImage=${IMAGE},TrainingInputMode=File --role-arn ${ROLE}  --input-data-config '[{ "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://<your bucket name>/<datasource path>/", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "text/csv", "CompressionType": "None" , "RecordWrapperType": "None"  }]'  --output-data-config S3OutputPath=${S3OUTPUT} --resource-config  InstanceType=${INSTANCETYPE},InstanceCount=${INSTANCECOUNT},VolumeSizeInGB=${VOLUMESIZE} --stopping-condition MaxRuntimeInSeconds=120 --hyper-parameters feature_dim=20,predictor_type=binary_classifier  

# Wait until job completed 
aws sagemaker wait training-job-completed-or-stopped --training-job-name ${TRAINING_JOB_NAME}  --region ${REGION}

# Get newly created model artifact and create model
MODELARTIFACT=`aws sagemaker describe-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --query 'ModelArtifacts.S3ModelArtifacts' --output text `
MODELNAME=MODEL-${DTTIME}
aws sagemaker create-model --region ${REGION} --model-name ${MODELNAME}  --primary-container Image=${IMAGE},ModelDataUrl=${MODELARTIFACT}  --execution-role-arn ${ROLE}

# create a new endpoint configuration 
CONFIGNAME=CONFIG-${DTTIME}
aws sagemaker  create-endpoint-config --region ${REGION} --endpoint-config-name ${CONFIGNAME}  --production-variants  VariantName=Users,ModelName=${MODELNAME},InitialInstanceCount=1,InstanceType=ml.m4.xlarge

# create or update the endpoint
STATUS=`aws sagemaker describe-endpoint --endpoint-name  ServiceEndpoint --query 'EndpointStatus' --output text --region ${REGION} `
if [[ "$STATUS" != "InService" ]] ;
then
    aws sagemaker  create-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}    
else
    aws sagemaker  update-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}
fi

Grant permission

Before you execute the script, you must grant proper permission to Data Pipeline. Data Pipeline uses the DataPipelineDefaultResourceRole role by default. I added the following policy to DataPipelineDefaultResourceRole to allow Data Pipeline to create, describe, and update the Amazon SageMaker training jobs, models, and endpoints used in the script.

{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action": [
 "sagemaker:CreateTrainingJob",
 "sagemaker:DescribeTrainingJob",
 "sagemaker:CreateModel",
 "sagemaker:CreateEndpointConfig",
 "sagemaker:DescribeEndpoint",
 "sagemaker:CreateEndpoint",
 "sagemaker:UpdateEndpoint",
 "iam:PassRole"
 ],
 "Resource": "*"
 }
 ]
}

Use real-time prediction

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint. This approach is useful for interactive web, mobile, or desktop applications.

Following is a simple Python code example that queries the Amazon SageMaker endpoint by name (“ServiceEndpoint”) and uses the response for real-time prediction.

=== Python sample for real-time prediction ===

#!/usr/bin/env python
import boto3
import json 

client = boto3.client('sagemaker-runtime', region_name ='<your region>' )
new_customer_info = '34,10,2,4,1,2,1,1,6,3,190,1,3,4,3,-1.7,94.055,-39.8,0.715,4991.6'
response = client.invoke_endpoint(
    EndpointName='ServiceEndpoint',
    Body=new_customer_info, 
    ContentType='text/csv'
)
result = json.loads(response['Body'].read().decode())
print(result)
--- output(response) ---
{u'predictions': [{u'score': 0.7528127431869507, u'predicted_label': 1.0}]}

Solution summary

The solution takes the following steps:

  1. Data Pipeline exports the DynamoDB table data into S3. The original JSON data should be kept so that the table can be recovered in the rare event that this is needed. Data Pipeline then converts the JSON to CSV so that Amazon SageMaker can read the data. Note: You should select only meaningful attributes when you convert to CSV. For example, if you judge that the “campaign” attribute is not correlated, you can eliminate it from the CSV.
  2. Train the Amazon SageMaker model with the new data source.
  3. When a new customer comes to your site, you can judge how likely this customer is to subscribe to your new product based on the prediction score (the “score” field in the response shown earlier) provided by Amazon SageMaker.
  4. If the new user subscribes to your new product, your application must update the attribute “y” to the value 1 (for yes), as shown in the sketch after this list. This updated data is provided for the next model renewal as a new data source and serves to improve the accuracy of your prediction. With each new entry, your application can become smarter and deliver better predictions.
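
If your application records subscriptions in the same DynamoDB table, the update in step 4 might look like the following boto3 sketch. The table name (Customer) and key attribute (customer_id) are placeholders; adjust them to your actual schema.

=== Updating the target attribute "y" (sketch) ===

import boto3

# Point at the table that Data Pipeline exports (names here are placeholders)
table = boto3.resource('dynamodb', region_name='<your region>').Table('Customer')

# Mark the customer as subscribed so the next training run learns from it
table.update_item(
    Key={'customer_id': '12345'},
    UpdateExpression='SET y = :subscribed',
    ExpressionAttributeValues={':subscribed': 1}
)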

Running ad hoc queries using Amazon Athena

Amazon Athena is a serverless query service that makes it easy to analyze large amounts of data stored in Amazon S3 using standard SQL. Athena is useful for examining data and collecting statistics or informative summaries about data. You can also use the powerful analytic functions of Presto, as described in the topic Aggregate Functions of Presto in the Presto documentation.

With the Data Pipeline scheduled activity, recent CSV data is always located in S3 so that you can run ad hoc queries against the data using Amazon Athena. I show this with example SQL statements following. For an in-depth description of this process, see the post Interactive SQL Queries for Data in Amazon S3 on the AWS News Blog. 

Creating an Amazon Athena table and running it

You can simply create an EXTERNAL table for the CSV data on S3 in the Amazon Athena Management Console.

=== Table Creation ===
CREATE EXTERNAL TABLE datasource (
 age int, 
 job string, 
 marital string , 
 education string, 
 default string, 
 housing string, 
 loan string, 
 contact string, 
 month string, 
 day_of_week string, 
 duration int, 
 campaign int, 
 pdays int , 
 previous int , 
 poutcome string, 
 emp_var_rate double, 
 cons_price_idx double,
 cons_conf_idx double, 
 euribor3m double, 
 nr_employed double, 
 y int 
)
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n' 
LOCATION 's3://<your bucket name>/<datasource path>/';

The following query calculates the correlation coefficient between the target attribute and other attributes using Amazon Athena.

=== Sample Query ===

SELECT corr(age,y) AS correlation_age_and_target, 
 corr(duration,y) AS correlation_duration_and_target, 
 corr(campaign,y) AS correlation_campaign_and_target,
 corr(contact,y) AS correlation_contact_and_target
FROM ( SELECT age , duration , campaign , y , 
 CASE WHEN contact = 'telephone' THEN 1 ELSE 0 END AS contact 
 FROM datasource 
 ) datasource ;
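
You can also run the same kind of query programmatically with boto3 rather than from the console. The sketch below runs a shortened version of the correlation query against the datasource table created above; the database name ("default") and the S3 output location are assumptions you would replace with your own.

=== Running an Athena query with boto3 (sketch) ===

import time
import boto3

athena = boto3.client('athena', region_name='<your region>')

# Start a correlation query against the datasource table created above
query = athena.start_query_execution(
    QueryString="SELECT corr(age, y) AS correlation_age_and_target FROM datasource",
    QueryExecutionContext={'Database': 'default'},
    ResultConfiguration={'OutputLocation': 's3://<your bucket name>/athena-results/'}
)
query_id = query['QueryExecutionId']

# Poll until the query finishes, then print the result rows
while athena.get_query_execution(QueryExecutionId=query_id)['QueryExecution']['Status']['State'] in ('QUEUED', 'RUNNING'):
    time.sleep(1)

for row in athena.get_query_results(QueryExecutionId=query_id)['ResultSet']['Rows']:
    print([col.get('VarCharValue') for col in row['Data']])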

Conclusion

In this post, I introduce an example of how to analyze data in DynamoDB by using table data in Amazon S3 to optimize DynamoDB table read capacity. You can then use the analyzed data as a new data source to train an Amazon SageMaker model for accurate real-time prediction. In addition, you can run ad hoc queries against the data on S3 using Amazon Athena. I also present how to automate these procedures by using Data Pipeline.

You can adapt this example to your specific use case at hand, and hopefully this post helps you accelerate your development. You can find more examples and use cases for Amazon SageMaker in the video AWS 2017: Introducing Amazon SageMaker on the AWS website.


Additional Reading

If you found this post useful, be sure to check out Serving Real-Time Machine Learning Predictions on Amazon EMR and Analyzing Data in S3 using Amazon Athena.


About the Author

Yong Seong Lee is a Cloud Support Engineer for AWS Big Data Services. He is interested in every technology related to data/databases and helping customers who have difficulties in using AWS services. His motto is “Enjoy life, be curious and have maximum experience.”


Raspberry Pi 3 Model B+ on sale now at $35

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-3-model-bplus-sale-now-35/

Here’s a long post. We think you’ll find it interesting. If you don’t have time to read it all, we recommend you watch this video, which will fill you in with everything you need, and then head straight to the product page to fill yer boots. (We recommend the video anyway, even if you do have time for a long read. ‘Cos it’s fab.)

A BRAND-NEW PI FOR π DAY


If you’ve been a Raspberry Pi watcher for a while now, you’ll have a bit of a feel for how we update our products. Just over two years ago, we released Raspberry Pi 3 Model B. This was our first 64-bit product, and our first product to feature integrated wireless connectivity. Since then, we’ve sold over nine million Raspberry Pi 3 units (we’ve sold 19 million Raspberry Pis in total), which have been put to work in schools, homes, offices and factories all over the globe.

Those Raspberry Pi watchers will know that we have a history of releasing improved versions of our products a couple of years into their lives. The first example was Raspberry Pi 1 Model B+, which added two additional USB ports, introduced our current form factor, and rolled up a variety of other feedback from the community. Raspberry Pi 2 didn’t get this treatment, of course, as it was superseded after only one year; but it feels like it’s high time that Raspberry Pi 3 received the “plus” treatment.

So, without further ado, Raspberry Pi 3 Model B+ is now on sale for $35 (the same price as the existing Raspberry Pi 3 Model B), featuring:

  • A 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU
  • Dual-band 802.11ac wireless LAN and Bluetooth 4.2
  • Faster Ethernet (Gigabit Ethernet over USB 2.0)
  • Power-over-Ethernet support (with separate PoE HAT)
  • Improved PXE network and USB mass-storage booting
  • Improved thermal management

Alongside a 200MHz increase in peak CPU clock frequency, we have roughly three times the wired and wireless network throughput, and the ability to sustain high performance for much longer periods.

Behold the shiny

Raspberry Pi 3B+ is available to buy today from our network of Approved Resellers.

New features, new chips

Roger Thornton did the design work on this revision of the Raspberry Pi. Here, he and I have a chat about what’s new.

Introducing the Raspberry Pi 3 Model B+


The new product is built around BCM2837B0, an updated version of the 64-bit Broadcom application processor used in Raspberry Pi 3B, which incorporates power integrity optimisations, and a heat spreader (that’s the shiny metal bit you can see in the photos). Together these allow us to reach higher clock frequencies (or to run at lower voltages to reduce power consumption), and to more accurately monitor and control the temperature of the chip.

Dual-band wireless LAN and Bluetooth are provided by the Cypress CYW43455 “combo” chip, connected to a Proant PCB antenna similar to the one used on Raspberry Pi Zero W. Compared to its predecessor, Raspberry Pi 3B+ delivers somewhat better performance in the 2.4GHz band, and far better performance in the 5GHz band, as demonstrated by these iperf results from LibreELEC developer Milhouse.

                           Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B            35.7                  35.6
Raspberry Pi 3B+ (2.4GHz)  46.7                  46.3
Raspberry Pi 3B+ (5GHz)    102                   102

The wireless circuitry is encapsulated under a metal shield, rather fetchingly embossed with our logo. This has allowed us to certify the entire board as a radio module under FCC rules, which in turn will significantly reduce the cost of conformance testing Raspberry Pi-based products.

We’ll be teaching metalwork next.

Previous Raspberry Pi devices have used the LAN951x family of chips, which combine a USB hub and 10/100 Ethernet controller. For Raspberry Pi 3B+, Microchip have supported us with an upgraded version, LAN7515, which supports Gigabit Ethernet. While the USB 2.0 connection to the application processor limits the available bandwidth, we still see roughly a threefold increase in throughput compared to Raspberry Pi 3B. Again, here are some typical iperf results.

                           Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B            94.1                  95.5
Raspberry Pi 3B+           315                   315

We use a magjack that supports Power over Ethernet (PoE), and bring the relevant signals to a new 4-pin header. We will shortly launch a PoE HAT which can generate the 5V necessary to power the Raspberry Pi from the 48V PoE supply.

There… are… four… pins!

Coming soon to a Raspberry Pi 3B+ near you

Raspberry Pi 3B was our first product to support PXE Ethernet boot. Testing it in the wild shook out a number of compatibility issues with particular switches and traffic environments. Gordon has rolled up fixes for all known issues into the BCM2837B0 boot ROM, and PXE boot is now enabled by default.

Clocking, voltages and thermals

The improved power integrity of the BCM2837B0 package, and the improved regulation accuracy of our new MaxLinear MxL7704 power management IC, have allowed us to tune our clocking and voltage rules for both better peak performance and longer-duration sustained performance.

Below 70°C, we use the improvements to increase the core frequency to 1.4GHz. Above 70°C, we drop to 1.2GHz, and use the improvements to decrease the core voltage, increasing the period of time before we reach our 80°C thermal throttle; the reduction in power consumption is such that many use cases will never reach the throttle. Like a modern smartphone, we treat the thermal mass of the device as a resource, to be spent carefully with the goal of optimising user experience.
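
If you'd like to watch this behaviour on your own board, the firmware exposes the relevant readings through the vcgencmd tool. Here's a small Python sketch (run on the Raspberry Pi itself) that polls temperature, ARM clock, and throttle state:

import subprocess
import time

def vcgencmd(*args):
    # Call the firmware's vcgencmd utility and return its output as text
    return subprocess.check_output(('vcgencmd',) + args).decode().strip()

while True:
    temp = vcgencmd('measure_temp')           # e.g. temp=62.3'C
    clock = vcgencmd('measure_clock', 'arm')  # e.g. frequency(45)=1400000000
    throttled = vcgencmd('get_throttled')     # throttled=0x0 means no throttling yet
    print(temp, clock, throttled)
    time.sleep(2)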

This graph, courtesy of Gareth Halfacree, demonstrates that Raspberry Pi 3B+ runs faster and at a lower temperature for the duration of an eight‑minute quad‑core Sysbench CPU test.

Note that Raspberry Pi 3B+ does consume substantially more power than its predecessor. We strongly encourage you to use a high-quality 2.5A power supply, such as the official Raspberry Pi Universal Power Supply.

FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with the existing products for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1B+, Raspberry Pi 2B, and Raspberry Pi 3B will continue to sell for $25, $35, and $35 respectively.

What about Model A+?

Raspberry Pi 1A+ continues to be the $20 entry-level “big” Raspberry Pi for the time being. We are considering the possibility of producing a Raspberry Pi 3A+ in due course.

What about the Compute Module?

CM1, CM3 and CM3L will continue to be available. We may offer versions of CM3 and CM3L with BCM2837B0 in due course, depending on customer demand.

Are you still using VideoCore?

Yes. VideoCore IV 3D is the only publicly-documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.

Credits

A project like this requires a vast amount of focused work from a large team over an extended period. Particular credit is due to Roger Thornton, who designed the board and ran the exhaustive (and exhausting) RF compliance campaign, and to the team at the Sony UK Technology Centre in Pencoed, South Wales. A partial list of others who made major direct contributions to the BCM2837B0 chip program, CYW43455 integration, LAN7515 and MxL7704 developments, and Raspberry Pi 3B+ itself follows:

James Adams, David Armour, Jonathan Bell, Maria Blazquez, Jamie Brogan-Shaw, Mike Buffham, Rob Campling, Cindy Cao, Victor Carmon, KK Chan, Nick Chase, Nigel Cheetham, Scott Clark, Nigel Clift, Dominic Cobley, Peter Coyle, John Cronk, Di Dai, Kurt Dennis, David Doyle, Andrew Edwards, Phil Elwell, John Ferdinand, Doug Freegard, Ian Furlong, Shawn Guo, Philip Harrison, Jason Hicks, Stefan Ho, Andrew Hoare, Gordon Hollingworth, Tuomas Hollman, EikPei Hu, James Hughes, Andy Hulbert, Anand Jain, David John, Prasanna Kerekoppa, Shaik Labeeb, Trevor Latham, Steve Le, David Lee, David Lewsey, Sherman Li, Xizhe Li, Simon Long, Fu Luo Larson, Juan Martinez, Sandhya Menon, Ben Mercer, James Mills, Max Passell, Mark Perry, Eric Phiri, Ashwin Rao, Justin Rees, James Reilly, Matt Rowley, Akshaye Sama, Ian Saturley, Serge Schneider, Manuel Sedlmair, Shawn Shadburn, Veeresh Shivashimper, Graham Smith, Ben Stephens, Mike Stimson, Yuree Tchong, Stuart Thomson, John Wadsworth, Ian Watch, Sarah Williams, Jason Zhu.

If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 3 Model B+ on sale now at $35 appeared first on Raspberry Pi.

Preparing for AWS Certificate Manager (ACM) Support of Certificate Transparency

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-get-ready-for-certificate-transparency/

 

Update from March 27, 2018: On March 27, 2018, we updated ACM APIs so that you can disable Certificate Transparency logging on a per-certificate basis.


Starting April 30, 2018, Google Chrome will require all publicly trusted certificates issued after this date to be logged in at least two Certificate Transparency logs. This means that any certificate issued that is not logged will result in an error message in Google Chrome. Beginning April 24, 2018, Amazon will log all new and renewed certificates in at least two public logs unless you disable Certificate Transparency logging.

Without Certificate Transparency, it can be difficult for a domain owner to know if an unexpected certificate was issued for their domain. Under the current system, no record is kept of certificates being issued, and domain owners do not have a reliable way to identify rogue certificates.

To address this situation, Certificate Transparency creates a cryptographically secure log of each certificate issued. Domain owners can search the log to identify unexpected certificates, whether issued by mistake or malice. Domain owners can also identify Certificate Authorities (CAs) that are improperly issuing certificates. In this blog post, I explain more about Certificate Transparency and tell you how to prepare for it.

How does Certificate Transparency work?

When a CA issues a publicly trusted certificate, the CA must submit the certificate to one or more Certificate Transparency log servers. The Certificate Transparency log server responds with a signed certificate timestamp (SCT) that confirms the log server will add the certificate to the list of known certificates. The SCT is then embedded in the certificate and delivered automatically to a browser. The SCT is like a receipt that proves the certificate was published into the Certificate Transparency log. Starting April 30, Google Chrome will require an SCT as proof that the certificate was published to a Certificate Transparency log in order to trust the certificate without displaying an error message.

What is Amazon doing to support Certificate Transparency?

Certificate Transparency is a good practice. It enables AWS customers to be more confident that an unauthorized certificate hasn’t been issued by a CA. Beginning on April 24, 2018, Amazon will log all new and renewed certificates in at least two Certificate Transparency logs unless you disable Certificate Transparency logging.

We recognize that there can be times when our customers do not want to log certificates. For example, if you are building a website for an unreleased product and have registered the subdomain, newproduct.example.com, requesting a logged certificate for your domain will make it publicly known that the new product is coming. Certificate Transparency logging also can expose server hostnames that you want to keep private. Hostnames such as payments.example.com can reveal the purpose of a server and provide attackers with information about your private network. These logs do not contain the private key for your certificate. For these reasons, on March 27, 2018 we updated ACM APIs so that you can disable Certificate Transparency logging on a per-certificate basis using the ACM APIs or with the AWS CLI. Doing so will lead to errors in Google Chrome, which may be preferable to exposing the information.

Please refer to ACM documentation for specifics on how to opt out of Certificate Transparency logging.
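
For example, with the AWS SDK for Python (Boto3), a new certificate request can opt out of logging at creation time; the domain name below is only a placeholder. For a certificate that already exists, the UpdateCertificateOptions API accepts the same options structure.

import boto3

acm = boto3.client('acm', region_name='us-east-1')

# Request a certificate with Certificate Transparency logging disabled
response = acm.request_certificate(
    DomainName='newproduct.example.com',
    ValidationMethod='DNS',
    Options={'CertificateTransparencyLoggingPreference': 'DISABLED'}
)
print(response['CertificateArn'])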

Conclusion

Beginning April 24, 2018, ACM will begin logging all new and renewed certificates by default. If you don’t want a certificate to be logged, you’ll be able to opt out using the AWS API or CLI. However, for Google Chrome to trust the certificate, all issued or imported certificates must have the SCT information embedded in them by April 30, 2018.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, start a new thread in the ACM forum.

– Jonathan

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Zero WH: pre-soldered headers and what to do with them

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zero-wh/

If you head over to the website of your favourite Raspberry Pi Approved Reseller today, you may find the new Zero WH available to purchase. But what is it? Why is it different, and what can you do with it?

Raspberry Pi Zero WH

“If you like pre-soldered headers, and getting caught in the rain…”

Raspberry Pi Zero WH

Imagine a Raspberry Pi Zero W. Now add a professionally soldered header. Boom, that’s the Raspberry Pi Zero WH! It’s your same great-tasting Pi, with a brand-new…crust? It’s perfect for everyone who doesn’t own a soldering iron or who wants the soldering legwork done for them.

What you can do with the Zero WH

What can’t you do? Am I right?! The small size of the Zero W makes it perfect for projects with minimal wiggle-room. In such projects, some people have no need for GPIO pins — they simply solder directly to the board. However, there are many instances where you do want a header on your Zero W, for example in order to easily take advantage of the GPIO expander tool for Debian Stretch on a PC or Mac.

GPIO expander in clubs and classrooms

As Ben Nuttall explains in his blog post on the topic:

[The GPIO expander tool] is a real game-changer for Raspberry Jams, Code Clubs, CoderDojos, and schools. You can live boot the Raspberry Pi Desktop OS from a USB stick, use Linux PCs, or even install [the Pi OS] on old computers. Then you have really simple access to physical computing without full Raspberry Pi setups, and with no SD cards to configure.

Using the GPIO expander with the Raspberry Pi Zero WH decreases the setup cost for anyone interested in trying out physical computing in the classroom or at home. (And once you’ve stuck your toes in, you’ll obviously fall in love and will soon find yourself with multiple Raspberry Pi models, HATs aplenty, and an area in your home dedicated to your new adventure in Raspberry Pi. Don’t say I didn’t warn you.)
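
As a quick taste, here's a minimal Python sketch using gpiozero. It assumes you've followed the GPIO expander setup described in Ben's post and have an LED wired to BCM pin 17 on the Zero WH's new header (both assumptions on my part):

from gpiozero import LED
from time import sleep

led = LED(17)  # BCM pin 17 on the Zero WH's shiny new header

while True:
    led.toggle()   # flip the LED on or off
    sleep(0.5)     # twice a second, nice and visible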

Other uses for a Zero W with a header

The GPIO expander setup is just one of a multitude of uses for a Raspberry Pi Zero W with a header. You may want the header for prototyping before you commit to soldering wires directly to a board. Or you may have a temporary build in mind for your Zero W, in which case you won’t want to commit to soldering wires to the board at all.

Raspberry Pi Zero WH

Your use case may be something else entirely — tell us in the comments below how you’d utilise a pre-soldered Raspberry Pi Zero WH in your project. The best project idea will receive ten imaginary house points of absolutely no practical use, but immense emotional value. Decide amongst yourselves who you believe should win them — I’m going to go waste a few more hours playing SLUG!

The post Zero WH: pre-soldered headers and what to do with them appeared first on Raspberry Pi.

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today, data management challenges cannot be solved with traditional databases.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc.—to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel simultaneously, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift (a minimal sketch of this load step follows the list).
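
As a rough illustration of step 8 only: the actual implementation is the Java Lambda function shown later, but the underlying pattern is a Redshift COPY from S3. The sketch below uses Python with psycopg2, and the cluster endpoint, credentials, table, bucket, and IAM role are all placeholders.

import psycopg2

# Placeholder connection details for the Amazon Redshift cluster
conn = psycopg2.connect(
    host='example-cluster.abc123xyz.us-east-1.redshift.amazonaws.com',
    port=5439, dbname='analytics', user='etl_user', password='<password>'
)

# COPY the intermediate CSV produced in step 7 from S3 into a table
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY campaign_data
        FROM 's3://<intermediate-bucket>/campaign_key_21.csv'
        IAM_ROLE 'arn:aws:iam::<account-id>:role/RedshiftCopyRole'
        CSV;
    """)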

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
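
For instance, the following expression fires at 3:00am and 2:00pm UTC every day:

cron(0 3,14 * * ? *)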

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date, and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the Amazon Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current campaign keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        keys.put("keyT", String.valueOf(keyList.size()));
        // 3. Return the key values and the key count collected in the "for" loop
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys, $key1, $key2 and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals two, the second Lambda function is executed twice in parallel.  The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data and then loads it into an Amazon Redshift table. The following is the code snippet for the second Lambda function in Java 8.

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get basic configuration for the data services and an S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import the query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our dataset resulting in significant cost savings, reduced risk to data errors due to system downtime, and more time for us to focus on new product development rather than support related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

How to Compete with Giants

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-compete-with-giants/

How to Compete with Giants

This post by Backblaze’s CEO and co-founder Gleb Budman is the sixth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants

Use the Join button above to receive notification of new posts in this series.

Perhaps your business is competing in a brand new space free from established competitors. Most of us, though, start companies that compete with existing offerings from large, established companies. You need to come up with a better mousetrap — not the first mousetrap.

That’s the challenge Backblaze faced. In this post, I’d like to share some of the lessons I learned from that experience.

Backblaze vs. Giants

Competing with established companies that are orders of magnitude larger can be daunting. How can you succeed?

I’ll set the stage by offering a few sets of giants we compete with:

  • When we started Backblaze, we offered online backup in a market where companies had been offering “online backup” for at least a decade, and even the newer entrants had raised tens of millions of dollars.
  • When we built our storage servers, the alternatives were EMC, NetApp, and Dell — each of which had a market cap of over $10 billion.
  • When we introduced our cloud storage offering, B2, our direct competitors were Amazon, Google, and Microsoft. You might have heard of them.

What did we learn by competing with these giants on a bootstrapped budget? Let’s take a look.

Determine What Success Means

For a long time Apple considered Apple TV to be a hobby, not a real product worth focusing on, because it did not generate a billion in revenue. For a $10 billion per year revenue company, a new business that generates $50 million won’t move the needle and often isn’t worth putting focus on. However, for a startup, getting to $50 million in revenue can be the start of a wildly successful business.

Lesson Learned: Don’t let the giants set your success metrics.

The Advantages Startups Have

The giants have a lot of advantages: more money, people, scale, resources, access, etc. Following their playbook and attacking head-on means you’re simply outgunned. Common paths to failure are trying to build more features, enter more markets, outspend on marketing, and other similar approaches where scale and resources are the primary determinants of success.

But being a startup affords many advantages most giants would salivate over. As a nimble startup you can leverage those to succeed. Let’s break down nine competitive advantages we’ve used that you can too.

1. Drive Focus

It’s hard to build a $10 billion revenue business doing just one thing, and most giants have a broad portfolio of businesses, numerous products for each, and targeting a variety of customer segments in multiple markets. That adds complexity and distributes management attention.

Startups get the benefit of having everyone in the company be extremely focused, often on a singular mission, product, customer segment, and market. While our competitors sell everything from advertising to Zantac, and are investing in groceries and shipping, Backblaze has focused exclusively on cloud storage. This means all of our best people (i.e. everyone) are focused on our cloud storage business. Where is all of your focus going?

Lesson Learned: Align everyone in your company to a singular focus to dramatically out-perform larger teams.

2. Use Lack-of-Scale as an Advantage

You may have heard Paul Graham say “Do things that don’t scale.” There are a host of things you can do specifically because you don’t have the same scale as the giants. Use that as an advantage.

When we look for data center space, we have more options than our largest competitors because there are simply more spaces available with room for 100 cabinets than for 1,000 cabinets. With some searching, we can find data center space that is better/cheaper.

When a flood in Thailand destroyed factories, causing the world’s supply of hard drives to plummet and prices to triple, we started drive farming. The giants certainly couldn’t. It was a bit crazy, but it let us keep prices unchanged for our customers.

Our Chief Cloud Officer, Tim, used to work at Adobe. Because of their size, any new product needed to always launch in a multitude of languages and in global markets. Once launched, they had scale. But getting any new product launched was incredibly challenging.

Lesson Learned: Use lack-of-scale to exploit opportunities that are closed to giants.

3. Build a Better Product

This one is probably obvious. If you’re going to provide the same product, at the same price, to the same customers — why do it? Remember that better does not always mean more features. Here’s one way we built a better product that didn’t require being a bigger company.

All online backup services required customers to choose what to include in their backup. We found that this was complicated for users since they often didn’t know what needed to be backed up. We flipped the model to back up everything and allow users to exclude if they wanted to, but it was not required. This reduced the number of features/options, while making it easier and better for the user.

This didn’t require the resources of a huge company; it just required understanding customers a bit deeper and thinking about the solution differently. Building a better product is the most classic startup competitive advantage.

Lesson Learned: Dig deep with your customers to understand and deliver a better mousetrap.

4. Provide Better Service

How can you provide better service? Use your advantages. Escalations from your customer care folks to engineering can go through fewer hoops. Fixing an issue and shipping can be quicker. Access to real answers on Twitter or Facebook can be more effective.

A strategic decision we made was to have all customer support people as full-time employees in our headquarters. This ensures they are in close contact to the whole company for feedback to quickly go both ways.

Having a smaller team and fewer layers enables faster internal communication, which increases customer happiness. And the option to do things that don’t scale — such as help a customer in a unique situation — can go a long way in building customer loyalty.

Lesson Learned: Service your customers better by establishing clear internal communications.

5. Remove The Unnecessary

After determining that the industry standard EMC/NetApp/Dell storage servers would be too expensive to build our own cloud storage upon, we decided to build our own infrastructure. Many said we were crazy to compete with these multi-billion dollar companies and that it would be impossible to build a lower cost storage server. However, not only did it prove to not be impossible — it wasn’t even that hard.

One key trick? Remove the unnecessary. While EMC and others built servers to sell to other companies for a wide variety of use cases, Backblaze needed servers that only Backblaze would run, and for a single use case. As a result we could tailor the servers for our needs by removing redundancy from each server (since we would run redundant servers), and using lower-performance components (since we would get high-performance by running parallel servers).

What do your customers and use cases not need? This can trim costs and complexity while often improving the product for your use case.

Lesson Learned: Don’t think “what can we add” to what the giants offer — think “what can we remove.”

6. Be Easy

How many times have you visited a large company website, particularly one that’s not consumer-focused, only to leave saying, “Huh? I don’t understand what you do.” Keeping your website clear, and your product and pricing simple, will dramatically increase conversion and customer satisfaction. If you’re able to make it 2x easier, and thus increase your conversion by 2x, you’ve just allowed yourself to spend half as much acquiring a customer.

Providing unlimited data backup wasn’t specifically about providing more storage — it was about making it easier. Since users didn’t know how much data they needed to back up, charging per gigabyte meant they wouldn’t know the cost. Providing unlimited data backup meant they could just relax.

Customers love easy — and being smaller makes easy easier to deliver. Use that as an advantage in your website, marketing materials, pricing, product, and in every other customer interaction.

Lesson Learned: Ease-of-use isn’t a slogan: it’s a competitive advantage. Treat it as seriously as any other feature of your product.

7. Don’t Be Afraid of Risk

Obviously unnecessary risks are unnecessary, and some risks aren’t worth taking. However, large companies that have given guidance to Wall Street with a $0.01 range on their earnings-per-share are inherently going to be very risk-averse. Use risk-tolerance to open up opportunities, and adjust your tolerance level as you scale. In your first year, there are likely an infinite number of ways your business may vaporize; don’t be too worried about taking a risk that might have a 20% downside when the upside is hockey stick growth.

Using consumer-grade hard drives in our servers may have caused pain and suffering for us years down-the-line, but they were priced at approximately 50% of enterprise drives. Giants wouldn’t have considered the option. Turns out, the consumer drives performed great for us.

Lesson Learned: Use calculated risks as an advantage.

8. Be Open

The larger a company grows, the more it wants to hide information. Some of this is driven by regulatory requirements as a public company. But most of this is cultural. Sharing something might cause a problem, so let’s not. All external communication is treated as a critical press release, with rounds and rounds of editing by multiple teams and approvals. However, customers are often desperate for information. Moreover, sharing information builds trust, understanding, and advocates.

I started blogging at Backblaze before we launched. When we blogged about our Storage Pod and open-sourced the design, many thought we were crazy to share this information. But it was transformative for us, establishing Backblaze as a tech thought leader in storage and giving people a sense of how we were able to provide our service at such a low cost.

Over the years we’ve developed a culture of being open internally and externally, on our blog and with the press, and in communities such as Hacker News and Reddit. Often we’ve been asked, “why would you share that!?” — but it’s the continual openness that builds trust. And that culture of openness is incredibly challenging for the giants.

Lesson Learned: Overshare to build trust and brand where giants won’t.

9. Be Human

As companies scale, typically a smaller percent of founders and executives interact with customers. The people who build the company become more hidden, the language feels “corporate,” and customers start to feel they’re interacting with the cliche “faceless, nameless corporation.” Use your humanity to your advantage. From day one the Backblaze About page listed all the founders, and my email address. While contacting us shouldn’t be the first path for a customer support question, I wanted it to be clear that we stand behind the service we offer; if we’re doing something wrong — I want to know it.

To scale it’s important to have processes and procedures, but sometimes a situation falls outside of a well-established process. While we want our employees to follow processes, they’re still encouraged to be human and “try to do the right thing.” How do you strike this balance? Simon Sinek gives a good talk about it: make your employees feel safe. If employees feel safe they’ll be human.

If your customer is a consumer, they’ll appreciate being treated as a human. Even if your customer is a corporation, the purchasing decision-makers are still people.

Lesson Learned: Being human is the ultimate antithesis to the faceless corporation.

Build Culture to Sustain Your Advantages at Scale

Presumably the goal is not to always be competing with giants, but to one day become a giant. Does this mean you’ll lose all of these advantages? Some, yes — but not all. Some of these advantages are cultural, and if you build these into the culture from the beginning, and fight to keep them as you scale, you can keep them as you become a giant.

Tesla still comes across as human, with Elon Musk frequently interacting with people on Twitter. Apple continues to provide great service through their Genius Bar. And, worst case, if you lose these at scale, you’ll still have the other advantages of being a giant such as money, people, scale, resources, and access.

Of course, some new startup will be gunning for you with grand ambitions, so just be sure not to get complacent. 😉

The post How to Compete with Giants appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Say Hello to the New Atlassian

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/09/hello-new-atlassian/

Who is Atlassian?

Atlassian is an Australian IT company that develops enterprise software, with its best-known products being its issue-tracking app, Jira, and team collaboration and wiki product, Confluence.

In December 2015, Atlassian went public and made their initial public offering (IPO) under the symbol TEAM, valuing them at $4.37 billion. In summary, they big.

What happened?

A facelift

It’s a nice sunny day in Sydney in mid-September of 2017, and Atlassian, after 15 years of consistency, has rebranded, changing their look and feel for a brighter and funner one compared to the dreary previous look.

New Atlassian Branding Video

It’s a hell of a lot simpler and, as they show in the above video, it’s going to be used with a lot more creativity and flair in mind—it’s flexible in the sense that they can use it in a lot more ways, and with a lot more colours, than before.

Atlassian Logo Comparison

The blues they’re using now work super-well with the logos on a white background, whereas the white logos on their new champion brand colour, blue, can go both ways: some can see it as a bold, daring step which is quite attractive, while others can see it as off-putting and not very user-friendly.

New Atlassian Logo Versions

What’s it all mean?

Symbolism

In his announcement blog, Atlassian Co-Founder & Co-CEO, Mike Cannon-Brookes, mentions that the branding change reflects their newly-shifted focus on the concept of teamwork. He continues to explain that their previous logo depicted the sky-holding Greek titan Atlas and symbolised legendary service and support. But, while it has become renowned, they’re shifting their focus to the concept of teamwork—why focus on something you’ve already done right, right?

Atlassian Logo Evolution

The new logo contains more symbolism than meets the eye, as it can be interpreted as:

  • Two people high-fiving
  • A mountain to scale
  • The letter “A” (seen as two pillars reinforcing each other)

Product logos

Atlassian has created and acquired many products in their adventure so far, and they all seemed to have a similar art style, but something always felt off about their consistency. Well, needless to say, this was addressed with Atlassian’s very own “identity system”, which is a pretty cool term for a consistent logo-look for 14+ products, to fit them under one brand.

New Atlassian Product Logos

The result is a set of unique marks that “still feel very related to each other”. Though I also see a new set of “unknown” Pokémon.

Typeface

New Atlassian Typeface

To add a cherry on top, Atlassian will be using their own custom-made typeface called Charlie Sans, specifically designed to balance legibility with personality – that’s probably the best way to describe it. Otherwise, I’d say, out of purely constructive criticism, that there isn’t much difference between it and any of the other staple fonts, i.e. Arial, Verdana, etc. Then again, I’m not a professional designer.

It doesn’t look as distinct as their previous typeface, but, to be fair, it does look very slick next to the new product logos.

Well…

What do you think about it all?

 

Image credits: Atlassian

The post Say Hello to the New Atlassian appeared first on AWS Managed Services by Anchor.

AWS Hot Startups – August 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2017/

There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.

This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:

  • SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
  • Valossa – an AI video recognition platform for the media and entertainment industry.
  • Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.

SignalBox (UK)

In 2016, SignalBox founder Alain Richardt was hearing the same comments being made by developers, data scientists, and business leaders. They wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting with an easy-to-use web interface, blueprints for common tasks, and just a single-click to productize the models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.

SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.

SignalBox uses a variety of AWS services including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success – both in terms of keeping their operational expenses low, as well as speed to market. The Amazon API Gateway has allowed for operational automation, giving SignalBox the ability to control its infrastructure.

To learn more about SignalBox, visit here.

Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.

Valossa’s AI is capable of analyzing any video stream. The AI studies a vast array of data within videos and converts that information into descriptive tags, categories, and overviews automatically. Basically, it sees, hears, and understands videos like a human does. The Valossa AI can detect people, visual and auditory concepts, key speech elements, and labels explicit content to make moderating and filtering content simpler. Valossa’s solutions are designed to provide value for the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or search specific topics and actors within a video.

Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.

To see how Valossa works, check out www.WhatIsMyMovie.com or enable the Alexa Skill, Valossa Movie Finder. To try the Valossa AI, sign up for free at www.valossa.com.

Kaliber (San Francisco, CA)

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses – stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.

Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. Using the clients’ network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights using their machine learning (ML) algorithm. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.

Kaliber uses many AWS services, including Amazon Rekognition, Amazon Kinesis, AWS Lambda, Amazon EC2 GPU instances, and Amazon S3. These services have been instrumental in helping Kaliber meet the needs of enterprise customers in record time.
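To give a flavour of the managed-service side, the sketch below tags a single stored camera frame with Amazon Rekognition’s label detection. The bucket and object names are assumptions, and Kaliber’s own models and pipeline will differ.

```python
# Minimal sketch: label a stored camera frame with Amazon Rekognition.
# Bucket and object names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-camera-frames", "Name": "frame-0001.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```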

Learn more on the Kaliber website.

Thanks for reading and we’ll see you next month!

-Tina

 

Pimoroni is 5 now!

Post Syndicated from guru original https://www.raspberrypi.org/blog/pimoroni-is-5-now/

Long read written by Pimoroni’s Paul Beech, best enjoyed over a cup o’ grog.

Every couple of years, I’ve done a “State of the Fleet” update here on the Raspberry Pi blog to tell everyone how the Sheffield Pirates are doing. Half a decade has gone by in a blink, but reading back over the previous posts shows that a lot has happened in that time!

TL;DR We’re an increasingly medium-sized design/manufacturing/e-commerce business with workshops in Sheffield, UK, and Essen, Germany, and we employ almost 40 people. We’re totally lovely. Thanks for supporting us!

 

We’ve come a long way, baby

I’m sitting looking out the window at Sheffield-on-Sea and feeling pretty lucky about how things are going. In the morning, I’ll be flying east for Maker Faire Tokyo with Niko (more on him later), and to say hi to some amazing people in Shenzhen (and to visit Huaqiangbei, of course). This is after I’ve already visited this year’s Maker Faires in New York, San Francisco, and Berlin.

Pimoroni started out small, but we’ve grown like weeds, and we’re steadily sauntering towards becoming a medium-sized business. That’s thanks to fantastic support from the people who buy our stuff and spread the word. In return, we try to be nice, friendly, and human in everything we do, and to make exciting things, ideally with our own hands here in Sheffield.

Pimoroni soldering

Handmade with love

We’ve made it onto a few ‘fastest-growing’ lists, and we’re in the top 500 of the Inc. 5000 Europe list. Adafruit did it first a few years back, and we’ve never gone wrong when we’ve followed in their footsteps.

The slightly weird nature of Pimoroni means we get listed as either a manufacturing or e-commerce business. In reality, we’re about four or five companies in one shell, which is very much against the conventions of “how business is done”. However, having seen what Adafruit, SparkFun, and Seeed do, we’re more than happy to design, manufacture, and sell our stuff in-house, as well as stocking the best stuff from across the maker community.

Pimoroni stocks

Product and process

The whole process of expansion has not been without its growing pains. We’re just under 40 people strong now, and have an outpost in Germany (also hilariously far from the sea for piratical activities). This means we’ve had to change things quickly to improve and automate processes, so that the wheels won’t fall off as things get bigger. Process optimisation is incredibly interesting to a geek: making sure that things are done well, that mistakes are easy to spot and fix, and that nothing gets missed.

At the end of 2015, we had a step change in how busy we were, and our post room and support started to suffer. As a consequence, we implemented measures to become more efficient, including small but important things like checking in parcels with a barcode scanner attached to a Raspberry Pi. That Pi has been happily running on the same SD card for a couple of years now without problems 😀
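For anyone curious what that kind of check-in station involves: most USB barcode scanners present themselves as a keyboard that “types” the code followed by Enter, so a few lines of Python on the Pi are enough. The sketch below is a guess at a minimal version; the log path and CSV format are assumptions, not Pimoroni’s actual system.

```python
# Minimal sketch: log parcel check-ins from a USB barcode scanner on a
# Raspberry Pi. The scanner acts as a keyboard, so each scan arrives as a
# line on stdin. The log path and format are illustrative only.
import csv
from datetime import datetime

LOG_PATH = "/home/pi/parcel-log.csv"

def main():
    print("Scan a parcel barcode (Ctrl+C to quit)")
    with open(LOG_PATH, "a", newline="") as log:
        writer = csv.writer(log)
        try:
            while True:
                barcode = input().strip()
                if barcode:
                    writer.writerow([datetime.now().isoformat(), barcode])
                    log.flush()
                    print(f"Checked in {barcode}")
        except (KeyboardInterrupt, EOFError):
            pass

if __name__ == "__main__":
    main()
```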

Pimoroni post room

Going postal?

We also hired a full-time support ninja, Matt, to keep the experience of getting stuff from us light and breezy, and to ensure that any problems are sorted. He’s had a hugely positive impact already by making the emails and replies you see friendlier. Of course, he’s also started using the laser cutters for tinkering projects. It’d be a shame to work at Pimoroni and not get to use all the wonderful toys, right?

Employing all the people

You can see some of the motley crew we employ here and there on the Pimoroni website. And if you drop by the Raspberry Pi Birthday Party, Pi Wars, Maker Faires, Deer Shed Festival, or New Scientist Live in September, you’ll see new Pimoroni faces as we start to engage more with people about what we do. On top of that, we’re starting to make proper videos (like Sandy’s soldering guide), as opposed to the 101 episodes of Bilge Tank we recorded in a rather off-the-cuff and haphazard fashion. Although that’s the beauty of Bilge Tank, right?

Pimoroni soldering

Such soldering setup

As Emma, Sandy, Lydia, and Tanya gel as a super creative team, we’re starting to create more formal educational resources, and to make kits that are suitable for a wider audience. Things like our Pi Zero W kits are products of their talents.

Emma is our new Head of Marketing. She’s really ‘The Only Marketing Person Who Would Ever Fit In At Pimoroni’, having been a core part of the Sheffield maker scene since we hung around with one Ben Nuttall, in the dark days before Raspberry Pi was a thing.

Through a series of fortunate coincidences, Niko and his equally talented wife Mena were there when we cut the first Pibow in 2012. They immediately pitched in to help us buy our second laser cutter so we could keep up with demand. They have been supporting Pimoroni with sourcing in East Asia, and now Niko has become a member of the Pirates’ Council and the Head of Engineering as we’re increasing the sophistication and scale of the things we do. The Unicorn HAT HD is one of his masterpieces.

Pimoroni devices

ALL the HATs!

We see ourselves as a wonderful island of misfit toys, and it feels good to have the best toy shop ever, and to support so many lovely people. Business is about more than just profits.

Where do we go to, me hearties?

So what are our plans? At the moment we’re still working absolutely flat-out as demand from wholesalers, retailers, and customers increases. We thought Raspberry Pi was big, but it turns out it’s just getting started. Near the end of 2016, it seemed to reach a whole new level of popularity, and still we continue to meet people to whom we have to explain what a Pi is. It’s a good problem to have.

We need a bigger space, but it’s been hard to find somewhere suitable in Sheffield that won’t mean we’re stuck on an industrial estate miles from civilisation. That would be bad for the crew; we like having world-class burritos on our doorstep.

The good news is, it looks like our search is at an end! Just in time for the arrival of our ‘Super-Turbo-Death-Star’ new production line, which will enable us to make devices in a bigger, better, faster, more ‘Now now now!’ fashion \o/

Pimoroni warehouse

Spacious, but not spacious enough!

We’ve got lots of treasure in the pipeline, but we want to pick up the pace of development even more and create many new HATs, pHATs, and SHIMs, e.g. for environmental sensing and audio applications. Picade will also be getting some love to make it slicker and more hackable.

We’re also starting to flirt with adding more engineering and production capabilities in-house. The plan is to try our hand at anodising, powder-coating, and maybe even injection-moulding if we get the space and find the right machine. Learning how to do things is amazing, and we love having an idea and being able to bring it to life in almost no time at all.

Pimoroni production

This is where the magic happens

Fanks!

There are so many people involved in supporting our success, and some people we love for just existing and doing wonderful things that make us want to do better. The biggest shout-outs go to Liz, Eben, Gordon, James, all the Raspberry Pi crew, and Limor and pt from Adafruit, for being the most supportive guiding lights a young maker company could ever need.

A note from us

It is amazing for us to witness the growth of businesses within the Raspberry Pi ecosystem. Pimoroni is a wonderful example of an organisation that is creating opportunities for makers within its local community, and the company is helping to reinvigorate Sheffield as the heart of making in the UK.

If you’d like to take advantage of the great products built by the Pirates, Monkeys, Robots, and Ninjas of Sheffield, you should do it soon: Pimoroni are giving everyone 20% off their homemade tech until 6 August.

Pimoroni, from all of us here at Pi Towers (both in the UK and USA), have a wonderful birthday, and many a grog on us!

The post Pimoroni is 5 now! appeared first on Raspberry Pi.