
AWS IoT 1-Click – Use Simple Devices to Trigger Lambda Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-1-click-use-simple-devices-to-trigger-lambda-functions/

We announced a preview of AWS IoT 1-Click at AWS re:Invent 2017 and have been refining it ever since, focusing on simplicity and a clean out-of-box experience. Designed to make IoT available and accessible to a broad audience, AWS IoT 1-Click is now generally available, along with new IoT buttons from AWS and AT&T.

I sat down with the dev team a month or two ago to learn about the service so that I could start thinking about my blog post. During the meeting they gave me a pair of IoT buttons and I started to think about some creative ways to put them to use. Here are a few that I came up with:

Help Request – Earlier this month I spent a very pleasant weekend at the HackTillDawn hackathon in Los Angeles. As the participants were hacking away, they occasionally had questions about AWS, machine learning, Amazon SageMaker, and AWS DeepLens. While we had plenty of AWS Solution Architects on hand (decked out in fashionable & distinctive AWS shirts for easy identification), I imagined an IoT button for each team. Pressing the button would alert the SA crew via SMS and direct them to the proper table.

Camera Control – Tim Bray and I were in the AWS video studio, prepping for the first episode of Tim’s series on AWS Messaging. Minutes before we opened the Twitch stream I realized that we did not have a clean, unobtrusive way to ask the camera operator to switch to a closeup view. Again, I imagined that a couple of IoT buttons would allow us to make the request.

Remote Dog Treat Dispenser – My dog barks every time a stranger opens the gate in front of our house. While it is great to have confirmation that my Ring doorbell is working, I would like to be able to press a button and dispense a treat so that Luna stops barking!

Homes, offices, factories, schools, vehicles, and health care facilities can all benefit from IoT buttons and other simple IoT devices, all managed using AWS IoT 1-Click.

All About AWS IoT 1-Click
As I said earlier, we have been focusing on simplicity and a clean out-of-box experience. Here’s what that means:

Architects can dream up applications for inexpensive, low-powered devices.

Developers don’t need to write any device-level code. They can make use of pre-built actions, which send email or SMS messages, or write their own custom actions using AWS Lambda functions.

Installers don’t have to install certificates or configure cloud endpoints on newly acquired devices, and don’t have to worry about firmware updates.

Administrators can monitor the overall status and health of each device, and can arrange to receive alerts when a device nears the end of its useful life and needs to be replaced, using a single interface that spans device types and manufacturers.

I’ll show you how easy this is in just a moment. But first, let’s talk about the current set of devices that are supported by AWS IoT 1-Click.

Who’s Got the Button?
We’re launching with support for two types of buttons (both pictured above). Both types of buttons are pre-configured with X.509 certificates, communicate to the cloud over secure connections, and are ready to use.

The AWS IoT Enterprise Button communicates via Wi-Fi. It has a 2000-click lifetime, encrypts outbound data using TLS, and can be configured using BLE and our mobile app. It retails for $19.99 (shipping and handling not included) and can be used in the United States, Europe, and Japan.

The AT&T LTE-M Button communicates via the LTE-M cellular network. It has a 1500-click lifetime, and also encrypts outbound data using TLS. The device and the bundled data plan are available at an introductory price of $29.99 (shipping and handling not included), and can be used in the United States.

We are very interested in working with device manufacturers in order to make even more shapes, sizes, and types of devices (badge readers, asset trackers, motion detectors, and industrial sensors, to name a few) available to our customers. Our team will be happy to tell you about our provisioning tools and our facility for pushing OTA (over the air) updates to large fleets of devices; you can contact them at [email protected].

AWS IoT 1-Click Concepts
I’m eager to show you how to use AWS IoT 1-Click and the buttons, but need to introduce a few concepts first.

Device – A button or other item that can send messages. Each device is uniquely identified by a serial number.

Placement Template – Describes a collection of similar devices to be deployed. It specifies the action to be performed and lists the names of custom attributes for each device.

Placement – A device that has been deployed. Referring to placements instead of devices gives you the freedom to replace and upgrade devices with minimal disruption. Each placement can include values for custom attributes such as a location (“Building 8, 3rd Floor, Room 1337”) or a purpose (“Coffee Request Button”).

Action – The AWS Lambda function to invoke when the button is pressed. You can write a function from scratch, or you can make use of a pair of predefined functions that send an email or an SMS message. The actions have access to the attributes; you can, for example, send an SMS message with the text “Urgent need for coffee in Building 8, 3rd Floor, Room 1337.”
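To make the Action concept concrete, here’s a minimal sketch of a custom action in Python. The event field names (placementInfo, deviceEvent) follow the shape AWS IoT 1-Click passes to Lambda, and the phoneNumber attribute mirrors the SMS template shown later in this post; treat the details as illustrative rather than authoritative:

```python
import boto3

sns = boto3.client("sns")

def lambda_handler(event, context):
    # AWS IoT 1-Click invokes the action with the placement context attached;
    # the attributes dict holds the custom values set on each placement.
    attributes = event["placementInfo"]["attributes"]
    click_type = event["deviceEvent"]["buttonClicked"]["clickType"]

    message = "Urgent need for coffee in {building}, {floor}, {room} (click: {click})".format(
        building=attributes.get("Building", "unknown"),
        floor=attributes.get("Floor", "unknown"),
        room=attributes.get("Room", "unknown"),
        click=click_type,
    )

    # Deliver via SMS, mirroring what the predefined SMS action does.
    sns.publish(PhoneNumber=attributes["phoneNumber"], Message=message)
    return {"status": "sent"}
```

Note that the function never talks to the button directly; it only sees the placement context that AWS IoT 1-Click attaches to the invocation.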

Getting Started with AWS IoT 1-Click
Let’s set up an IoT button using the AWS IoT 1-Click Console:

If I didn’t have any buttons I could click Buy devices to get some. But, I do have some, so I click Claim devices to move ahead. I enter the device ID or claim code for my AT&T button and click Claim (I can enter multiple claim codes or device IDs if I want):

The AWS buttons can be claimed using the console or the mobile app; the first step is to use the mobile app to configure the button to use my Wi-Fi:

Then I scan the barcode on the box and click the button to complete the process of claiming the device. Both of my buttons are now visible in the console:

I am now ready to put them to use. I click on Projects, and then Create a project:

I name and describe my project, and click Next to proceed:

Now I define a device template, along with names and default values for the placement attributes. Here’s how I set up a device template (projects can contain several, but I just need one):

The action has two mandatory parameters (phone number and SMS message) built in; I add three more (Building, Room, and Floor) and click Create project:

I’m almost ready to ask for some coffee! The next step is to associate my buttons with this project by creating a placement for each one. I click Create placements to proceed. I name each placement, select the device to associate with it, and then enter values for the attributes that I established for the project. I can also add additional attributes that are peculiar to this placement:

I can inspect my project and see that everything looks good:

I click on the buttons and the SMS messages appear:

I can monitor device activity in the AWS IoT 1-Click Console:

And also in the Lambda Console:

The Lambda function itself is also accessible, and can be used as-is or customized:

As you can see, this is the code that lets me use {{*}} to include all of the placement attributes in the message and {{Building}} (for example) to include a specific placement attribute.
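Out of curiosity, here’s a hedged sketch of how that placeholder substitution could be implemented; this is my own reconstruction for illustration, not the function’s actual source (render_message and its handling of unknown placeholders are invented for this sketch):

```python
import re

def render_message(template, attributes):
    # {{*}} expands to every placement attribute as "key:value" pairs.
    rendered = template.replace(
        "{{*}}",
        " ".join("{}:{}".format(k, v) for k, v in attributes.items()),
    )
    # {{Name}} expands to the value of a single placement attribute;
    # unknown placeholders are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(attributes.get(m.group(1), m.group(0))),
        rendered,
    )

# Example: attributes set on the placement in the console
attrs = {"Building": "Building 8", "Floor": "3rd Floor", "Room": "Room 1337"}
print(render_message("Urgent need for coffee in {{Building}}, {{Floor}}, {{Room}}", attrs))
```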

Now Available
I’ve barely scratched the surface of this cool new service and I encourage you to give it a try (or a click) yourself. Buy a button or two, build something cool, and let me know all about it!

Pricing is based on the number of enabled devices in your account, measured monthly and pro-rated for partial months. Devices can be enabled or disabled at any time. See the AWS IoT 1-Click Pricing page for more info.

To learn more, visit the AWS IoT 1-Click home page or read the AWS IoT 1-Click documentation.

Jeff;

 

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/


Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.


The answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-1/

Before Easter, we asked you to tell us your questions for a live Q & A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. The variety of questions and comments you sent was wonderful, and while we couldn’t get to them all, we picked a handful of the most common to grill him on.

You can watch the video below — though due to this being the first pancake of our live Q&A videos, the sound is a bit iffy — or read Eben’s answers to the first five questions today. We’ll follow up with the rest in the next few weeks!

Live Q&A with Eben Upton, creator of the Raspberry Pi

Get your questions to us now using #AskRaspberryPi on Twitter

Any plans for 64-bit Raspbian?

Raspbian is effectively 32-bit Debian built for the ARMv6 instruction-set architecture supported by the ARM11 processor in the first-generation Raspberry Pi. So maybe the question should be: “Would we release a version of our operating environment that was built on top of 64-bit ARM Debian?”

And the answer is: “Not yet.”

When we released the Raspberry Pi 3 Model B+, we released an operating system image on the same day; the wonderful thing about that image is that it runs on every Raspberry Pi ever made. It even runs on the alpha boards from way back in 2011.

That deep backwards compatibility is really important for us, in large part because we don’t want to orphan our customers. If someone spent $35 on an older-model Raspberry Pi five or six years ago, they still spent $35, so it would be wrong for us to throw them under the bus.

So, if we were going to do a 64-bit version, we’d want to keep doing the 32-bit version, and then that would mean our efforts would be split across the two versions; and remember, we’re still a very small engineering team. Never say never, but it would be a big step for us.

For people wanting a 64-bit operating system, there are plenty of good third-party images out there, including SUSE Linux Enterprise Server.

Given that the 3B+ includes 5GHz wireless and Power over Ethernet (PoE) support, why would manufacturers continue to use the Compute Module?

It’s a form-factor thing.

Very large numbers of people are using the bigger product in an industrial context, and it’s well engineered for that: it has module certification, wireless on board, and now PoE support. But there are use cases that can’t accommodate this form factor. For example, NEC displays: we’ve had this great relationship with NEC for a couple of years now where a lot of their displays have a socket in the back that you can put a Compute Module into. That wouldn’t work with the 3B+ form factor.

An NEC display with a Raspberry Pi Compute Module slotted into the back

What are some industrial uses/products Raspberry is used with?

The NEC displays are a good example of the broader trend of using Raspberry Pi in digital signage.

A Raspberry Pi running the wait time signage at The Wizarding World of Harry Potter, Universal Studios.
Image c/o thelonelyredditor1

If you see a monitor at a station, or an airport, or a recording studio, and you look behind it, it’s amazing how often you’ll find a Raspberry Pi sitting there. The original Raspberry Pi was particularly strong for multimedia use cases, so we saw uptake in signage very early on.

The Los Alamos Raspberry Pi supercomputer, built from an array of many Raspberry Pis

Another great example is the Los Alamos National Laboratory building supercomputers out of Raspberry Pis. Many high-end supercomputers now are built using white-box hardware — just regular PCs connected together using some networking fabric — and a collection of Raspberry Pi units can serve as a scale model of that. The Raspberry Pi has less processing power, less memory, and less networking bandwidth than the PC, but it has a balanced amount of each. So if you don’t want to let your apprentice supercomputer engineers loose on your expensive supercomputer, a cluster of Raspberry Pis is a good alternative.

Why is there no power button on the Raspberry Pi?

“Once you start, where do you stop?” is a question we ask ourselves a lot.

There are a whole bunch of useful things that we haven’t included in the Raspberry Pi by default. We don’t have a power button, we don’t have a real-time clock, and we don’t have an analogue-to-digital converter — those are probably the three most common requests. And the issue with them is that they each cost a bit of money, they’re each only useful to a minority of users, and even that minority often can’t agree on exactly what they want. Some people would like a power button that is literally a physical analogue switch between the 5V input and the rest of the board, while others would like something a bit more like a PC power button, which is partway between a physical switch and a ‘shutdown’ button. There’s no consensus about what sort of power button we should add.

So the answer is: accessories. By leaving a feature off the board, we’re not taxing the majority of people who don’t want the feature. And of course, we create an opportunity for other companies in the ecosystem to create and sell accessories to those people who do want them.

The Adafruit Push-button Power Switch Breakout is one of many accessories that fill in the gaps for makers.

We have this neat way of figuring out what features to include by default: we divide a component’s cost by the fraction of people who want it. If you have a 20 cent component that’s going to be used by a fifth of people, we treat that as if it’s a $1 component. And it has to fight its way against the $1 components that will be used by almost everybody.

Do you think that Raspberry Pi is the future of the Internet of Things?

Absolutely, Raspberry Pi is the future of the Internet of Things!

In practice, most of the viable early IoT use cases are in the commercial and industrial spaces rather than the consumer space. Maybe in ten years’ time, IoT will be about putting 10-cent chips into light switches, but right now there’s so much money to be saved by putting automation into factories that you don’t need 10-cent components to address the market. Last year, roughly 2 million $35 Raspberry Pi units went into commercial and industrial applications, and many of those are what you’d call IoT applications.

So I think we’re the future of a particular slice of IoT. And we have ten years to get our price point down to 10 cents 🙂

The post The answers to your questions for Eben Upton appeared first on Raspberry Pi.

Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight: batteries, leads, a camera and Raspberry Pi, plus an unusually themed payload. The caption reads “The Queen, The Duke of York, and my TARDIS”, and sums up Dave’s maker career in a heartbeat.

David Akerman on Twitter

The Queen, The Duke of York, and my TARDIS 🙂 #UKHAS #RaspberryPi

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.


Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.


Dave’s most recent flight broke a new record. On 13 August 2017, his HAB payload was able to send back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. The most recent record-breaking flight took place on 13 August 2017, when Dave’s Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch, from an altitude of 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.


Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.


The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seems a natural progression for him. Throw in his coding skills from learning to program on a Teletype and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification he got from receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, thanks to the Pi Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records with a flight.


One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he has a few more Pis for tinkering purposes.

The post Community profile: Dave Akerman appeared first on Raspberry Pi.

New – Machine Learning Inference at the Edge Using AWS Greengrass

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-machine-learning-inference-at-the-edge-using-aws-greengrass/

What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.

Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.

Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.

Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.

ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.

Here are a few of the many ways that you can put Greengrass ML Inference to use:

Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.

Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.

Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment.

Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:

Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on the NVIDIA Jetson TX2 and Intel Atom devices, and development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.

Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.

Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks:
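To give a feel for the device side, here’s a hedged sketch of what a Greengrass Lambda doing local inference with one of the precompiled MXNet libraries might look like; the model directory, checkpoint prefix, and input shape are illustrative assumptions, not the exact layout Greengrass deploys:

```python
import mxnet as mx
import numpy as np

# Greengrass surfaces the model artifacts as a local resource on the device;
# this path is hypothetical -- it's whatever local path your group config maps.
MODEL_PREFIX = "/greengrass-machine-learning/mxnet/squeezenet/squeezenet_v1.1"

# Load the model once at startup so each invocation only runs inference.
sym, arg_params, aux_params = mx.model.load_checkpoint(MODEL_PREFIX, 0)
mod = mx.mod.Module(symbol=sym, label_names=None)
mod.bind(for_training=False, data_shapes=[("data", (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

def function_handler(event, context):
    # In a real deployment the frame would come from a local camera;
    # a random array keeps this sketch self-contained.
    frame = np.random.uniform(0, 255, (1, 3, 224, 224)).astype("float32")
    mod.forward(mx.io.DataBatch([mx.nd.array(frame)]))
    probabilities = mod.get_outputs()[0].asnumpy().squeeze()
    top = int(probabilities.argmax())
    # Act on the result locally; no cloud round trip is required.
    return {"class_index": top, "confidence": float(probabilities[top])}
```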

These new features are available now and you can start using them today! To learn more, read Perform Machine Learning Inference.

Jeff;

 

Pi 3B+: 48 hours later

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/3b-plus-aftermath/

Unless you’ve been AFK for the last two days, you’ll no doubt be aware of the release of the brand-spanking-new Raspberry Pi 3 Model B+. With faster connectivity, more computing power, Power over Ethernet (PoE) pins, and the same $35 price point, the new board has been a hit across all our social media accounts! So while we wind down from launch week, let’s all pull up a chair, make yet another cup of coffee, and look through some of our favourite reactions from the last 48 hours.

Twitter

Our Twitter mentions were refreshing at hyperspeed on Wednesday, as you all began to hear the news and spread the word about the newest member of the Raspberry Pi family.

Tanya Fish on Twitter

Happy Pi Day, people! New @Raspberry_Pi 3B+ is out.

News outlets, maker sites, and hobbyists published posts and articles about the new Pi’s spec upgrades and their plans for the device.

Hackster.io on Twitter

“This sort of attention to detail work is exactly what I love about being involved with @Raspberry_Pi. We’re squeezing the last drops of performance out of the 40nm process node, and perfecting Pi 3 in the same way that the original B+ perfected Pi 1.” https://t.co/hEj7JZOGeZ

And I think we counted about 150 uses of this GIF on Twitter alone:

YouTube

Andy Warburton 👾 on Twitter

Is something going on with the @Raspberry_Pi today? You’d never guess from my YouTube subscriptions page… 😀

A few members of our community were lucky enough to get their hands on a 3B+ early, and sat eagerly by the YouTube publish button, waiting to release their impressions of our new board to the world. Others, with no new Pi in hand yet, posted reaction vids to the launch, discussing their plans for the upgraded Pi and comparing statistics against its predecessors.

New Raspberry Pi 3 B+ (2018) Review and Speed Tests

Happy Pi Day World! There is a new Raspberry Pi 3, the B+! In this video I will review the new Pi 3 B+ and do some speed tests. Let me know in the comments if you are getting one and what you are planning on making with it!

Long-standing community members such as The Raspberry Pi Guy, Alex “RasPi.TV” Eames, and Michael Horne joined Adafruit, element14, and RS Components (whose team produced the most epic 3B+ video we’ve seen so far), and makers Tinkernut and Estefannie Explains It All in sharing their thoughts, performance tests, and baked goods on the big day.

What’s new on the Raspberry Pi 3 B+

It’s Pi day! Sorry, wondrous mathematical constant, this day is no longer about you. The Raspberry Pi Foundation just released a new version of the Raspberry Pi, called the Raspberry Pi 3 B+.

If you have a YouTube or Vimeo channel, or if you create videos for other social media channels, and have published your impressions of the new Raspberry Pi, be sure to share a link with us so we can see what you think!

Instagram

We shared a few photos and videos on Instagram, and over 30,000 of you checked out our Instagram Story on the day.

Some glamour shots of the latest member of the #RaspberryPi family – the Raspberry Pi 3 Model B+ . Will you be getting one? What are your plans for our newest Pi?


As hot off the press (out of the oven? out of the solder bath?) Pi 3B+ boards start to make their way to eager makers’ homes, they are all broadcasting their excitement, and we love seeing what they plan to get up to with it.

The new #raspberrypi 3B+ suits the industrial setting. Check out my website for #RPI3B Vs RPI3BPlus network speed test. #NotEnoughTECH #network #test #internet


The new Raspberry Pi 3 Model B+ is here and will be used for our Python staging server for our APIs #raspberrypi #pythoncode #googleadwords #shopify #datalayer


In the news

Eben made an appearance on ITV Anglia on Wednesday, talking live on Facebook about the new Raspberry Pi.

ITV Anglia

As the latest version of the Raspberry Pi computer is launched in Cambridge, Dr Eben Upton talks about the inspiration of Professor Stephen Hawking and his legacy to science. Add your questions in…

He was also fortunate enough to spend the morning with some Sixth Form students from the local area.

Sascha Williams on Twitter

On a day where science is making the headlines, lovely to see the scientists of the future in our office – getting tips from fab @Raspberry_Pi founder @EbenUpton #scientists #RaspberryPi #PiDay2018 @sirissac6thform

Principal Hardware Engineer Roger Thornton will also make a live appearance online this week: he is co-hosting Hack Chat later today. And of course, you can see more of Roger and Eben in the video where they discuss the new 3B+.

Introducing the Raspberry Pi 3 Model B+

Raspberry Pi 3 Model B+ is on sale now for $35.

It’s been a supremely busy week here at Pi Towers and across the globe in the offices of our Approved Resellers, and seeing your wonderful comments and sharing in your excitement has made it all worth it. Please keep it up, and be sure to share the arrival of your 3B+ as well as the projects into which you’ll be integrating them.

If you’d like to order a Raspberry Pi 3 Model B+, you can do so via our product page. And if you have any questions at all regarding the 3B+, the conversation is still taking place in the comments of Wednesday’s launch post, so head on over.

The post Pi 3B+: 48 hours later appeared first on Raspberry Pi.

Raspberry Pi 3 Model B+ on sale now at $35

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-3-model-bplus-sale-now-35/

Here’s a long post. We think you’ll find it interesting. If you don’t have time to read it all, we recommend you watch this video, which will fill you in with everything you need, and then head straight to the product page to fill yer boots. (We recommend the video anyway, even if you do have time for a long read. ‘Cos it’s fab.)

A BRAND-NEW PI FOR π DAY


If you’ve been a Raspberry Pi watcher for a while now, you’ll have a bit of a feel for how we update our products. Just over two years ago, we released Raspberry Pi 3 Model B. This was our first 64-bit product, and our first product to feature integrated wireless connectivity. Since then, we’ve sold over nine million Raspberry Pi 3 units (we’ve sold 19 million Raspberry Pis in total), which have been put to work in schools, homes, offices and factories all over the globe.

Those Raspberry Pi watchers will know that we have a history of releasing improved versions of our products a couple of years into their lives. The first example was Raspberry Pi 1 Model B+, which added two additional USB ports, introduced our current form factor, and rolled up a variety of other feedback from the community. Raspberry Pi 2 didn’t get this treatment, of course, as it was superseded after only one year; but it feels like it’s high time that Raspberry Pi 3 received the “plus” treatment.

So, without further ado, Raspberry Pi 3 Model B+ is now on sale for $35 (the same price as the existing Raspberry Pi 3 Model B), featuring:

  • A 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU
  • Dual-band 802.11ac wireless LAN and Bluetooth 4.2
  • Faster Ethernet (Gigabit Ethernet over USB 2.0)
  • Power-over-Ethernet support (with separate PoE HAT)
  • Improved PXE network and USB mass-storage booting
  • Improved thermal management

Alongside a 200MHz increase in peak CPU clock frequency, we have roughly three times the wired and wireless network throughput, and the ability to sustain high performance for much longer periods.

Behold the shiny

Raspberry Pi 3B+ is available to buy today from our network of Approved Resellers.

New features, new chips

Roger Thornton did the design work on this revision of the Raspberry Pi. Here, he and I have a chat about what’s new.

Introducing the Raspberry Pi 3 Model B+


The new product is built around BCM2837B0, an updated version of the 64-bit Broadcom application processor used in Raspberry Pi 3B, which incorporates power integrity optimisations, and a heat spreader (that’s the shiny metal bit you can see in the photos). Together these allow us to reach higher clock frequencies (or to run at lower voltages to reduce power consumption), and to more accurately monitor and control the temperature of the chip.

Dual-band wireless LAN and Bluetooth are provided by the Cypress CYW43455 “combo” chip, connected to a Proant PCB antenna similar to the one used on Raspberry Pi Zero W. Compared to its predecessor, Raspberry Pi 3B+ delivers somewhat better performance in the 2.4GHz band, and far better performance in the 5GHz band, as demonstrated by these iperf results from LibreELEC developer Milhouse.

                            Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B                    35.7                  35.6
Raspberry Pi 3B+ (2.4GHz)          46.7                  46.3
Raspberry Pi 3B+ (5GHz)             102                   102

The wireless circuitry is encapsulated under a metal shield, rather fetchingly embossed with our logo. This has allowed us to certify the entire board as a radio module under FCC rules, which in turn will significantly reduce the cost of conformance testing Raspberry Pi-based products.

We’ll be teaching metalwork next.

Previous Raspberry Pi devices have used the LAN951x family of chips, which combine a USB hub and 10/100 Ethernet controller. For Raspberry Pi 3B+, Microchip have supported us with an upgraded version, LAN7515, which supports Gigabit Ethernet. While the USB 2.0 connection to the application processor limits the available bandwidth, we still see roughly a threefold increase in throughput compared to Raspberry Pi 3B. Again, here are some typical iperf results.

                            Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B                    94.1                  95.5
Raspberry Pi 3B+                    315                   315

We use a magjack that supports Power over Ethernet (PoE), and bring the relevant signals to a new 4-pin header. We will shortly launch a PoE HAT which can generate the 5V necessary to power the Raspberry Pi from the 48V PoE supply.

There… are… four… pins!

Coming soon to a Raspberry Pi 3B+ near you

Raspberry Pi 3B was our first product to support PXE Ethernet boot. Testing it in the wild shook out a number of compatibility issues with particular switches and traffic environments. Gordon has rolled up fixes for all known issues into the BCM2837B0 boot ROM, and PXE boot is now enabled by default.

Clocking, voltages and thermals

The improved power integrity of the BCM2837B0 package, and the improved regulation accuracy of our new MaxLinear MxL7704 power management IC, have allowed us to tune our clocking and voltage rules for both better peak performance and longer-duration sustained performance.

Below 70°C, we use the improvements to increase the core frequency to 1.4GHz. Above 70°C, we drop to 1.2GHz, and use the improvements to decrease the core voltage, increasing the period of time before we reach our 80°C thermal throttle; the reduction in power consumption is such that many use cases will never reach the throttle. Like a modern smartphone, we treat the thermal mass of the device as a resource, to be spent carefully with the goal of optimising user experience.
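As a toy model of that policy (the numbers come straight from the paragraph above; the real logic lives in the firmware and is considerably more involved):

```python
def clock_policy(temp_c):
    """Rough sketch of the 3B+ clocking rules described above."""
    if temp_c < 70:
        # Full speed: the improved power integrity buys us 1.4GHz.
        return {"core_mhz": 1400, "core_voltage": "nominal"}
    if temp_c < 80:
        # Drop to 1.2GHz at reduced voltage to delay reaching the 80C throttle.
        return {"core_mhz": 1200, "core_voltage": "reduced"}
    # At 80C the firmware throttles to protect the chip.
    return {"core_mhz": 1200, "core_voltage": "reduced", "throttled": True}
```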

This graph, courtesy of Gareth Halfacree, demonstrates that Raspberry Pi 3B+ runs faster and at a lower temperature for the duration of an eight‑minute quad‑core Sysbench CPU test.

Note that Raspberry Pi 3B+ does consume substantially more power than its predecessor. We strongly encourage you to use a high-quality 2.5A power supply, such as the official Raspberry Pi Universal Power Supply.

FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with the existing products for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1B+, Raspberry Pi 2B, and Raspberry Pi 3B will continue to sell for $25, $35, and $35 respectively.

What about Model A+?

Raspberry Pi 1A+ continues to be the $20 entry-level “big” Raspberry Pi for the time being. We are considering the possibility of producing a Raspberry Pi 3A+ in due course.

What about the Compute Module?

CM1, CM3 and CM3L will continue to be available. We may offer versions of CM3 and CM3L with BCM2837B0 in due course, depending on customer demand.

Are you still using VideoCore?

Yes. VideoCore IV 3D is the only publicly-documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.

Credits

A project like this requires a vast amount of focused work from a large team over an extended period. Particular credit is due to Roger Thornton, who designed the board and ran the exhaustive (and exhausting) RF compliance campaign, and to the team at the Sony UK Technology Centre in Pencoed, South Wales. A partial list of others who made major direct contributions to the BCM2837B0 chip program, CYW43455 integration, LAN7515 and MxL7704 developments, and Raspberry Pi 3B+ itself follows:

James Adams, David Armour, Jonathan Bell, Maria Blazquez, Jamie Brogan-Shaw, Mike Buffham, Rob Campling, Cindy Cao, Victor Carmon, KK Chan, Nick Chase, Nigel Cheetham, Scott Clark, Nigel Clift, Dominic Cobley, Peter Coyle, John Cronk, Di Dai, Kurt Dennis, David Doyle, Andrew Edwards, Phil Elwell, John Ferdinand, Doug Freegard, Ian Furlong, Shawn Guo, Philip Harrison, Jason Hicks, Stefan Ho, Andrew Hoare, Gordon Hollingworth, Tuomas Hollman, EikPei Hu, James Hughes, Andy Hulbert, Anand Jain, David John, Prasanna Kerekoppa, Shaik Labeeb, Trevor Latham, Steve Le, David Lee, David Lewsey, Sherman Li, Xizhe Li, Simon Long, Fu Luo Larson, Juan Martinez, Sandhya Menon, Ben Mercer, James Mills, Max Passell, Mark Perry, Eric Phiri, Ashwin Rao, Justin Rees, James Reilly, Matt Rowley, Akshaye Sama, Ian Saturley, Serge Schneider, Manuel Sedlmair, Shawn Shadburn, Veeresh Shivashimper, Graham Smith, Ben Stephens, Mike Stimson, Yuree Tchong, Stuart Thomson, John Wadsworth, Ian Watch, Sarah Williams, Jason Zhu.

If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 3 Model B+ on sale now at $35 appeared first on Raspberry Pi.

The Challenges of Opening a Data Center — Part 1

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/choosing-data-center/

Backblaze storage pod in new data center

This is part one of a series. The second part will be posted later this week. Use the Join button above to receive notification of future posts in this series.

Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud we are interacting with a data center.

In this series, The Challenges of Opening a Data Center, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom).

Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in our series; however, we hope our brief overview proves useful.

What is a Data Center?

A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data.

While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.

Depending on how you define the term, there are anywhere from a half million data centers in the world to many millions. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term data center to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.

A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.

Types of Data Centers

There are a number of ways to classify data centers: by how they will be used; by whether they are owned or used by one or multiple organizations; by whether and how they fit into a topology of other data centers; by which technologies and management approaches they use for computing, storage, cooling, power, and operations; and, increasingly visible these days, by how green they are.

Data centers can be loosely classified into three types according to who owns them and who uses them.

Exclusive Data Centers are facilities wholly built, maintained, operated and managed by the business for the optimal operation of its IT equipment. Some of these centers are well-known companies such as Facebook, Google, or Microsoft, while others are less public-facing big telecoms, insurance companies, or other service providers.

Managed Hosting Providers are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, the business rents the IT equipment and infrastructure it needs instead of investing in the outright purchase of what it needs.

Colocation Data Centers are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator.

Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.

Availability is Key

When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly failover (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.

Organizations that can’t afford any downtime at all will typically operate a mirrored data center that can take over if something happens to the first site, or they operate a second site in parallel to the first one. These data center topologies are called Active/Passive and Active/Active, respectively. Should disaster or an outage occur, disaster mode would dictate immediately moving all of the primary data center’s processing to the second data center.

While some data center topologies are spread throughout a single country or continent, others extend around the world. Practically, data transmission speeds put a cap on centers that can be operated in parallel with the appearance of simultaneous operation. Linking two data centers located apart from each other — say no more than 60 miles to limit data latency issues — together with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed.
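The 60-mile figure falls out of simple arithmetic: light in fibre propagates at roughly two-thirds of its vacuum speed, or about 200 km per millisecond. A quick back-of-the-envelope check (the propagation speed is a rough assumption, and real links add switching and serialization overhead):

```python
MILES_TO_KM = 1.609
FIBER_SPEED_KM_PER_MS = 200  # ~2/3 the speed of light in a vacuum

distance_km = 60 * MILES_TO_KM               # ~96.5 km
one_way_ms = distance_km / FIBER_SPEED_KM_PER_MS
print(round(one_way_ms, 2), "ms one way,",
      round(2 * one_way_ms, 2), "ms round trip")
# -> roughly 0.48 ms one way, ~1 ms round trip: small enough that
#    two sites can behave as one for synchronous operation.
```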

This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.

Active/Passive Data Centers

Active/Active Data Centers

LEED Certification

Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design.

Green certification has become increasingly important in data center design and operation as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more amenable to environmentally-conscious communities.

The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.

ACT Data Center exterior

ACT Data Center interior

Factors to Consider When Selecting a Data Center

There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.

The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers (i.e. one “U” or “RU” rack unit measuring 44.50 millimeters or 1.75 inches), to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.
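Rack units turn capacity planning into simple arithmetic. A quick sketch (the 42U rack height is a typical full-height figure, assumed here for illustration):

```python
RACK_UNIT_MM = 44.50   # 1U, per the 19 inch rack standard
rack_height_u = 42     # a common full-height rack (assumption)
pod_height_u = 4       # e.g. a Backblaze Storage Pod chassis

pods_per_rack = rack_height_u // pod_height_u
print(pods_per_rack, "pods per rack,",
      rack_height_u * RACK_UNIT_MM / 1000, "m of usable rail space")
# -> 10 pods per rack, 1.869 m of usable rail space
```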

Location

Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.

With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate.

Websites listing available colocation space, such as upstack.io, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.

Business and Customer Proximity

The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.

Climate

Local climate is a major factor in data center design because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can total as much as 50% or more of a center’s power costs. The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.

Geographic Stability and Extreme Weather Events

A major obvious factor in locating a data center is the stability of the actual site as regards weather, seismic activity, and the likelihood of weather events such as hurricanes, as well as fire or flooding.

Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains.

Sacramento Data Center

Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.

Equinix Data Center in Miami

Equinix “NAP of the Americas” Data Center in Miami

Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.

Bahnhof Data Center under White Mountain in Stockholm

Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location. The Ideal quadrant below is obviously favored when making this compromise.

Cost vs Risk in selecting a data center

Cost = Construction/lease, power, bandwidth, cooling, labor, taxes
Risk = Environmental (seismic, weather, water, fire), political, economic

Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations should also check the data center’s certifications against the regulatory requirements of their industry. These certifications help ensure that an organization meets its compliance obligations.

Power

Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size, and the rate concessions, if any, offered by the utility. At higher tiers, battery, generator, and redundant power grids are a required part of the picture.

Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation. Parallel redundancy is a safeguard to ensure that an uninterruptible power supply (UPS) system is in place to provide electrical power if needed. The UPS system can be based on batteries, stored kinetic energy, or some type of generator using diesel or another fuel. The center runs on one UPS system, with a second UPS system standing by; if a power outage occurs, the backup system takes over.

Many data centers require the use of independent power grids, with service provided by different utility companies or services, to protect against loss of electrical service no matter what the cause. Some data centers have intentionally located themselves near national borders so that they can obtain redundant power not just from separate grids, but from separate geopolitical sources.

Higher redundancy levels required by a company will invariably lead to higher prices. A company that requires high availability backed by a service-level agreement (SLA) can expect to pay more than one with less demanding redundancy requirements.

Stay Tuned for Part 2 of The Challenges of Opening a Data Center

That’s it for part 1 of this post. In subsequent posts, we’ll take a look at some other factors to consider when moving into a data center, such as network bandwidth, cooling, and security. We’ll take a look at what is involved in moving into a new data center (including stories from Backblaze’s experiences). We’ll also investigate what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. You can discover all posts on our blog tagged with “Data Center” by following the link https://www.backblaze.com/blog/tag/data-center/.

The second part of this series on The Challenges of Opening a Data Center will be posted later this week. Use the Join button above to receive notification of future posts in this series.

The post The Challenges of Opening a Data Center — Part 1 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Community Profile: Dr. Lucy Rogers

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-lucy-rogers/

This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

Dr Lucy Rogers calls herself a Transformer. “I transform simple electronics into cool gadgets, I transform science into plain English, I transform problems into opportunities. I am also a catalyst. I am interested in everything around me, and can often see ways of putting two ideas from very different fields together into one package. If I cannot do this myself, I connect the people who can.”

Dr Lucy Rogers Raspberry Pi The MagPi Community Profile

Among many other projects, Dr Lucy Rogers currently focuses much of her attention on reducing the damage from space debris

It’s a pretty wide range of interests and skills for sure. But it only takes a brief look at Lucy’s résumé to realise that she means it. When she says she’s interested in everything around her, this interest reaches from electronics to engineering, wearable tech, space, robotics, and robotic dinosaurs. And she can be seen talking about all of these things across various companies’ social media, such as IBM, websites including the Women’s Engineering Society, and books, including her own.

Dr Lucy Rogers Raspberry Pi The MagPi Community Profile

With her bright LED boots, Lucy was one of the wonderful Pi community members invited to join us and HRH The Duke of York at St James’s Palace just over a year ago

When not attending conferences as guest speaker, tinkering with electronics, or creating engaging IoT tutorials, she can be found retrofitting Raspberry Pis into the aforementioned robotic dinosaurs at Blackgang Chine Land of Imagination, writing, and judging battling bots for the BBC’s Robot Wars.

Dr Lucy Rogers Raspberry Pi The MagPi Community Profile

First broadcast in the UK between 1998 and 2004, Robot Wars was revived in 2016 with a new look and new judges, including Dr Lucy Rogers. Competitors battle their home-brew robots, and Lucy, together with the other two judges, awards victories among the carnage of robotic remains

Lucy graduated from Lancaster University with a degree in Mechanical Engineering. After that, she spent seven years at Rolls-Royce Industrial Power Group as a graduate trainee before becoming a chartered engineer and earning her PhD in bubbles.

Bubbles?

“Foam formation in low‑expansion fire-fighting equipment. I investigated the equipment to determine how the bubbles were formed,” she explains. Obviously. Bubbles!

Dr Lucy Rogers Raspberry Pi The MagPi Community Profile

Lucy graduated from the Singularity University Graduate Studies Program in 2011, focusing on how robotics, nanotech, medicine, and various technologies can tackle the challenges facing the world

She then went on to become a fellow of the Royal Astronomical Society (RAS) in 2005 and, later, a fellow of both the Institution of Mechanical Engineers (IMechE) and British Interplanetary Society. As a member of the Association of British Science Writers, Lucy wrote It’s ONLY Rocket Science: an Introduction in Plain English.

Dr Lucy Rogers Raspberry Pi The MagPi Community Profile

In It’s Only Rocket Science: An Introduction in Plain English Lucy explains that ‘hard to understand’ isn’t the same as ‘impossible to understand’, and takes her readers through the journey of building a rocket, leaving Earth, and travelling the cosmos

As a standout member of the industry, and all-round fun person to be around, Lucy has quickly established herself as a valued member of the Pi community.

In 2014, with the help of Neil Ford and Andy Stanford-Clark, Lucy worked with the UK’s oldest amusement park, Blackgang Chine Land of Imagination, on the Isle of Wight, with the aim of updating its animatronic dinosaurs. The original Blackgang Chine dinosaurs had a limited range of behaviour: able to roar, move their heads, and stomp a foot in a somewhat repetitive action.

When she contacted Raspberry Pi back in the November of that same year, the team were working on more creative, varied behaviours, giving each dinosaur a new Raspberry Pi-sized brain. This later evolved into a very successful dino-hacking Raspberry Jam.

Dr Lucy Rogers Raspberry Pi The MagPi Community Profile

Lucy, Neil Ford, and Andy Stanford-Clark used several Raspberry Pis and Node-RED to visualise flows of events when updating the robotic dinosaurs at Blackgang Chine. They went on to create the successful WightPi Raspberry Jam event, where visitors could join in with the unique hacking opportunity.

Given her love for tinkering with tech, and a love for stand-up comedy that can be uncovered via a quick YouTube search, it’s no wonder that Lucy was asked to help judge the first round of the ‘Make us laugh’ Pioneers challenge for Raspberry Pi. Alongside comedian Bec Hill, Code Club UK director Maria Quevedo, and the face of the first challenge, Owen Daughtery, Lucy lent her expertise to help name winners in the various categories of the teens event, and offered her support to future Pioneers.

The post Community Profile: Dr. Lucy Rogers appeared first on Raspberry Pi.

The Raspberry Pi PiServer tool

Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/piserver/

As Simon mentioned in his recent blog post about Raspbian Stretch, we have developed a new piece of software called PiServer. Use this tool to easily set up a network of client Raspberry Pis connected to a single x86-based server via Ethernet. With PiServer, you don’t need SD cards, you can control all clients via the server, and you can add and configure user accounts — it’s ideal for the classroom, your home, or an industrial setting.

PiServer diagram

Client? Server?

Before I go into more detail, let me quickly explain some terms.

  • Server — the server is the computer that provides the file system, boot files, and password authentication to the client(s)
  • Client — a client is a computer that retrieves boot files from the server over the network, and then uses a file system the server has shared. More than one client can connect to a server, but all clients use the same file system.
  • User – a user is a user name/password combination that allows someone to log into a client to access the file system on the server. Any user can log into any client with their credentials, and will always see the same server and share the same file system. Users do not have sudo capability on a client, meaning they cannot make significant changes to the file system and software.

I see no SD cards

Last year we described how the Raspberry Pi 3 Model B can be booted without an SD card over an Ethernet network from another computer (the server). This is called network booting or PXE (pronounced ‘pixie’) booting.

Why would you want to do this?

  • A client computer (the Raspberry Pi) doesn’t need any permanent storage (an SD card) to boot.
  • You can network a large number of clients to one server, and all clients are exactly the same. If you log into one of the clients, you will see the same file system as if you logged into any other client.
  • The server can be run on an x86 system, which means you get to take advantage of the performance, network, and disk speed on the server.

Sounds great, right? Of course, for the less technical, creating such a network is very difficult. For example, there’s setting up all the required DHCP and TFTP servers, and making sure they behave nicely with the rest of the network. If you get this wrong, you can break your entire network.
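For the curious, here is a rough sketch of the sort of proxy-DHCP-plus-TFTP setup that PiServer automates for you. This sketch assumes dnsmasq on a 192.168.1.x network; the values are illustrative, not something PiServer itself writes:

# /etc/dnsmasq.conf — hand-rolled proxy DHCP + TFTP for Pi network boot
port=0                            # disable dnsmasq's DNS component
dhcp-range=192.168.1.255,proxy    # proxy mode: the existing router keeps allocating addresses
log-dhcp                          # log DHCP traffic while debugging
enable-tftp                       # serve boot files over TFTP
tftp-root=/srv/tftpboot           # directory holding bootcode.bin and the other boot files
pxe-service=0,"Raspberry Pi Boot" # the boot string the Pi 3 boot ROM looks for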

PiServer to the rescue

To make network booting easy, I thought it would be nice to develop an application which did everything for you. Let me introduce: PiServer!

PiServer has the following functionalities:

  • It automatically detects Raspberry Pis trying to network boot, so you don’t have to work out their Ethernet addresses.
  • It sets up a DHCP server — the thing inside the router that gives all network devices an IP address — either in proxy mode or in full IP mode. No matter the mode, the DHCP server will only reply to the Raspberry Pis you have specified, which is important for network safety.
  • It creates user names and passwords for the server. This is great for a classroom full of Pis: just set up all the users beforehand, and everyone gets to log in with their passwords and keep all their work in a central place. Moreover, users cannot change the software, so educators have control over which programs their learners can use.
  • It uses a slightly altered Raspbian build which allows separation of temporary spaces, doesn’t have the default ‘pi’ user, and has LDAP enabled for log-in.

What can I do with PiServer?

Serve a whole classroom of Pis

In a classroom, PiServer allows all files for lessons or projects to be stored on a central x86-based computer. Each user can have their own account, and any files they create are also stored on the server. Moreover, the networked Pis don’t need to be connected to the internet. The teacher has centralised control over all Pis, and all Pis are user-agnostic, meaning there’s no need to match a person with a computer or an SD card.

Build a home server

PiServer could be used in the home to serve file systems for all Raspberry Pis around the house — either a single common Raspbian file system for all Pis or a different operating system for each. Hopefully, our extensive OS suppliers will provide suitable build files in future.

Use it as a controller for networked Pis

In an industrial scenario, it is possible to use PiServer to develop a network of Raspberry Pis (maybe even using Power over Ethernet (PoE)) such that the control software for each Pi is stored remotely on a server. This enables easy remote control and provisioning of the Pis from a central repository.

How to use PiServer

The client machines

So that you can use a Pi as a client, you need to enable network booting on it. Power it up using an SD card with a Raspbian Lite image, and open a terminal window. Type in

echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt

and press Return. This adds the line program_usb_boot_mode=1 to the end of the config.txt file in /boot. Now power the Pi down and remove the SD card. The next time you connect the Pi to a power source, you will be able to network boot it.
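If you want to confirm that the one-time-programmable (OTP) boot bit has been set, reboot once with the SD card still inserted and run the check suggested in the network boot documentation:

vcgencmd otp_dump | grep 17:

A Pi with network booting enabled should print 17:3020000a.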

The server machine

As a server, you will need an x86 computer on which you can install x86 Debian Stretch. Refer to Simon’s blog post for additional information on this. It is possible to use a Raspberry Pi to serve the client Pis, but the file system will be slower, especially at boot time.

Make sure your server has a good amount of disk space available for the file system — in general, we recommend at least 16 GB SD cards for Raspberry Pis. The whole client file system is stored locally on the server, so the disk space requirement is fairly significant.

Next, start PiServer by clicking on the start icon and then clicking Preferences > PiServer. This will open a graphical user interface — the wizard — that will walk you through setting up your network. Skip the introduction screen, and you should see a screen looking like this:

PiServer GUI screenshot

If you’ve enabled network booting on the client Pis and they are connected to a power source, their MAC addresses will automatically appear in the table shown above. When you have added all your Pis, click Next.

PiServer GUI screenshot

On the Add users screen, you can set up users on your server. These are pairs of user names and passwords that will be valid for logging into the client Raspberry Pis. Don’t worry, you can add more users at any point. Click Next again when you’re done.

PiServer GUI screenshot

The Add software screen allows you to select the operating system you want to run on the attached Pis. (You’ll have the option to assign an operating system to each client individually in the settings after the wizard has finished its job.) There are some automatically populated operating systems, such as Raspbian and Raspbian Lite. Hopefully, we’ll add more in due course. You can also provide your own operating system from a local file, or install it from a URL. For further information about how these operating system images are created, have a look at the scripts in /var/lib/piserver/scripts.

Once you’re done, click Next again. The wizard will then install the necessary components and the operating systems you’ve chosen. This will take a little time, so grab a coffee (or decaffeinated drink of your choice).

When the installation process is finished, PiServer is up and running — all you need to do is reboot the Pis to get them to run from the server.

Shooting troubles

If you have trouble getting clients connected to your network, there are a few things you can do to debug:

  1. If some clients are connecting but others are not, check whether you’ve enabled the network booting mode on the Pis that give you issues. To do that, plug an Ethernet cable into the Pi (with the SD card removed) — the LEDs on the Pi and connector should turn on. If that doesn’t happen, you’ll need to follow the instructions above to boot the Pi and edit its /boot/config.txt file.
  2. If you can’t connect to any clients, check whether your network is suitable: format an SD card, and copy bootcode.bin from /boot on a standard Raspbian image onto it. Plug the card into a client Pi, and check whether it appears as a new MAC address in the PiServer GUI. If it does, then the problem is a known issue, and you can head to our forums to ask for advice about it (the network booting code has a couple of problems which we’re already aware of). For a temporary fix, you can clone the SD card on which bootcode.bin is stored for all your clients.

If neither of these things fixes your problem, our forums are the place to find help — there’s a host of people there who’ve got PiServer working. If you’re sure you have identified a problem that hasn’t been addressed on the forums, or if you have a feature request, then please add it to the GitHub issues.

The post The Raspberry Pi PiServer tool appeared first on Raspberry Pi.

AWS Direct Connect Update – Ten New Locations Added in Late 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-direct-connect-update-ten-new-locations-added-in-late-2017/

Happy 2018! I am looking forward to getting back to my usual routine, working with our teams to learn about their upcoming launches and then writing blog posts to bring the news to you. Right now I am still catching up on a few launches and announcements from late 2017.

First on the list for today is our most recent round of new cities for AWS Direct Connect. AWS customers all over the world use Direct Connect to create dedicated network connections from their premises to AWS in order to reduce their network costs, increase throughput, and pursue a more consistent network experience.

We added ten new locations to our Direct Connect roster in December, all of which offer both 1 Gbps and 10 Gbps connectivity, along with partner-supplied options for speeds below 1 Gbps. Here are the newest locations, along with the data centers and associated AWS Regions:

  • Bangalore, India – NetMagic DC2 – Asia Pacific (Mumbai).
  • Cape Town, South Africa – Teraco Ct1 – EU (Ireland).
  • Johannesburg, South Africa – Teraco JB1 – EU (Ireland).
  • London, UK – Telehouse North Two – EU (London).
  • Miami, Florida, US – Equinix MI1 – US East (Northern Virginia).
  • Minneapolis, Minnesota, US – Cologix MIN3 – US East (Ohio).
  • Ningxia, China – Shapotou IDC – China (Ningxia).
  • Ningxia, China – Industrial Park IDC – China (Ningxia).
  • Rio de Janeiro, Brazil – Equinix RJ2 – South America (São Paulo).
  • Tokyo, Japan – AT Tokyo Chuo – Asia Pacific (Tokyo).

You can use these new locations in conjunction with the AWS Direct Connect Gateway to set up connectivity that spans Virtual Private Clouds (VPCs) spread across multiple AWS Regions (this does not apply to the AWS Regions in China).

If you are interested in putting Direct Connect to use, be sure to check out our ever-growing list of Direct Connect Partners.

Jeff;

Supporting Conservancy Makes a Difference

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2017/12/31/donate-conservancy.html

Earlier this year, in February, I wrote a blog post encouraging people to donate to where I work, Software Freedom Conservancy. I’ve not otherwise blogged too much this year. It’s been a rough year for many reasons, and while I personally and Conservancy in general have accomplished some very important work this year, I’m reminded as always that more resources do make things easier.

I understand the urge, given how bad the larger political crises have gotten, to want to give to charities other than those related to software freedom. There are important causes out there that have become more urgent this year. Here are three issues which have become shockingly more acute this year:

  • making sure the USA keeps its commitment to immigrants to allow them to make a new life here just like my own ancestors did,
  • assuring that the great national nature reserves are maintained and left pristine for generations to come,
  • assuring that we have zero tolerance for abusive behavior — particularly by those in power against people who come to them for help and job opportunities.

These are just three of the many issues this year that I’ve seen get worse, not better. I am glad that I know and support people who work on these issues, and I urge everyone to work on these issues, too.

Nevertheless, as I plan my primary donations this year, I’m again, as I always do, giving to the FSF and my own employer, Software Freedom Conservancy. The reason is simple: software freedom is still an essential cause and it is frankly one that most people don’t understand (yet). I wrote almost two years ago about the phenomenon I dubbed Kuhn’s Paradox. Simply put: it keeps getting more and more difficult to avoid proprietary software in a normal day’s tasks, even while the number of lines of code licensed freely gets larger every day.

As long as that paradox remains true, I see software freedom as urgent. I know that we’re losing ground on so many other causes, too. But those of you who read my blog are some of the few people in the world who understand that software freedom is under threat and needs the urgent work that the very few software-freedom-related organizations, like the FSF and Software Freedom Conservancy, are doing. I hope you’ll donate now to both of them. For my part, I gave $120 myself to FSF as part of the monthly Associate Membership program, and in a few minutes, I’m going to give $400 to Conservancy. I’ll be frank: if you work in technology in an industrialized country, I’m quite sure you can afford that level of money, and I suspect those amounts are less than most of you spent on technology equipment and/or network connectivity charges this year. Make a difference for us and give to the cause of software freedom at least as much as you’re giving to large technology companies.

Finally, a good reason to give to smaller charities like FSF and Conservancy is that your donation makes a bigger difference. I do think bigger organizations, such as (to pick an example of an organization I used to give to) my local NPR station, do important work. However, I was listening this week to my local NPR station, and they said their goal for that day was to raise $50,000. For Conservancy, that’s closer to the goal we have for the entire fundraising season, which for this year was $75,000. The thing is: NPR is an important part of USA society, but it’s one that nearly everyone understands. So few people understand the threats looming from proprietary software, and they may not understand at all until it’s too late — when all their devices are locked down, DRM is fully ubiquitous, and no one is allowed to tinker with the software on their devices and learn the wonderful art of computer programming. We are at real risk of reaching that dystopia before 90% of the world’s population understands the threat!

Thus, giving to organizations in the area of software freedom is just going to have a bigger and more immediate impact than more general causes that more easily connect with people. You’re giving to prevent a future that not everyone understands yet, and making an impact on our work to help explain the dangers to the larger population.

2017 Holiday Gift Guide — Backblaze Style

Post Syndicated from Yev original https://www.backblaze.com/blog/2017-holiday-gift-guide-backblaze-style/


Here at Backblaze we have a lot of folks who are all about technology. With the holiday season fast approaching, you might have all of your gift buying already finished — but if not, we put together a list of things that the employees here at Backblaze are pretty excited about giving (and/or receiving) this year.

Smart Homes:

It’s no secret that having a smart home is the new hotness, and many of the items below can be used to turbocharge your home’s ascent into the future:

Raspberry Pi
The holidays are all about eating pie — well why not get a pie of a different type for the DIY fan in your life!

Wyze Cam
An inexpensive way to keep a close eye on all your favorite people…and intruders!

Snooz
Have trouble falling asleep? Try this portable white noise machine. Also great for the office!

Amazon Echo Dot
Need a cheap way to keep track of your schedule or play music? The Echo Dot is a great entry into the smart home of your dreams!

Google Wifi
These little fellows make it easy to Wifi-ify your entire home, even if it’s larger than the average shoe box here in Silicon Valley. Google Wifi acts as a mesh router and seamlessly covers your whole dwelling. Have a mansion? Buy more!

Google Home
Like the Amazon Echo Dot, this is the Google variant. It’s more expensive (similar to the Amazon Echo) but has better sound quality and is tied into the Google ecosystem.

Nest Thermostat
This is a smart thermostat. What better way to score points with the in-laws than installing one of these bad boys in their home — and then making it freezing cold randomly in the middle of winter from the comfort of your couch!

Wearables:

Homes aren’t the only things that should be smart. Your body should also get the chance to be all that it can be:

Apple AirPods
You’ve seen these all over the place, and the truth is they do a pretty good job of making sounds appear in your ears.

Bose SoundLink Wireless Headphones
If you like over-the-ear headphones, these noise canceling ones work great, are wireless and lovely. There’s no better way to ignore people this holiday season!

Garmin Fenix 5 Watch
This watch is all about fitness. If you enjoy fitness. This watch is the fitness watch for your fitness needs.

Apple Watch
The Apple Watch is a wonderful gadget that will light up any movie theater this holiday season.

Nokia Steel Health Watch
If you’re into mixing analogue and digital, this is a pretty neat little gadget.

Fossil Smart Watch
This stylish watch is a pretty neat way to dip your toe into smartwatches and activity trackers.

Pebble Time Steel Smart Watch
Some people call this the greatest smartwatch of all time. Those people might be named Yev. This watch is great at sending you notifications from your phone, and not needing to be charged every day. Bellissimo!

Random Goods:

A few of the holiday gift suggestions that we got were a bit off-kilter, but we do have a lot of interesting folks in the office. Hopefully, you might find some of these as interesting as they do:

Wireless Qi Charger
Wireless chargers are pretty great in that you don’t have to deal with dongles. There are even kits to make your electronics “wirelessly chargeable,” which is pretty great!

Self-Heating Coffee Mug
Love coffee? Hate lukewarm coffee? What if your coffee cup heated itself? Brilliant!

Yeast Stirrer
Yeast. It makes beer. And bread! Sometimes you need to stir it. What cooler way to stir your yeast than with this industrial stirrer?

Toto Washlet
This one is self-explanatory. You know the old rhyme: happy butts, everyone’s happy!

Good luck out there this holiday season!

blog-giftguide-present

The post 2017 Holiday Gift Guide — Backblaze Style appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Data management challenges like these can no longer be solved with traditional databases alone.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system’s goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc.—to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift.

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
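A sketch in the six-field cron() syntax that CloudWatch Events uses (minutes, hours, day-of-month, month, day-of-week, year):

cron(0 3,14 * * ? *)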

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
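The item itself amounts to a date plus one attribute per active campaign. A sketch with hypothetical attribute names and values could look like:

{
  "Date": "2017-05-01",
  "Key1": "MCS2017",
  "Key2": "MCF2016",
  "Key3": "MCW2016"
}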

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

{
  "Comment": "AWS Amazon States Language.",
  "StartAt": "FirstState",
  "States": {
    "FirstState": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
      "Next": "ChoiceState"
    }
  }
}

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// RedshiftDataService, DynamoDBDataService, and getConfig() are the authors' own helper classes.
public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the Amazon Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current Keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        // 3. Return the key values plus the key count (read later as $.keyT)
        keys.put("keyT", String.valueOf(keyList.size()));
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$.keyT",
      "StringEquals": "3",
      "Next": "CurrentThreeKeys"
    },
    {
      "Variable": "$.keyT",
      "StringEquals": "2",
      "Next": "CurrentTwoKeys"
    }
  ],
  "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys $key1, $key2, and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals 2, the second Lambda function is executed twice in parallel. The following JSON represents this parallel execution:

"CurrentThreeKeys": {
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [
    {
      "StartAt": "key31",
      "States": {
        "key31": {
          "Type": "Task",
          "InputPath": "$.key1",
          "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
          "End": true
        }
      }
    },
    {
      "StartAt": "key32",
      "States": {
        "key32": {
          "Type": "Task",
          "InputPath": "$.key2",
          "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
          "End": true
        }
      }
    },
    {
      "StartAt": "key33",
      "States": {
        "key33": {
          "Type": "Task",
          "InputPath": "$.key3",
          "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
          "End": true
        }
      }
    }
  ]
}

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data, then loads it into an Amazon Redshift table. The following is a code snippet for the second Lambda function in Java 8.

import java.util.Properties;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// RdsDataService, RedshiftDataService, and getConfig() are the authors' own helper classes.
public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get the basic configuration for the helper classes and an S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our dataset resulting in significant cost savings, reduced risk to data errors due to system downtime, and more time for us to focus on new product development rather than support related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

Raspberry Pi clusters come of age

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-clusters-come-of-age/

In today’s guest post, Bruce Tulloch, CEO and Managing Director of BitScope Designs, discusses the uses of cluster computing with the Raspberry Pi, and the recent pilot of the Los Alamos National Laboratory 3000-Pi cluster built with the BitScope Blade.

Raspberry Pi cluster

High-performance computing and Raspberry Pi are not normally uttered in the same breath, but Los Alamos National Laboratory is building a Raspberry Pi cluster with 3000 cores as a pilot before scaling up to 40,000 cores or more next year.

That’s amazing, but why?

I was asked this question more than any other at The International Conference for High-Performance Computing, Networking, Storage and Analysis in Denver last week, where one of the Los Alamos Raspberry Pi Cluster Modules was on display at the University of New Mexico’s Center for Advanced Research Computing booth.

The short answer to this question is: the Raspberry Pi cluster enables Los Alamos National Laboratory (LANL) to conduct exascale computing R&D.

The Pi cluster breadboard

Exascale refers to computing systems at least 50 times faster than the most powerful supercomputers in use today. The problem faced by LANL and similar labs building these things is one of scale. To get the required performance, you need a lot of nodes, and to make it work, you need a lot of R&D.

However, there’s a catch-22: how do you write the operating systems, network stacks, launch and boot systems for such large computers without having one on which to test it all? Use an existing supercomputer? No — the existing large clusters are fully booked 24/7 doing science, they cost millions of dollars per year to run, and they may not have the architecture you need for your next-generation machine anyway. Older machines retired from science may be available, but at this scale they cost far too much to use and are usually very hard to maintain.

The Los Alamos solution? Build a “model supercomputer” with Raspberry Pi!

Think of it as a “cluster development breadboard”.

The idea is to design, develop, debug, and test new network architectures and systems software on the “breadboard”, but at a scale equivalent to the production machines you’re currently building. Raspberry Pi may be a small computer, but it can run most of the system software stacks that production machines use, and the ratios of its CPU speed, local memory, and network bandwidth scale proportionately to the big machines, much like an architect’s model does when building a new house. To learn more about the project, see the news conference and this interview with insideHPC at SC17.

Traditional Raspberry Pi clusters

Like most people, we love a good cluster! People have been building them with Raspberry Pi since the beginning, because it’s inexpensive, educational, and fun. They’ve been built with the original Pi, Pi 2, Pi 3, and even the Pi Zero, but none of these clusters have proven to be particularly practical.

That’s not stopped them being useful though! I saw quite a few Raspberry Pi clusters at the conference last week.

One tiny one that caught my eye was from the people at openio.io, who used a small Raspberry Pi Zero W cluster to demonstrate their scalable software-defined object storage platform, which on big machines is used to manage petabytes of data, but which is so lightweight that it runs just fine on this:

Raspberry Pi Zero cluster

There was another appealing example at the ARM booth, where the Berkeley Labs’ singularity container platform was demonstrated running very effectively on a small cluster built with Raspberry Pi 3s.

Raspberry Pi 3 cluster demo at a conference stall

My show favourite was from the Edinburgh Parallel Computing Center (EPCC): Nick Brown used a cluster of Pi 3s to explain supercomputers to kids with an engaging interactive application. The idea was that visitors to the stand design an aircraft wing, simulate it across the cluster, and work out whether an aircraft that uses the new wing could fly from Edinburgh to New York on a full tank of fuel. Mine made it, fortunately!

Raspberry Pi 3 cluster demo at a conference stall

Next-generation Raspberry Pi clusters

We’ve been building small-scale industrial-strength Raspberry Pi clusters for a while now with BitScope Blade.

When Los Alamos National Laboratory approached us via HPC provider SICORP with a request to build a cluster comprising many thousands of nodes, we considered all the options very carefully. It needed to be dense, reliable, low-power, and easy to configure and to build. It did not need to “do science”, but it did need to work in almost every other way as a full-scale HPC cluster would.

Some people argue Compute Module 3 is the ideal cluster building block. It’s very small and just as powerful as Raspberry Pi 3, so one could, in theory, pack a lot of them into a very small space. However, there are very good reasons no one has ever successfully done this. For a start, you need to build your own network fabric and I/O, and cooling the CM3s, especially when densely packed in a cluster, is tricky given their tiny size. There’s very little room for heatsinks, and the tiny PCBs dissipate very little excess heat.

Instead, we saw the potential for Raspberry Pi 3 itself to be used to build “industrial-strength clusters” with BitScope Blade. It works best when the Pis are properly mounted, powered reliably, and cooled effectively. It’s important to avoid using micro SD cards and to connect the nodes using wired networks. It has the added benefit of coming with lots of “free” USB I/O, and the Pi 3 PCB, when mounted with the correct air-flow, is a remarkably good heatsink.

When Gordon announced netboot support, we became convinced the Raspberry Pi 3 was the ideal candidate when used with standard switches. We’d been making smaller clusters for a while, but netboot made larger ones practical. Assembling them all into compact units that fit into existing racks with multiple 10 Gb uplinks is the solution that meets LANL’s needs. This is a 60-node cluster pack with a pair of managed switches by Ubiquiti in testing in the BitScope Lab:

60-node Raspberry Pi cluster pack

Two of these packs, built with Blade Quattro, and one smaller one comprising 30 nodes, built with Blade Duo, are the components of the Cluster Module we exhibited at the show. Five of these modules are going into Los Alamos National Laboratory for their pilot as I write this.

Bruce Tulloch at a conference stand with a demo of the Raspberry Pi cluster for LANL

It’s not only research clusters like this for which Raspberry Pi is well suited. You can build very reliable local cloud computing and data centre solutions for research, education, and even some industrial applications. You’re not going to get much heavy-duty science, big data analytics, AI, or serious number crunching done on one of these, but it is quite amazing to see just how useful Raspberry Pi clusters can be for other purposes, whether it’s software-defined networks, lightweight MaaS, SaaS, PaaS, or FaaS solutions, distributed storage, edge computing, industrial IoT, and of course, education in all things cluster and parallel computing. For one live example, check out Mythic Beasts’ educational compute cloud, built with Raspberry Pi 3.

For more information about Raspberry Pi clusters, drop by BitScope Clusters.

I’ll read and respond to your thoughts in the comments below this post too.

Editor’s note:

Here is a photo of Bruce wearing a jetpack. Cool, right?!

Bruce Tulloch wearing a jetpack

The post Raspberry Pi clusters come of age appeared first on Raspberry Pi.

[$] A report from the Realtime Summit

Post Syndicated from jake original https://lwn.net/Articles/738001/rss

The 2017 Realtime Summit (RT-Summit) was hosted by the Czech Technical University on Saturday, October 21 in Prague, just before the Embedded Linux Conference. It was attended by more than 50 individuals with backgrounds ranging from academic to industrial, and some local students daring enough to spend a day with that group. Guest author Mathieu Poirier provides summaries of some of the talks from the summit.

Let’s Nudge the Future

Post Syndicated from Йовко Ламбрев original https://yovko.net/push-the-future/

Let’s Nudge the Future

Imagine that we are a few years into the future — no more than you can count on the fingers of one hand. You want to manufacture some product. You walk into the factory with your design documentation on a digital medium... Or not — you don’t even have a fully precise version yet, but with the help of the engineering team it is soon ready, and along the way you have settled every question about exactly which materials to use, which of their properties to exploit, and which potential problems and weak points to avoid — and how to do it all at the optimal cost and speed of production. Most of these decisions are suggested by the factory’s own information system, because it has accumulated data and statistics, and the machine learning algorithms keep getting better. Regulatory requirements are accounted for, if they apply to the product. The material suppliers and their prices are settled; you even have a forecast of how a shift in the raw materials markets would affect your price. You know which substitutes are most suitable. You know in detail how your product will be recycled once its service life is over. You have the most detailed bills of quantities, and they refresh automatically with every change to the design. Finally, you put on virtual reality glasses and inspect your product “live” before it has even entered production.

If everything is in order, you release the production order and the factory reconfigures itself for the new product — something that with every passing year will happen ever more automatically and quickly. Transport robots will fetch the necessary materials from the warehouse; their collaborative “colleagues” (cobots) on the production lines will tend the machines and the manufacturing of the new product (replacing worn tools, loading blanks, packaging, and so on); and at the end, in the finished goods warehouse, the completed units will arrive sorted, labelled or marked with RFID, NFC, or QR codes, ready for dispatch. The tags will probably sit in a blockchain database, for traceability and proof of original manufacture and provenance. People will still be needed, but most of them will work in the engineering department or in what we might call the factory’s control centre. Fewer and fewer will be out among the machines. And for more and more products, such manufacturing will increasingly be able to run around the clock.

Throughout the whole process you will be able to track unit counts, pace, and the consumption of materials and energy — not in dreadful tables of twitching numbers, but through convenient, human interfaces and visualisations, including remotely from a smartphone or tablet, because there is actually not much for you to do at the factory... The systems analysing the sensor data and monitoring machine parameters will “anticipate” and warn of possible overloading of machines and tools, and of potential problems and scrap...

“I dream of my factory being an extension of my nervous system, of being able to feel it and make decisions on that basis,” an industrialist told me recently.

And let us stop imagining here, because... this is not science fiction at all. The technologies needed to make it possible already exist around us. In industry, however, there are many stumbling blocks, planted mainly by various vendors striving to protect market segments for themselves. Many machines share no data at all. Standards that would ease integration between components from different manufacturers are often missing (or go unused). Vendors of information systems try to sell what they have already built and take too little interest in the real needs and problems of the manufacturing sector. After decades of automation and digitalisation (yes, formally the third industrial revolution began in the 1970s), information technology remains focused mostly on enablement and support, and far too little on real, practical, and useful innovation specific to the sector. The IT world still nurses the proud but groundless expectation that industry will adapt itself to IT, rather than making technology fit for industry.

Yet in the end, innovation does happen — at an avalanche pace, no less. Some innovations matter more than others, or bide their time before they shine. And none of this is an end in itself or out of context.

Such a modern, digital factory is entirely possible. From today’s standpoint it is easier to imagine it as the gradual evolution of an existing traditional factory than as something planned and built from scratch. Nor does it require too much time, too many people, or some enormous effort. Available platforms and technologies need to be surveyed and evaluated, together with the possible integrations between them. Those with the best potential must be sifted out and tested in a real environment. Applications must be selected or adapted. And things will start to come together — not overnight, but all of this is entirely realistic, and if we start now, real results could begin to show between 2020 and 2025. Right here in Bulgaria — in Plovdiv, in Tarnovo, or in Gabrovo. Or anywhere else; but a few days ago one of the visionaries in the industry around Plovdiv and I agreed that if we can pull it off here, it can be done anywhere. And it can be replicated as many times as needed.

What is needed is a handful of people with the capacity and readiness to focus on the subject — ideally people with somewhat broader experience, who are not afraid to think beyond the confines of their current narrow speciality. People inclined to jump quickly into a new territory, piece of software, or platform. People able to look beneath the surface of technical documentation to judge a system’s potential. The integrator’s approach is essential — a view of the big picture and of the end goal that everything should work together. The potential of a single machine or system matters negligibly outside the context of the overall integration, if that integration is too difficult, too expensive, or impossible. Any IT experience will help, but ideally the combination of IT and engineering (electronics and/or automation) will be the brilliant alloy. Beyond that, a tick in any (or several) of the following boxes is a plus:

  • analytical and critical thinking (creative and out-of-the-box)
  • one of the standard programming languages, such as C or Java
  • some database experience (at least SQL), as well as HTML and REST
  • experience programming controllers
  • IoT or IIoT (Industrial IoT), MQTT (or other M2M protocols)
  • comfort working across operating systems (at minimum Linux and Windows)
  • PLM (product lifecycle management), but not PLCM (product life-cycle management in the marketing sense)
  • machine learning
  • any other experience in or from industry
  • the ability to express yourself and write concisely and with focus
  • readiness to share knowledge and work in a team
  • a pragmatic, practically oriented approach to problems
  • use of tools for managing tasks, projects, and personal commitments (calendar, trello, todoist, etc.)

English is needed at least at a working level, because — even if not constantly — the work will take place in a multilingual environment, and English is the lowest common denominator for communication and documentation.

And if the above sounds pretentious to anyone’s ears, let me hasten to clarify that we are not going to stage any revolutions or grand innovations — we will simply get some useful work done. In the best case, we will just gently nudge evolution forward. 🙂

Things like this are already being done, to one degree or another. Vendors are also starting to appear who will claim they can sell you the most suitable end-to-end solution. The truth is that many of them are simply trying to pull as many customers as possible into their own ecosystem. There are no universal solutions — every industrial production has many specifics that must be kept in mind. Solutions are best tailor-made and integrated, selecting the optimal components from different vendors.

So I am waving a little flag: I am looking for colleagues and partners. I have outstretched hands and a readiness for joint work from representatives of the industry around Plovdiv (right now it is easiest, for me and for them, to think within the bounds of Plovdiv). Their main worry is that the necessary people will not be found. So, to begin with, I want to hold a round of conversations with those who would be interested in working together on this, and after that we will discuss next steps. I have a few people in mind whom I will approach personally, but most of them are committed to other things. That is why, in writing this text, I hope we will also find new colleagues.

I personally believe that success rests on partnership and collaboration, not on jealously guarding the twenty lines of code you wrote — which were, in any case, an improvisation on top of someone else’s work laid down before you. Knowledge locked between your two ears simply goes stale quickly and is of no use to anyone unless you share it. The era of parcelled-out knowledge and exclusive skills ended not long ago. Now we can achieve something meaningful only through mutual effort and teamwork.

В случай че проявявате интерес, моля свържете се с мен и споделете каквото прецените за важно за ваш предишен опит или нагласа към темата. С най-интересните хора ще се постарая да се видим на живо възможно най-скоро.

Push the Future

Post Syndicated from Йовко Ламбрев original https://yovko.net/push-the-future/

Imagine we are a few years into the future. No more than you can count on the fingers of one hand. You want to manufacture a product. You walk into the factory with your design documentation on some digital medium… Or not even that – you don't yet have a fully precise design, but with the help of the engineering team it is soon ready, and along the way you have settled every question about exactly which materials to use, which of their properties to exploit, and which potential problems and weak points to avoid. How to make everything at the optimal cost and speed of production. Most of these decisions are suggested by the factory's own information system, because it has accumulated data and statistics, and its machine learning algorithms keep getting better. Regulatory requirements are taken into account, if they apply to the product. The material suppliers and their prices are pinned down; you even have a forecast of how a shift in the raw-materials markets would affect your cost. You know the most suitable substitutes. You know in detail how your product will be recycled once its service life is over. You have fully detailed bills of materials, and with every change to the design they refresh automatically. Finally, you put on a pair of virtual-reality glasses and inspect your product "live" before it has even entered production.
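
To make the "automatically refreshing bills of materials" idea concrete, here is a minimal sketch in Python – the part names and prices are invented for illustration, not taken from any real factory system – of a BOM as a tree whose total cost is recomputed on demand, so a change to a supplier price or a quantity shows up immediately:

    from dataclasses import dataclass, field

    @dataclass
    class Part:
        name: str
        unit_cost: float = 0.0   # cost of this part itself
        qty: int = 1             # how many are used in the parent assembly
        children: list = field(default_factory=list)

    def total_cost(part: Part) -> float:
        # Roll up: own cost plus all subassemblies, times the quantity used.
        return part.qty * (part.unit_cost + sum(total_cost(c) for c in part.children))

    housing = Part("housing", unit_cost=4.20)
    motor = Part("motor", unit_cost=12.50)
    device = Part("device", children=[housing, motor])

    print(total_cost(device))   # 16.70
    motor.unit_cost = 11.80     # a supplier price changes...
    print(total_cost(device))   # ...and the quantity take-off refreshes: 16.00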

If everything checks out, you release the production order and the factory reconfigures itself for the new product – something that will happen ever more automatically and quickly with each passing year. Transport robots will fetch the required materials from the warehouse, their collaborative "colleagues" (cobots) along the production lines will tend the machines and the manufacturing of the new product (swapping out worn tools, loading blanks, packaging, and so on), and at the end, in the finished-goods warehouse, the completed units will arrive neatly arranged, labeled or tagged with RFID, NFC, or QR codes, ready for shipment. The tags will likely be recorded in a blockchain, for traceability and as proof of genuine manufacture and origin. People will still be needed, but most of them will be in the engineering department or in what we might call the factory's control center. Fewer and fewer will be out among the machines. And for more and more kinds of products, such manufacturing will increasingly be able to run around the clock.
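
As an illustration of the traceability idea – a toy sketch, not any particular blockchain product – here is a Python snippet in which each finished unit's tag record carries a SHA-256 hash linking it to the previous record, so tampering with any entry breaks the chain (the serial numbers and tag types are made up):

    import hashlib, json, time

    def make_record(item_id: str, tag_type: str, prev_hash: str) -> dict:
        record = {
            "item_id": item_id,   # the serial printed in the QR code or stored on the tag
            "tag": tag_type,      # "QR", "RFID" or "NFC"
            "ts": time.time(),
            "prev": prev_hash,    # hash of the previous record links the chain
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        return record

    chain, prev = [], "0" * 64
    for serial in ("A-0001", "A-0002", "A-0003"):
        rec = make_record(serial, "QR", prev)
        chain.append(rec)
        prev = rec["hash"]
    # Modifying any earlier record invalidates every hash that follows it.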

Throughout the entire process you will be able to track unit counts, pace, and the consumption of materials and energy – not in dreadful spreadsheets of twitching numbers, but through convenient, human-friendly interfaces and visualizations, including remotely from a smartphone or tablet, because there is really not much left for you to do at the factory… Systems analyzing the sensor data and monitoring machine parameters will "anticipate" and warn about possible overloading of machines and tooling, potential problems, and defects…
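
The building blocks for that monitoring are already mundane. A minimal sketch using the paho-mqtt Python client – the broker address, topic layout, payload fields, and threshold are assumptions for illustration – that listens to machine telemetry and flags a parameter drifting out of range might look like this:

    import json
    import paho.mqtt.client as mqtt

    SPINDLE_LOAD_LIMIT = 0.85   # assumed threshold, as a fraction of rated load

    def on_connect(client, userdata, flags, rc):
        # One wildcard subscription covers every machine on the line.
        client.subscribe("factory/line1/+/telemetry")

    def on_message(client, userdata, msg):
        data = json.loads(msg.payload)
        if data.get("spindle_load", 0.0) > SPINDLE_LOAD_LIMIT:
            print(f"warning: {msg.topic} overload: {data['spindle_load']:.2f}")

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.factory.local", 1883, 60)
    client.loop_forever()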

"I dream of my factory being an extension of my nervous system – of being able to feel it and make decisions on that basis," an industrialist told me recently.

And let's stop imagining here, because… none of this is science fiction. The technologies needed to make it possible are already all around us. In industry, however, there are many stumbling blocks, planted mostly by various vendors striving to lock in market segments for themselves. Many machines share no data at all. Standards that would ease integration between components from different manufacturers are often missing (or simply not used). Information-system providers try to sell what they have already built and take too little interest in the real needs and problems of the manufacturing sector. After decades of automation and digitalization (yes, formally the third industrial revolution began in the 1970s), information technology remains focused mainly on enablement and support, and far too little on real, practical, sector-specific innovation. In the IT world, the arrogant but baseless expectation persists that industry should adapt itself to the technology, rather than the technology being made fit for industry.

Yet in the end, innovation does happen – and at an avalanche pace. Some innovations matter more than others, or wait for their moment to shine. And none of this is happening for its own sake or out of context.

Such a modern, digital factory is entirely possible. From today's vantage point it is easier to imagine it as a gradual evolution of an existing traditional factory than as something planned and built from scratch. And it does not require too much time, too many people, or any enormous effort. Available platforms and technologies need to be surveyed and assessed, along with the possible integrations between them. The ones with the best potential need to be shortlisted and tested in a real environment. Applications need to be selected or adapted. And things will start to come together – not overnight, but all of this is entirely realistic – and if we start now, real results could begin to show between 2020 and 2025. Right here in Bulgaria: in Plovdiv, in Tarnovo, or in Gabrovo. Or anywhere else – though a few days ago one of the industry visionaries around Plovdiv and I agreed that if we can pull it off here, it can be done anywhere. And replicated as many times as needed.

What is needed is a handful of people with the capacity and the will to focus on this – ideally people with a somewhat broader range of experience, who are not afraid to think beyond the bounds of their current narrow specialty. People inclined to jump quickly into new territory, a new piece of software, or a new platform. People able to see beneath the surface of technical documentation and judge a system's potential. The integrator's mindset matters: an eye on the big picture and on the end goal of making everything work together. The potential of a machine or a system in isolation means next to nothing if integrating it into the whole is too difficult, too expensive, or impossible. Any IT experience will help, but ideally the combination of IT and engineering (electronics and/or automation) is the brilliant alloy. Beyond that, a tick in any (or several) of the following boxes is a plus:

  • analytical and critical thinking (creative and out-of-the-box)
  • one of the mainstream programming languages, such as C or Java
  • some database experience (at least SQL), plus HTML and REST
  • experience programming PLCs/controllers
  • IoT or IIoT (Industrial IoT), MQTT (or other M2M protocols)
  • comfort with several operating systems (Linux and Windows at a minimum)
  • PLM (product lifecycle management), but not PLCM (product life-cycle management in the marketing sense)
  • machine learning
  • any other experience in or from industry
  • the ability to speak and write concisely and to the point
  • willingness to share knowledge and work in a team
  • a pragmatic, hands-on approach to problems
  • use of tools for managing tasks, projects, and personal commitments (a calendar, Trello, Todoist, etc.)

English is needed at least at a working level because, even if not constantly, the work will take place in a multilingual environment, and English is the common denominator for communication and documentation.

And if the above sounds pretentious to anyone's ears, let me hasten to clarify: we are not out to stage any revolutions or innovations – we will simply get some useful work done. Ideally, we will just give evolution a gentle nudge forward. 🙂

Things like this are already being done to one degree or another. Vendors are beginning to appear who will claim they can sell you the perfect end-to-end solution. The truth is that many of them are simply trying to pull as many customers as possible into their own ecosystem. There are no universal solutions – every industrial operation has many specifics that must be kept in mind. Solutions are best tailor-made and integrated, with the best-fitting components selected from different vendors.

That is why I am waving a little flag: I am looking for colleagues and partners. I have outstretched hands and a readiness to work together with representatives of the industry around Plovdiv (for now it is easiest, both for me and for them, to think within the bounds of Plovdiv). Their main worry is that the right people will not be found. So, as a start, I want to hold a round of conversations with those who might be interested in working on this together, and then we will discuss next steps. I have a few people in mind whom I will approach personally, but most of them are committed to other things. So, in writing this, I hope we will also find new colleagues.

I personally believe that success comes from partnership and collaboration, not from jealously guarding the twenty lines of code you wrote – which were, in any case, an improvisation on top of someone else's earlier work. Knowledge locked between your two ears simply ages fast and is of no use to anyone unless you share it. The era of fenced-off knowledge and exclusive skills ended not long ago. Now we can achieve something meaningful only through joint effort and teamwork.

If you are interested, please get in touch with me and share whatever you consider relevant about your previous experience or your attitude toward the subject. I will do my best to meet the most interesting people in person as soon as possible.
