Tag Archives: testing

Raspberry Pi 3 Model B+ on sale now at $35

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-3-model-bplus-sale-now-35/

Here’s a long post. We think you’ll find it interesting. If you don’t have time to read it all, we recommend you watch this video, which will fill you in with everything you need, and then head straight to the product page to fill yer boots. (We recommend the video anyway, even if you do have time for a long read. ‘Cos it’s fab.)


If you’ve been a Raspberry Pi watcher for a while now, you’ll have a bit of a feel for how we update our products. Just over two years ago, we released Raspberry Pi 3 Model B. This was our first 64-bit product, and our first product to feature integrated wireless connectivity. Since then, we’ve sold over nine million Raspberry Pi 3 units (we’ve sold 19 million Raspberry Pis in total), which have been put to work in schools, homes, offices and factories all over the globe.

Those Raspberry Pi watchers will know that we have a history of releasing improved versions of our products a couple of years into their lives. The first example was Raspberry Pi 1 Model B+, which added two additional USB ports, introduced our current form factor, and rolled up a variety of other feedback from the community. Raspberry Pi 2 didn’t get this treatment, of course, as it was superseded after only one year; but it feels like it’s high time that Raspberry Pi 3 received the “plus” treatment.

So, without further ado, Raspberry Pi 3 Model B+ is now on sale for $35 (the same price as the existing Raspberry Pi 3 Model B), featuring:

  • A 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU
  • Dual-band 802.11ac wireless LAN and Bluetooth 4.2
  • Faster Ethernet (Gigabit Ethernet over USB 2.0)
  • Power-over-Ethernet support (with separate PoE HAT)
  • Improved PXE network and USB mass-storage booting
  • Improved thermal management

Alongside a 200MHz increase in peak CPU clock frequency, we have roughly three times the wired and wireless network throughput, and the ability to sustain high performance for much longer periods.

Behold the shiny

Raspberry Pi 3B+ is available to buy today from our network of Approved Resellers.

New features, new chips

Roger Thornton did the design work on this revision of the Raspberry Pi. Here, he and I have a chat about what’s new.

Introducing the Raspberry Pi 3 Model B+

The new product is built around BCM2837B0, an updated version of the 64-bit Broadcom application processor used in Raspberry Pi 3B, which incorporates power integrity optimisations, and a heat spreader (that’s the shiny metal bit you can see in the photos). Together these allow us to reach higher clock frequencies (or to run at lower voltages to reduce power consumption), and to more accurately monitor and control the temperature of the chip.

Dual-band wireless LAN and Bluetooth are provided by the Cypress CYW43455 “combo” chip, connected to a Proant PCB antenna similar to the one used on Raspberry Pi Zero W. Compared to its predecessor, Raspberry Pi 3B+ delivers somewhat better performance in the 2.4GHz band, and far better performance in the 5GHz band, as demonstrated by these iperf results from LibreELEC developer Milhouse.

                            Tx bandwidth (Mb/s)    Rx bandwidth (Mb/s)
Raspberry Pi 3B                    35.7                   35.6
Raspberry Pi 3B+ (2.4GHz)          46.7                   46.3
Raspberry Pi 3B+ (5GHz)           102                    102

The wireless circuitry is encapsulated under a metal shield, rather fetchingly embossed with our logo. This has allowed us to certify the entire board as a radio module under FCC rules, which in turn will significantly reduce the cost of conformance testing Raspberry Pi-based products.

We’ll be teaching metalwork next.

Previous Raspberry Pi devices have used the LAN951x family of chips, which combine a USB hub and 10/100 Ethernet controller. For Raspberry Pi 3B+, Microchip have supported us with an upgraded version, LAN7515, which supports Gigabit Ethernet. While the USB 2.0 connection to the application processor limits the available bandwidth, we still see roughly a threefold increase in throughput compared to Raspberry Pi 3B. Again, here are some typical iperf results.

                     Tx bandwidth (Mb/s)    Rx bandwidth (Mb/s)
Raspberry Pi 3B              94.1                   95.5
Raspberry Pi 3B+            315                    315

We use a magjack that supports Power over Ethernet (PoE), and bring the relevant signals to a new 4-pin header. We will shortly launch a PoE HAT which can generate the 5V necessary to power the Raspberry Pi from the 48V PoE supply.

There… are… four… pins!

Coming soon to a Raspberry Pi 3B+ near you

Raspberry Pi 3B was our first product to support PXE Ethernet boot. Testing it in the wild shook out a number of compatibility issues with particular switches and traffic environments. Gordon has rolled up fixes for all known issues into the BCM2837B0 boot ROM, and PXE boot is now enabled by default.

Clocking, voltages and thermals

The improved power integrity of the BCM2837B0 package, and the improved regulation accuracy of our new MaxLinear MxL7704 power management IC, have allowed us to tune our clocking and voltage rules for both better peak performance and longer-duration sustained performance.

Below 70°C, we use the improvements to increase the core frequency to 1.4GHz. Above 70°C, we drop to 1.2GHz, and use the improvements to decrease the core voltage, increasing the period of time before we reach our 80°C thermal throttle; the reduction in power consumption is such that many use cases will never reach the throttle. Like a modern smartphone, we treat the thermal mass of the device as a resource, to be spent carefully with the goal of optimising user experience.
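If you're curious to watch this behaviour on your own board, a few lines of Python are enough to log temperature against clock speed. This is just an illustrative sketch: it assumes Raspbian's vcgencmd utility and the standard sysfs thermal zone, and is in no way part of the firmware logic described above.

    import subprocess
    import time

    def soc_temp_c():
        # sysfs reports the SoC temperature in millidegrees Celsius
        with open('/sys/class/thermal/thermal_zone0/temp') as f:
            return int(f.read().strip()) / 1000.0

    def arm_clock_mhz():
        # vcgencmd prints a line like "frequency(48)=1400000000"
        out = subprocess.check_output(['vcgencmd', 'measure_clock', 'arm'])
        return int(out.decode().strip().split('=')[1]) / 1e6

    while True:
        # Under sustained load, expect the clock to step from 1400MHz
        # down to 1200MHz as the temperature passes 70°C
        print('{:5.1f}C  {:6.0f}MHz'.format(soc_temp_c(), arm_clock_mhz()))
        time.sleep(2)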

This graph, courtesy of Gareth Halfacree, demonstrates that Raspberry Pi 3B+ runs faster and at a lower temperature for the duration of an eight‑minute quad‑core Sysbench CPU test.

Note that Raspberry Pi 3B+ does consume substantially more power than its predecessor. We strongly encourage you to use a high-quality 2.5A power supply, such as the official Raspberry Pi Universal Power Supply.


FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with the existing products for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1B+, Raspberry Pi 2B, and Raspberry Pi 3B will continue to sell for $25, $35, and $35 respectively.

What about Model A+?

Raspberry Pi 1A+ continues to be the $20 entry-level “big” Raspberry Pi for the time being. We are considering the possibility of producing a Raspberry Pi 3A+ in due course.

What about the Compute Module?

CM1, CM3 and CM3L will continue to be available. We may offer versions of CM3 and CM3L with BCM2837B0 in due course, depending on customer demand.

Are you still using VideoCore?

Yes. VideoCore IV 3D is the only publicly-documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.


Credits

A project like this requires a vast amount of focused work from a large team over an extended period. Particular credit is due to Roger Thornton, who designed the board and ran the exhaustive (and exhausting) RF compliance campaign, and to the team at the Sony UK Technology Centre in Pencoed, South Wales. A partial list of others who made major direct contributions to the BCM2837B0 chip program, CYW43455 integration, LAN7515 and MxL7704 developments, and Raspberry Pi 3B+ itself follows:

James Adams, David Armour, Jonathan Bell, Maria Blazquez, Jamie Brogan-Shaw, Mike Buffham, Rob Campling, Cindy Cao, Victor Carmon, KK Chan, Nick Chase, Nigel Cheetham, Scott Clark, Nigel Clift, Dominic Cobley, Peter Coyle, John Cronk, Di Dai, Kurt Dennis, David Doyle, Andrew Edwards, Phil Elwell, John Ferdinand, Doug Freegard, Ian Furlong, Shawn Guo, Philip Harrison, Jason Hicks, Stefan Ho, Andrew Hoare, Gordon Hollingworth, Tuomas Hollman, EikPei Hu, James Hughes, Andy Hulbert, Anand Jain, David John, Prasanna Kerekoppa, Shaik Labeeb, Trevor Latham, Steve Le, David Lee, David Lewsey, Sherman Li, Xizhe Li, Simon Long, Fu Luo Larson, Juan Martinez, Sandhya Menon, Ben Mercer, James Mills, Max Passell, Mark Perry, Eric Phiri, Ashwin Rao, Justin Rees, James Reilly, Matt Rowley, Akshaye Sama, Ian Saturley, Serge Schneider, Manuel Sedlmair, Shawn Shadburn, Veeresh Shivashimper, Graham Smith, Ben Stephens, Mike Stimson, Yuree Tchong, Stuart Thomson, John Wadsworth, Ian Watch, Sarah Williams, Jason Zhu.

If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 3 Model B+ on sale now at $35 appeared first on Raspberry Pi.

PipeCam: the low-cost underwater camera

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pipecam-low-cost-underwater-camera/

Fred Fourie is building a low-cost underwater camera for shallow deployment, and his prototypes are already returning fascinating results. You can build your own PipeCam, and explore the undiscovered depths with a Raspberry Pi and off-the-shelf materials.

PipeCam underwater Raspberry Pi Camera

Materials and build

In its latest iteration, PipeCam consists of a 110mm PVC waste pipe with fittings and a 10mm perspex window at one end. Previous prototypes have also used plumbing materials for the body, but this latest version employs heavy-duty parts that deliver the good seal this project needs.

PipeCam underwater Raspberry Pi Camera

In testing, Fred and a friend determined that the rig could withstand 4 bar of pressure. This is enough to protect the tech inside at the depths Fred plans for, and a significant performance improvement on previous prototypes.

PipeCam underwater Raspberry Pi Camera
PipeCam underwater Raspberry Pi Camera

Inside the pipe are a Raspberry Pi 3, a camera module, and a real-time clock add-on board. A 2.4Ah rechargeable lead-acid battery powers the set-up via a voltage regulator.

Using foam and fibreboard, Fred made a mount that holds everything in place and fits snugly inside the pipe.

PipeCam underwater Raspberry Pi Camera
PipeCam underwater Raspberry Pi Camera
PipeCam underwater Raspberry Pi Camera

PipeCam will be subject to ocean currents, not to mention the attentions of sea creatures, so it’s essential to make sure that everything is held securely inside the pipe – something Fred has learned from previous versions of the project.


It’s straightforward to write time-lapse code for a Raspberry Pi using Python and one of our free online resources, but Fred has more ambitious plans for PipeCam. As well as a Python script to control the camera, Fred made a web page to display the health of the device. It shows battery level and storage availability, along with the latest photo taken by the camera. He also made adjustments to the camera’s exposure settings using raspistill. You can see the effect in this side-by-side comparison of the default python-picam image and the edited raspistill one.
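For readers who want a starting point of their own, the core of a time-lapse script with the picamera library can be as short as the sketch below. This is a generic example, not Fred's code: the capture path and interval are placeholders, and his version layers exposure tuning and the health web page on top.

    from datetime import datetime
    from time import sleep

    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (1920, 1080)
    sleep(2)  # give the sensor a moment to settle before the first shot

    while True:
        # Timestamped filenames keep the captures in chronological order
        filename = '/home/pi/captures/{:%Y%m%d-%H%M%S}.jpg'.format(datetime.now())
        camera.capture(filename)
        sleep(60)  # one frame per minute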

PipeCam underwater Raspberry Pi Camera
PipeCam underwater Raspberry Pi Camera

Underwater testing

Fred has completed the first test of PipeCam, running the device for an hour at a depth of two metres off the coast near his home. And the results? Well, see for yourself:

PipeCam underwater Raspberry Pi Camera
PipeCam underwater Raspberry Pi Camera
PipeCam underwater Raspberry Pi Camera

PipeCam is a work in progress, and you can read Fred’s build log at the project’s Hackaday.io page, so be sure to follow along.

The post PipeCam: the low-cost underwater camera appeared first on Raspberry Pi.

Needed: Associate Front End Developer

Post Syndicated from Yev original https://www.backblaze.com/blog/needed-associate-front-end-developer/

Want to work at a company that helps customers in over 150 countries around the world protect the memories they hold dear? Do you want to challenge yourself with a business that serves consumers, SMBs, Enterprise, and developers?

If all that sounds interesting, you might be interested to know that Backblaze is looking for an Associate Front End Developer!

Backblaze is a 10-year-old company. Providing great customer experiences is the “secret sauce” that enables us to successfully compete against some of technology’s giants. We’ll finish the year at ~$20MM ARR and are a profitable business. This is an opportunity to have your work shine at scale in one of the fastest-growing verticals in tech — Cloud Storage.

You will utilize HTML, ReactJS, CSS and jQuery to develop intuitive, elegant user experiences. As a member of our Front End Dev team, you will work closely with our web development, software design, and marketing teams.

On a day-to-day basis, you must be able to convert image mockups to HTML or ReactJS – there’s some production work that needs to get done. But you will also be responsible for helping build out new features, rethink old processes, and enable third-party systems to empower our marketing, sales, and support teams.

Our Associate Front End Developer must be proficient in:

  • HTML, CSS, Javascript (ES5)
  • jQuery, Bootstrap (with responsive targets)
  • Understanding of ensuring cross-browser compatibility and browser security for features
  • Basic SEO principles and ensuring that applications will adhere to them
  • Familiarity with ES2015+, ReactJS, unit testing
  • Learning about third party marketing and sales tools through reading documentation. Our systems include Google Tag Manager, Google Analytics, Salesforce, and Hubspot.
  • React Flux, Redux, SASS, Node experience is a plus

We’re looking for someone that is:

  • Passionate about building friendly, easy-to-use interfaces and APIs.
  • Happy to work closely with other engineers, support, and marketing to help customers.
  • Comfortable working independently on a mutually agreed upon prioritization queue (we don’t micromanage; we do make sure tasks are reasonably defined and scoped).
  • Diligent with quality control. Backblaze prides itself on giving our team autonomy to get work done, do the right thing for our customers, and keep a pace that is sustainable over the long run. As such, we expect everyone to check in code that is stable. We also have a small QA team that operates as a secondary check when needed.

Backblaze Employees Have:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small, fast-paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Comfort with well behaved pets in the office.

This position is located in San Mateo, California. Regular attendance in the office is expected.

Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If this sounds like you…
Send an email to: jobscontact@backblaze.com with:

  1. Associate Front End Dev in the subject line
  2. Your resume attached
  3. An overview of your relevant experience

The post Needed: Associate Front End Developer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Auto Scaling is now available for Amazon SageMaker

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/auto-scaling-is-now-available-for-amazon-sagemaker/

Kumar Venkateswar, Product Manager on the AWS ML Platforms Team, shares details on the announcement of Auto Scaling with Amazon SageMaker.

With Amazon SageMaker, thousands of customers have been able to easily build, train and deploy their machine learning (ML) models. Today, we’re making it even easier to manage production ML models, with Auto Scaling for Amazon SageMaker. Instead of having to manually manage the number of instances to match the scale that you need for your inferences, you can now have SageMaker automatically scale the number of instances based on an AWS Auto Scaling Policy.

SageMaker has made managing the ML process easier for many customers. We’ve seen customers take advantage of managed Jupyter notebooks and managed distributed training. We’ve seen customers deploying their models to SageMaker hosting for inferences, as they integrate machine learning with their applications. SageMaker makes this easy – you don’t have to think about patching the operating system (OS) or frameworks on your inference hosts, and you don’t have to configure inference hosts across Availability Zones. You just deploy your models to SageMaker, and it handles the rest.

Until now, you have needed to specify the number and type of instances per endpoint (or production variant) to provide the scale that you need for your inferences. If your inference volume changes, you can change the number and/or type of instances that back each endpoint to accommodate that change, without incurring any downtime. In addition to making it easy to change provisioning, customers have asked us how we can make managing capacity for SageMaker even easier.

With Auto Scaling for Amazon SageMaker, available in the SageMaker console, the AWS Auto Scaling API, and the AWS SDK, this becomes much easier. Now, instead of having to closely monitor inference volume and change the endpoint configuration in response, customers can configure a scaling policy to be used by AWS Auto Scaling. Auto Scaling adjusts the number of instances up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values defined in the policy. In this way, customers can automatically adjust their inference capacity to maintain predictable performance at a low cost. You simply specify the target inference throughput per instance and provide upper and lower bounds for the number of instances for each production variant. SageMaker will then monitor throughput per instance using Amazon CloudWatch alarms, and adjust provisioned capacity up or down as needed.

After you configure the endpoint with Auto Scaling, SageMaker will continue to monitor your deployed models to automatically adjust the instance count. SageMaker will keep throughput within desired levels, in response to changes in application traffic. This makes it easier to manage models in production, and it can help reduce the cost of deployed models, as you no longer have to provision sufficient capacity in order to manage your peak load. Instead, you configure the limits to accommodate your minimum expected traffic and the maximum peak, and Amazon SageMaker will work within those limits to minimize cost.

How do you get started? Open the SageMaker console. For existing endpoints, you first access the endpoint to modify the settings.

Then, scroll to the Endpoint runtime settings section, select the variant, and choose Configure auto scaling.

First, configure the minimum and maximum number of instances.

Next, choose the throughput per instance at which you want to add an additional instance, given previous load testing.

You can optionally set cooldown periods for scaling in or out, to avoid oscillation during periods of wide fluctuation in workload. If you don’t, SageMaker assumes default values.

And that’s it! You now have an endpoint that will automatically scale with increasing inferences.
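If you’d rather script the same setup than click through the console, it can be done through the Application Auto Scaling API. Here’s a rough boto3 sketch; the endpoint name, variant name, and target values are placeholders to replace with your own.

    import boto3

    autoscaling = boto3.client('application-autoscaling')
    resource_id = 'endpoint/my-endpoint/variant/AllTraffic'  # placeholder names

    # Register the variant's instance count as a scalable target
    autoscaling.register_scalable_target(
        ServiceNamespace='sagemaker',
        ResourceId=resource_id,
        ScalableDimension='sagemaker:variant:DesiredInstanceCount',
        MinCapacity=1,
        MaxCapacity=4,
    )

    # Track a target number of invocations per instance, as in the console
    autoscaling.put_scaling_policy(
        PolicyName='MyEndpointScalingPolicy',
        ServiceNamespace='sagemaker',
        ResourceId=resource_id,
        ScalableDimension='sagemaker:variant:DesiredInstanceCount',
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 200.0,  # throughput per instance, from load testing
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance',
            },
            'ScaleInCooldown': 600,
            'ScaleOutCooldown': 300,
        },
    )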

You pay for the capacity used at regular SageMaker pay-as-you-go pricing, so you no longer have to pay for unused capacity during relative idle periods!

Auto Scaling in Amazon SageMaker is available today in the US East (N. Virginia), US East (Ohio), EU (Ireland), and US West (Oregon) AWS Regions. To learn more, see the Amazon SageMaker Auto Scaling documentation.

Kumar Venkateswar is a Product Manager in the AWS ML Platforms team, which includes Amazon SageMaker, Amazon Machine Learning, and the AWS Deep Learning AMIs. When not working, Kumar plays the violin and Magic: The Gathering.

The Challenges of Opening a Data Center — Part 2

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/factors-for-choosing-data-center/

Rows of storage pods in a data center

This is part two of a series on the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process.

In Part 1 of this series, we looked at the different types of data centers, the importance of location in planning a data center, data center certification, and the single most expensive factor in running a data center, power.

In Part 2, we continue to look at factors that need to be considered both by those interested in a dedicated data center and those seeking to colocate in an existing center.

Power (continued from Part 1)

In part 1, we began our discussion of the power requirements of data centers.

As we discussed, redundancy and failover is a chief requirement for data center power. A redundantly designed power supply system is also a necessity for maintenance, as it enables repairs to be performed on one network, for example, without having to turn off servers, databases, or electrical equipment.

Power Path

The common critical components of a data center’s power flow are:

  • Utility Supply
  • Generators
  • Transfer Switches
  • Distribution Panels
  • Uninterruptible Power Supplies (UPS)
  • PDUs

Utility Supply is the power that comes from one or more utility grids. While most of us consider the grid to be our primary power supply (hats off to those of you who manage to live off the grid), politics, economics, and distribution make utility supply power susceptible to outages, which is why data centers must have autonomous power available to maintain availability.

Generators are used to supply power when the utility supply is unavailable. They convert mechanical energy, usually from motors, to electrical energy.

Transfer Switches are used to transfer electric load from one source or electrical device to another, such as from one utility line to another, from a generator to a utility, or between generators. The transfer could be manually activated or automatic to ensure continuous electrical power.

Distribution Panels get the power where it needs to go, taking a power feed and dividing it into separate circuits to supply multiple loads.

A UPS, as we touched on earlier, ensures that continuous power is available even when the main power source isn’t. It often consists of batteries that can come online almost instantaneously when the current power ceases. The power from a UPS does not have to last a long time as it is considered an emergency measure until the main power source can be restored. Another function of the UPS is to filter and stabilize the power from the main power supply.

Data center UPSs

PDU stands for Power Distribution Unit, and it is the device that distributes power to the individual pieces of equipment.


Network

After power, the networking connections to the data center are of prime importance. Can the data center obtain and maintain high-speed networking connections to the building? With networking, as with all aspects of a data center, availability is a primary consideration. Data center designers think of all possible ways service can be interrupted or lost, even briefly. Details such as the vulnerabilities in the route the network connections make from the core network (the backhaul) to the center, and where network connections enter and exit a building, must be taken into consideration in network and data center design.

Routers and switches are used to transport traffic between the servers in the data center and the core network. Just as with power, network redundancy is a prime factor in maintaining availability of data center services. Two or more upstream service providers are required to ensure that availability.

How fast a customer can transfer data to a data center is affected by: 1) the speed of the connections the data center has with the outside world, 2) the quality of the connections between the customer and the data center, and 3) the distance of the route from customer to the data center. The longer the route and the greater the number of packets that must be transferred, the more significant a factor latency becomes in the data transfer. Latency is the delay before a transfer of data begins following an instruction for its transfer.

Generally, latency, not speed, will be the most significant factor in transferring data to and from a data center. Packets transferred using the TCP/IP protocol suite, which is the conceptual model and set of communications protocols used on the internet and similar computer networks, must be acknowledged when received (ACK’d), which requires a communications round trip for each packet. If the data is in larger packets, the number of ACKs required is reduced, so latency will be a smaller factor in the overall network communications speed.
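To put rough numbers on this: a single TCP stream can move at most one receive window per round trip, so latency directly caps throughput. The window size and round-trip times below are illustrative values, not measurements of any particular connection.

    def max_tcp_throughput_mbps(window_bytes, rtt_ms):
        # Upper bound for one TCP stream: one full window per round trip
        return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

    # A 64 KiB window over a 40 ms round trip caps a single stream
    # at about 13 Mb/s, regardless of how fast the link itself is
    print(max_tcp_throughput_mbps(64 * 1024, 40))  # ~13.1
    # Halving the latency doubles the ceiling; so does doubling the window
    print(max_tcp_throughput_mbps(64 * 1024, 20))  # ~26.2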

Latency generally will be less significant for data storage transfers than for cloud computing. Optimizations such as multi-threading, which is used in Backblaze’s Cloud Backup service, will generally improve overall transfer throughput if sufficient bandwidth is available.

Those interested in testing the overall speed and latency of their connection to Backblaze’s data centers can use the Check Your Bandwidth tool on our website.

Data center telecommunications equipment

Data center under floor cable runs


Cooling

Computer, networking, and power generation equipment generates heat, and there are a number of solutions employed to rid a data center of that heat. The location and climate of the data center is of great importance to the data center designer because the climatic conditions dictate to a large degree what cooling technologies should be deployed, which in turn affects the power used and the cost of using that power. The power required and cost needed to manage a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Innovation is strong in this area, and many new approaches to efficient and cost-effective cooling are used in the latest data centers.

Switch’s uninterruptible, multi-system, HVAC Data Center Cooling Units

There are three primary ways data center cooling can be achieved:

Room Cooling cools the entire operating area of the data center. This method can be suitable for small data centers, but becomes more difficult and inefficient as IT equipment density and center size increase.

Row Cooling concentrates on cooling a data center on a row by row basis. In its simplest form, hot aisle/cold aisle data center design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.

Rack Cooling tackles cooling on a rack by rack basis. Air-conditioning units are dedicated to specific racks. This approach allows for maximum densities to be deployed per rack. This works best in data centers with fully loaded racks, otherwise there would be too much cooling capacity, and the air-conditioning losses alone could exceed the total IT load.


Security

Data centers are high-security facilities, as they house business, government, and other data that contains personal, financial, and other secure information about businesses and individuals.

The following are the physical security considerations when opening or colocating in a data center:

Layered Security Zones. Systems and processes are deployed to allow only authorized personnel in certain areas of the data center. Examples include keycard access, alarm systems, mantraps, secure doors, and staffed checkpoints.

Physical Barriers. Physical barriers, fencing, and reinforced walls are used to protect facilities. In a colocation facility, one customer’s racks and servers are often inaccessible to other customers colocating in the same data center.

Backblaze racks secured in the data center

Monitoring Systems. Advanced surveillance technology monitors and records activity on approaching driveways, building entrances, exits, loading areas, and equipment areas. These systems also can be used to monitor and detect fire and water emergencies, providing early detection and notification before significant damage results.

Top-tier providers evaluate their data center security and facilities on an ongoing basis. Technology becomes outdated quickly, so providers must stay on top of new approaches and technologies in order to protect valuable IT assets.

To pass into high security areas of a data center requires passing through a security checkpoint where credentials are verified.

The gauntlet of cameras and steel bars one must pass before entering this data center

Facilities and Services

Data center colocation providers often differentiate themselves by offering value-added services. In addition to the required space, power, cooling, connectivity and security capabilities, the best solutions provide several on-site amenities. These accommodations include offices and workstations, conference rooms, and access to phones, copy machines, and office equipment.

Additional features may consist of kitchen facilities, break rooms and relaxation lounges, storage facilities for client equipment, and secure loading docks and freight elevators.

Moving into a Data Center

Moving into a data center is a major job for any organization. We wrote a post last year, Desert To Data in 7 Days — Our New Phoenix Data Center, about what it was like to move into our new data center in Phoenix, Arizona.

Visiting a Data Center

Our Director of Product Marketing Andy Klein wrote a popular post last year on what it’s like to visit a data center called A Day in the Life of a Data Center.

Would you like to know more about the challenges of opening and running a data center?

That’s it for part 2 of this series. If readers are interested, we could write a post about some of the new technologies and trends affecting data center design and use. Please let us know in the comments.

Here’s a tip on finding all the posts tagged with data center on our blog: just follow https://www.backblaze.com/blog/tag/data-center/.

Don’t miss future posts on data centers and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.

The post The Challenges of Opening a Data Center — Part 2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

OTON GLASS: turning text to speech

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/oton-glass/

With OTON GLASS, users are able to capture text with a blink and have it read back to them in their chosen language. It’s a wonderful tool for people with dyslexia or poor vision, or for travellers abroad.


A wearable device for people who have difficulty reading.


Inspired by his father’s dyslexia, Keisuke Shimakage of the Media Creation Research Department at the Institute of Advanced Media Arts and Sciences, Japan, began to develop OTON GLASS:

I was determined to develop OTON GLASS because of my father’s dyslexia experience. In 2012, my father had a brain tumor, and developed dyslexia after his operation — the catalyst for OTON GLASS. Fortunately, he recovered fully after rehabilitation. However, many people have congenital dyslexia regardless of their health.

Assembling a team of engineers and designers, Keisuke got to work.

A collage images illustrating the history of developing OTON GLASS — OTON GLASS RASPBERRY PI GLASSES FOR DYSLEXIC USERS

The OTON GLASS device includes a Raspberry Pi 3, two cameras, and an earphone. One camera on the inside of the frame tracks the user’s eyes, and when it detects the blinked trigger, the outward-facing camera captures an image of what the user is looking at. This image is then processed by the Raspberry Pi via a program that performs optical character recognition. If the Pi detects written words, it converts them to speech, which the earphone plays back for the user.
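The article doesn’t publish the team’s code, but the capture-OCR-speech loop is easy to sketch with off-the-shelf Python libraries. In the sketch below, picamera, pytesseract, and pyttsx3 stand in for whatever components OTON GLASS actually uses, and the blink detection is omitted.

    from time import sleep

    from picamera import PiCamera
    from PIL import Image
    import pytesseract
    import pyttsx3

    camera = PiCamera()
    speech = pyttsx3.init()

    def read_aloud(image_path='/tmp/capture.jpg'):
        # 1. Capture what the wearer is looking at
        camera.capture(image_path)
        # 2. Run optical character recognition over the image
        text = pytesseract.image_to_string(Image.open(image_path))
        # 3. Speak any recognised words back to the wearer
        if text.strip():
            speech.say(text)
            speech.runAndWait()

    sleep(2)      # let the camera sensor settle
    read_aloud()  # on the real device this is triggered by a blink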

A collage of images and text explaining how OTON GLASS works — OTON GLASS RASPBERRY PI GLASSES FOR DYSLEXIC USERS

The initial prototype of OTON GLASS had a 15-second delay between capturing text and replaying audio. This was cut down to three seconds in the team’s second prototype, designed in CAD software and housed within a 3D-printed case. The makers were then able to do real-world testing of the prototype to collect feedback from dyslexic users, and continued to upgrade the device based on user opinions.

Awards buzz

OTON GLASS is on its way to public distribution this year, and is currently doing the rounds at various trade and tech shows throughout Japan. Models are also available for trial at the Japan Blind Party Association, Kobe Eye Centre, and Nippon Keihan Library. In 2016, the device was runner-up for the James Dyson Award, and it has also garnered attention at various other awards shows and in the media. We’re looking forward to getting our hands on OTON GLASS, and we can’t wait to find out where the team will take this device in the future.

The post OTON GLASS: turning text to speech appeared first on Raspberry Pi.

Copyright Trolls Target Up to 22,000 Norwegians for Movie Piracy

Post Syndicated from Andy original https://torrentfreak.com/copyright-trolls-target-up-to-22000-norwegians-for-movie-piracy-180220/

Last January it was revealed that after things had become tricky in the US, the copyright trolls behind the action movie London Has Fallen were testing out the Norwegian market.

Reports emerged of letters being sent out to local Internet users by Danish law firm Njord Law, each demanding a cash payment of 2,700 NOK (around US$345). Failure to comply, the company claimed, could result in a court case and damages of around $12,000.

The move caused outrage locally, with consumer advice groups advising people not to pay and even major anti-piracy groups distancing themselves from the action. However, in May 2017 it appeared that progress had been made in stopping the advance of the trolls when another Njord Law case running since 2015 hit the rocks.

The law firm previously sent a request to the Oslo District Court on behalf of entertainment company Scanbox asking ISP Telenor to hand over subscribers’ details. In May 2016, Scanbox won its case and Telenor was ordered to hand over the information.

On appeal, however, the tables were turned when it was decided that evidence supplied by the law firm failed to show that sharing carried out by subscribers was substantial.

Undeterred, Njord Law took the case all the way to the Supreme Court. The company lost when a panel of judges found that the evidence presented against Telenor’s customers wasn’t good enough to prove infringement beyond a certain threshold. But Njord Law still wasn’t done.

More than six months on, the ruling from the Supreme Court only seems to have provided the company with a template. If the law firm could show that the scale of sharing exceeds the threshold set by Norway’s highest court, then disclosure could be obtained. That appears to be the case now.

In a ruling handed down by the Oslo District Court in January, it’s revealed that Njord Law and its partners handed over evidence which shows 23,375 IP addresses engaged in varying amounts of infringing behavior over an extended period. The ISP they have targeted is being kept secret by the court but is believed to be Telenor.

Using information supplied by German anti-piracy outfit MaverickEye (which is involved in numerous copyright troll cases globally), Njord Law set out to show that the conduct of the alleged pirates had been exceptional for a variety of reasons, categorizing them variously (but non-exclusively) as follows:

– IP addresses involved in BitTorrent swarm sizes greater than 10,000 peers/pirates
– IP addresses that have shared at least two of the plaintiffs’ movies
– IP addresses making available the plaintiffs’ movies on at least two individual days
– IP addresses that made available at least ten movies in total
– IP addresses that made available different movies on at least ten individual days
– IP addresses that made available movies from businesses and public institutions

While rejecting some categories, the court was satisfied that 21,804 of the 23,375 IP addresses presented by Njord Law met or exceeded the criteria for disclosure. It’s still not clear how many of these IP addresses identify unique subscribers, but many thousands are expected.

“For these users, it has been established that the gravity, extent, and harm of the infringement are so great that consideration for the rights holder’s interests in accessing information identifying the [allegedly infringing] subscribers is greater than the consideration of the subscribers’,” the court writes in its ruling.

“Users’ confidence that their private use of the Internet is protected from public access is a generally important factor, but not in this case where illegal file sharing has been proven. Nor has there been any information stating that the offenders in the case are children or anything else which implies that disclosure of information about the holder of the subscriber should be problematic.”

While the ISP (Telenor) will now have to spend time and resources disclosing its subscribers’ personal details to the law firm, it will be compensated for its efforts. The Oslo District Court has ordered Njord Law to pay costs of NOK 907,414 (US$115,822) plus NOK 125 (US$16.00) for every IP address and associated details it receives.

The decision can be appealed but when contacted by Norwegian publication Nettavisen, Telenor declined to comment on the case.

There is now the question of what Njord Law will do with the identities it obtains. It seems very likely that it will ask for a sum of money to make a potential lawsuit go away, but if a subscriber refuses to pay, it will still need to take them to court in order to extract payment.

This raises the challenge of proving that the subscriber is the actual infringer when it could be anyone in a household. But that battle will have to wait until another day.

The full decision of the Oslo District Court can be found here (Norwegian)

Running ActiveMQ in a Hybrid Cloud Environment with Amazon MQ

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/running-activemq-in-a-hybrid-cloud-environment-with-amazon-mq/

This post courtesy of Greg Share, AWS Solutions Architect

Many organizations, particularly enterprises, rely on message brokers to connect and coordinate different systems. Message brokers enable distributed applications to communicate with one another, serving as the technological backbone for their IT environment, and ultimately their business services. Applications depend on messaging to work.

In many cases, those organizations have started to build new or “lift and shift” applications to AWS. In some cases, there are applications, such as mainframe systems, too costly to migrate. In these scenarios, those on-premises applications still need to interact with cloud-based components.

Amazon MQ is a managed message broker service for ActiveMQ that enables organizations to send messages between applications in the cloud and on-premises to enable hybrid environments and application modernization. For example, you can invoke AWS Lambda from queues and topics managed by Amazon MQ brokers to integrate legacy systems with serverless architectures. ActiveMQ is an open-source message broker written in Java that is packaged with clients in multiple languages, the Java Message Service (JMS) client being one example.

This post shows how you can use Amazon MQ to integrate on-premises and cloud environments using the network of brokers feature of ActiveMQ. It provides configuration parameters for a one-way duplex connection for the flow of messages from an on-premises ActiveMQ message broker to Amazon MQ.

ActiveMQ and the network of brokers

First, look at queues within ActiveMQ and then at the network of brokers as a mechanism to distribute messages.

The network of brokers behaves differently from models such as physical networks. The key consideration is that the production (sending) of a message is disconnected from the consumption of that message. Think of the delivery of a parcel: The parcel is sent by the supplier (producer) to the end customer (consumer). The path it took to get there is of little concern to the customer, as long as it receives the package.

The same logic can be applied to the network of brokers. Here’s how you build the flow from a simple message to a queue and build toward a network of brokers. Before you look at setting up a hybrid connection, I discuss how a broker processes messages in a simple scenario.

When a message is sent from a producer to a queue on a broker, the following steps occur:

  1. A message is sent to a queue from the producer.
  2. The broker persists this in its store or journal.
  3. At this point, an acknowledgement (ACK) is sent to the producer from the broker.

When a consumer looks to consume the message from that same queue, the following steps occur:

  1. The message listener (consumer) calls the broker, which creates a subscription to the queue.
  2. Messages are fetched from the message store and sent to the consumer.
  3. The consumer acknowledges that the message has been received before processing it.
  4. Upon receiving the ACK, the broker sets the message as having been consumed. By default, this deletes it from the queue.
    • You can set the consumer to ACK after processing by setting up transaction management or handle it manually using Session.CLIENT_ACKNOWLEDGE.
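To make the acknowledgement flow concrete, here is a minimal Python consumer sketch using the stomp.py client (ActiveMQ speaks STOMP alongside OpenWire). The host, credentials, and queue name are placeholders, and client-individual acknowledgement is chosen so the ACK is sent only after processing.

    import time

    import stomp

    class QueueListener(stomp.ConnectionListener):
        def __init__(self, conn):
            self.conn = conn

        def on_message(self, frame):
            print('processing:', frame.body)
            # The broker only marks the message as consumed once we
            # explicitly acknowledge it after processing
            self.conn.ack(frame.headers['message-id'],
                          frame.headers['subscription'])

    conn = stomp.Connection([('broker.example.com', 61613)])
    conn.set_listener('', QueueListener(conn))
    conn.connect('user', 'password', wait=True)
    conn.subscribe(destination='/queue/testqueue', id='1', ack='client-individual')
    time.sleep(60)  # keep the process alive while messages arrive
    conn.disconnect()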

Static propagation

I now introduce the concept of static propagation with the network of brokers as the mechanism for message transfer from on-premises brokers to Amazon MQ.  Static propagation refers to message propagation that occurs in the absence of subscription information. In this case, the objective is to transfer messages arriving at your selected on-premises broker to the Amazon MQ broker for consumption within the cloud environment.

After you configure static propagation with a network of brokers, the following occurs:

  1. The on-premises broker receives a message from a producer for a specific queue.
  2. The on-premises broker sends (statically propagates) the message to the Amazon MQ broker.
  3. The Amazon MQ broker sends an acknowledgement to the on-premises broker, which marks the message as having been consumed.
  4. Amazon MQ holds the message in its queue ready for consumption.
  5. A consumer connects to Amazon MQ broker, subscribes to the queue in which the message resides, and receives the message.
  6. Amazon MQ broker marks the message as having been consumed.

Getting started

The first step is creating an Amazon MQ broker.

  1. Sign in to the Amazon MQ console and launch a new Amazon MQ broker.
  2. Name your broker and choose Next step.
  3. For Broker instance type, choose your instance size:
  4. For Deployment mode, enter one of the following:
    Single-instance broker for development and test implementations (recommended)
    Active/standby broker for high availability in production environments
  5. Scroll down and enter your user name and password.
  6. Expand Advanced Settings.
  7. For VPC, Subnet, and Security Group, pick the values for the resources in which your broker will reside.
  8. For Public Accessibility, choose Yes, as connectivity is internet-based. Another option would be to use private connectivity between your on-premises network and the VPC, an example being an AWS Direct Connect or VPN connection. In that case, you could set Public Accessibility to No.
  9. For Maintenance, leave the default value, No preference.
  10. Choose Create Broker. Wait several minutes for the broker to be created.

After creation is complete, you see your broker listed.

For connectivity to work, you must configure the security group where Amazon MQ resides. For this post, I focus on the OpenWire protocol.

For OpenWire connectivity, allow port 61617 access for Amazon MQ from your on-premises ActiveMQ broker source IP address. For alternate protocols, see the Amazon MQ broker configuration information for the ports required:

OpenWire – ssl://xxxxxxx.xxx.com:61617
AMQP – amqp+ssl://xxxxxxx.xxx.com:5671
STOMP – stomp+ssl://xxxxxxx.xxx.com:61614
MQTT – mqtt+ssl://xxxxxxx.xxx.com:8883
WSS – wss://xxxxxxx.xxx.com:61619

Configuring the network of brokers

Configuring the network of brokers with static propagation occurs on the on-premises broker by applying changes to the following file:
<activemq install directory>/conf/activemq.xml

Network connector

This is the first configuration item required to enable a network of brokers. It is only required on the on-premises broker, which initiates and creates the connection with Amazon MQ. This connection, after it’s established, enables the flow of messages in either direction between the on-premises broker and Amazon MQ. The focus of this post is the uni-directional flow of messages from the on-premises broker to Amazon MQ.

The default activemq.xml file does not include the network connector configuration. Add this with the networkConnector element. In this scenario, edit the on-premises broker activemq.xml file to include the following information between <systemUsage> and <transportConnectors>:

                name="Q:source broker name->target broker name"
                uri="static:(ssl:// aws mq endpoint:61617)" 
                    <queue physicalName="queuename"/>
                      <queue physicalName=">" />

The following are the most important elements when configuring your on-premises broker:

  • name – Name of the network bridge. In this case, it specifies two things:
    • That this connection relates to an ActiveMQ queue (Q) as opposed to a topic (T), for reference purposes.
    • The source broker and target broker.
  • duplex – Setting this to false ensures that messages traverse uni-directionally from the on-premises broker to Amazon MQ.
  • uri – Specifies the remote endpoint to which to connect for message transfer. In this case, it is an OpenWire endpoint on your Amazon MQ broker. This information can be obtained from the Amazon MQ console or via the API.
  • username and password – The same username and password configured when creating the Amazon MQ broker, and used to access the Amazon MQ ActiveMQ console.
  • networkTTL – Number of brokers in the network through which messages and subscriptions can pass. Leave this setting at the current value, if it is already included in your broker connection.
  • staticallyIncludedDestinations > queue physicalName – The destination ActiveMQ queue for which messages are destined. This is the queue that is propagated from the on-premises broker to the Amazon MQ broker for message consumption.

After the network connector is configured, you must restart the ActiveMQ service on the on-premises broker for the changes to be applied.

Verify the configuration

There are a number of places within the ActiveMQ console of your on-premises and Amazon MQ brokers to browse to verify that the configuration is correct and the connection has been established.

On-premises broker

Launch the ActiveMQ console of your on-premises broker and navigate to Network. You should see an active network bridge similar to the following:

This identifies that the connection between your on-premises broker and your Amazon MQ broker is up and running.

Now navigate to Connections and scroll to the bottom of the page. Under the Network Connectors subsection, you should see a connector labeled with the name: value that you provided within the ActiveMQ.xml configuration file. You should see an entry similar to:

Amazon MQ broker

Launch the ActiveMQ console of your Amazon MQ broker and navigate to Connections. Scroll to the Connections openwire subsection and you should see a connection specified that references the name: value that you provided within the ActiveMQ.xml configuration file. You should see an entry similar to:

If you configured the uri: for AMQP, STOMP, MQTT, or WSS as opposed to OpenWire, you would see this connection under the corresponding section of the Connections page.

Testing your message flow

The setup described outlines a way for messages produced on premises to be propagated to the cloud for consumption in the cloud. This section provides steps on verifying the message flow.

Verify that the queue has been created

After you specify a queue name (TestingQ in this example) as staticallyIncludedDestinations > queue physicalName: and your ActiveMQ service starts, you see the following on your on-premises ActiveMQ console Queues page.

As you can see, no messages have been sent but you have one consumer listed. If you then choose Active Consumers under the Views column, you see Active Consumers for TestingQ.

This is telling you that your Amazon MQ broker is a consumer of your on-premises broker for the testing queue.

Produce and send a message to the on-premises broker

Now, produce a message on an on-premises producer and send it to your on-premises broker to a queue named TestingQ.
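If you don’t have a producer to hand, a few lines of Python with the stomp.py client are enough; the connection details here are placeholders, and any OpenWire or JMS client would work equally well.

    import stomp

    # Connect to the on-premises broker and drop one message on TestingQ
    conn = stomp.Connection([('onprem-broker.example.com', 61613)])
    conn.connect('user', 'password', wait=True)
    conn.send(destination='/queue/TestingQ', body='hello from on-premises')
    conn.disconnect()

If you then navigate back to the queues page of your on-premises ActiveMQ console, you see that the messages enqueued and messages dequeued column counts for your TestingQ queue have changed: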

What this means is that the message originating from the on-premises producer has traversed the on-premises broker and propagated immediately to the Amazon MQ broker. At this point, the message is no longer available for consumption from the on-premises broker.

If you access the ActiveMQ console of your Amazon MQ broker and navigate to the Queues page, you see the following for the TestingQ queue:

This means that the message originally sent to your on-premises broker has traversed the network of brokers unidirectional network bridge, and is ready to be consumed from your Amazon MQ broker. The indicator is the Number of Pending Messages column.

Consume the message from an Amazon MQ broker

Connect to the Amazon MQ TestingQ queue from a consumer within the AWS Cloud environment for message consumption. Log on to the ActiveMQ console of your Amazon MQ broker and navigate to the Queue page:

As you can see, the Number of Pending Messages column figure has changed to 0 as that message has been consumed.

This diagram outlines the message lifecycle from the on-premises producer to the on-premises broker, traversing the hybrid connection between the on-premises broker and Amazon MQ, and finally consumption within the AWS Cloud.


Summary

This post focused on an ActiveMQ-specific scenario for transferring messages within an ActiveMQ queue from an on-premises broker to Amazon MQ.

For other on-premises brokers, such as IBM MQ, another approach would be to run ActiveMQ on-premises broker and use JMS bridging to IBM MQ, while using the approach in this post to forward to Amazon MQ. Yet another approach would be to use Apache Camel for more sophisticated routing.

I hope that you have found this example of hybrid messaging between an on-premises environment and the AWS Cloud to be useful. Many customers are already using on-premises ActiveMQ brokers, and this is a great use case for enabling hybrid cloud scenarios.

To learn more, see the Amazon MQ website and Developer Guide. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.


Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer- AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges in your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.

SAM template resources

The following sections show the code for the resources defined in the template.
Lambda function

  ParameterStoreBlogFunctionDev:  # logical ID assumed; not shown in the original excerpt
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        - PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        - PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter

ParameterStoreBlogDevParameter:  # logical ID assumed; not shown in the extracted snippet
  Type: AWS::SSM::Parameter
  Properties:
    Name: '/dev/parameterStoreBlog/appConfig'
    Description: 'Sample dev config values for my app'
    Type: String
    Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key

ParameterStoreBlogDevEncryptionKeyAlias:  # logical ID assumed; not shown in the extracted snippet
  Type: AWS::KMS::Alias
  Properties:
    AliasName: 'alias/ParameterStoreBlogKeyDev'
    TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

ParameterStoreBlogDevEncryptionKey:
  Type: AWS::KMS::Key
  Properties:
    Description: 'Encryption key for secret config values for the Parameter Store blog post'
    Enabled: True
    EnableKeyRotation: False
    KeyPolicy:
      Version: '2012-10-17'
      Id: 'key-default-1'
      Statement:
        - Sid: 'Allow administration of the key & encryption of new values'
          Effect: Allow
          Principal:
            AWS:
              - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
          Action:
            - 'kms:Create*'
            - 'kms:Encrypt'
            - 'kms:Describe*'
            - 'kms:Enable*'
            - 'kms:List*'
            - 'kms:Put*'
            - 'kms:Update*'
            - 'kms:Revoke*'
            - 'kms:Disable*'
            - 'kms:Get*'
            - 'kms:Delete*'
            - 'kms:ScheduleKeyDeletion'
            - 'kms:CancelKeyDeletion'
          Resource: '*'
        - Sid: 'Allow use of the key'
          Effect: Allow
          Principal:
            AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
          Action:
            - 'kms:Encrypt'
            - 'kms:Decrypt'
            - 'kms:ReEncrypt*'
            - 'kms:GenerateDataKey*'
            - 'kms:DescribeKey'
          Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.
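
To make that split concrete, here is a minimal boto3 sketch (not part of the original walkthrough) showing what the key admin can and cannot do. The encrypt call succeeds because the first statement grants kms:Encrypt, while the decrypt call should fail, because that statement deliberately omits kms:Decrypt:

import boto3

kms = boto3.client('kms')

# Running as the key admin (the ${IAMUsername} principal), encryption succeeds
ciphertext = kms.encrypt(
    KeyId='alias/ParameterStoreBlogKeyDev',
    Plaintext=b'a secret config value'
)['CiphertextBlob']

# Decryption with the same credentials should raise AccessDeniedException,
# since only the Lambda function's role is granted kms:Decrypt
kms.decrypt(CiphertextBlob=ciphertext)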

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all

# Patch all supported libraries (including boto3) for X-Ray tracing
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)
    except Exception:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

In the import statements, you bring in the patch_all function from the AWS X-Ray SDK and then call it, which patches boto3 so that X-Ray segments are created for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
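
As one illustration of such an expiry policy (a minimal sketch, not from the original post), you could record when the configuration was loaded and reload it once a TTL elapses. The CONFIG_TTL_SECONDS constant below is a made-up name, and the sketch reuses load_config, MyApp, and full_config_path from the listing above:

import time

CONFIG_TTL_SECONDS = 300  # hypothetical refresh interval; tune to your staleness tolerance
config_loaded_at = None   # module scope, reused across invocations like `app`

def lambda_handler(event, context):
    global app, config_loaded_at
    # Reload config on first use, or once the TTL has expired
    if app is None or time.time() - config_loaded_at > CONFIG_TTL_SECONDS:
        app = MyApp(load_config(full_config_path))
        config_loaded_at = time.time()

    return "MyApp config is " + str(app.get_config()._sections)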

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.
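
The console screenshot isn't reproduced here, but based on the handler's return statement and the parameter value defined in the SAM template, the result should look something like the following (the exact dict repr depends on your Python runtime):

"MyApp config is OrderedDict([('appConfig', OrderedDict([('key1', 'value1'), ('key2', 'value2'), ('key3', 'value3')]))])"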

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter. (A scripted equivalent of this step is sketched just after this list.)
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.
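
If you prefer to script step 2 rather than click through the console, a boto3 equivalent might look like this sketch. Run it with the credentials of the key admin user, since that user holds kms:Encrypt on the key:

import boto3

ssm = boto3.client('ssm')

# SecureString values are encrypted at rest with the specified KMS key
ssm.put_parameter(
    Name='/dev/parameterStoreBlog/appSecrets',
    Type='SecureString',
    KeyId='alias/ParameterStoreBlogKeyDev',
    Value='{"secretKey": "secretValue"}'
)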

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
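
If you find yourself doing this often, the same nudge can be scripted. Any configuration change works; the sketch below bumps the timeout via boto3:

import boto3

lambda_client = boto3.client('lambda')

# Changing any configuration value forces fresh execution environments
# for subsequent invocations
lambda_client.update_function_configuration(
    FunctionName='ParameterStoreBlogFunctionDev',
    Timeout=6
)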

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.
In this screenshot, I’ve invoked the Lambda function twice: once 10.3 minutes ago with a response time of 1.1 seconds, and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.


Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.

Backblaze Hard Drive Stats for 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-for-2017/

Backblaze Drive Stats 2017 Review

Beginning in April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. As of the end of 2017, there are about 88 million entries totaling 23 GB of data. You can download this data from our website if you want to do your own research, but for starters here’s what we found.
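
As a starting point for doing your own research, here is a hedged sketch of loading one daily snapshot with pandas. It assumes the published CSV layout, which includes model and failure (0 or 1) columns; the file name is a placeholder:

import pandas as pd

# One CSV per day, one row per drive per day
df = pd.read_csv('2017-12-31.csv')

# Drive count and failures reported per model on that day
summary = df.groupby('model').agg(drives=('failure', 'size'),
                                  failures=('failure', 'sum'))
print(summary.sort_values('failures', ascending=False).head(10))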


At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We’ll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track. Along the way we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

Hard Drive Reliability Statistics for Q4 2017

At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017.

Hard Drive Annualized Failure Rates for Q4 2017

A few things to remember when viewing this chart:

  • The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
  • There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure.
  • AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only; the worked example below this list shows the arithmetic.
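
To make the AFR definition concrete, this short calculation reproduces the ST4000DM005 figure quoted above from its drive days and single failure:

drive_days = 1255
failures = 1

# Annualized failure rate: failures per drive-year, expressed as a percentage
afr = failures / (drive_days / 365) * 100
print(round(afr, 2))  # prints 29.08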

Bulking Up and Adding On Storage

Looking back over 2017, we not only added new drives, we “bulked up” by swapping out functional but smaller 2 TB, 3 TB, and 4 TB drives for larger 8 TB, 10 TB, and 12 TB drives. The changes in drive quantity by quarter are shown in the chart below:

Backblaze Drive Population by Drive Size

For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.

2017 Hard Drive Failure Stats

Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives.

Hard Drive Annualized Failure Rates

The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.

Let’s take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years.

Annual Hard Drive Failure Rates by Year

The failure rate for each year is calculated for just that year. In looking at the results the following observations can be made:

  • The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
  • While it looks like the failure rates for the 3 TB WDC drives have also decreased, you’ll notice that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, decreasing the data set that dramatically makes trend analysis suspect.
  • The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That’s impressive, but with only 45 drives in use for each model, not statistically useful.
  • The HGST/Hitachi 4 TB models delivered sub 1.0% failure rates for each of the three years. Amazing.

A Few More Numbers

To save you countless hours of looking, we’ve culled through the data to uncover the following tidbits regarding our ever-changing hard drive farm.

  • 116,833 — The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 drives (data drives) in operation. This means 25,528 drives have either failed or been removed from service for some other reason — typically migration.
  • 29,844 — The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
  • 81.76 — The average number of hard drives installed each day in 2017. This includes new drives, migrations, and failure replacements.
  • 95,638 — The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
  • 55.41 — The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
  • 1,508 — The number of hard drives that were replaced as failed in 2017.
  • 4.13 — The average number of hard drives that have failed each day in 2017.
  • 6,795 — The number of hard drives that have failed from April 2013 until the end of December 2017.
  • 3.94 — The average number of hard drives that have failed each day from April 2013 until the end of December 2017.

Can’t Get Enough Hard Drive Stats?

We’ll be presenting the webinar “Backblaze Hard Drive Stats for 2017” on Thursday February 9, 2018 at 10:00 Pacific time. The webinar will dig deeper into the quarterly, yearly, and lifetime hard drive stats and include the annual and lifetime stats by drive size and manufacturer. You will need to subscribe to the Backblaze BrightTALK channel to view the webinar. Sign up today.

As a reminder, the complete data set used to create the information in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone — it is free.

Good luck and let us know if you find anything interesting.

The post Backblaze Hard Drive Stats for 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

2017 Weather Station round-up

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/2017-weather-station/

As we head into 2018 and start looking forward to longer days in the Northern hemisphere, I thought I’d take a look back at last year’s weather using data from Raspberry Pi Oracle Weather Stations. One of the great things about the kit is that as well as uploading all its readings to the shared online Oracle database, it stores them locally on the Pi in a MySQL or MariaDB database. This means you can use the power of SQL queries coupled with Python code to do automatic data analysis.
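
To give a flavour of that approach, here is a hedged sketch of the kind of query behind the monthly summaries below. The table and column names (WEATHER_MEASUREMENT, RAINFALL, CREATED) are assumptions based on the standard Weather Station software, and the connection credentials are placeholders:

import pymysql

# Placeholder credentials; use your own station's database settings
conn = pymysql.connect(host='localhost', user='pi',
                       password='password', database='weather')

with conn.cursor() as cur:
    # Total rainfall per month, assuming RAINFALL is recorded in mm per reading
    cur.execute("""
        SELECT MONTH(CREATED), SUM(RAINFALL)
        FROM WEATHER_MEASUREMENT
        WHERE YEAR(CREATED) = 2017
        GROUP BY MONTH(CREATED)
        ORDER BY MONTH(CREATED)
    """)
    for month, total_mm in cur.fetchall():
        print('Month {}: {:.1f} mm'.format(month, total_mm))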

Soggy Surrey

My Weather Station has only been installed since May, so I didn’t have a full 52 weeks of my own data to investigate. Still, my station recorded more than 70,000 measurements. Living in England, the first thing I wanted to know was: which was the wettest month? Unsurprisingly, both in terms of average daily rainfall and total rainfall, the start of the summer period — exactly when I went on a staycation — was the soggiest.

What about the global Weather Station community?

Even soggier Bavaria

Here things get slightly trickier. Although we have a shiny Oracle database full of all participating schools’ sensor readings, some of the data needs careful interpretation. Many kits are used as part of the school curriculum and do not always record genuine outdoor conditions. Nevertheless, it appears that Adalbert Stifter Gymnasium in Bavaria, Germany, had an even wetter 2017 than my home did.


Where the wind blows

The records from Robert-Dannemann Schule in Westerstede, Germany, are a good example of data that was most likely collected while testing and investigating the weather station sensors, rather than in genuine external conditions. Unless this school’s Weather Station was transported to a planet which suffers from extreme hurricanes, it wasn’t actually subjected to wind speeds above 1000 km/h in November. Dismissing these and all similarly suspect records, I decided to award the ‘Windiest location of the year’ prize to CEIP Noalla-Telleiro, Spain.


This school is right on the coast, and is subject to some strong and squally weather systems.


Weather Station at CEIP Noalla-Telleiro

They’ve mounted their wind vane and anemometer nice and high, so I can see how they were able to record such high wind velocities.

A couple of Weather Stations have recently been commissioned in equally exposed places — it will be interesting to see whether they will record even higher speeds during 2018.

Highs and lows

After careful analysis and a few disqualifications (a couple of Weather Stations in contention for this category were housed indoors), the ‘Hottest location’ award went to High School of Chalastra in Thessaloniki, Greece. There were a couple of Weather Stations (the one at The Marwadi Education Foundation in India, for example) that reported higher average temperatures than Chalastra’s 24.54 ºC. However, they had uploaded far fewer readings and their data coverage of 2017 was only partial.


At the other end of the thermometer, the location with the coldest average temperature is École de la Rose Sauvage in Calgary, Canada, with a very chilly 9.9 ºC.


Weather Station at École de la Rose Sauvage

I suspect this school has a good chance of retaining the title: their lowest 2017 temperature of -24 ºC is likely to be beaten in 2018 due to extreme weather currently bringing a freezing start to the year in that part of the world.


Analyse your own Weather Station data

If you have an Oracle Raspberry Pi Weather Station and would like to perform an annual review of your local data, you can use this Python script as a starting point. It will display a monthly summary of the temperature and rainfall for 2017, and you should be able to customise the code to focus on other sensor data or on a particular time of year. We’d love to see your results, so please share your findings with [email protected], and we’ll send you some limited-edition Weather Station stickers.

The post 2017 Weather Station round-up appeared first on Raspberry Pi.

Spiegelbilder Studio’s giant CRT video walls

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crt-video-walls/

After Spiegelbilder Studio got in touch to share their latest build, we invited Matvey Fridman of the Germany-based production company to write a guest blog post about the CRT video walls they created for the band STRANDKØNZERT.



CRT video wall

About a year ago, we had the idea of building a huge video wall out of old TVs to use in a music video. It took some time, but half a year later we found ourselves in a studio actually building this thing using 30 connected computers, 24 of which were Raspberry Pis.


How we did it

After weeks and months of preproduction and testing, we set aside two consecutive days to build the wall, create the underlying IP network, run a few tests, and then film the artists’ performance in front of it. We actually had 32 Pis (a mixed bag of first-, second-, and third-generation models) and even more TVs ready to go, since we didn’t know what the final build would actually look like. We ended up using 29 separate screens of various sizes hooked up to 24 separate Pis — the remaining five TVs got a daisy-chained video signal out of other monitors for a cool effect. Each Pi had to run free software called PiWall.


Since the TVs only had analogue video inputs, we had to get special composite breakout cables and then adapt the RCA connectors to either SCART, S-Video, or BNC.


As soon as we had all of that running, we connected every Pi to a 48-port network switch that we’d hooked up to a Windows PC acting as a DHCP server to automatically assign IP addresses and handle the multicast addressing. To make remote control of the Raspberry Pis easier, a separate master Linux PC and two MacBook laptops, each with SSH enabled and a Samba server running, joined the network as well.


The MacBook laptops were used to drop two files containing the settings on each Pi. The .pitile file was unique to every Pi and contained their respective IDs. The .piwall file contained the same info for all Pis: the measurements and positions of every single screen to help the software split up the video signal coming in through the network. After every Pi got the command to start the PiWall software, which specifies the UDP multicast address and settings to be used to receive the video stream, the master Linux PC was tasked with streaming the video file to these UDP addresses. Now every TV was showing its section of the video, and we could begin filming.
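
For the curious, here is roughly what those two files look like, based on the PiWall documentation rather than the studio’s actual setup; all section names and geometry values below are invented placeholders. The .piwall file, identical on every Pi, defines the wall and one section per tile, while each Pi’s .pitile file simply names the tile that Pi should display:

# .piwall (same file on every Pi): wall geometry plus one section per tile
[wall]
x=0
y=0
width=3840
height=2160

[tile1]
wall=wall
x=0
y=0
width=640
height=480

# .pitile (unique to each Pi): which tile definition this Pi renders
[tile]
id=tile1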


The whole process and the contents of the files and commands are summarised in the infographic below. A lot of trial and error was involved in the making of this project, but it all worked out well in the end. We hope you enjoy the craft behind the music video even though the music is not for everybody 😉


You can follow Spiegelbilder Studio on Facebook, Twitter, and Instagram. And if you enjoyed the music video, be sure to follow STRANDKØNZERT too.

The post Spiegelbilder Studio’s giant CRT video walls appeared first on Raspberry Pi.