Tag Archives: Waves

Randomly generated, thermal-printed comics

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/random-comic-strip-generation-vomit-comic-robot/

Python code creates curious, wordless comic strips at random, spewing them from the thermal printer mouth of a laser-cut body reminiscent of Disney Pixar’s WALL-E: meet the Vomit Comic Robot!

The age of the thermal printer!

Thermal printers allow you to instantly print photos, data, and text using a few lines of code, with no need for ink. More and more makers are using this handy, low-maintenance bit of kit for truly creative projects, from Pierre Muth’s tiny PolaPi-Zero camera to the sound-printing Waves project by Eunice Lee, Matthew Zhang, and Bomani McClendon (and our own Secret Santa Babbage).

Vomiting robots

Interaction designer and developer Cadin Batrack, whose background is in game design and interactivity, has built the Vomit Comic Robot, which creates “one-of-a-kind comics on demand by processing hand-drawn images through a custom software algorithm.”

The robot is made up of a Raspberry Pi 3, a USB thermal printer, and a handful of LEDs.

Comic Vomit Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

At the press of a button, Processing code selects one of a set of Cadin’s hand-drawn empty comic grids and then randomly picks images from a library to fill in the gaps.

Vomit Comic Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

Each image is associated with data that allows the code to fit it correctly into the available panels. Cadin says of the concept behind his build:

Although images are selected and placed randomly, the comic panel format suggests relationships between elements. Our minds create a story where there is none in an attempt to explain visuals created by a non-intelligent machine.

The Raspberry Pi saves the final image as a high-resolution PNG file (so that Cadin can sell prints on thick paper via Etsy), and a Python script sends it to be vomited up by the thermal printer.
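
Cadin’s own scripts aren’t reproduced in the post, but the final printing step might look something like this minimal Python sketch, assuming a USB thermal printer supported by the python-escpos library; the USB vendor and product IDs below are placeholders you would swap for your own printer’s.

```python
# Minimal sketch of the "send the comic to the printer" step, assuming python-escpos.
# The USB IDs are placeholders; find yours with `lsusb` on the Raspberry Pi.
from escpos.printer import Usb

def print_comic(png_path):
    printer = Usb(0x0416, 0x5011)   # placeholder vendor/product IDs
    printer.image(png_path)         # rasterises and prints the PNG
    printer.cut()                   # feed and cut, if the printer supports it

if __name__ == "__main__":
    print_comic("comic.png")        # the PNG generated by the Processing sketch
```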

Comic Vomit Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

For more about the Vomit Comic Robot, check out Cadin’s blog. If you want to recreate it, you can find the info you need in the Imgur album he has put together.

We ❤ cute robots

We have a soft spot for cute robots here at Pi Towers, and of course we make no exception for the Vomit Comic Robot. If, like us, you’re a fan of adorable bots, check out Mira, the tiny interactive robot by Alonso Martinez, and Peeqo, the GIF bot by Abhishek Singh.

Mira Alonso Martinez Raspberry Pi

The post Randomly generated, thermal-printed comics appeared first on Raspberry Pi.

ISPs Win Landmark Case to Protect Privacy of Alleged Pirates

Post Syndicated from Andy original https://torrentfreak.com/isps-win-landmark-case-protect-privacy-alleged-pirates-180508/

With waves of piracy settlement letters being sent out across the world, the last line of defense for many accused Internet users has been their ISPs.

In a number of regions, notably the United States, Europe, and the UK, most ISPs have given up the fight, handing subscriber details over to copyright trolls with a minimum of resistance. However, there are companies out there prepared to stand up for their customers’ rights, even if it takes them a while to do so.

Over in Denmark, Telenor grew tired of tens of thousands of requests for subscriber details filed by a local law firm on behalf of international copyright troll groups. It previously complied with demands to hand over the details of individuals behind 22,000 IP addresses, around 11% of the 200,000 total handled by ISPs in Denmark. But with no end in sight, the ISP dug in its heels.

“We think there is a fundamental legal problem because the courts do not really decide what is most important: the legal security of the public or the law firms’ commercial interests,” Telenor’s Legal Director Mette Eistrøm Krüger said last year.

Assisted by rival ISP Telia, Telenor subsequently began preparing a case to protect the interests of their customers, refusing in the meantime to comply with disclosure requests in copyright cases. But last October, the District Court ruled against the telecoms companies, ordering them to provide identities to the copyright trolls.

Undeterred, the companies took their case to the Østre Landsret, one of Denmark’s two High Courts. Yesterday their determination paid off with a resounding victory for the ISPs and security for the individuals behind approximately 4,000 IP addresses targeted by Copyright Collection Ltd via law firm Njord Law.

“In its order based on telecommunications legislation, the Court has weighed subscribers’ rights to confidentiality of information regarding their use of the Internet against the interests of rightsholders to obtain information for the purpose of prosecuting claims against the subscribers,” the Court said in a statement.

Noting that the case raised important questions of European Union law and the European Convention on Human Rights, the High Court said that after due consideration it would overrule the decision of the District Court. The rights of the copyright holders do not trump individuals’ right to privacy, it said.

“The telecommunications companies are therefore not required to disclose the names and addresses of their subscribers,” the Court ruled.

Telenor welcomed the decision, noting that it had received countless requests from law firms to disclose the identities of thousands of subscribers but had declined to hand them over, a decision that has now been endorsed by the High Court.

“This is an important victory for our right to protect our customers’ data,” said Telenor Denmark’s Legal Director, Mette Eistrøm Krüger.

“At Telenor we protect our customers’ data and trust – therefore it has been our conviction that we cannot be forced into almost automatically submitting personal data on our customers simply to support some private actors who are driven by commercial interests.”

Noting that it’s been putting up a fight since 2016 against handing over customers’ data for purposes other than investigating serious crime, Telenor said that the clarity provided by the decision is most welcome.

“We and other Danish telecom companies are required to log customer data for the police to fight serious crime and terrorism – but the legislation has just been insufficient in relation to the use of logged data,” Krüger said.

“Therefore I am pleased that with this judgment the High Court has stated that customers’ legal certainty is most important in these cases.”

The decision was also welcomed by Telia Denmark, with Legal Director Lasse Andersen describing the company as being “really really happy” with “a big win.”

“It is a victory for our customers and for all telecom companies’ customers,” Andersen said.

“They can now feel confident that the data that we collect about them cannot be disclosed for purposes other than the terms under which they are collected as determined by the jurisdiction.

“Therefore, anyone and everybody cannot claim our data. We are pleased that throughout the process we have determined that we will not hand over our data to anyone other than the police with a court order,” Andersen added.

But as the ISPs celebrate, the opposite is true for Njord Law and its copyright troll partners.

“It is a sad message to the Danish film and television industry that the possibilities for self-investigating illegal file sharing are complicated and that the work must be left to the police’s scarce resources,” said Jeppe Brogaard Clausen of Njord Law.

While the ISPs finally stood up for users in these cases, Telenor in particular wishes to emphasize that supporting the activities of pirates is not its aim. The company says it does not support illegal file-sharing “in any way” and is actively working with anti-piracy outfit Rights Alliance to prevent unauthorized downloading of movies and other content.

The full decision of the Østre Landsret can be found here (Danish, PDF).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cloud Empire: Meet the Rebel Alliance

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cloud-empire-meet-the-rebel-alliance/

Last week Backblaze made the exciting announcement that through partnerships with Packet and ServerCentral, cloud computing is available to Backblaze B2 Cloud Storage customers.

Those of you familiar with cloud computing will understand the significance of this news. We are now offering the least expensive cloud storage + cloud computing available anywhere. You no longer have to submit to the lock-in tactics and exorbitant prices charged by the other big players in the cloud services biz.

As Robin Harris wrote in ZDNet about last week’s computing partners announcement: “Cloud Empire: Meet the Rebel Alliance.”

We understand that some of our cloud backup and storage customers might be unfamiliar with cloud computing. Backblaze made its name in cloud backup and object storage, and that’s what our customers know us for. In response to customers’ requests, we’ve directly connected our B2 cloud object storage with cloud compute providers. This adds the ability to use and run programs on data once it’s in the B2 cloud, opening up a world of new uses for B2. Just some of the possibilities include media transcoding and rendering, web hosting, application development and testing, business analytics, disaster recovery, on-demand computing capacity (cloud bursting), AI, and mobile and IoT applications.

The world has been moving to a multi-cloud / hybrid cloud world, and customers are looking for more choices than those offered by the existing cloud players. Our B2 compute partnerships build on our mission to offer cloud storage that’s astonishingly easy and low-cost. They enable our customers to move into a more flexible and affordable cloud services ecosystem that provides a greater variety of choices and costs far less. We believe we are helping to fulfill the promise of the internet by allowing customers to choose the best-of-breed services from the best vendors.

If You’re Not Familiar with Cloud Computing, Here’s a Quick Overview

Cloud computing is another component of cloud services, like object storage, that replicates in the cloud a basic function of a computer system. Think of services that operate in a cloud as an infinitely scalable version of what happens on your desktop computer. In your desktop computer you have computing/processing (CPU), fast storage (like an SSD), data storage (like your disk drive), and memory (RAM). Their counterparts in the cloud are computing (CPU), block storage (fast storage), object storage (data storage), and processing memory (RAM).

Computer building blocks

CPU, RAM, fast internal storage, and a hard drive are the basic building blocks of a computer
They also are the basic building blocks of cloud computing

Some customers require only some of these services, such as cloud storage. B2 as a standalone service has proven to be an outstanding solution for those customers interested in backing up or archiving data. There are many customers that would like additional capabilities, such as performing operations on that data once it’s in the cloud. They need object storage combined with computing.

With the just announced compute partnerships, Backblaze is able to offer computing services to anyone using B2. A direct connection between Backblaze’s and our partners’ data centers means that our customers can process data stored in B2 with high speed, low latency, and zero data transfer costs.

Backblaze, Packet and Server Central cloud compute workflow diagram

Cloud service providers package up CPU, storage, and memory into services that you can rent on an hourly basis
You can scale up and down and add or remove services as you need them

How Does Computing + B2 Work?

Those wanting to use B2 with computing will need to sign up for accounts with Backblaze and either Packet or ServerCentral. Packet customers need only select “SJC1” as their region and then get started. The process is also simple for ServerCentral customers — they just need to register with a ServerCentral account rep.

The direct connection between B2 and our compute partners means customers will experience very low latency (less than 10ms) between services. Even better, all data transfers between B2 and the compute partner are free. When combined with Backblaze B2, customers can obtain cloud computing services for as little as 50% of the cost of Amazon’s Elastic Compute Cloud (EC2).
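
If you’re wondering what “processing data stored in B2” might look like from one of those compute instances, here is a hedged Python sketch using Backblaze’s b2sdk library. The bucket name, key names, and the process() step are placeholders, so check the current b2sdk documentation for exact method names before relying on it.

```python
# Illustrative only: download an object from B2, run some compute on it, upload the result.
# Assumes the b2sdk library; credentials, bucket, and file names are placeholders.
from b2sdk.v2 import B2Api, InMemoryAccountInfo

def process(path):
    # Placeholder for the actual workload (transcoding, analytics, rendering, ...).
    print(f"processing {path}")

api = B2Api(InMemoryAccountInfo())
api.authorize_account("production", "YOUR_KEY_ID", "YOUR_APPLICATION_KEY")
bucket = api.get_bucket_by_name("my-b2-bucket")

# Over the direct connection, this transfer is free and low-latency.
bucket.download_file_by_name("incoming/video.mp4").save_to("/tmp/video.mp4")
process("/tmp/video.mp4")
bucket.upload_local_file(local_file="/tmp/video.mp4", file_name="processed/video.mp4")
```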

Opening Up the Cloud “Walled Garden”

Traditionally, cloud vendors charge fees for customers to move data outside the “walled garden” of that particular vendor. These fees reach upwards of $0.12 per gigabyte (GB) for data egress. This large fee for customers accessing their own data restricts users from using a multi-cloud approach and taking advantage of less expensive or better performing options. With free transfers between B2 and Packet or ServerCentral, customers now have a predictable, scalable solution for computing and data storage while avoiding vendor lock-in. Dropbox made waves when they saved $75 million by migrating off of AWS. Adding computing to B2 helps anyone interested in moving some or all of their computing off of AWS and thereby cutting their AWS bill by 50% or more.
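
For a back-of-the-envelope sense of what those egress fees mean, here’s a quick calculation; the 10 TB workload is an illustrative number, not a figure from Backblaze.

```python
# Rough egress cost for pulling 10 TB out of a typical "walled garden" provider.
egress_per_gb = 0.12              # USD per GB, the upper-end figure cited above
workload_gb = 10 * 1024           # an illustrative 10 TB job

print(f"Egress fee elsewhere: ${egress_per_gb * workload_gb:,.2f}")  # $1,228.80
print("Transfer between B2 and Packet/ServerCentral: $0.00")
```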

What are the Advantages of Cloud Storage + Computing?

Using computing and storage in the cloud provides a number of advantages over using in-house resources.

  1. You don’t have to purchase the hardware or software licenses, or provide space and IT resources for the systems.
  2. Cloud computing is available with just a few minutes’ notice, and you pay only for the period of time you need. You avoid having additional hardware on your balance sheet.
  3. Resources are in the cloud and can provide online services to customers, mobile users, and partners located anywhere in the world.
  4. You can isolate the work on these systems from your normal production environment, making them ideal for testing and trying out new applications and development projects.
  5. Computing resources scale when you need them to, providing temporary or ongoing extra resources for expected or unexpected demand.
  6. They can provide redundant and failover services when and if your primary systems are unavailable for whatever reason.

Where Can I Learn More?

We encourage B2 customers to explore the options available at our partner sites, Packet and ServerCentral. They are happy to help customers understand what services are available and how to get started.

We are excited to see what you build! And please tell us in the comments what you are doing or have planned with B2 + computing.

P.S. May the force be with all of us!

The post Cloud Empire: Meet the Rebel Alliance appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

e-paper pocket money tracker using Monzo pots

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/monzo-money-tracker/

Jason Barnett used the pots feature of the Monzo banking API to create a simple e-paper display so that his kids can keep track of their pocket money.

Monzo ePaper Pot Jason Barnett Raspberry Pi

Monzo

For those outside the UK: Monzo is a smartphone-based bank that allows customers to manage their money and payment cards via an app, removing the bank clerk middleman.

In the Monzo banking app, users can set up pots, which allow them to organise their money into various, you guessed it, pots. You want to put aside holiday funds, budget your food shopping, or, like Jason, manage your kids’ pocket money? Using pots is an easy way to do it.

Jason’s Monzo Pot ePaper tracker

After failed attempts at keeping track of his sons’ pocket money via a scrap of paper stuck to the fridge, Jason decided to try a new approach.

He started his build by installing Stretch Lite to the SD card of his Raspberry Pi Zero W. “The Pi will be running headless (without screen, mouse or keyboard)”, he explains on his blog, “so there is no need for a full-fat Raspbian image.” While Stretch Lite was downloading, he set up the Waveshare ePaper HAT on his Zero W. He notes that Pimoroni’s “Inky pHAT would be easiest,” but his tutorial is specific to the Waveshare device.

Monzo ePaper Pot Jason Barnett Raspberry Pi

Before ejecting the SD card, Jason updated the boot partition to allow him to access the Pi via SSH. He talks makers through that process here.

Among the libraries he installed for the project is pyMonzo, a Python wrapper for the Monzo API created by Paweł Adamczak. Monzo is still in its infancy, and the API is partly under construction. Until it’s completed, Paweł’s wrapper offers a more stable way to use it.

After installing the software, it was time to set up the e-paper screen for the tracker. Jason adjusted the code for the API so that the screen reloads information every 15 minutes, displaying the up-to-date amount of pocket money in both kids’ pots.
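
Jason’s own code is on his blog, but the core loop might look roughly like this Python sketch, which polls the Monzo /pots endpoint directly with requests rather than through pyMonzo. The access token, pot names, and draw_display() function are all placeholders, and depending on the API version you may need to pass your account ID to the endpoint.

```python
# Illustrative sketch, not Jason's actual tracker code.
import time
import requests

MONZO_TOKEN = "YOUR_ACCESS_TOKEN"                            # placeholder
POT_NAMES = {"Pocket money: kid 1", "Pocket money: kid 2"}   # placeholder pot names

def get_pot_balances():
    resp = requests.get(
        "https://api.monzo.com/pots",
        headers={"Authorization": f"Bearer {MONZO_TOKEN}"},
    )
    resp.raise_for_status()
    pots = resp.json()["pots"]
    # Monzo reports balances in pence; convert to pounds for display.
    return {p["name"]: p["balance"] / 100 for p in pots if p["name"] in POT_NAMES}

def draw_display(balances):
    # Placeholder: the real build renders this to the Waveshare ePaper HAT.
    for name, amount in balances.items():
        print(f"{name}: £{amount:.2f}")

while True:
    draw_display(get_pot_balances())
    time.sleep(15 * 60)   # refresh every 15 minutes, as in Jason's setup
```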

Here is how Jason describes going to the supermarket with his sons, now that he has completed the tracker:

“Daddy, I want (insert first thing picked up here), I’ve always wanted one of these my whole life!” […] Even though you have never seen that (insert thing here) before, I can quickly open my Monzo app, flick to Account, and say “You have £3.50 in your money box”. If my boy wants it, a 2-second withdrawal is made whilst queueing, and done — he walks away with a new (again, insert whatever he wanted his whole life here) and is happy!

Jason’s blog offers a full breakdown of his project, including all necessary code and the specs for the physical build. Be sure to head over and check it out.

Have you used an API in your projects? What would you build with one?

The post e-paper pocket money tracker using Monzo pots appeared first on Raspberry Pi.

Derek Woodroffe’s steampunk tentacle hat

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/steampunk-tentacle-hat/

Halloween: that glorious time of year when you’re officially allowed to make your friends jump out of their skin with your pranks. For those among us who enjoy dressing up, Halloween is also the occasion to go all out with costumes. And so, dear reader, we present to you: a steampunk tentacle hat, created by Derek Woodroffe.

Finished tentacle hat

Extreme Electronics

Derek is an engineer who loves all things electronics. He’s part of Extreme Kits, and he runs the website Extreme Electronics. Raspberry Pi Zero-controlled Tesla coils are Derek’s speciality — he’s even been on one of the Royal Institution’s Christmas Lectures with them! Skip ahead to 15:06 in this video to see Derek in action:

Let There Be Light! // 2016 CHRISTMAS LECTURES with Saiful Islam – Lecture 1

The first Lecture from Professor Saiful Islam’s 2016 series of CHRISTMAS LECTURES, ‘Supercharged: Fuelling the future’. Watch all three Lectures here: http://richannel.org/christmas-lectures. 2016 marked the 80th anniversary since the BBC first broadcast the Christmas Lectures on TV. To celebrate, chemist Professor Saiful Islam explores a subject that the lectures’ founder – Michael Faraday – addressed in the very first Christmas Lectures – energy.

Wearables

Wearables are electronically augmented items you can wear. They might take the form of spy eyeglasses, clothes with integrated sensors, or, in this case, headgear adorned with mechanised tentacles.

Why did Derek make this? We’re not entirely sure, but we suspect he’s a fan of the Cthulhu mythos. In any case, we were a little astounded by his project. This is how we reacted when Derek tweeted us about it:

Raspberry Pi on Twitter

@ExtElec @extkits This is beyond incredible and completely unexpected.

In fact, we had to recover from a fit of laughter before we actually managed to type this answer.

Making a steampunk tentacle hat

Derek made the ‘skeleton’ of each tentacle out of a net curtain spring, acrylic rings, and four lengths of fishing line. Two servomotors connect to two ends of fishing line each, and pull them to move the tentacle.

net curtain spring and acrylic rings forming a mechanic tentacle skeleton - steampunk tentacle hat by Derek Woodroffe
Two servos connecting to lengths of fishing line - steampunk tentacle hat by Derek Woodroffe

Then he covered the tentacles with nylon stockings and liquid latex, glued suckers cut out of MDF onto them, and mounted them on an acrylic base. The eight motors connect to a Raspberry Pi via an I2C 8-port PWM controller board.

artificial tentacles - steampunk tentacle hat by Derek Woodroffe
8 servomotors connected to a controller board and a raspberry pi- steampunk tentacle hat by Derek Woodroffe

The Pi makes the servos pull the tentacles so that they move in sine waves in both the x and y directions, seemingly of their own accord. Derek cut open the top of a hat to insert the mounted tentacles, and he used more liquid latex to give the whole thing a slimy-looking finish.
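
Derek’s post doesn’t include the control code, but the motion he describes could be sketched like this in Python, assuming the PWM board is a PCA9685-compatible controller driven with Adafruit’s ServoKit library; the channel assignments, amplitudes, and phase offsets are invented for illustration.

```python
# Sketch of the sine-wave tentacle motion; assumes a PCA9685-style I2C PWM board
# driven with Adafruit's ServoKit library. All tuning values are illustrative.
import math
import time

from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)                 # PCA9685 boards expose 16 PWM channels
PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]    # (x-axis servo, y-axis servo) per tentacle

t = 0.0
while True:
    for i, (x_ch, y_ch) in enumerate(PAIRS):
        phase = i * math.pi / 4                                 # desynchronise the tentacles
        kit.servo[x_ch].angle = 90 + 45 * math.sin(t + phase)   # sweep around the centre
        kit.servo[y_ch].angle = 90 + 45 * math.cos(t + phase)   # 90 degrees out of phase
    t += 0.1
    time.sleep(0.05)
```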

steampunk tentacle hat by Derek Woodroffe

Iä! Iä! Cthulhu fhtagn!

You can read more about Derek’s steampunk tentacle hat here. He will be at the Beeston Raspberry Jam in November to show off his build, so if you’re in the Nottingham area, why not drop by?

Wearables for Halloween

This build is already pretty creepy, but just imagine it with a sensor- or camera-powered upgrade that makes the tentacles reach for people nearby. You’d have nightmare fodder for weeks.

With the help of the Raspberry Pi, any Halloween costume can be taken to the next level. How could Pi technology help you to win that coveted ‘Scariest costume’ prize this year? Tell us your ideas in the comments, and be sure to share pictures of you in your get-up with us on Twitter, Facebook, or Instagram.

The post Derek Woodroffe’s steampunk tentacle hat appeared first on Raspberry Pi.

Making Waves: print out sound waves with the Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printed-sound-wave/

For fun, Eunice Lee, Matthew Zhang, and Bomani McClendon have worked together to create Waves, an audiovisual project that records people’s spoken responses to personal questions and prints them in the form of a sound wave as a gift for being truthful.

Waves

Waves is a Raspberry Pi project centered around transforming the transience of the spoken word into something concrete and physical. In our setup, a user presses a button corresponding to an intimate question (ex: what’s your motto?) and answers it into a microphone while pressing down on the button.

What are you grateful for?

“I’m grateful for finishing this project,” admits maker Eunice Lee as she presses a button and speaks into the microphone that is part of the Waves project build. After a brief moment, her confession appears on receipt paper as a waveform, and she grins toward the camera, happy with the final piece.

Eunice testing Waves

Sound wave machine

Alongside a Raspberry Pi 3, the Waves device comprises four tactile buttons, a standard USB microphone, and a thermal receipt printer. This type of printer has become easily available to the maker movement from suppliers such as Adafruit and Pimoroni.

Eunice Lee, Matthew Zhang, Bomani McClendon - Sound Wave Raspberry Pi

Definitely more fun than a polygraph test

The trio designed four colour-coded cards that represent four questions, each of which has a matching button on the breadboard. Press the button that belongs to the question to be answered, and Python code directs the Pi to record audio via the microphone. Releasing the button stops the audio recording. “Once the recording has been saved, the script viz.py is launched,” explains Lee. “This script takes the audio file and, using Python matplotlib magic, turns it into a nice little waveform image.”

From there, the Raspberry Pi instructs the thermal printer to produce a printout of the sound wave image along with the question.
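
The team’s real code is linked below, but the waveform-rendering step could be reproduced with a few lines of matplotlib; this sketch assumes a mono 16-bit WAV recording, and the file names are placeholders.

```python
# Minimal re-creation of the waveform-rendering idea, not the team's actual viz.py.
import wave

import matplotlib
matplotlib.use("Agg")                 # headless Pi: render straight to a file
import matplotlib.pyplot as plt
import numpy as np

def render_waveform(wav_path, png_path):
    with wave.open(wav_path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16)    # assumes mono, 16-bit audio

    fig, ax = plt.subplots(figsize=(4, 1.5), dpi=200)  # a narrow strip suits receipt paper
    ax.plot(samples, linewidth=0.3, color="black")
    ax.axis("off")
    fig.savefig(png_path, bbox_inches="tight")
    plt.close(fig)

render_waveform("answer.wav", "answer.png")   # the PNG then goes to the thermal printer
```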

Making for fun

Eunice, Bomani, and Matt, students of design and computer science at Northwestern University in Illinois, built Waves as a side project. They wanted to make something at the intersection of art and technology and were motivated by the pure joy of creating.

Eunice Lee, Matthew Zhang, Bomani McClendon - Sound Wave Raspberry Pi

Making makes people happy

They have noted improvements that can be made to increase the scope of their sound wave project. We hope to see many more interesting builds from these three, and in the meantime we invite you all to look up their code on Eunice’s GitHub to create your own Waves at home.

The post Making Waves: print out sound waves with the Raspberry Pi appeared first on Raspberry Pi.

Organizing Software Deployments to Match Failure Conditions

Post Syndicated from Nick Trebon original https://aws.amazon.com/blogs/architecture/organizing-software-deployments-to-match-failure-conditions/

Deploying new software into production will always carry some amount of risk, and failed deployments (software bugs, misconfigurations, and so on) will occasionally occur. As a service owner, your goal is to reduce the number of these incidents and to limit customer impact when they do occur. One method to reduce potential impact is to shape your deployment strategies around the failure conditions of your service. Thus, when a deployment fails, the service owner has more control over the blast radius as well as the scope of the impact. These strategies require an understanding of how the various components of your system interact, how those components can fail, and how those failures impact your customers. This blog post discusses some of the deployment choices we’ve made on the Route 53 team and how those choices affect the availability of our service.

To begin, I’ll briefly describe some of the deployment procedures and the Route 53 architecture in order to provide some context for the deployment strategies that we have chosen. Hopefully, these examples will reveal strategies that could benefit your own service’s availability. Like many services, Route 53 consists of multiple environments or stages: one for active development, one for staging changes to production, and the production stage itself. The natural tension is that reducing the number of failed deployments in production tends to mean adding more rigidity and process, which slows down the release of new code. At Route 53, we do not enforce a strict release or deployment schedule; individual developers are responsible for verifying their changes in the staging environment and pushing their changes into production. Typically, our deployments proceed in a pipelined fashion. Each step of the pipeline is referred to as a “wave” and consists of some portion of our fleet. A pipeline is a good abstraction, as each wave can be thought of as an independent and separate step. After each wave of the pipeline, the change can be verified — this can include automatic, scheduled, and manual testing as well as the verification of service metrics. Furthermore, we typically space out the earlier waves of production deployment at least 24 hours apart, in order to allow the changes to “bake.” Letting our software bake refers to rolling out software changes slowly, allowing us to validate those changes and verify service metrics with production traffic before pushing the deployment to the next wave. The clear advantage of deploying new code to only a portion of your fleet is that it reduces the impact of a failed deployment to just the portion of the fleet containing the new code. Another benefit of our deployment infrastructure is that it provides us a mechanism to quickly “roll back” a deployment to a previous software version if any problems are detected, which, in many cases, enables us to quickly mitigate a failed deployment.
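
To make the wave idea concrete, here is a purely illustrative sketch of how a pipelined deployment might be described; none of the names, fleet fractions, or bake times come from Route 53’s actual tooling.

```python
# Purely illustrative: a declarative view of a pipelined deployment with bake times.
WAVES = [
    {"name": "staging",     "fleet_fraction": 1.00, "bake_hours": 24},
    {"name": "prod-wave-1", "fleet_fraction": 0.05, "bake_hours": 24},
    {"name": "prod-wave-2", "fleet_fraction": 0.25, "bake_hours": 24},
    {"name": "prod-wave-3", "fleet_fraction": 0.70, "bake_hours": 0},
]

def deploy(wave, version):
    """Placeholder: push `version` to the hosts in `wave`, verify metrics,
    and roll back to the previous version if anything looks wrong."""
    ...

for wave in WAVES:
    deploy(wave, version="1.2.3")
    # In practice the pipeline pauses here for the wave's bake time before continuing.
```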

Based on our experiences, we have further organized our deployments to try and match our failure conditions to further reduce impact. First, our deployment strategies are tailored to the part of the system that is the target of our deployment. We commonly refer to two main components of Route 53: the control plane and the data plane (pictured below). The control plane consists primarily of our API and DNS change propagation system. Essentially, this is the part of our system that accepts a customer request to create or delete a DNS record and then the transmission of that update to all of our DNS servers distributed across the world. The data plane consists of our fleet of DNS servers that are responsible for answering DNS queries on behalf of our customers. These servers currently reside in more than 50 locations around the world. Both of these components have their own set of failure conditions and differ in how a failed deployment will impact customers. Further, a failure of one component may not impact the other. For example, an API outage where customers are unable to create new hosted zones or records has no impact on our data plane continuing to answer queries for all records created prior to the outage. Given their distinct set of failure conditions, the control plane and data plane have their own deployment strategies, which are each discussed in more detail below.

Control Plane Deployments

The bulk of the control plane actually consists of two APIs. The first is our external API, which is reachable from the Internet and is the entry point for customers to create, delete, and view their DNS records. This external API performs authentication and authorization checks on customer requests before forwarding them to our internal API. The second, internal API supports a much larger set of operations than just the ones needed by the external API; it also includes operations required to monitor and propagate DNS changes to our DNS servers, as well as other operations needed to operate and monitor the service. Failed deployments to the external API typically impact a customer’s ability to view or modify their DNS records. The availability of this API is critical, as our customers may rely on the ability to update their DNS records quickly and reliably during an operational event for their own service or site.

Deployments to the external API are fairly straightforward. For increased availability, we host the external API in multiple availability zones. Each wave of deployment consists of the hosts within a single availability zone, and each host in that availability zone is deployed to individually. If any single host deployment fails, the deployment to the entire availability zone is halted automatically. Some host failures may be quickly caught and mitigated by the load balancer for our hosts in that particular availability zone, which is responsible for health checking the hosts. Hosts that fail these load balancer health checks are automatically removed from service by the load balancer. Thus, a failed deployment to just a single host would result in it being removed from service automatically and the deployment halted without any operator intervention. For other types of failed deployments that may not cause the load balancer health checks to fail, restricting waves to a single availability zone allows us to easily flip away from that availability zone as soon as the failure is detected. A similar approach could be applied to services that utilize Route 53 plus ELB in multiple regions and availability zones for their services. ELBs automatically health check their back-end instances and remove unhealthy instances from service. By creating Route 53 alias records marked to evaluate target health (see ELB documentation for how to set this up), if all instances behind an ELB are unhealthy, Route 53 will fail away from this alias and attempt to find an alternate healthy record to serve. This configuration will enable automatic failover at the DNS-level for an unhealthy region or availability zone. To enable manual failover, simply convert the alias resource record set for your ELB to either a weighted alias or associate it with a health check whose health you control. To initiate a failover, simply set the weight to 0 or fail the health check. A weighted alias also allows you the ability to slowly increase the traffic to that ELB, which can be useful for verifying your own software deployments to the back-end instances.
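
As a concrete illustration of that manual failover step, the following boto3 sketch zeroes the weight on a weighted alias record that points at an ELB; the hosted zone IDs, record name, and set identifier are placeholders.

```python
# Illustrative manual failover: set the weight on one weighted alias record to 0
# so Route 53 stops routing traffic to that ELB. All identifiers are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="YOUR_HOSTED_ZONE_ID",
    ChangeBatch={
        "Comment": "Fail away from the us-east-1 ELB",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": "us-east-1-elb",   # identifies this weighted record
                "Weight": 0,                        # 0 = send no traffic here
                "AliasTarget": {
                    "HostedZoneId": "ELB_CANONICAL_HOSTED_ZONE_ID",
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)
```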

For our internal API, the deployment strategy is more complicated (pictured below). Here, our fleet is partitioned by the type of traffic it handles. We classify traffic into three types: (1) low-priority, long-running operations used to monitor the service (batch fleet), (2) all other operations used to operate and monitor the service (operations fleet) and (3) all customer operations (customer fleet). Deployments to the production internal API are then organized by how critical their traffic is to the service as a whole. For instance, the batch fleet is deployed to first because their operations are not critical to the running of the service and we can tolerate long outages of this fleet. Similarly, we prioritize the operations fleet below that of customer traffic as we would rather continue accepting and processing customer traffic after a failed deployment to the operations fleet. For the internal API, we have also organized our staging waves differently from our production waves. In the staging waves, all three fleets are split across two waves. This is done intentionally to allow us to verify that the code changes work in a split-world where multiple versions of the software are running simultaneously. We have found this to be useful in catching incompatibilities between software versions. Since we never deploy software in production to 100% of our fleet at the same time, our software updates must be designed to be compatible with the previous version. Finally, as with the external API, all wave deployments proceed with a single host at a time. For this API, we also include a deep application health check as part of the deployment. Similar to the load balancer health checks for the external API, if this health check fails, the entire deployment is immediately halted.

Data Plane Deployments

As mentioned earlier, our data plane consists of Route 53’s DNS servers, which are distributed across the world in more than 50 distinct locations (we refer to each location as an ‘edge location’). An important consideration with our deployment strategy is how we stripe our anycast IP space across locations. In summary, each hosted zone is assigned four delegation name servers, each of which belongs to a “stripe” (i.e., one quarter of our anycast range). Generally speaking, each edge location announces only a single stripe, so each stripe is therefore announced by roughly 1/4 of our edge locations worldwide. Thus, when a resolver issues a query against each of the four delegation name servers, those queries are directed via BGP to the closest (in a network sense) edge location from each stripe. While the availability and correctness of our API are important, the availability and correctness of our data plane are even more critical. In this case, an outage directly results in an outage for our customers. Furthermore, the impact of serving even a single wrong answer on behalf of a customer is magnified by that answer being cached by both intermediate resolvers and end clients alike. Thus, deployments to our data plane are organized even more carefully, both to prevent failed deployments and to reduce potential impact.

The safest way to deploy and minimize impact would be to deploy to a single edge location at a time. However, with manual deployments that are overseen by a developer, this approach is just not scalable given how frequently we deploy new software to over 50 locations (with more added each year). Thus, most of our production deployment waves consist of multiple locations; the one exception is our first wave, which includes just a single location. Furthermore, this location is specifically chosen because it runs our oldest hardware, which gives us quick notification of any unintended performance degradation. It is important to note that while the caching behavior of resolvers can cause issues if we serve an incorrect answer, resolvers handle other types of failures well. When a recursive resolver receives a query for a record that is not cached, it will typically issue queries to at least three of the four delegation name servers in parallel, and it will use the first response it receives. Thus, in the event that one of our locations is black-holing customer queries (i.e., not replying to DNS queries), the resolver should receive a response from one of the other delegation name servers. In this case, the only impact is to resolvers for which the edge location that is not answering would have been the fastest responder. Now, that resolver will effectively be waiting for the response from the second-fastest stripe. To take advantage of this resiliency, our other waves are organized such that they include edge locations that are geographically diverse, with the intent that for any single resolver, there will be nearby locations that are not included in the current deployment wave. Furthermore, to guarantee that at most a single name server for all customers is affected, waves are actually organized by stripe. Finally, each stripe is spread across multiple waves so that failures impact only a single name server for a portion of our customers. An example of this strategy is depicted below. A few notes: our staging environment consists of a much smaller number of edge locations than production, so single-location waves are possible. Second, each stripe is denoted by color; in this example, we see deployments spread across a blue and an orange stripe. You, too, can think about organizing your deployment strategy around your failure conditions. For example, if you have a database schema used by both your production system and a warehousing system, deploy the change to the warehousing system first to ensure you haven’t broken any compatibility. You might catch problems with the warehousing system before they affect customer traffic.

Conclusions

Our team’s experience operating Route 53 over the last 3+ years has highlighted the importance of reducing the impact of failed deployments. Over the years, we have been able to identify some of the common failure conditions and to organize our software deployments in such a way that we increase the ease of mitigation while decreasing the potential impact to our customers.

– Nick Trebon