Tag Archives: Cloud Storage

The Life and Times of a Backblaze Hard Drive

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/life-and-times-of-a-backblaze-hard-drive/

Seagate 12 TB hard drive

Backblaze likes to talk about hard drive failures — a lot. What we haven’t talked much about is how we deal with those failures: the daily dance of temp drives, replacement drives, and all the clones that it takes to keep over 100,000 drives healthy. Let’s go behind the scenes and take a look at that dance from the eyes of one Backblaze hard drive.

After sitting still for what seemed like forever, ZCH007BZ was on the move. ZCH007BZ, let’s call him Zach, is a Seagate 12 TB hard drive. For the last few weeks, Zach and over 6,000 friends had been securely sealed inside their protective cases in the ready storage area of a Backblaze data center. Being a hard disk drive, Zach’s modest dream was to be installed in a system, spin merrily, and store data for many years to come. And now the wait was nearly over. Or was it?

Hard drives in wrappers

The Life of Zach

Zach was born in a factory in Singapore and shipped to the US, eventually finding his way to Backblaze, but he didn’t know that. He had sat sealed in the dark for weeks. Now Zach and boxes of other drives were removed from their protective cases and gently stacked on a cart. Zach was near the bottom of the pile, but even he could see endless columns of beautiful red boxes stacked seemingly to the sky. “Backblaze!” one of the drives on the cart whispered. All the other drives gasped with recognition. Thank goodness the noise-cancelling headphones worn by all Backblaze Data Center Techs covered the drives’ collective excitement.

While sitting in the dark, the drives had gossiped about where they were: a data center, a distribution warehouse, a Costco, or a Best Buy. Backblaze came up a few times, but that idea was squashed — they couldn’t be that lucky. After all, Backblaze was the only place where a drive could be famous. Before Backblaze, hard drives labored in anonymity. Occasionally, one or two would be seen in a hard drive teardown article, but even that sort of exposure had died out a couple of years earlier. But Backblaze publishes everything about their drives: their model numbers, their serial numbers, heck, even their S.M.A.R.T. statistics. There was a rumor that hard drives worked extra hard at Backblaze because they knew they would be in the public eye. With red Backblaze Storage Pods as far as the eye could see, Zach and friends were about to find out.

Drive with guide

The cart Zach and his friends were on glided to a stop at the production build facility. This is where Storage Pods are filled with drives and tested before being deployed. The cart stopped by the first of twenty V6.0 Backblaze Storage Pods that together would form a Backblaze Vault. At each Storage Pod station, 60 drives were unloaded from the cart. The serial number of each drive was recorded along with the Storage Pod ID and drive location in the pod. Finally, each drive was fitted with a pair of drive guides and slid into its new home as a production drive in a Backblaze Storage Pod. “Spin long and prosper,” Zach said quietly each time the lid of a Storage Pod snapped in place, covering the 60 giddy hard drives inside. The process was repeated for the remaining 19 Storage Pods, and when it was done Zach remained on the cart. He would not be installed in a production system today.

The Clone Room

Zach and the remaining drives on the cart were slowly wheeled down the hall. Bewildered, they were rolled into the clone room. “What’s a clone room?” Zach asked himself. The drives on the cart were divided into two groups, with one group being placed on the clone table and the other being placed on the test table. Zach was on the test table.

Almost as soon as Zach was placed on the test table, the DC Tech picked him up again and placed him and several other drives into a machine. He was about to get formatted. The entire formatting process only took a few minutes for Zach, as it did for all of the other drives on the test table. Zach counted 25 drives, including himself.

Still confused and a little sore from the formatting, Zach and two other drives were picked up from the bench by a different DC Tech. She recorded their vitals — serial number, manufacturer, and model — and left the clone room with all three drives on a different cart.

Dreams of a Test Drive

Luigi, Storage Pod lift

The three drives were back on the data center floor with red Storage Pods all around. The DC Tech had maneuvered Luigi, the local Storage Pod lift unit, to hold a Storage Pod she was sliding from a data center rack. The lid was opened, the tech attached a grounding clip, and then removed one of the drives in the Storage Pod. She recorded the vitals of the removed drive. While she was doing so, Zach could hear the removed drive breathlessly mumble something about media errors, but before Zach could respond, the tech picked him up, attached drive guides to his frame, and gently slid him into the Storage Pod. The tech updated her records, closed the lid, and slid the pod back into place. A few seconds later, Zach felt a jolt of electricity pass through his circuits, and he and 59 other drives spun to life. Zach was now part of a production Backblaze Storage Pod.

First, Zach was introduced to the other 19 members of his tome. There are 20 drives in a tome, each living in a separate Storage Pod. Files are divided (sharded) across these 20 drives using Backblaze’s open-sourced erasure coding algorithm.

Zach’s first task was to rebuild all of the files that were stored on the drive he replaced. He’d do this by asking for pieces (shards) of all the files from the 19 other drives in his tome. He only needed 17 of the pieces to rebuild a file, but he asked everyone in case there was a problem. Rebuilding was hard work, and the other drives were often busy with reading files, performing shard integrity checks, and so on. Depending on how busy the system was, and how full the drives were, it might take Zach a couple of weeks to rebuild the files and get him up to speed with his contemporaries.
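For the storage geeks keeping score, here is a minimal sketch of the bookkeeping described above. It treats the erasure-coding math itself as a black box (Backblaze’s production implementation is its open-sourced Reed-Solomon library) and only illustrates the layout: 17 data shards plus 3 parity shards per file, one shard per drive across the 20 pods of a tome, with any 17 surviving shards enough to rebuild. The class and method names are invented for illustration, not taken from Backblaze’s code.

```python
# Illustrative sketch of shard placement in a tome (hypothetical names).
# The real reconstruction math is Reed-Solomon (17 data + 3 parity shards);
# here it is treated as a black box and only the layout is modeled.

DATA_SHARDS = 17
PARITY_SHARDS = 3
TOME_WIDTH = DATA_SHARDS + PARITY_SHARDS  # 20 drives, one per Storage Pod


class Tome:
    def __init__(self, drive_serials):
        assert len(drive_serials) == TOME_WIDTH
        # drive serial -> set of file ids with a shard on that drive
        self.shards = {serial: set() for serial in drive_serials}

    def store_file(self, file_id):
        # One shard of every file lands on every drive in the tome.
        for shard_set in self.shards.values():
            shard_set.add(file_id)

    def can_rebuild(self, file_id, failed_serial):
        # Any 17 of the 20 shards are enough to reconstruct the file.
        surviving = sum(1 for serial, files in self.shards.items()
                        if serial != failed_serial and file_id in files)
        return surviving >= DATA_SHARDS


tome = Tome([f"drive-{i:02d}" for i in range(TOME_WIDTH)])
tome.store_file("file-0001")
print(tome.can_rebuild("file-0001", failed_serial="drive-07"))  # True
```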

Nightmares of a Test Drive

Little did he know, but at this point, Zach was still considered a temp replacement drive. The dysfunctional drive that he replaced was making its way back to the clone room where a pair of cloning units, named Harold and Maude in this case, waited. The tech would attempt to clone the contents of the failed drive to a new drive assigned to the clone table. The primary reason for trying to clone a failed drive was recovery speed. A drive can be cloned in a couple of days, but as noted above, it can take up to a couple of weeks to rebuild a drive, especially large drives on busy systems. In short, a successful clone would speed up the recovery process.

For nearly two days straight, Zach was rebuilding. He barely had time to meet his pod neighbors, Cheryl and Carlos. Since they were not rebuilding, they had plenty of time to marvel at how hard Zach was working. He was 25% done and going strong when the Storage Pod powered down. Moments later, the pod was slid out of the rack and the lid popped open. Zach assumed that another drive in the pod had failed, until he felt the spindly, cold fingers of the tech grab him and yank firmly. He was being replaced.

Storage Pod in Backblaze data center

Zach had done nothing wrong. It was just that the clone was successful, with nearly all the files being copied from the previous drive to the smiling clone drive that was putting on Zach’s drive guides and being gently inserted into Zach’s old slot. “Goodbye,” he managed to eke out as he was placed on the cart and watched the tech bring the Storage Pod back to life. Confused, angry, and mostly exhausted, Zach quickly fell asleep.

Zach woke up just in time to see he was in the formatting machine again. The data he had worked so hard to rebuild was being ripped from his platters and replaced randomly with ones and zeroes. This happened multiple times and just as Zach was ready to scream, it stopped, and he was removed from his torture and stacked neatly with a few other drives.

After a while he looked around, and once the lights went out the stories started. Zach wasn’t alone. Several of the other temp drives had pretty much the same story; they thought they had found a home, only to be replaced by some uppity clone drive. One of the temp drives, Lin, said she had been in three different systems only to be replaced each time by a clone drive. No one wanted to believe her, but no one knew what was next either.

The Day the Clone Died

Zach found out the truth a few days later when he was selected, inspected, and injected as a temp drive into another Storage Pod. Then three days later he was removed, wiped, reformatted, and placed back in the temp pool. He began to resign himself to life as a temp drive. Not exactly glamorous, but he did get his serial number in the Backblaze Drive Stats data tables while he was a temp. That was more than the millions of other drives in the world that would forever be unknown.

On his third temp drive stint, he was barely in the pod a day when the lid opened and he was unceremoniously removed. This was the life of a temp drive, and when the lid opened on the fourth day of his fourth temp drive shift, he just closed his eyes and waited for his dream to end again. Except, this time, the tech’s hand reached past him and grabbed a drive a few slots away. That unfortunate drive had passed the night before, a full-fledged crash. Zach, like all the other drives nearby, had heard the screams.

Another temp drive Zach knew from the temp table replaced the dead drive, then the lid was closed, the pod slid back into place, and power was restored. With that, Zach doubled down on getting rebuilt — maybe if he could finish before the clone did, then he could stay. What Zach didn’t know was that the clone process for the drive he had replaced had failed. This happens about half the time. Zach was home free; he just didn’t know it.

In a couple of days, Zach finished rebuilding and became a real member of a production Backblaze Storage Pod. He now spends his days storing and retrieving data, getting his bits tested by shard integrity checks, and having his S.M.A.R.T. stats logged for Backblaze Drive Stats. His hard drive life is better than he ever dreamed.

The post The Life and Times of a Backblaze Hard Drive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Interview With the Podfather: Tim Nufire on the Origins of the Pod

Post Syndicated from Patrick Thomas original https://www.backblaze.com/blog/origins-of-the-pod/

Tim, the Podfather, working in data center

It’s early 2008, the concept of cloud computing is only just breaking into public awareness, and the economy is in the tank. Despite this less-than-kind business environment, five intrepid Silicon Valley veterans quit their jobs and pooled together $75K to launch a company with a simple goal: provide an easy-to-use, cloud-based backup service at a price that no one in the market could beat — $5 per month per computer.

The only problem: both hosted storage (through existing cloud services) and purchased hardware (buying servers from vendors like Dell) were too expensive to hit this price point. Enter Tim Nufire, aka The Podfather.

Tim led the effort to build what we at Backblaze call the Storage Pod: the physical hardware our company has relied on for data storage for more than a decade. On the ten-year anniversary of the open sourcing of our Storage Pod 1.0 design, we sat down with Tim to relive the twists and turns that led from a crew of backup enthusiasts in a Palo Alto apartment to a company with four data centers spread across the world, holding 2,100 Storage Pods and closing in on an exabyte of storage.

✣   ✣   ✣

Editors: So Tim, it all started with the $5 price point. I know we did market research and that was the price at which most people shrugged and said they’d pay for backup. But it was so audacious! The tech didn’t exist to offer that price. Why did you start there?

Tim Nufire: It was the pricing given to us by the competitors; they didn’t give us a lot of choice. But it was never a question of if we should do it, only how we would do it. I had been managing my own backups for my entire career; I cared about backups. So it’s not like backup was new, or particularly hard. I mean, I firmly believe Brian Wilson’s (Backblaze’s Chief Technical Officer) top line: You read a byte, you write a byte. You can read the byte more gently than other services so as to not impact the system someone is working on. You might be able to read a byte a little faster. But at the end of the day, it’s an execution game, not a technology game. We simply had to out-execute the competition.

E: Easy to say now, with a company of 113 employees and more than a decade of success behind us. But at that time, you were five guys crammed into a Palo Alto apartment with no funding and barely any budget and the competition — Dell, HP, Amazon, Google, and Microsoft — they were huge! How do you approach that?

TN: We always knew we could do it for less. We knew that the math worked. We knew what the cost of a 1 TB hard drive was, so we knew how much it should cost to store data. We knew what those markups were. We knew, looking at a Dell 2900, how much the margin was in that box. We knew they were overcharging. At that time, I could not build a desktop computer for less than Dell could build it. But I could build a server at half their cost.

I don’t think Dell or anyone else was being irrational. As long as they have customers willing to pay their hard margins, they can’t adjust for the potential market. They have to get to the point where they have no choice. We didn’t have that luxury.

So, at the beginning, we were reluctant hardware manufacturers. We were manufacturing because we couldn’t afford to pay what people were charging, not because we had any passion for hardware design.

E: Okay, so you came on at that point to build a cloud. Is that where your title comes from? Chief Cloud Officer? The pods were a little ways down the road, so Podfather couldn’t have been your name yet. …

TN: This was something like December, 2007. Gleb (Budman, the Chief Executive Officer of Backblaze) and I went snowboarding up in Tahoe, and he talked me into joining the team. … My title at first was all wrong, I never became the VP of Engineering, in any sense of the word. That was never who I was. I held the title for maybe five years, six years before we finally changed it. Chief Cloud Officer means nothing, but it fits better than anything else.

E: It does! You built the cloud for Backblaze with the Storage Pod as your water molecule (if we’re going to beat the cloud metaphor to death). But how does it all begin? Take us back to that moment: the podception.

TN: Well, the first pod, per se, was just a bunch of USB drives strapped to a shelf in the data center attached to two Dell 2900 towers. It didn’t last more than an hour in production. As soon as it got hit with load, it just collapsed. Seriously! We went live on this and it lasted an hour. It was a complete meltdown.

Two things happened: First, the bus was completely unstable, so the USB drives were unstable. Second, the DRBD (Distributed Replicated Block Device) — which is designed to protect your data by live mirroring it between the two towers — immediately fell apart. You implement DRBD not because it works in a well-running situation, but because it covers you in the failure mode. And in failure mode it just unraveled — in an hour. It went into split-brain mode under the hardware failures that the USB drives were causing. A well-running DRBD setup is fully mirrored; split-brain mode is when the two sides simply give up and start acting autonomously because they don’t know what the other side is doing and they’re not sure who is boss. The data is essentially inconsistent at that point because you can choose A or B but the two sides are not in agreement.

While the USB specs say you can connect something like 256 or 128 drives to a hub, we were never able to do more than like, five. After something like five or six, the drives just start dropping out. We never really figured it out because we abandoned the approach. I just took the drives out and shoved them inside of the Dells, and those two became pods number 0 and 1. The Dells had room for 10 or 8 drives apiece, and so we brought that system live.

That was what the first six years of this company was like, just a never-ending stream of those kind of moments — mostly not panic inducing, mostly just: you put your head down and you start working through the problems. There’s a little bit of adrenaline, that feeling before a big race of an impending moment. But you have to just keep going.

Tim working on Storage Pod
Tim welcoming another Storage Pod to the family.

E: Wait, so this wasn’t in testing? You were running this live?

TN: Totally! We were in friends-and-family beta at the time. But the software was all written. We didn’t have a lot of customers, but we had launched, and we managed to recover the files: whatever was backed up. The system has always had self-healing built into the client.

E: So where do you go from there? What’s the next step?

TN: These were the early days. We were terrified of any commitments. So I think we had leased a half cabinet at the 365 Main facility in San Francisco, because that was the most we could imagine committing to in a contract: We committed to a year’s worth of this tiny little space.

We had those first two pods — the two Dell towers (0 and 1) — which we eventually built out using external enclosures. So those guys had 40 or 45 drives by the end, with these little black boxes attached to them.

Pod number 2 was the plywood pod, which was another moment of sitting in the data center with a piece of hardware that just didn’t work out of the gate. This was Chris Robertson’s prototype. I credit him with the shape of the basic pod design, because he’s the one who came up with the top-loaded, 45-drive design. He mocked it up in his home woodshop (also known as a garage).

E: Wood in a data center? Come on, that’s crazy, right?

TN: It was what we had! We didn’t have a metal shop in our garage, we had a woodshop in our garage, so we built a prototype out of plywood, painted it white, and brought it to the data center. But when I went to deploy the system, I ended up having to recable and rewire and reconfigure it on the fly, sitting there on the floor of the data center, kinda similar to the first day.

The plywood pod was originally designed to be 45 drives, top loaded with port multipliers — we didn’t have backplanes. The port multipliers were these little cards that took one set of cables in and five cables out. They were cabled from the top. That design never worked. So what actually got launched was a fifteen-drive system that had these little five-drive enclosures that we shoved into the face of the plywood pod. It came up as a 15-drive, traditionally front-mounted design with no port multipliers. Nothing fancy there. Those boxes literally have five SATA connections on the back, just one-to-one cabling.

E: What happened to the plywood pod? Clearly it’s cast in bronze somewhere, right?

TN: That got thrown out in the trash in Palo Alto. I still defend the decision. We were in a small one-bedroom apartment in Palo Alto and all this was cruft.

Wooden pod
The plywood pod, RIP.

E: Brutal! But I feel like this is indicative of how you were working. There was no looking back.

TN: We didn’t have time to ask the question of whether this was going to work. We just stayed ahead of the problems: Pods 0 and 1 continued to run, and pod 2 came up as a 15-drive chassis and ran.

The next three pods are the first where we worked with Protocase. These are the first run of metal — the ones where we forgot a hole for the power button, so you’ll see the pried open spots where we forced the button in. These are also the first three with the port-multiplier backplane. So we built a chassis around that, and we had horrible drive instability.

We were using the Western Digital Green, 1 TB drives. But we couldn’t keep them in the RAID. We wrote these little scripts so that in the middle of the night, every time a drive dropped out of the array, the script would put it back in. It was this constant motion and churn creating a very unstable system.
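The original script is not public, but the idea is simple enough to sketch. Below is a rough, hypothetical version assuming a Linux md software RAID array managed with mdadm; the array path, member drive names, and polling loop are all placeholders, not Backblaze’s actual tooling.

```python
#!/usr/bin/env python3
"""Hypothetical 'put the drive back in the array' watchdog (illustrative only)."""
import subprocess
import time

ARRAY = "/dev/md0"                                          # placeholder array
MEMBERS = [f"/dev/sd{letter}1" for letter in "bcdefghijk"]  # placeholder members


def active_members():
    """Return the member devices that mdadm currently reports as healthy."""
    detail = subprocess.run(["mdadm", "--detail", ARRAY],
                            capture_output=True, text=True, check=True).stdout
    healthy = set()
    for line in detail.splitlines():
        for dev in MEMBERS:
            if dev in line and "faulty" not in line and "removed" not in line:
                healthy.add(dev)
    return healthy


def readd_missing():
    """Try to re-add any member drive that has dropped out of the array."""
    for dev in set(MEMBERS) - active_members():
        print(f"{dev} dropped out of {ARRAY}, attempting re-add")
        subprocess.run(["mdadm", "--manage", ARRAY, "--re-add", dev], check=False)


if __name__ == "__main__":
    while True:            # in practice something like this would run from cron
        readd_missing()
        time.sleep(300)
```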

We suspected the problem was with power. So we made the octopus pod. We drilled holes in the bottom, and ran it off of three PSUs beneath it. We thought: “If we don’t have enough power, we’ll just hit it with a hammer.” Same thing on cooling: “What if it’s getting too hot?” So we put a box fan on top and blew a lot of air into it. We were just trying to figure out what it was that was causing trouble and grief. Interestingly, the array in the plywood pod was stable, but when we replaced the enclosure with steel, it became unstable as well!

Storage Pod with fan
Early experiments in pod cooling.

We slowly circled in on vibration as the problem. That plywood pod had actual disk enclosures with caddies and good locking mechanisms, so we thought the lack of caddies and locking mechanisms could be the issue. I was working with Western Digital at the time, too, and they were telling me that they also suspected vibration as the culprit. And I kept telling them, ‘They are hard drives! They should work!’

At the time, Western Digital was pushing me to buy enterprise drives, and they finally just gave me a round of enterprise drives. They were worse than the consumer drives! So they came over to the office to pick up the drives because they had accelerometers and a lot of other stuff to give us data on what was wrong, and we never heard from them again.

We learned later that, when they showed up in an office in a one bedroom apartment in Palo Alto with five guys and a dog, they decided that we weren’t serious. It was hard to get a call back from them after that … I’ll admit, I was probably very hard to deal with at the time. I was this ignorant wannabe hardware engineer on the phone yelling at them about their hard drives. In hindsight, they were right; the chassis needed work.

But I just didn’t believe that vibration was the problem. It’s just 45 drives in a chassis. I mean, I have a vibration app on my phone, and I stuck the phone on the chassis and there’s vibration, but it’s not like we’re trying to run this inside a race car doing multiple Gs around corners, it was a metal box on a desk with hard drives spinning at 5400 or 7200 rpm. This was not a seismic shake table!

The early hard drives were secured with EPDM rubber bands. It turns out that real rubber (latex) turns into powder in about two months in a chassis, probably from the heat. We discovered this very quickly after buying rubber bands at Staples that just completely disintegrated. We eventually got better bands, but they never really worked. The hope was that they would secure a hard drive so it couldn’t vibrate its neighbors, and yet we were still seeing drives dropping out.

At some point we started using clamp down lids. We came to understand that we weren’t trying to isolate vibration between the drives, but we were actually trying to mechanically hold the drives in place. It was less about vibration isolation, which is what I thought the rubber was going to do, and more about stabilizing the SATA connector on the backend, as in: You don’t want the drive moving around in the SATA connector. We were also getting early reports from Seagate at the time. They took our chassis and did vibration analysis and, over time, we got better and better at stabilizing the drives.

We started to notice something else at this time: The Western Digital drives had these model numbers followed by extension numbers. We realized that drives that stayed in the array tended to have the same set of extensions. We began to suspect that those extensions were manufacturing codes, something to do with which backend factory they were built in. So there were subtle differences in manufacturing processes that dictated whether the drives were tolerant of vibration or not. Central Computer was our dominant source of hard drives at the time, and so we were very aggressively trying to get specific runs of hard drives. We only wanted drives with a certain extension. This was before the Thailand drive crisis, before we had a real sense of what the supply chain looked like. At that point we just knew some drives were better than others.
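To make that concrete, here is the sort of tally that points at a pattern like this; the model-number extensions and counts below are made up for illustration, not actual Backblaze data.

```python
# Hypothetical tally: do drives with certain model-number extensions stay in
# the array longer? The observations below are invented for illustration.
from collections import defaultdict

observations = [            # (full model number, still in the array?)
    ("WD10EADS-00L5B1", True),
    ("WD10EADS-00L5B1", True),
    ("WD10EADS-65M2B0", False),
    ("WD10EADS-65M2B0", True),
    ("WD10EADS-00P8B0", False),
]

tally = defaultdict(lambda: [0, 0])       # extension -> [stayed, dropped]
for model, in_array in observations:
    extension = model.split("-", 1)[1]
    tally[extension][0 if in_array else 1] += 1

for extension, (stayed, dropped) in sorted(tally.items()):
    print(f"{extension}: {stayed} stayed, {dropped} dropped out")
```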

E: So you were iterating with inconsistent drives? Wasn’t that insanely frustrating?

TN: No, just gave me a few more gray hairs. I didn’t really have time to dwell on it. We didn’t have a choice of whether or not to grow the storage pod. The only path was forward. There was no plan B. Our data was growing and we needed the pods to hold it. There was never a moment where everything was solved, it was a constant stream of working on whatever the problem was. It was just a string of problems to be solved, just “wheels on the bus.” If the wheels fall off, put them back on and keep driving.

E: So what did the next set of wheels look like then?

TN: We went ahead with a second small run of steel pods. These had a single Zippy power supply, with the boot drive hanging over the motherboard. This design worked until we went to 1.5 TB drives and the chassis would not boot. Clearly a power issue, so Brian Wilson and I sat there and stared at the non-functioning chassis, trying to figure out how to get more power in.

The issue with power was not that we were running out of power on the 12V rail. The 5V rail was the issue. All the high-end, high-power PSUs give you more and more power on 12V because that’s what the gamers need — it’s what their CPUs and graphics cards need, so you can get a 1000W or a 1500W power supply and it gives you a ton of power on 12V, but still only 25 amps on 5V. As a result, it’s really hard to get more power on the 5V rail, and a hard drive takes 12V and 5V: 12V to spin the motor and 5V to power the circuit board. We were running out of the 5V.
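To put rough numbers on that (our estimates, not Tim’s): a 3.5-inch drive typically pulls somewhere around half an amp or more on the 5V rail, so a 45-drive chassis can need 5V current well beyond the 25 amps a big gamer-oriented PSU supplies.

```python
# Back-of-the-envelope 5 V budget for a 45-drive chassis (assumed figures).
DRIVES = 45
AMPS_5V_PER_DRIVE = 0.7   # rough typical 5 V draw for a 3.5" drive; varies by model
PSU_5V_LIMIT = 25         # amps on 5 V, even for a 1000-1500 W gaming PSU

needed = DRIVES * AMPS_5V_PER_DRIVE
print(f"5 V needed: {needed:.1f} A vs. {PSU_5V_LIMIT} A available")
print(f"shortfall: {needed - PSU_5V_LIMIT:.1f} A -> hence the second power supply")
```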

So our solution was two power supplies, and Brian and I were sitting there trying to visually imagine where you could put another power supply. Where are you gonna put it? We can put it where the boot drive is, move the boot drive to the side, and just kind of hang the PSU up and over the motherboard. But the biggest consequence of this was, again, vibration. The side of a vibrating chassis isn’t the best place to mount a boot drive. So we had higher than normal boot drive failures in those nine.

Storage Pod power supply
Tim holding the second power supply in place to show where it should go.

So the next generation, after pod number 8, was the beginning of Storage Pod 1.0. We were still using rubber bands, but it had two power supplies, 45 drives, and we built 20 of them, total. Casey Jones, as our designer, also weighed in at this point to establish how they would look. He developed the faceplate design and doubled down on the deeper shade of red. But all of this was expensive and scary for us: We’re gonna spend $10 grand!? We don’t have much money. We had been two years without salary at this point.

Storage Pod faceplates
Casey Jones’ faceplate vent design versions, with the final first generation below.

We talked to Ken Raab from Sonic Manufacturing, and he convinced us that he could build our chassis, all in, for less than we were paying. He would take the task off my plate, I wouldn’t have to build the chassis, and he would build the whole thing for less than I would spend on parts … and it worked. He had better backend supplier connections, so he could shave a little expense off of everything and was able to mark up 20%.

We fixed the technology and the human processes. On the technology side, we were figuring out the hardware and hard drives, we were getting more and more stable. Which was required. We couldn’t have the same failure rates we were having on the first three pods. In order to reduce (or at least maintain) the total number of problems per day, you have to reduce the number of problems per chassis, because there’s 32 of them now.

We were also learning how to adapt our procedures so that the humans could live. By “the humans,” I mean me and Sean Harris, who joined me in 2010. There are physiological and psychological limits to what is sustainable, and we were nearing our wits’ end. So, in addition to stabilizing the chassis design, we got better at limiting the type of issues that would wake us up in the middle of the night.

E: So you reached some semblance of stability in your prototype and in your business. You’d been sprinting with no pay for a few years to get to this point and then … you decide to give away all your work for free? You open sourced Storage Pod 1.0 on September 9th, 2009. Were you a nervous wreck that someone was going to run away with all your good work?

TN: Not at all. We were dying for press. We were ready to tell the world anything they would listen to. We had no shame. My only regret is that we didn’t do more. We open sourced our design before anyone was doing that, but we didn’t build a community around it or anything.

Remember, we didn’t want to be a manufacturer. We would have killed for someone to build our pods better and cheaper than we could. Our hope from the beginning was always that we would build our own platform until the major vendors did for the server market what they did in the personal computing market. Until Dell would sell me the box that I wanted at the price I could afford, I was going to continue to build my chassis. But I always assumed they would do it faster than a decade.

Supermicro tried to give us a complete chassis at one point, but their problem wasn’t high margins; they were targeting too high a level of performance. I needed two things: someone to sell me a box and not make too much profit off of me, and someone who would wrap hard drives in a minimum-performance enclosure and not try to make it too redundant or high performance. Put in one RAID controller, not two; daisy chain all the drives; let us suffer a little! I don’t need any of the hardware that can support SSDs. But no matter how much we ask for barebones servers, no one’s been able to build them for us yet.

So we’ve continued to build our own. And the design has iterated and scaled with our business. So we’ll just keep iterating and scaling until someone can make something better than we can.

E: Which is exactly what we’ve done, leading from Storage Pod 1.0 to 2.0, 3.0, 4.0, 4.5, 5.0, to 6.0 (if you want to learn more about these generations, check out our Pod Museum), preparing the way for more than 800 petabytes of data in management.

Storage Pod Museum
The Backblaze Storage Pod Museum in San Mateo, California

✣   ✣   ✣

But while Tim is still waiting to pass along the official Podfather baton, he’s not alone. There was the early help from Brian Wilson, Casey Jones, Sean Harris, and a host of others, and then in 2014, Ariel Ellis came aboard to wrangle our supply chain. He grew in that role over time until he took over responsibility for charting the future of the Pod via Backblaze Labs, becoming the Podson, so to speak. Today, he’s sketching the future of Storage Pod 7.0, and — provided no one builds anything better in the meantime — he’ll tell you all about it on our blog.

The post Interview With the Podfather: Tim Nufire on the Origins of the Pod appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Petabytes on a Budget: 10 Years and Counting

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/petabytes-on-a-budget-10-years-and-counting/

A Decade of the Pod

This post is for all of the storage geeks out there who have followed the adventures of Backblaze and our Storage Pods over the years. The rest of you are welcome to come along for the ride.

It has been 10 years since Backblaze introduced our Storage Pod to the world. In September 2009, we announced our hulking, eye-catching, red 4U storage server equipped with 45 hard drives delivering 67 terabytes of storage for just $7,867 — that was about $0.11 a gigabyte. As part of that announcement, we open-sourced the design for what we dubbed Storage Pods, telling you and everyone like you how to build one, and many of you did.

Backblaze Storage Pod version 1 was announced on our blog with little fanfare. We thought it would be interesting to a handful of folks — readers like you. In fact, it wasn’t even called version 1, as no one had ever considered there would be a version 2, much less a version 3, 4, 4.5, 5, or 6. We were wrong. The Backblaze Storage Pod struck a chord with many IT and storage folks who were offended by having to pay a king’s ransom for a high density storage system. “I can build that for a tenth of the price,” you could almost hear them muttering to themselves. Mutter or not, we thought the same thing, and version 1 was born.

The Podfather

Tim, the “Podfather” as we know him, was the Backblaze lead in creating the first Storage Pod. He had design help from our friends at Protocase, who built the first three generations of Storage Pods for Backblaze and also spun out a company named 45 Drives to sell their own versions of the Storage Pod — that’s open source at its best. Before we decided on the version 1 design, there were a few experiments along the way:

Wooden pod
Octopod

The original Storage Pod was prototyped by building a wooden pod or two. We needed to test the software while the first metal pods were being constructed.

The Octopod was a quick and dirty response to receiving the wrong SATA cables — ones that were too long and glowed. Yes, there are holes drilled in the bottom of the pod.

Pre-1 Storage Pod
Early not-red Storage Pod

The original faceplate shown above was used on about 10 pre-1.0 Storage Pods. It was updated to the three circle design just prior to Storage Pod 1.0.

Why are Storage Pods red? When we had the first ones built, the manufacturer had a batch of red paint left over that could be used on our pods, and it was free.

Back in 2007, when we started Backblaze, there weren’t a lot of affordable choices for storing large quantities of data. Our goal was to charge $5/month for unlimited data storage for one computer. We decided to build our own storage servers when it became apparent that, if we were to use the other solutions available, we’d have to charge a whole lot more money. Storage Pod 1.0 allowed us to store one petabyte of data for about $81,000. Today we’ve lowered that to about $35,000 with Storage Pod 6.0. When you take into account that the average amount of data per user has nearly tripled in that same time period and our price is now $6/month for unlimited storage, the math works out about the same today as it did in 2009.
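A quick back-of-the-envelope check of that last claim, using only the figures in the paragraph above; the comparison is in ratios, so no absolute per-user data amount is needed.

```python
# Does the per-user math 'work out about the same' as in 2009? Ratios only.
cost_per_pb_2009 = 81_000       # Storage Pod 1.0, dollars per petabyte
cost_per_pb_2019 = 35_000       # Storage Pod 6.0, dollars per petabyte
price_2009, price_2019 = 5, 6   # dollars per month for unlimited backup
data_growth = 3                 # average data per user roughly tripled

hardware_cost_per_user = (cost_per_pb_2019 / cost_per_pb_2009) * data_growth
price_per_user = price_2019 / price_2009

print(f"hardware cost per user vs. 2009: {hardware_cost_per_user:.2f}x")  # ~1.30x
print(f"price per user vs. 2009:         {price_per_user:.2f}x")          # 1.20x
```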

We Must Have Done Something Right

The Backblaze Storage Pod was more than just affordable data storage. Version 1.0 introduced or popularized three fundamental changes to storage design: 1) You could build a system out of commodity parts and it would work, 2) You could mount hard drives vertically and they would still spin, and 3) You could use consumer hard drives in the system. It’s hard to determine which of these three features offended and/or excited more people. It is fair to say that ten years out, things worked out in our favor, as we currently have about 900 petabytes of storage in production on the platform.

Over the last 10 years, people have warmed up to our design, or at least elements of the design. Starting with 45 Drives, multitudes of companies have worked on and introduced various designs for high density storage systems ranging from 45 to 102 drives in a 4U chassis, so today the list of high-density storage systems that use vertically mounted drives is pretty impressive:

Company | Server | Drive Count
45 Drives | Storinator S45 | 45
45 Drives | Storinator XL60 | 60
Chenbro | RM43160 | 60
Chenbro | RM43699 | 100
Dell | DSS 7000 | 90
HPE | Cloudline CL5200 | 80
HPE | Cloudline CL5800 | 100
NetGear | ReadyNAS 4360X | 60
Newisys | NDS 4450 | 60
Quanta | QuantaGrid D51PL-4U | 102
Quanta | QuantaPlex T21P-4U | 70
Seagate | Exos AP 4U100 | 96
Supermicro | SuperStorage 6049P-E1CR60L | 60
Supermicro | SuperStorage 6049P-E1CR45L | 45
Tyan | Thunder SX FA100-B7118 | 100
Viking Enterprise Solutions | NSS-4602 | 60
Viking Enterprise Solutions | NDS-4900 | 90
Viking Enterprise Solutions | NSS-41000 | 100
Western Digital | Ultrastar Serv60+8 | 60
Wiwynn | SV7000G2 | 72

Another driver in the development of some of these systems is the Open Compute Project (OCP). Formed in 2011, they gather and share ideas and designs for data storage, rack designs, and related technologies. The group is managed by The Open Compute Project Foundation as a 501(c)(6) and counts many industry luminaries in the storage business as members.

What Have We Done Lately?

In technology land, 10 years of anything is a long time. What was exciting then is expected now. And the same thing has happened to our beloved Storage Pod. We have introduced updates and upgrades over the years twisting the usual dials: cost down, speed up, capacity up, vibration down, and so on. All good things. But, we can’t fool you, especially if you’ve read this far. You know that Storage Pod 6.0 was introduced in April 2016 and quite frankly it’s been crickets ever since as it relates to Storage Pods. Three plus years of non-innovation. Why?

  1. If it ain’t broke, don’t fix it. Storage Pod 6.0 is built in the US by Equus Compute Solutions, our contract manufacturer, and it works great. Production costs are well understood, performance is fine, and the new higher density drives perform quite well in the 6.0 chassis.
  2. Disk migrations kept us busy. From Q2 2016 through Q2 2019 we migrated over 53,000 drives. We replaced 2, 3, and 4 terabyte drives with 8, 10, and 12 terabyte drives, doubling, tripling and sometimes quadrupling the storage density of a storage pod.
  3. Pod upgrades kept us busy. From Q2 2016 through Q1 2019, we upgraded our older V2, V3, and V4.5 storage pods to V6.0. Then we crushed a few of the older ones with a MegaBot and gave a bunch more away. Today there are no longer any stand-alone storage pods; they are all members of a Backblaze Vault.
  4. Lots of data kept us busy. In Q2 2016, we had 250 petabytes of data storage in production. Today, we have 900 petabytes. That’s a lot of data you folks gave us (thank you by the way) and a lot of new systems to deploy. The chart below shows the challenge our data center techs faced.

Petabytes Stored vs Headcount vs Millions Raised

In other words, our data center folks were really, really busy, and not interested in shiny new things. Now that we’ve hired a bunch more DC techs, let’s talk about what’s next.

Storage Pod Version 7.0 — Almost

Yes, there is a Backblaze Storage Pod 7.0 on the drawing board. Here is a short list of some of the features we are looking at:

  • Updating the motherboard
  • Upgrading the CPU and considering an AMD CPU
  • Updating the power supply units, perhaps moving to one unit
  • Upgrading from 10Gbase-T to 10GbE SFP+ optical networking
  • Upgrading the SATA cards
  • Modifying the tool-less lid design

The timeframe is still being decided, but early 2020 is a good time to ask us about it.

“That’s nice,” you say out loud, but what you are really thinking is, “Is that it? Where’s the Backblaze in all this?” And that’s where you come in.

The Next Generation Backblaze Storage Pod

We are not out of ideas, but one of the things that we realized over the years is that many of you are really clever. From the moment we open sourced the Storage Pod design back in 2009, we’ve received countless interesting, well thought out, and occasionally odd ideas to improve the design. As we look to the future, we’d be stupid not to ask for your thoughts. Besides, you’ll tell us anyway on Reddit or HackerNews or wherever you’re reading this post, so let’s just cut to the chase.

Build or Buy

The two basic choices are: We design and build our own storage servers or we buy them from someone else. Here are some of the criteria as we think about this:

  1. Cost: We’d like the cost of a storage server to be about $0.030-$0.035 per gigabyte of storage (or less, of course). That includes the server and the drives inside. For example, using off-the-shelf Seagate 12 TB drives (model: ST12000NM0007) in a 6.0 Storage Pod costs about $0.032-$0.034/gigabyte depending on the price of the drives on a given day. A rough worked example of this calculation appears just after this list.
  2. International: Now that we have a data center in Amsterdam, we need to be able to ship these servers anywhere.
  3. Maintenance: Things should be easy to fix or replace — especially the drives.
  4. Commodity Parts: Wherever possible, the parts should be easy to purchase, ideally from multiple vendors.
  5. Racks: We’d prefer to keep using 42” deep cabinets, but make a good case for something deeper and we’ll consider it.
  6. Possible Today: No DNA drives or other wistful technologies. We need to store data today, not in the year 2061.
  7. Scale: Nothing in the solution should limit the ability to scale the systems. For example, we should be able to upgrade drives to higher densities over the next 5-7 years.
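As a rough illustration of the cost criterion above, here is the arithmetic for a 60-drive Storage Pod 6.0 filled with 12 TB drives. The drive and chassis prices are assumptions that move around from day to day, which is why the result is a range rather than a single number.

```python
# Rough cost-per-gigabyte estimate for a 60-drive Storage Pod 6.0 (assumed prices).
DRIVES = 60
DRIVE_TB = 12
DRIVE_PRICE = 360        # assumed street price for a 12 TB ST12000NM0007
CHASSIS_PRICE = 2_500    # assumed cost of chassis, motherboard, CPU, RAM, PSUs, cards

total_gigabytes = DRIVES * DRIVE_TB * 1_000
total_cost = DRIVES * DRIVE_PRICE + CHASSIS_PRICE
print(f"${total_cost / total_gigabytes:.4f} per gigabyte")  # ~$0.0335/GB
```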

Other than that there are no limitations. Any of the following acronyms, words, and phrases could be part of your proposed solution and we won’t be offended: SAS, JBOD, IOPS, SSD, redundancy, compute node, 2U chassis, 3U chassis, horizontal mounted drives, direct wire, caching layers, appliance, edge storage units, PCIe, fibre channel, SDS, etc.

The solution does not have to be a Backblaze one. As the list from earlier in this post shows, Dell, HP, and many others make high density storage platforms we could leverage. Make a good case for any of those units, or any others you like, and we’ll take a look.

What Will We Do With All Your Input?

We’ve already started by cranking up Backblaze Labs again and have tried a few experiments. Over the coming months we’ll share with you what’s happening as we move this project forward. Maybe we’ll introduce Storage Pod X or perhaps take some of those Storage Pod knockoffs for a spin. Regardless, we’ll keep you posted. Thanks in advance for your ideas and thanks for all your support over the past ten years.

The post Petabytes on a Budget: 10 Years and Counting appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Toast to Our Partners in Europe at IBC

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/a-toast-to-our-partners-in-europe-at-ibc/

Join us at IBC

Prost! Skål! Cheers! Celebrate with us as we travel to Amsterdam for IBC, the premier conference and expo for media and entertainment technology in Europe. The show gives us a chance to raise a glass with our partners, customers, and future customers across the pond. And we’re especially pleased that IBC coincides with the opening of our new European data center.

How will we celebrate? With the Backblaze Partner Crawl, a rolling series of parties on the show floor from 13-16 September. Four of our Europe-based integration partners have graciously invited us to co-host drinks and bites in their stands throughout the show.

If you can make the trip to IBC, you’re invited to toast us with a skål! with our Swedish friends at Cantemo on Friday, a prost! with our German friends at Archiware on Saturday, or a cheers! with UK-based friends at Ortana and GB Labs on Sunday or Monday, respectively. Or drop in every day and keep the Backblaze Partner Crawl rolling. And if you can’t make it to IBC this time, we encourage you to raise a glass and toast anyway.

Skål! on Friday With Cantemo

Cantemo’s iconik media management makes sharing and collaborating on media effortless, regardless of where you want to do business. Cantemo announced the integration of iconik with Backblaze’s B2 Cloud Storage last fall, and since then we’ve been amazed by customers like Everwell, who replaced all their on-premises storage with a fully cloud-based production workflow. For existing Backblaze customers, iconik can speed up your deployment by ingesting content already uploaded to B2 without having to download files and upload them again.

You can also stop by the Cantemo booth anytime during IBC to see a live demo of iconik and Backblaze in action. Or schedule an appointment and we’ll have a special gift waiting for you.

Join us at Cantemo on Friday 13 September from 16:30-18:00 at Hall 7 — 7.D67

Prost! on Saturday With Archiware

With the latest release of their P5 Archive featuring B2 support, Archiware makes archiving to the cloud even easier. Archiware customers with large existing archives can use the Backblaze Fireball to rapidly import archived content directly to their B2 account. At IBC, we’re also unveiling our latest joint customer, Baron & Baron, a creative agency that turned to P5 and B2 to back up and archive their dazzling array of fashion and luxury brand content.

Join us at Archiware on Saturday 14 September from 16:30-18:00 at Hall 7 — 7.D35

Cheers! on Sunday With Ortana

Ortana integrated their Cubix media asset management and orchestration platform with B2 way back in 2016 during B2’s beta period, making them among our first media workflow partners. More recently, Ortana joined our Migrate or Die webinar and blog series, detailing strategies for how you can migrate archived content from legacy platforms before they go extinct.

Join us at Ortana on Sunday 15 September from 16:30-18:00 at Hall 7 — 7.C63

Cheers! on Monday With GB Labs

If you were at the NAB Show last April, you may have heard GB Labs was integrating their automation tools with B2. It’s official now, as detailed in their announcement in June. GB Labs’ automation allows you to streamline tasks that would otherwise require tedious and repetitive manual processes, and now supports moving files to and from your B2 account.

Join us at GB Labs Monday 16 September from 17:00-18:00 at Hall 7 — 7.B26

Say Hello Anytime to Our Friends at CatDV

CatDV media asset management helps teams organize, communicate, and collaborate effectively, including archiving content to B2. CatDV has been integrated with B2 for over two years, allowing us to serve customers like UC Silicon Valley, who built an end-to-end collaborative workflow for a 22-member team creating online learning videos.

Stop by CatDV anytime at Hall 7 — 7.A51

But we’re not the only ones making a long trek to Amsterdam for IBC. While you’re roaming around Hall 7, be sure to stop by our other partners traveling from near and far to learn what our joint solutions can do for you:

  • EditShare (shared storage with MAM) Hall 7 — 7.A35
  • ProMax (shared storage with MAM) Hall 7 — 7.D55
  • StorageDNA (smart migration and storage) Hall 7 — 7.A32
  • FileCatalyst (large file transfer) Hall 7 — 7.D18
  • eMAM (web-based DAM) Hall 7 — 7.D27
  • Facilis Technology (shared storage) Hall 7 — 7.B48
  • GrayMeta (metadata extraction and insight) Hall 7 — 7.D25
  • Hedge (backup software) Hall 7 — 7.A56
  • axle ai (asset management) Hall 7 — 7.D33
  • Tiger Technology (tiered data management) Hall 7 — 7.B58

We’re hoping you’ll join us for one or more of our Partner Crawl parties. If you want a quieter place and time to discuss how B2 can streamline your workflow, please schedule an appointment with us so we can give you the attention you need.

Finally, if you can’t join us in Amsterdam, open a beer, pour a glass of wine or other drink, and toast to our new European data center, wherever you are, in whatever language you speak. As we say here in the States, Bottoms up!

The post A Toast to Our Partners in Europe at IBC appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Making the Data Center Choice (the Work Has Just Begun)

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/picking-right-eu-data-center-partner/

Globe with Europe

It’s Europe week at Backblaze! On Tuesday, we announced the opening of our first European data center. On Wednesday, we discussed the process of sourcing DCs. Yesterday, we focused on how we narrowed the list down to a small group of finalists. And, today, we’ll share how we ultimately decided on our new partner.

Imagine a globe spinning (or simply look at the top of this blog post). When you start out on a data center search, you could consider almost any corner of the globe. For Backblaze, we knew we wanted to find an anchor location in the European Union. For a variety of reasons, we quickly narrowed in on Amsterdam, Brussels, and Dublin as the most likely locations. We were able to generate a list of 40 qualified locations, narrow it down to ten for physical visits, and then narrow it yet again to three finalists, but the question remained: How would we choose our ultimate partner? Data center searches have changed a lot since 2012, when we circulated our RFP for a previous expansion.

The good news is we knew our top line requirements would be met. Thinking back to the 2×2 that our Chief Cloud Officer, Tim Nufire, had drawn on the board at the early stages of our search, we felt good that we had weighed the tradeoffs appropriately.

EU data center cost risk quadrant
Cost vs risk

Similarly to hiring an employee, after the screening and the interviews, one runs reference checks. In the case of data centers, that means both validating certain assertions and going into the gory details on certain operational capabilities. For example, in our second post in the EU DC series, we mentioned environmental risks. If one is looking to reduce the probability of catastrophe, making sure that the DC is outside of a flood zone is generally advisable. Of course, the best environmental risk factor reports are much more nuanced and account for changes in the environment.

To help us investigate those sorts of issues, we partnered with PTS Consulting. By engaging with third party experts, we get dispassionate, unbiased, thorough reporting about the locations we are considering. Based on PTS’s reporting, we eliminated one of our finalists. To be clear, there was nothing inherently wrong with the finalist, but it was unlikely that particular location would sustainably meet our long term requirements without significant infrastructure upgrades on their end.

In our prior posts, we mentioned another partner, UpStack. Their platform helped us with the sourcing and narrowing down to a list of finalists. Importantly, their advisory services were crucial in this final stage of diligence. Specifically, UpStack brought in electrical engineering expertise to give us a deep, detailed assessment of the electrical and mechanical single-line diagrams. For those less versed in the aspects of DC power, that means UpStack was able to go into incredible granularity in looking at the reliability and durability of the power sources of our DCs.

Ultimately, it came down to two finalists:

  • DC 3: Interxion Amsterdam
  • DC 4: The pre-trip favorite

DC 4 had a lot of things going for it. The pricing was the most affordable and the facility had more modern features and functionality. The biggest downsides were open issues around sourcing and training what would become our remote hands team.

Which gets us back to our matrix of tradeoffs. While more expensive than DC 4, the Interxion facility graded out equally well during diligence. Ultimately, the people at Interxion and our confidence in the ability to build out a sturdy remote hands team made the choice of Interxion clear.

Cost vs risk and result

Looking back at Tim’s 2×2, DC 4 presented as financially more affordable, but operationally a little more risky (since we had questions about our ability to operate effectively on a day-to-day basis).

Interxion, while a little more expensive, reduced our operational risks. When thinking of our anchor location in Europe, that felt like the right tradeoff to be making.

Ready, Set, More Work!

The site selection only represented part of the journey. In parallel, our sourcing team has had to learn how to get pods and drives into Europe. Our Tech Ops & Engineering teams have worked through any number of issues around latency, performance, and functionality. Finance & Legal has worked through the implications of having a physical international footprint. And that’s just to name a few things.

Interxion - Backblaze data center floor plan
EU data center floor plan

If you’re in the EU, we’ll be at IBC 2019 in Amsterdam from September 13 to September 17. If you’re interested in making an appointment to chat further, use our form to reserve a time at IBC, or drop by stand 7.D67 at IBC (our friends from Cantemo are hosting us). Or, if you prefer, feel free to leave any questions in the comments below!

The post Making the Data Center Choice (the Work Has Just Begun) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Logistics of Finding the Right Data Center: The Great European (Non) Vacation

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/data-center-due-diligence/

EU data center search map

It’s Europe week at Backblaze! On Tuesday, we announced the opening of our first European data center. Yesterday, we discussed the process of sourcing DCs. Today, we’ll focus on how we narrowed the list down to a small group of finalists. And, tomorrow, we’ll share how we ultimately decided on our new partner.

Ten locations, three countries, three days. Even the hardest working person in show business wouldn’t take on challenges like that. But for our COO, John Tran, and UpStack’s CEO, Chris Trapp, that’s exactly what they decided to do.

In yesterday’s post, we discussed the path to getting 40 bids from vendors that could meet our criteria for our new European data center (DC). This was a remarkable accomplishment in itself, but still only part way to our objective of actually opening a DC. We needed to narrow down the list.

With help from UpStack, we began to filter the list based on some qualitative characteristics: vendor reputation, vendor business focus, etc. Chris managed to get us down to a list of 10. The wonders of technology today, like the UpStack platform, help people get more information and cast wider nets than at any other time in human history. The downside is that you get a lot of information on paper, but that is a poor substitute for what you can gather in person. If you’re looking for a good, long-term partner, then understanding things like how they operate and their company DNA is imperative to finding the right match. So, to find our newest partner, we needed to go on a trip.

Chris took the lead on booking appointments. The majority of the shortlist clustered in the Netherlands and Ireland. The others were in Belgium, and with the magic of Google Maps, one could begin to envision an efficient trip to all three countries. The feeling was it could all be done with just three days on the ground in Europe. Going in, they knew it would be a compressed schedule and that they would be on the move. As experienced travelers, they brought small bags that easily fit in the overhead and the right power adapters.

Hitting the Road

On July 23rd, 2018, John left San Francisco International Airport (SFO) at 7:40 a.m. on a non-stop to Amsterdam. Taking into account the 5,448 miles between the two cities and the time change, John landed at Amsterdam Airport Schiphol (AMS) at 7:35 a.m. on July 24th. He would land back home on July 27th at 6:45 p.m.

Tuesday (Day One)

The first day officially started when John’s redeye touched down in Amsterdam at 7:35 a.m. local. Thankfully, Chris’ flight from New York’s La Guardia was also on time. With both flights on time, they were able to meet at the airport: literally, for they had never met before.

Both adjourned to the airport men’s room to change out of their travel clothes and into their suits — choosing a data center is serious business, after all. While airport bathroom changes are best left for spy novels, John and Chris made short work of it and headed to the rental car area.

That day, they ended up touring four DCs. One of the biggest takeaways of the trip was that visiting data centers is similar to wine tasting. While some of the differences can be divined from the specs on paper, when trying to figure out the difference between A and B, it’s very helpful to compare side by side. Also similar to wine tasting, there’s a fine line between appreciating the nuances of multiple things and having it all start to blend together. In both cases, after a full day of doing it, you feel like you probably shouldn’t operate heavy machinery.

On day one, our team saw a wide range of options. The physical plant is itself one area of differentiation. While we have requirements for things like power, bandwidth, and security, there’s still a lot of room for tradeoffs among those DCs that exceed the requirement. And that’s just the physical space. The first phase of successful screening (discussed in our prior post) is being effective at examining non-emotional decision variables — specs, price, reputation — but not the people. Every DC is staffed by human beings and cultural fit is important with any partnership. Throughout the day, one of the biggest differences we noticed was the culture of each specific DC.

The third stop of the day was Interxion Amsterdam. While we didn't know it at the time, they would end up being our partner of choice. On paper, it was clear that Interxion would be a contender. Its facility meets all our requirements and, by happenstance, had a footprint available that almost exactly matched the spec we were looking for. During our visit, the facility was impressive, as expected. But the connection we felt with the team there would prove to be what ultimately made the difference.

After leaving the last DC tour around 7pm, our team drove from Amsterdam to Brussels. Day 2 would be another morning start and, after arriving in Brussels a little after 9pm, they had earned some rest!

Insider Tip: Grand Place, Brussels

Earlier in his career, John had spent a good amount of time in Europe and, specifically, Brussels. One of his favorite spots is the Grand Place (Brussels' Central Market). If in the neighborhood, he recommends you go and enjoy a Belgian beer at one of the restaurants in the market. The smart move is to take the advice. Chris, newer to Brussels, gave John's tour a favorable TripAdvisor rating.

Wednesday (Day Two)

After getting a well-deserved couple hours of sleep, the day officially started with an 8:30 a.m. meeting for the first DC of the day. Major DC operators generally have multiple locations and DCs five and six are operated by companies that also operate sites visited on day one. It was remarkable, culturally, to compare the teams and operational variability across multiple locations. Even within the same company, teams at different locations have unique personalities and operating styles, which all serves to reinforce the need to physically visit your proposed partners before making a decision.

After two morning DC visits, John and Chris hustled to the Brussels airport to catch their flight to Dublin. At some point during the drive, it was realized that tickets to Dublin hadn’t actually been purchased. Smartphones and connectivity are transformative on road trips like this.

The flight itself was uneventful. When they landed, they got to the rental car area and their car was waiting for them. Oh, by the way, minor detail, but the steering wheel was on the wrong side of the car! Chris buckled in tightly and John had flashbacks of driver's ed, having never driven from the right side of a car before. Shortly after leaving the airport, it was realized that one also drives on the left side of the road in Ireland. Smartphones and connectivity were not required for this discovery. Thankfully, the drive was uneventful and the hotel was reached without incident. After work and family check-ins, another day was put on the books.

Brazenhead, Dublin

Our team checked into their hotel and headed over to the Brazenhead for dinner. Ireland's oldest pub is worth the visit. It's here that we come across our "it really is a small world" nomination for the trip. After starting a conversation with their neighbors at dinner, our team was asked what they were doing in Dublin. John introduced himself as Backblaze's COO and the conversation seemed to cool a bit. It turned out their neighbor worked for another large cloud storage provider. Apparently, not all companies like sharing information as much as we do.

Thursday (Day Three)

The day again started with an 8:30 a.m. hotel departure. Bear in mind, during all of this, John and Chris both had their day jobs and families back home to stay in touch with. Today would feature four DC tours. One interesting note about the trip: operating a data center requires a fair amount of infrastructure. In a perfect world, power and bandwidth come in at multiple locations from multiple vendors. This often causes DCs to cluster around infrastructure hubs. Today’s first two DCs were across the street from one another. We’re assuming, but could not verify, a fierce inter-company football rivalry.

While walking across the street was interesting, in the case of the final two DCs, they literally shared the same space; the smaller provider subleased space from the larger. Here, again, the operating personalities differentiated the companies. It's not necessarily that one was worse than the other, it is a question of who you think will be a better partnership match for your own style. In this case, the smaller of the two providers stood out because of the passion and enthusiasm we felt from the team there, and it didn't hurt that they are longtime Hard Drive Stats enthusiasts (flattery will get you everywhere!).

While the trip, and this post, were focused on finding our new DC location, opening up our first physical operations outside of the U.S. had any number of business ramifications. As such, John made sure to swing by the local office of our global accounting firm to take the opportunity to get to know them.

The meeting wrapped up just in time for Chris and John to make it to the Guinness factory by 6:15 p.m. Upon arrival, it was then realized that the last entry into the Guinness factory is 6 p.m. Smartphones and connectivity really can be transformative on road trips like this. All that said, without implicating any of the specific actors, our fearless travelers managed to finagle their way in and could file the report home that they were able to grab a pint or two at St. James’ place.

Guinness sign

Guinness glass

The team would leave for their respective homes early the next morning. John made it back to California in time for a (late) dinner with his family and a well-earned weekend.

After a long, productive trip, we had our list of the three finalists. Tomorrow, we’ll discuss how we narrowed it down from three to one. Until then, slainte (cheers)!

The post The Logistics of Finding the Right Data Center: The Great European (Non) Vacation appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Getting Ready to Go

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/getting-ready-to-go/

EU data center cost risk quadrant

On Tuesday August 27th, we announced the opening of our first European data center. This post is part of our three-part series on selecting a new data center.

There’s an old saying, “How do you eat an elephant? One bite at a time.” The best way to tackle big problems is to simplify as much as you can.

In our case, with almost an exabyte of customer data under management and customers in over 160 countries, expanding the geographic footprint of our data centers (DCs) has been a frequently discussed topic. Prior to opening up EU Central, we had three DCs, but all in the western U.S. The topic of opening a DC in Europe is not a new one within Backblaze, but going from idea to storing customer data can be a long journey.

As our team gathered to prioritize the global roadmap, the first question was an obvious one: Why do we want to open a DC in Europe? The answer was simple: Customer demand.

While nearly 15 percent of our existing customer base already resides in Europe, the requests for an EU DC come from citizens around the globe. Why?

  • Customers like keeping data in multiple geographies. Doing so is in line with the best practices of backup (long before there was a cloud, there was still 3-2-1).
  • Geopolitical/regulatory concerns. For any number of reasons, customers may prefer or be required to store data in certain physical locations.
  • Performance concerns. While we enjoy a debate about the effects of latency for most storage use cases, the reality is many customers want a copy of their data as physically close to where it’s being used as possible.

With the need established, the next question was predictably obvious: How are we going to go about this? Our three existing DCs are all in the same timezone as our California headquarters. Logistically, opening and operating a DC that has somewhere around an eight hour time difference from our headquarters felt like a significant undertaking.

Organizing the Search for the Right Data Center

To help get us organized, our co-founder and Chief Cloud Officer, Tim Nufire, drew the following on a whiteboard.

Expense/Risk chart for data center location
Cost vs risk and result

This basic matrix frames the challenge well. If one were willing to accept infinite risk (have customers write to scrolls and “upload” via sealed bottle transported across the ocean), we'd have a low financial and effort outlay to open the data center. However, we're not in the business of accepting infinite risk. So we wanted to achieve a low-risk environment for data storage while sustaining our cost advantage for our customers.

But things get much more nuanced once you start digging in.

Risks

There are multiple risk factors to consider when selecting a DC. Some of the leading ones are:

  • Environmental: One could choose a DC in the middle of a floodplain, but, with few exceptions, most DCs don't work well underwater. We needed to find an area that minimizes our exposure to environmental hazards.
  • Political: DCs are physical places. Physical places are governed by some form of nation state. Some customers want (or need) their data to be stored within certain regulatory or diplomatic parameters. In the case of the requests for opening a DC in Europe, many of our customers want their data to be inside of the European Union (EU). That requirement strikes Switzerland off our list. For similar reasons, another requirement we imposed was operating inside of a country that is a NATO member. Regrettably, that eliminated any location inside of Finland. Our customers want EU, not Europe.
  • Financial: By opening a DC in Europe, we will be conducting business with a partner that expects to be paid in euros. As an American company, we primarily operate in dollars. So now the simple timing of when we pay our bills may change the cost (depending on exchange rate fluctuations).

Costs

The other dimension on the board was costs, expressed as Affordable to Expensive. Costs can be thought of as both financial and effort-related:

  • Operating Efficiency: Generally speaking, the climate of the geography will have an effect on the heating/cooling costs. We needed to understand climate nuances across a broad geographic area.
  • Cost of Inputs: Power costs vary widely, often due to fuel sources having different availability at a local level. For example, nuclear power is generally cheaper than fossil fuel, but may not be available in a given region. Complicating things is that power source X may cost one thing in the first country, but something totally different in the next. Our DC negotiations may be for physical space, but we needed to understand our total cost of ownership.
  • Staffing: Some DCs provide remote hands (contract labor) while others expect us to provide our own staffing. We needed to get up to speed on labor laws and talent pools in desired regions.

Trying to Push Forward

We're fortunate to have a great team of Operations people who have earned expertise in the field. So with the desire to find a DC in the EU, a working group formed to explore our options. A little while later, when the internal memo circulated, the summary in the body of the email jumped out:

“It could take 6-12 months from project kick-off to bring a new EU data center online.”

That’s a significant project for any company. In addition, the time range was sufficiently wide to indicate the number of unknowns in play. We were faced with a difficult decision: How can we move forward on a project with so many unknowns?

While this wouldn't be our first data center search, prior experience told us we had many more unknowns in front of us. Our most recent facility searches mainly involved coordinating with known vendors to obtain facility reports and pricing for comparison. Even with known vendors, this process involved significant resources from Backblaze to relay requirements to various DC sales reps and to turn disparate quotes into some sort of comparison. All DCs will quote you a price in dollars per kilowatt-hour ($/kWh), but there is no standard definition of what is and isn't included in that number. Generally speaking, a DC contract has unit costs that decline as usage goes up. So is the $/kWh in a given quote the blended lifetime cost? Year one? Year five? Adding to this complexity would be all the variables discussed above (and more).

Interested in learning more about the initial assessment of the project? Here is a copy of the internal memo referenced. Because of various privacy agreements, we needed to redact small pieces of the original. Very little has been changed and, if you’re interested in the deep dive, we hope you’ll enjoy!

Serendipity Strikes: UpStack

Despite the obstacles in our path, our team committed to finding a location inside the EU that makes sense for both our customers' needs and our business model. We have an experienced team that has demonstrated the ability to source and vet DCs already. That said, our experienced team was already quite busy with their day jobs. This project looked to come at a significant opportunity cost, as it would fully occupy a number of people for an extended period of time.

At the same time as we were trying to work through the internal resource planning, our CEO happened across an interesting article from our friends at Data Center Knowledge; they were covering a startup called UpStack (“Kayak for data center services”). The premise was intriguing — the UpStack platform is designed to gather and normalize quotes from qualified vendors for relevant opportunities. Minimizing friction for bidding DCs and Backblaze would enable both sides to find the right fit. Intrigued, we reached out to their CEO, Chris Trapp.

UpStack is a free, vendor-neutral data center sourcing platform that allows businesses to analyze and compare level-set pricing and specifications in markets around the world. Find them at upstack.com.

We were immediately impressed with how easy the user experience was on our side. Knowing how much effort goes into normalizing the data from various DCs, having a DC shopping experience comparable to that of searching for plane tickets was mind blowing. With a plane ticket, you might search for number of stops and layover airports. With UpStack, we were able to search for connectivity to existing bandwidth providers, compliance certifications, and location before asking for pricing.

Once vendors returned pricing, UpStack's application made it easy to compare specifications and pricing on an apples-to-apples basis. This price normalization was a huge advantage for us, as it saved many hours of work usually spent converting quotes into pricing models simply for comparison's sake. We have the expertise to do what UpStack does, but we also know how much time that takes us. Being able to leverage a trusted partner was a tremendous value add for Backblaze.

UpStack data center search map
Narrowing down the DC possibilities with UpStack

Narrowing Down The Options

With the benefit of the UpStack platform, we were able to cast a much wider net than would have been viable hopping on phone calls from California.

We specified our load ramp. There’s a finite amount of data that will flow into the new DC on day one, and it only grows from there. So part of the pricing negotiation is agreeing to deploy a minimum amount of racks on day one, a minimum by the end of year one, and so on. In return for the guaranteed revenue, the DCs return pricing based on those deployments. Based on the forecasted storage needs, UpStack’s tool then translates that into estimated power needs so vendors can return bids based on estimated usage. This is an important change from how things are usually done; many quotes otherwise price based on the top estimated usage or a vendor-imposed minimum. By basing quotes off of one common forecast, we could get the pricing that fits our needs.

There are many more efficiencies that UpStack provides us and we’d encourage you to visit their site at https://upstack.com to learn more. The punchline is that we were able to create a shortlist of the DCs that fit our requirements; we received 40 quotes provided by 40 data centers in 10 markets for evaluation. This was a blessing and a curse, as we were able to cast a wider net and learn about more qualified vendors than we thought possible, but a list of 40 needed to be narrowed down.

Based on our cost/risk framework, we narrowed it down to the 10 DCs that we felt gave us our best shot to end up with a low cost, low risk partner. With all the legwork done, it was time to go visit. To learn more about our three country trip to 10 facilities that lasted less than 72 hours, tune in tomorrow. Same bat time, same bat station.

The post Getting Ready to Go appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Announcing Our First European Data Center

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/announcing-our-first-european-data-center/

city view of Amsterdam, Netherlands

Big news: Our first European data center, in Amsterdam, is open and accepting customer data!

This is our fourth data center (DC) location and the first outside of the western United States. As longtime readers know, we have two DCs in the Sacramento, California area and one in the Phoenix, Arizona area. As part of this launch, we are also introducing the concept of regions.

When creating a Backblaze account, customers can choose whether that account’s data will be stored in the EU Central or US West region. The choice made at account creation time will dictate where all of that account’s data is stored, regardless of product choice (Computer Backup or B2 Cloud Storage). For customers wanting to store data in multiple regions, please read this knowledge base article on how to control multiple Backblaze accounts using our (free) Groups feature.

Whether you choose EU Central or US West, your pricing for our products will be unchanged:

  • For B2 Cloud Storage — it’s $0.005/GB/Month. For comparison, storing your data in Amazon S3’s Ireland region will cost ~4.5x more
  • For Computer Backup — $60/Year/Computer is the annual cost of our industry leading, unlimited data backup for desktops/laptops

Later this week we will be publishing more details on the process we undertook to get to this launch. Here’s a sneak preview:

  • Wednesday, August 28: Getting Ready to Go (to Europe). How do you even begin to think about opening a DC that isn’t within any definition of driving distance? For the vast majority of companies on the planet, simply figuring out how to get started is a massive undertaking. We’ll be sharing a little more on how we thought about our requirements, gathered information, and the importance of NATO in the whole equation.
  • Thursday, August 29: The Great European (Non) Vacation. With all the requirements done, research gathered, and preliminary negotiations held, there comes a time when you need to jump on a plane and go meet your potential partners. For John & Chris, that meant 10 data center tours in 72 hours across three countries — not exactly a relaxing summer holiday, but vitally important!
  • Friday, August 30: Making a Decision. After an extensive search, we are very pleased to have found our partner in Interxion! We’ll share a little more about the process of narrowing down the final group of candidates and selecting our newest partner.
If you’re interested in learning more about the physical process of opening up a data center, check out our post on the seven days prior to opening our Phoenix DC.

New Data Center FAQs:

Q: Does the new DC mean Backblaze has multi-region storage?
A: Yes, by leveraging our Groups functionality. When creating an account, users choose where their data will be stored. The default option will store data in US West, but to choose EU Central, simply select that option in the pull-down menu.

Region selector
Choose EU Central for data storage

If you create a new account with EU Central selected and have an existing account that’s in US West, you can put both of them in a Group, and manage them from there! Learn more about that in our Knowledge Base article.

Q: I’m an existing customer and want to move my data to Europe. How do I do that?
A: At this time, we do not support moving existing data between Backblaze regions. While it is something on our roadmap to support, we do not have an estimated release date for that functionality. However, any customer can create a new account and upload data to Europe. Customers with multiple accounts can administer those accounts via our Groups feature. For more details on how to do that, please see this Knowledge Base article. Existing customers can create a new account in the EU Central region and then upload data to it; they can then either keep or delete the previous Backblaze account in US West.

Q: Finally! I’ve been waiting for this and am ready to get started. Can I use your rapid ingest device, the B2 Fireball?
A: Yes! However, as of the publication of this post, all Fireballs will ship back to one of our U.S. facilities for secure upload (regardless of account location). By the end of the year, we hope to offer Fireball support natively in Europe (so a Fireball with a European customer’s data will never leave the EU).

Q: Does this mean that my data will never leave the EU?
A: Any data uploaded by the customer does not leave the region it was uploaded to unless at the explicit direction of the customer. For example, restores and snapshots of data stored in Europe can be downloaded directly from Europe. However, customers requesting an encrypted hard drive with their data on it will have that drive prepared from a secure U.S. location. In addition, certain metadata about customer accounts (e.g. email address for your account) reside in the U.S. For more information on our privacy practices, please read our Privacy Policy.

Q: What are my payment options?
A: All payments to Backblaze are made in U.S. dollars. To get started, you can enter your credit card within your account.

Q: What’s next?
A: We're actively working on region selection for individual B2 Buckets (instead of Backblaze region selection on an account basis), which should open up a lot more interesting workflows! For example, customers who want to can create geographic redundancy for data within one B2 account (and those who don't can sleep well knowing they already have 11 nines of durability).

We like to develop the features and functionality that our customers want. The decision to open up a data center in Europe is directly related to customer interest. If you have requests or questions, please feel free to put them in the comment section below.

The post Announcing Our First European Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

What’s the Diff: Private Cloud vs Public Cloud

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/private-cloud-vs-public-cloud/

Private Cloud vs Public Cloud

Anyone just starting out with the cloud is going to need answers to some basic questions.

The first, of course, is what exactly is the cloud? Put simply, the cloud is a collection of purpose-built servers. These servers could perform one or more services (storage, compute, database, email, web, etc.) and could exist anywhere as long as they're accessible to whoever needs to use them.

The next important question to ask is whether the servers are in a private cloud or a public cloud. This distinction is often tied to where the servers are located, but more precisely, it reflects who uses the servers and how they use them.

What is Private Cloud?

Private + cloud icon

If the servers are owned by and dedicated to only one tenant (user) or group of related tenants, they are in a private cloud. The private cloud is typically on-site (or on-prem or on-premises in IT lingo), but it could be off-site, as well. The owner is responsible for the management and maintenance of the servers and for planning for future capacity and performance to meet the needs of its users. This planning usually involves long lead times to add additional hardware and services (electricity, broadband, cooling, etc.) to meet the future demand.

What is Public Cloud?

Public + cloud icon

In a public cloud, the servers are shared between multiple unrelated tenants (users). A public cloud is off-site (or off-prem or off-premises). Public clouds are typically owned by a vendor who sells access to servers that are co-located with many servers providing services to many users. Users contract with the vendor for the services they need. The user isn’t responsible for capital expenses, data is backed up regularly, and customers only have to pay for the resources they use. If their needs change, they can add or remove capacity very quickly and easily by requesting changes from the vendor who reserves additional resources to meet demand from its clients.

Differences: Private Cloud vs Public Cloud

Private Cloud | Public Cloud
Single client | Multiple clients
On-premises or off-premises | Off-premises
Capital cost to set up and maintain | No capital cost
High IT overhead | Low IT overhead
Fully customizable | Limited customizations
Fully private network | Shared network
Possible under-utilization | Scalable with demand

Which Cloud is Right For You?

If you’re a big company or organization with special computing needs, you know whether you need to keep your data in a private data center. For businesses in certain industries, for example, government or medical, the decision to host in a private or public cloud will be determined by regulation. These requirements could mandate the use of a private cloud, but there are more and more specialized off-premises clouds with the necessary security and management to support regulated industries.

The public cloud is the cloud of choice for those whose needs don’t yet include building a dedicated data center, or who like the flexibility, scalability, and cost of public cloud offerings. If the organization has a global reach, it also provides an easy way to connect with customers in diverse locations with minimal effort.

The growing number of vendors and variety of public cloud services indicate that the trend is definitely in favor of using the public cloud when possible. Even big customers are increasingly using the public cloud due to its undeniable advantages in rapid scaling, flexibility, and cost savings.

Enter Multi Cloud and Hybrid Cloud

For some, a combination of clouds could provide the best solution. Using multiple public cloud vendors (multi cloud) for independent tasks and duties can provide redundancy and cost savings. The data centers and infrastructure can be spread out geographically to decrease the risk of service loss or disaster, and it makes sense financially to store the second or third copy of data with an additional vendor that offers a good and reliable service at a lower cost.

Multi cloud diagram

Hybrid cloud refers to the presence of multiple deployment types (public or private) with some form of integration or orchestration between them. The hybrid cloud differs from multi cloud in that in the hybrid cloud the components work together while in the multi cloud they remain separate. An organization might choose the hybrid cloud to have the ability to rapidly expand its storage or computing when necessary for planned or unplanned spikes in demand, such as occur during holiday seasons for a retailer, or during a service outage at the primary data center. We wrote about the hybrid cloud in a previous post, Confused About the Hybrid Cloud? You’re Not Alone.

Hybrid Cloud: Flexible Use Responding to Demands, Needs, and Costs

Choose the Best Cloud Model For Your Needs

For businesses in highly regulated industries, the decision to host in a private or public cloud will likely be determined by regulation. For most businesses and organizations, the important factors in selecting a cloud will be cost, accessibility, reliability, and scalability. Whether the private or public cloud, or some combination, offers the best solution for your needs will depend on your type of business, regulations, budget, and future plans. The good news is that there are a wide variety of choices to meet just about any use case or budget.

The post What’s the Diff: Private Cloud vs Public Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

B2 Copy File is Now Public

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/b2-copy-file-is-now-public/

B2 Copy File

At the beginning of summer, we put B2 Copy File APIs into beta. We’re pleased to announce the end of the beta and that the APIs are all now public!

We had a number of people use the beta features and give us great feedback. In fact, because of the feedback, we were able to implement an incremental feature.

New Feature — Bucket to Bucket Copies

Initially, our guidance was that these new APIs were only to be used within the same B2 bucket, but in response to customer and partner feedback, we added the ability to copy files from one bucket to another bucket within the same account.

To use this new feature with b2_copy_file, simply pass in the destinationBucketId where the new file copy will be stored. If this is not set, the copied file will simply default to the same bucket as the source file. Within b2_copy_part, there is a subtle difference in that the Source File ID can belong to a different bucket than the Large File ID.
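To make the bucket to bucket case concrete, here is a rough sketch of what a b2_copy_file call could look like using Python and the requests library. Treat it as a sketch rather than production code: the api_url and auth_token values are assumed to come from a prior b2_authorize_account call, and the parameter names (sourceFileId, fileName, destinationBucketId) should be confirmed against the current B2 documentation.

    import requests

    def copy_file_to_bucket(api_url, auth_token, source_file_id,
                            new_file_name, destination_bucket_id=None):
        # Sketch of a b2_copy_file call; confirm parameter names against the B2 docs.
        body = {
            "sourceFileId": source_file_id,  # the file being copied
            "fileName": new_file_name,       # name for the new copy
        }
        if destination_bucket_id is not None:
            # Omit destinationBucketId to keep the copy in the source file's bucket.
            body["destinationBucketId"] = destination_bucket_id
        response = requests.post(
            api_url + "/b2api/v2/b2_copy_file",
            json=body,
            headers={"Authorization": auth_token},  # token from b2_authorize_account
        )
        response.raise_for_status()
        return response.json()  # metadata of the newly created file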

For the complete API documentation, refer to the Backblaze B2 docs online.

What You Can Do With B2 Copy File

In a literal sense, the new capability enables you to create a new file (or new part of a large file) that is a copy of an existing file (or range of an existing file). You can either copy over the source file’s metadata or specify new metadata for the new file that is created. This all occurs without having to download or re-upload any data.

This has been one of our most requested features as it unlocks:

  • Rename/Re-organize. The new capabilities give customers the ability to re-organize their files without having to download and re-upload. This is especially helpful when trying to mirror the contents of a file system to B2.
  • Synthetic Backup. With the ability to copy ranges of a file, users can now leverage B2 for synthetic backup, i.e. uploading a full backup but then only uploading incremental changes (as opposed to re-uploading the whole file with every change). This is particularly helpful for applications like backing up VMs where re-uploading the entirety of the file every time it changes can be inefficient.
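As a sketch of the synthetic backup idea above: when assembling a new version of a large file, unchanged byte ranges of the previous version can be copied server-side with b2_copy_part, and only the changed ranges uploaded with b2_upload_part before calling b2_finish_large_file. The fragment below covers just the copy step; the largeFileId is assumed to come from b2_start_large_file, and the parameter names and byte-range format should be checked against the B2 documentation.

    import requests

    def copy_unchanged_range(api_url, auth_token, large_file_id,
                             source_file_id, part_number, start_byte, end_byte):
        # Reuse an unchanged byte range of an existing B2 file as one part
        # of a new large file, without downloading or re-uploading the bytes.
        response = requests.post(
            api_url + "/b2api/v2/b2_copy_part",
            json={
                "largeFileId": large_file_id,    # from b2_start_large_file
                "sourceFileId": source_file_id,  # the previous version already in B2
                "partNumber": part_number,       # 1-based index of this part
                "range": "bytes=%d-%d" % (start_byte, end_byte),  # inclusive range
            },
            headers={"Authorization": auth_token},
        )
        response.raise_for_status()
        return response.json()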

While many of our customers directly leverage our APIs, just as many use 3rd party software (B2 Integration Partners) to facilitate storage into B2. Our Integration Partners were very helpful and active in giving us feedback during the beta. Some highlights of those that are already supporting the copy_file feature:

Transmit: macOS file transfer/cloud storage application that supports high speed copying of data between your Mac and more than 15 different cloud services.
Rclone: Rsync for cloud storage is a powerful command line tool to copy and sync files to and from local disk, SFTP servers, and many cloud storage providers.
Mountain Duck: Mount server and cloud storage as a disk (Finder on macOS; File Explorer on Windows). With Mountain Duck, you can also open remote files with any application as if the file were on a local volume.
Cyberduck: File transfer/cloud storage browser for Mac and Windows with support for more than 10 different cloud services.

Where to Learn More

The complete endpoint documentation is available in the Backblaze B2 docs online.

The post B2 Copy File is Now Public appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Hard Drive Stats Q2 2019

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-q2-2019/

Backblaze Drive Stats Q2 2019
In this quarter's report, we'll review the drive models that have been around for several years, take a look at how our 14 TB Toshiba drives are doing (spoiler alert: great), and along the way we'll provide a handful of insights and observations from inside our storage cloud. As always, we'll publish the data we use in these reports on our Hard Drive Test Data web page and we look forward to your comments.

Hard Drive Failure Stats for Q2 2019

At the end of Q2 2019, Backblaze was using 108,660 hard drives to store data. For our evaluation we remove from consideration those drives that were used for testing purposes and those drive models for which we did not have at least 60 drives (see why below). This leaves us with 108,461 hard drives. The table below covers what happened in Q2 2019.

Backblaze Q2 2019 Hard Drive Failure Rates

Notes and Observations

If a drive model has a failure rate of 0 percent, it means there were no drive failures of that model during Q2 2019 — lifetime failure rates are later in this report. The two drives listed with zero failures in Q2 were the 4 TB and 14 TB Toshiba models. The Toshiba 4 TB drive doesn’t have a large enough number of drives or drive days to be statistically reliable, but only one drive of that model has failed in the last three years. We’ll dig into the 14 TB Toshiba drive stats a little later in the report.
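For readers who want to reproduce these numbers from the published data set, the annualized failure rate is essentially failures per drive-year expressed as a percentage. Here is a minimal sketch of the arithmetic in Python (the example figures are made up for illustration, not taken from the table):

    def annualized_failure_rate(drive_failures, drive_days):
        # AFR = failures / (drive days / 365), expressed as a percentage.
        if drive_days == 0:
            return 0.0
        return drive_failures / (drive_days / 365) * 100

    # Hypothetical example: 7 failures over 1,200,000 drive days
    # works out to roughly a 0.21% annualized failure rate.
    print("%.2f%%" % annualized_failure_rate(7, 1_200_000))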

There were 199 drives (108,660 minus 108,461) that were not included in the list above because they were used as testing drives or we did not have at least 60 of a given drive model. We now use 60 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics as there are 60 drives in all newly deployed Storage Pods — older Storage Pod models had a minimum of 45.

2,000 Backblaze Storage Pods? Almost…

We currently have 1,980 Storage Pods in operation. All are version 5 or version 6 as we recently gave away nearly all of the older Storage Pods to folks who stopped by our Sacramento storage facility. Nearly all, as we have a couple in our Storage Pod museum. There are currently 544 version 5 pods, each containing 45 data drives, and 1,436 version 6 pods, each containing 60 data drives. The next time we add a Backblaze Vault, which consists of 20 Storage Pods, we will have 2,000 Backblaze Storage Pods in operation.

Goodbye Western Digital

In Q2 2019, the last of the Western Digital 6 TB drives were retired from service. The average age of the drives was 50 months. These were the last of our Western Digital branded data drives. When Backblaze was first starting out, the first data drives we deployed en masse were Western Digital Green 1 TB drives. So, it is with a bit of sadness that we see our Western Digital data drive count go to zero. We hope to see them again in the future.

WD Ultrastar 14 TB DC HC530

Hello “Western Digital”

While the Western Digital brand is gone, the HGST brand (owned by Western Digital) is going strong as we still have plenty of the HGST branded drives, about 20 percent of our farm, ranging in size from 4 to 12 TB. In fact, we added over 4,700 HGST 12 TB drives in this quarter.

This just in: rumor has it there are twenty 14 TB Western Digital Ultrastar drives getting readied for deployment and testing in one of our data centers. It appears Western Digital has returned. Stay tuned.

Goodbye 5 TB Drives

Back in Q1 2015, we deployed 45 Toshiba 5 TB drives. They were the only 5 TB drives we deployed as the manufacturers quickly moved on to larger capacity drives, and so did we. Yet, during their four plus years of deployment only two failed, with no failures since Q2 of 2016 — three years ago. This made it hard to say goodbye, but buying, stocking, and keeping track of a couple of 5 TB spare drives was not optimal, especially since these spares could not be used anywhere else. So yes, the Toshiba 5 TB drives were the odd ducks on our farm, but they were so good they got to stay for over four years.

Hello Again, Toshiba 14 TB Drives

We’ve mentioned the Toshiba 14 TB drives in previous reports, now we can dig in a little deeper given that they have been deployed almost nine months and we have some experience working with them. These drives got off to a bit of a rocky start, with six failures in the first three months of being deployed. Since then, there has been only one additional failure, with no failures reported in Q2 2019. The result is that the lifetime annualized failure rate for the Toshiba 14 TB drives has decreased to a very respectable 0.78% as shown in the lifetime table in the following section.

Lifetime Hard Drive Stats

The table below shows the lifetime failure rates for the hard drive models we had in service as of June 30, 2019. This is over the period beginning in April 2013 and ending June 30, 2019.

Backblaze Lifetime Hard Drive Annualized Failure Rates

The Hard Drive Stats Data

The complete data set used to create the information used in this review is available on our Hard Drive Test Data web page. You can download and use this data for free for your own purpose. All we ask are three things: 1) You cite Backblaze as the source if you use the data, 2) You accept that you are solely responsible for how you use the data, and, 3) You do not sell this data to anyone; it is free. Good luck and let us know if you find anything interesting.

If you just want the tables we used to create the charts in this blog post you can download the ZIP file containing the MS Excel spreadsheet.

The post Backblaze Hard Drive Stats Q2 2019 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Shocking Truth — Managing for Hard Drive Failure and Data Corruption

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/managing-for-hard-drive-failures-data-corruption/

hard disk drive covered in 0s, 1s, ?s

Ah, the iconic 3.5″ hard drive, now approaching a massive 16 TB of storage capacity. Backblaze Storage Pods fit 60 of these drives in a single pod, and with well over 750 petabytes of customer data under management in our data centers, we have a lot of hard drives to look after.

Yet most of us have just one, or only a few of these massive drives at a time storing our most valuable data. Just how safe are those hard drives in your office or studio? Have you ever thought about all the awful, terrible things that can happen to a hard drive? And what are they, exactly?

It turns out there are a host of obvious physical dangers, but also other, less obvious, errors that can affect the data stored on your hard drives, as well.

Dividing by One

It's tempting to store all of your content on a single hard drive. After all, the capacity of these drives gets larger and larger, and they offer great performance of up to 150 MB/s. It's true that flash-based drives are far faster, but the dollars per gigabyte price is also higher, so for now, the traditional 3.5″ hard drive holds most data today.

However, having all of your precious content on a single, spinning hard drive is a true tightrope without a net experience. Here’s why.

Drivesaver Failure Analysis by the Numbers

Drive failures by possible external force

I asked our friends at Drivesavers, specialists in recovering data from drives and other storage devices, for some analysis of the hard drives brought into their labs for recovery. What were the primary causes of failure?

Reason One: Media Damage

The number one reason, accounting for 70 percent of failures, is media damage, including full head crashes.

Modern hard drives stuff multiple, ultra thin platters inside that 3.5 inch metal package. These platters spin furiously at 5400 or 7200 revolutions per minute — that’s 90 or 120 revolutions per second! The heads that read and write magnetic data on them sweep back and forth only 6.3 micrometers above the surface of those platters. That gap is about 1/12th the width of a human hair and a miracle of modern technology to be sure. As you can imagine, a system with such close tolerances is vulnerable to sudden shock, as evidenced by Drivesavers’ results.

This damage occurs when the platters receive shock, i.e. physical damage from impact to the drive itself. Platters have been known to shatter, or have damage to their surfaces, including a phenomenon called head crash, where the flying heads slam into the surface of the platters. Whatever the cause, the thin platters holding 1s and 0s can’t be read.

It takes a surprisingly small amount of force to generate a lot of shock energy to a hard drive. I’ve seen drives fail after simply tipping over when stood on end. More typically, drives are accidentally pushed off of a desktop, or dropped while being carried around.

A drive might look fine after a drop, but the damage may have been done. Due to their rigid construction, heavy weight, and how often they’re dropped on hard, unforgiving surfaces, these drops can easily generate the equivalent of hundreds of g-forces to the delicate internals of a hard drive.

To paraphrase an old (and morbid) parachutist joke, it’s not the fall that gets you, it’s the sudden stop!

Reason Two: PCB Failure

The next largest cause is circuit board failure, accounting for 18 percent of failed drives. Printed circuit boards (PCBs), those tiny green boards seen on the underside of hard drives, can fail in the presence of moisture or static electric discharge like any other circuit board.

Reason Three: Stiction

Next up is stiction (a portmanteau of friction and sticking), which occurs when the armatures that drive those flying heads actually get stuck in place and refuse to operate, usually after a long period of disuse. Drivesavers found that stuck armatures accounted for 11 percent of hard drive failures.

It seems counterintuitive that hard drives sitting quietly in a dark drawer might actually contribute to their failure, but I've seen many older hard drives pulled from a drawer and popped into a drive carrier or connected to power just go thunk. It does appear that hard drives like to be connected to power and constantly spinning, and the numbers seem to bear this out.

Reason Four: Motor Failure

The last, and least common cause of hard drive failure, is hard drive motor failure, accounting for only 1 percent of failures, testament again to modern manufacturing precision and reliability.

Mitigating Hard Drive Failure Risk

So now that you’ve seen the gory numbers, here are a few recommendations to guard against the physical causes of hard drive failure.

1. Have a physical drive handling plan and follow it rigorously

If you must keep content on single hard drives in your location, make sure your team follows a few guidelines to protect against moisture, static electricity, and drops during drive handling. Keeping the drives in a dry location, storing the drives in static bags, using static discharge mats and wristbands, and putting rubber mats under areas where you’re likely to accidentally drop drives can all help.

It’s worth reviewing how you physically store drives, as well. Drivesavers tells us that the sudden impact of a heavy drawer of hard drives slamming home or yanked open quickly might possibly damage hard drives!

2. Spread failure risk across more drives and systems

Improving physical hard drive handling procedures is only a small part of a good risk-reducing strategy. You can immediately reduce the exposure of a single hard drive failure by simply keeping a copy of that valuable content on another drive. This is a common approach for videographers moving content from cameras shooting in the field back to their editing environment. By simply copying content over from one fast drive to another, you make it far less likely that both copies will be lost at once. This is certainly better than keeping content on only a single drive, but definitely not a great long-term solution.

Multiple drive NAS and RAID systems reduce the impact of failing drives even further. A RAID 6 system composed of eight drives not only has much faster read and write performance than a single drive, but two of its drives can fail and still serve your files, giving you time to replace those failed drives.

Mitigating Data Corruption Risk

The Risk of Bit Flips

Beyond physical damage, there’s another threat to the files stored on hard disks: small, silent bit flip errors often called data corruption or bit rot.

Bit rot errors occur when individual bits in a stream of data in files change from one state to another (positive or negative, 0 to 1, and vice versa). These errors can happen to hard drive and flash storage systems at rest, or be introduced as a file is copied from one hard drive to another.

While hard drives automatically correct single-bit flips on the fly, larger bit flips can introduce a number of errors. This can either cause the program accessing them to halt or throw an error, or perhaps worse, lead you to think that the file with the errors is fine!

Bit Flip Errors by the Book

In a landmark study of data failures in large systems, Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?, Bianca Schroeder and Garth A. Gibson reported that “a large number of the problems attributed to CPU and memory failures were triggered by parity errors, i.e. the number of errors is too large for the embedded error correcting code to correct them.”

Flash drives are not immune either. Bianca Schroeder recently published a similar study of flash drives, Flash Reliability in Production: The Expected and the Unexpected, and found that “…between 20-63% of drives experienced at least one of the (unrecoverable read errors) during the time it was in production. In addition, between 2-6 out of 1,000 drive days were affected.”

“These UREs are almost exclusively due to bit corruptions that ECC cannot correct. If a drive encounters a URE, the stored data cannot be read. This either results in a failed read in the user’s code, or if the drives are in a RAID group that has replication, then the data is read from a different drive.”

Exactly how prevalent bit flips are is a controversial subject, but if you’ve ever retrieved a file from an old hard drive or RAID system and see sparkles in video, corrupt document files, or lines or distortions in pictures, you’ve seen the results of these errors.

Protecting Against Bit Flip Errors

There are many approaches to catching and correcting bit flip errors. From a system designer standpoint they usually involve some combination of multiple disk storage systems, multiple copies of content, data integrity checks and corrections, including error-correcting code memory, physical component redundancy, and a file system that can tie it all together.
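As a small, concrete illustration of the checksum piece of that toolkit, you can record a digest for each file when you archive it and recompute it later to detect silent corruption. This sketch uses SHA-256 from Python's standard library; how and where you store the recorded digests is left up to you.

    import hashlib
    from pathlib import Path

    def sha256_of(path):
        # Stream the file in 1 MiB chunks so large media files don't need to fit in memory.
        digest = hashlib.sha256()
        with Path(path).open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def still_intact(path, recorded_digest):
        # False means the file no longer matches the digest recorded at archive time.
        return sha256_of(path) == recorded_digest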

Backblaze has built such a system, and uses a number of techniques to detect and correct file degradation due to bit flips and deliver extremely high data durability and integrity, often in conjunction with Reed-Solomon erasure codes.

Thanks to the way object storage and Backblaze B2 works, files written to B2 are always retrieved exactly as you originally wrote them. If a file ever changes from the time you’ve written it, say, due to bit flip errors, it will either be reproduced from a redundant copy of your file, or even mathematically reconstructed with erasure codes.

So the simplest, and certainly least expensive way to get bit flip protection for the content sitting on your hard drives is to simply have another copy on cloud storage.


The Ideal Solution — Performance and Protection

With some thought, you can apply these protection steps to your environment and get the best of both worlds: the performance of your content on fast, local hard drives, and the protection of having a copy on object storage offsite with the ultimate data integrity.

The post The Shocking Truth — Managing for Hard Drive Failure and Data Corruption appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

What’s the Diff: Durability vs Availability

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cloud-storage-durability-vs-availability/

What's the Diff: Durability vs Availability

When shopping for a cloud storage provider, customers should ask a few key questions of potential storage providers. In addition to inquiring about storage cost, data center location, and features and capabilities of the service, they’re going to want to know the numbers for two key metrics for measuring cloud storage performance: durability and availability.

We’ve discussed cloud storage costs and data center features in other posts. In this post we’re going to cover the basics about durability and availability.

What is Cloud Durability?

Think of durability as a measurement of how healthy and resilient your data is. You want your data to be as intact and pristine on the day you retrieve it as it was on the day you stored it.

There are a number of ways that data can lose its integrity.

1. Data loss

Data loss can happen through human accident, natural or manmade disaster, or even malicious action out of your control. Whether you store data in your home, office, or with a cloud provider, that data needs to be protected as much as possible from any event that could damage or destroy it. If your data is on a computer, external drive, or NAS in a home or office, you obviously want to keep the computing equipment away from water sources and other environmental hazards. You also have to consider the likelihood of fire, theft, and accidental deletion.

Data center managers go to great lengths to protect data under their care. That care starts with locating a facility in as safe a geographical location as possible, having secure facilities with controlled access, and monitoring and maintaining the storage infrastructure (chassis, drives, cables, power, cooling, etc.)

2. Data corruption

Data on traditional spinning hard drive systems can degrade with time, have errors introduced during copying, or become corrupted in any number of ways. File and operating systems and utilities have ways to double check that data is handled correctly during common file and data handling operations, but corruption can sneak into a system if it isn't monitored closely or if the storage system doesn't specifically check for such errors, as is done, for example, by systems with ECC (Error Correcting Code) RAM. Object storage systems will commonly monitor for any changes in the data, and often will automatically repair or provide warnings when data has been changed.

How is Durability Measured?

Object storage providers express data durability as an annual percentage in nines, as in two nines before the decimal point and as many nines as warranted after the decimal point. For example, eleven nines of durability is expressed as 99.999999999%.

Of the major vendors, Azure claims 12 nines and even 16 nines durability for some services, while Amazon S3, Google Cloud Platform and Backblaze offer 11 nines, or 99.999999999% annual durability.

4x3 rows of 9s

What this means is that those services are promising that your data will remain intact while it is under their care, and no more than 0.000000001 percent of your data will be lost in a year (in the case of eleven nines annual durability).

How is Durability Maintained?

Generally, there are two ways to maintain data durability. The first approach is to use software algorithms and metadata such as checksums to detect corruption of the data. If corruption is found, the data can be healed using the stored information. Examples of these approaches are erasure coding and Reed-Solomon coding.
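To give a feel for how stored information can heal missing or corrupted data, here is a deliberately simplified single-parity example in Python. Real erasure codes such as Reed-Solomon tolerate multiple simultaneous losses and are far more sophisticated; this toy survives the loss of exactly one known shard.

    from functools import reduce

    def xor_parity(shards):
        # Parity shard: byte-wise XOR across equal-length data shards.
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*shards))

    def rebuild_missing(surviving_shards, parity):
        # XOR of the survivors plus the parity reproduces the one missing shard.
        return xor_parity(surviving_shards + [parity])

    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(data)
    # Pretend the second shard was lost to corruption; it can still be recovered.
    assert rebuild_missing([data[0], data[2]], parity) == data[1]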

Another tried and true method to ensure data integrity is to simply store multiple copies of the data in multiple locations. This is known as redundancy. This approach allows data to survive the loss or corruption of data in one or even multiple locations through accident, war, theft, or any manner of natural disaster or alien invasion. All that’s required is that at least one copy of the data remains intact. The odds for data survival increase with the number of copies stored, with multiple locations an important multiplying factor. If multiple copies (and locations) are lost, well, that means we’re all in a lot of trouble and perhaps there might be other things to think about than the data you have stored.

The best approach is a combination of the above two approaches. Home data storage appliances such as NAS devices can provide the algorithmic protection through RAID and other technologies. If you store at least one copy of your data in a different location than your office or home, then you've got redundancy covered, as well. The redundant location can be as simple as a USB drive or hard drive you regularly drop off in your old bedroom's closet at mom's house, or a data center in another state that gets a daily backup from your office computer or network.

What is Availability?

If durability can be compared to how well your picnic basket contents survived the automobile trip to the beach, then you might get a good understanding of availability if you subsequently stand and watch that basket being carried out to sea by a wave. The chicken salad sandwich in the basket might be in great shape but you won’t be enjoying it.

Availability is how much time the storage provider guarantees that your data and services are available to you. This is usually documented as a percent of time per year, e.g. 99.9% (or three nines) means that your data will be available to you from the data center, and that you will be unable to access it for no more than about ten minutes per week, or 8.77 hours per year. Data centers often plan downtime for maintenance, which is acceptable as long as you have no immediate need of the data during those maintenance windows.
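Because the percentages are hard to reason about directly, it can help to convert an availability figure into allowed downtime. A quick sketch of that arithmetic:

    def downtime_hours_per_year(availability_percent):
        # Hours per year a service at the given availability may be unreachable.
        hours_in_year = 365.25 * 24  # about 8,766 hours
        return hours_in_year * (1 - availability_percent / 100)

    for nines in (99.0, 99.9, 99.99, 99.999):
        print("%g%% -> %.2f hours/year" % (nines, downtime_hours_per_year(nines)))
    # 99.9% works out to roughly 8.77 hours per year, or about ten minutes per week.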

What availability is suitable for your data depends, of course, on how you’re using it. If you’re running an e-commerce site, reservation service, or a site that requires real-time transactions, then availability can be expressed in real dollars for any unexpected downtime. If you are simply storing backups, or serving media for a website that doesn’t get a lot of traffic, you probably can live with the service being unavailable on occasion.

There are of course no guarantees for connectivity issues that affect availability that are out of the control of the storage provider, such as internet outages, bad connections, or power losses affecting your connection to the storage provider.

Guarantees of Availability

Your cloud service provider should both publish and guarantee availability. Much like an insurance policy, the guarantee should be in terms that compensate you if the provider falls short of the guaranteed availability metrics. Naturally, the better the guarantee and the greater the availability, the more reliable and expensive the service will be.

Be sure to read the service level agreement (SLA) closely to see how your vendor defines availability. One provider might count itself as having zero downtime as long as a single internet client can access even one service, while another might only consider itself available when all services are reachable from multiple internet service providers and countries.

Backblaze Durability and Availability

Backblaze offers 99.999999999 (eleven nines) annual durability and 99.9% availability for its cloud storage services.

The Bottom Line on Data Durability and Availability

The bottom line is that no number of nines can absolutely protect your data. Human error or acts of nature can always intervene to make the best plans to protect data go awry. You should decide how important the data is to you and whether you can afford to lose access to it temporarily or to lose it completely. That will guide what strategy or vendor you should use to protect that data.

Generally, having multiple copies of your data in different places, using reliable vendors as storage providers, and making sure that the infrastructure storing your data and your access to it will be supported (power, service payments, etc.) will go a long way in ensuring that your data will continue to be stable and there when you need it.

The post What’s the Diff: Durability vs Availability appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Vaults: Zettabyte-Scale Cloud Storage Architecture

Post Syndicated from Brian Beach original https://www.backblaze.com/blog/vault-cloud-storage-architecture/

A lot has changed in the four years since Brian Beach wrote a post announcing Backblaze Vaults, our software architecture for cloud data storage. Just looking at how the major statistics have changed, we now have over 100,000 hard drives in our data centers instead of the 41,000 mentioned in the original post. We have three data centers (soon four) instead of one data center. We’re approaching one exabyte of data stored for our customers (almost seven times the 150 petabytes back then), and we’ve recovered over 41 billion files for our customers, up from the 10 billion in the 2015 post.

In the original post, we discussed having durability of seven nines. Shortly thereafter, it was upped to eight nines. In July of 2018, we took a deep dive into the calculation and found our durability closer to eleven nines (and went into detail on the calculations used to arrive at that number). And, as followers of our Hard Drive Stats reports will be interested in knowing, we’ve just started using our first 16 TB drives, which are twice the size of the biggest drives we used back at the time of this post — then a whopping eight TB.

We’ve updated the details here and there in the text from the original post that was published on our blog on March 11, 2015. We’ve left the original 135 comments intact, although some of them might be non sequiturs after the changes to the post. We trust that you will be able to sort out the old from the new and make sense of what’s changed. If not, please add a comment and we’ll be happy to address your questions.

— Editor

Storage Vaults form the core of Backblaze’s cloud services. Backblaze Vaults are not only incredibly durable, scalable, and performant, but they dramatically improve availability and operability, while still being incredibly cost-efficient at storing data. Back in 2009, we shared the design of the original Storage Pod hardware we developed; here we’ll share the architecture and approach of the cloud storage software that makes up a Backblaze Vault.

Backblaze Vault Architecture for Cloud Storage

The Vault design follows the overriding design principle that Backblaze has always followed: keep it simple. As with the Storage Pods themselves, the new Vault storage software relies on tried and true technologies used in a straightforward way to build a simple, reliable, and inexpensive system.

A Backblaze Vault is the combination of the Backblaze Vault cloud storage software and the Backblaze Storage Pod hardware.

Putting The Intelligence in the Software

Another design principle for Backblaze is to anticipate that all hardware will fail and build intelligence into our cloud storage management software so that customer data is protected from hardware failure. The original Storage Pod systems provided good protection for data and Vaults continue that tradition while adding another layer of protection. In addition to leveraging our low-cost Storage Pods, Vaults take advantage of the cost advantage of consumer-grade hard drives and cleanly handle their common failure modes.

Distributing Data Across 20 Storage Pods

A Backblaze Vault is comprised of 20 Storage Pods, with the data evenly spread across all 20 pods. Each Storage Pod in a given vault has the same number of drives, and the drives are all the same size.

Drives in the same drive position in each of the 20 Storage Pods are grouped together into a storage unit we call a tome. Each file is stored in one tome and is spread out across the tome for reliability and availability.

20 hard drives create one tome that shares parts of a file.

Every file uploaded to a Vault is divided into pieces before being stored. Each of those pieces is called a shard. Parity shards are computed to add redundancy, so that a file can be fetched from a vault even if some of the pieces are not available.

Each file is stored as 20 shards: 17 data shards and three parity shards. Because those shards are distributed across 20 Storage Pods, the Vault is resilient to the failure of a Storage Pod.

Files can be written to the Vault when one pod is down and still have two parity shards to protect the data. Even in the extreme and unlikely case where three Storage Pods in a Vault lose power, the files in the vault are still available because they can be reconstructed from any of the 17 pods that are available.
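As a rough sketch of the read and write rules just described (this is an illustration, not Backblaze’s actual Vault software), the logic boils down to counting shards:

```python
DATA_SHARDS = 17
PARITY_SHARDS = 3
TOTAL_SHARDS = DATA_SHARDS + PARITY_SHARDS   # 20 shards, one per Storage Pod

def can_read(available_shards: int) -> bool:
    # Any 17 of the 20 shards are enough to reconstruct a file.
    return available_shards >= DATA_SHARDS

def can_write(available_pods: int, min_parity: int = 2) -> bool:
    # Assumed policy from the text: accept new files only while at least
    # two parity shards can still be written alongside the 17 data shards.
    return available_pods >= DATA_SHARDS + min_parity

print(can_read(17))    # True  -- even with three pods down, files are readable
print(can_write(19))   # True  -- one pod down, writes continue with two parity shards
print(can_write(18))   # False -- under this policy, new writes would wait
```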

Storing Shards

Each of the drives in a Vault has a standard Linux file system, ext4, on it. This is where the shards are stored. There are fancier file systems out there, but we don’t need them for Vaults. All that is needed is a way to write files to disk and read them back. Ext4 is good at handling power failure on a single drive cleanly without losing any files. It’s also good at storing lots of files on a single drive and providing efficient access to them.

Compared to a conventional RAID, we have swapped the layers here by putting the file systems under the replication. Usually, RAID puts the file system on top of the replication, which means that a file system corruption can lose data. With the file system below the replication, a Vault can recover from a file system corruption because a single corrupt file system can lose at most one shard of each file.

Creating Flexible and Optimized Reed-Solomon Erasure Coding

Just like RAID implementations, the Vault software uses Reed-Solomon erasure coding to create the parity shards. But, unlike Linux software RAID, which offers just one or two parity blocks, our Vault software allows for an arbitrary mix of data and parity. We are currently using 17 data shards plus three parity shards, but this could be changed on new vaults in the future with a simple configuration update.

Vault Row of Storage Pods

For Backblaze Vaults, we threw out the Linux RAID software we had been using and wrote a Reed-Solomon implementation from scratch, which we wrote about in Backblaze Open Sources Reed-Solomon Erasure Coding Source Code. It was exciting to be able to use our group theory and matrix algebra from college.

The beauty of Reed-Solomon is that we can then re-create the original file from any 17 of the shards. If one of the original data shards is unavailable, it can be re-computed from the other 16 original shards, plus one of the parity shards. Even if three of the original data shards are not available, they can be re-created from the other 17 data and parity shards. Matrix algebra is awesome!

Handling Drive Failures

The reason for distributing the data across multiple Storage Pods and using erasure coding to compute parity is to keep the data safe and available. How are different failures handled?

If a disk drive just up and dies, refusing to read or write any data, the Vault will continue to work. Data can be written to the other 19 drives in the tome, because the policy setting allows files to be written as long as there are two parity shards. All of the files that were on the dead drive are still available and can be read from the other 19 drives in the tome.

Building a Backblaze Vault Storage Pod

When a dead drive is replaced, the Vault software will automatically populate the new drive with the shards that should be there; they can be recomputed from the contents of the other 19 drives.

A Vault can lose up to three drives in the same tome at the same moment without losing any data, and the contents of the drives will be re-created when the drives are replaced.

Handling Data Corruption

Disk drives try hard to correctly return the data stored on them, but once in a while they return the wrong data, or are just unable to read a given sector.

Every shard stored in a Vault has a checksum, so that the software can tell if it has been corrupted. When that happens, the bad shard is recomputed from the other shards and then re-written to disk. Similarly, if a shard just can’t be read from a drive, it is recomputed and re-written.
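A simplified sketch of that detect-and-repair loop is below. The checksum algorithm, the file layout, and the rebuild_from_peers callback are all illustrative assumptions, not details of Backblaze’s implementation:

```python
import hashlib

def shard_checksum(data: bytes) -> str:
    # SHA-1 here is purely for illustration; any strong checksum works.
    return hashlib.sha1(data).hexdigest()

def read_shard(path: str, expected_checksum: str, rebuild_from_peers):
    """Return the shard's bytes, repairing the on-disk copy if it is bad."""
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        data = None   # unreadable sector, missing file, etc.

    if data is None or shard_checksum(data) != expected_checksum:
        # Recompute the shard from the other shards in the tome
        # (hypothetical callback), then write the repaired copy back.
        data = rebuild_from_peers()
        with open(path, "wb") as f:
            f.write(data)
    return data
```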

Conventional RAID can reconstruct a drive that dies, but does not deal well with corrupted data because it doesn’t checksum the data.

Scaling Horizontally

Each vault is assigned a number. We carefully designed the numbering scheme to allow for a lot of vaults to be deployed, and designed the management software to handle scaling up to that level in the Backblaze data centers.

The overall design scales very well because file uploads (and downloads) go straight to a vault, without having to go through a central point that could become a bottleneck.

There is an authority server that assigns incoming files to specific Vaults. Once that assignment has been made, the client then uploads data directly to the Vault. As the data center scales out and adds more Vaults, the capacity to handle incoming traffic keeps going up. This is horizontal scaling at its best.

We could deploy a new data center with 10,000 Vaults holding 16TB drives and it could accept uploads fast enough to reach its full capacity of 160 exabytes in about two months!

Backblaze Vault Benefits

The Backblaze Vault architecture has six benefits:

1. Extremely Durable

The Vault architecture is designed for 99.999999% (eight nines) annual durability (now 11 nines — Editor). At cloud-scale, you have to assume hard drives die on a regular basis, and we replace about 10 drives every day. We have published a variety of articles sharing our hard drive failure rates.

The beauty with Vaults is that not only does the software protect against hard drive failures, it also protects against the loss of entire Storage Pods or even entire racks. A single Vault can have three Storage Pods — a full 180 hard drives — die at the exact same moment without a single byte of data being lost or even becoming unavailable.
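For intuition only, here is a toy durability model in Python. It is a plain binomial calculation, not the Poisson-based analysis Backblaze published, and the 0.5% annual failure rate and two-week rebuild window are illustrative assumptions:

```python
from math import comb, log10

def prob_file_loss(annual_afr=0.005, rebuild_days=14, shards=20, parity=3):
    """Toy model: chance that more than `parity` of a file's 20 shards
    fail within a single rebuild window."""
    # Chance one drive fails during the window, assuming failures are
    # spread evenly over the year (a deliberate simplification).
    p = annual_afr * rebuild_days / 365
    return sum(comb(shards, k) * p**k * (1 - p)**(shards - k)
               for k in range(parity + 1, shards + 1))

loss = prob_file_loss()
print(f"Chance of losing a file in one window: {loss:.2e}")
print(f"Roughly {int(-log10(loss))} nines of durability for that window")
```

Even with these deliberately rough inputs, losing four or more shards of the same file inside one window is vanishingly unlikely, which is the intuition behind the durability figures quoted above.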

2. Infinitely Scalable

A Backblaze Vault is comprised of 20 Storage Pods, each with 60 disk drives, for a total of 1200 drives. Depending on the size of the hard drive, each vault will hold:

12TB hard drives => 12.1 petabytes/vault (Deploying today.)
14TB hard drives => 14.2 petabytes/vault (Deploying today.)
16TB hard drives => 16.2 petabytes/vault (Small-scale testing.)
18TB hard drives => 18.2 petabytes/vault (Announced by WD & Toshiba)
20TB hard drives => 20.2 petabytes/vault (Announced by Seagate)

Backblaze Data Center

At our current growth rate, Backblaze deploys one to three Vaults each month. As the growth rate increases, the deployment rate will also increase. We can incrementally add more storage by adding more and more Vaults. Without changing a line of code, the current implementation supports deploying 10,000 Vaults per location. That’s 160 exabytes of data in each location. The implementation also supports up to 1,000 locations, which enables storing a total of 160 zettabytes! (Also known as 160,000,000,000,000 GB.)
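The capacity arithmetic behind those numbers is easy to check; here is a quick back-of-the-envelope calculation using the per-vault capacities listed above:

```python
PB_PER_VAULT = 16.2            # a vault of 16 TB drives, after parity overhead
VAULTS_PER_LOCATION = 10_000
LOCATIONS = 1_000

eb_per_location = PB_PER_VAULT * VAULTS_PER_LOCATION / 1_000   # PB -> EB
zb_total = eb_per_location * LOCATIONS / 1_000                 # EB -> ZB

print(f"{eb_per_location:.0f} EB per location")      # ~162 EB, i.e. roughly 160 EB
print(f"{zb_total:.0f} ZB across 1,000 locations")   # ~162 ZB, roughly 160 ZB
```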

3. Always Available

Data backups have always been highly available: if a Storage Pod was in maintenance, the Backblaze online backup application would contact another Storage Pod to store data. Previously, however, if a Storage Pod was unavailable, some restores would pause. For large restores this was not an issue since the software would simply skip the Storage Pod that was unavailable, prepare the rest of the restore, and come back later. However, for individual file restores and remote access via the Backblaze iPhone and Android apps, it became increasingly important to have all data be highly available at all times.

The Backblaze Vault architecture enables both data backups and restores to be highly available.

With the Vault arrangement of 17 data shards plus three parity shards for each file, all of the data is available as long as 17 of the 20 Storage Pods in the Vault are available. This keeps the data available while allowing for normal maintenance and rare expected failures.

4. Highly Performant

The original Backblaze Storage Pods could individually accept 950 Mbps (megabits per second) of data for storage.

The new Vault pods have more overhead, because they must break each file into pieces, distribute the pieces across the local network to the other Storage Pods in the vault, and then write them to disk. In spite of this extra overhead, the Vault is able to achieve 1,000 Mbps of data arriving at each of the 20 pods.

Backblaze Vault Networking

This capacity required a new type of Storage Pod that could handle this volume. The net of this: a single Vault can accept a whopping 20 Gbps of data.

Because there is no central bottleneck, adding more Vaults linearly adds more bandwidth.

5. Operationally Easier

When Backblaze launched in 2008 with a single Storage Pod, many of the operational analyses (e.g. how to balance load) could be done on a simple spreadsheet and manual tasks (e.g. swapping a hard drive) could be done by a single person. As Backblaze grew to nearly 1,000 Storage Pods and over 40,000 hard drives, the systems we developed to streamline and operationalize the cloud storage became more and more advanced. However, because our system relied on Linux RAID, there were certain things we simply could not control.

With the new Vault software, we have direct access to all of the drives and can monitor their individual performance and any indications of upcoming failure. And, when those indications say that maintenance is needed, we can shut down one of the pods in the Vault without interrupting any service.

6. Astoundingly Cost Efficient

Even with all of these wonderful benefits that Backblaze Vaults provide, if they raised costs significantly, it would be nearly impossible for us to deploy them since we are committed to keeping our online backup service affordable for completely unlimited data. However, the Vault architecture is nearly cost neutral while providing all these benefits.

Backblaze Vault Cloud Storage

When we were running on Linux RAID, we used RAID6 over 15 drives: 13 data drives plus two parity. That’s 15.4% storage overhead for parity.

With Backblaze Vaults, we wanted to be able to do maintenance on one pod in a vault and still have it be fully available, both for reading and writing. And, for safety, we weren’t willing to have fewer than two parity shards for every file uploaded. Using 17 data plus three parity drives raises the storage overhead just a little bit, to 17.6%, but still gives us two parity drives even in the infrequent times when one of the pods is in maintenance. In the normal case when all 20 pods in the Vault are running, we have three parity drives, which adds even more reliability.
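Those overhead percentages are just the ratio of parity drives to data drives:

```python
def parity_overhead(data_drives: int, parity_drives: int) -> float:
    """Extra storage used for parity, relative to usable data capacity."""
    return parity_drives / data_drives

print(f"RAID6, 13 data + 2 parity: {parity_overhead(13, 2):.1%}")   # 15.4%
print(f"Vault, 17 data + 3 parity: {parity_overhead(17, 3):.1%}")   # 17.6%
```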

Summary

Backblaze’s cloud storage Vaults deliver 99.999999% (eight nines) annual durability (now 11 nines — Editor), horizontal scalability, and 20 Gbps of per-Vault performance, while being operationally efficient and extremely cost effective. Driven from the same mindset that we brought to the storage market with Backblaze Storage Pods, Backblaze Vaults continue our singular focus of building the most cost-efficient cloud storage available anywhere.

•  •  •

Note: This post was updated from the original version posted on March 11, 2015.

The post Backblaze Vaults: Zettabyte-Scale Cloud Storage Architecture appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Out of Stock: How to Survive the LTO-8 Tape Shortage

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/how-to-survive-the-lto-8-tape-shortage/

Not Available - LTO-8 Tapes

Eighteen months ago, the few remaining LTO tape drive manufacturers announced the availability of LTO-8, the latest generation of the Linear Tape-Open storage technology. Yet today, almost no one is actually writing data to LTO-8 tapes. It’s not that people aren’t interested in upgrading to the denser LTO-8 format that offers 12 TB per cartridge, twice LTO-7’s six TB capacity. It’s simply that the two remaining LTO tape manufacturers are locked in a patent infringement battle. And that means LTO-8 tapes are off the market indefinitely.

The pain of this delay is most acute for media professionals who are always quick to adopt higher capacity storage media for video and audio files that are notorious storage hogs. As cameras get more sophisticated, capturing in higher resolutions and higher frame rates, the storage capacity required per hour of content shoots through the roof. For example, one hour of ProRes UltraHD requires 148.72 GB of storage capacity, roughly four times the 37.35 GB required for one hour of ProRes HD-1080. Meanwhile, falling camera prices are encouraging production teams to use more cameras per shoot, further increasing the capacity requirements.

Since its founding, the LTO Consortium has prepared for storage growth by setting a goal of doubling tape density with each LTO generation and committing to release a new generation every two to three years. While this lofty goal might seem admirable to the LTO Consortium, it puts customers with earlier generations of LTO systems in a difficult position. New generation LTO drives at best can only read tapes from the two previous generations. So once a new generation is announced, the clock begins ticking on data stored on deprecated generations of tapes. Until you migrate the data to a newer generation, you’re stuck maintaining older tape drive hardware that may no longer be supported by manufacturers.

How Manufacturer Lawsuits Led to the LTO-8 Shortage

How the industry and the market arrived in this painful place is a tangled tale. The lawsuit and counter-lawsuit that led to the LTO-8 shortage is a patent infringement dispute between Fujifilm and Sony, the only two remaining manufacturers of LTO tape media. The timeline is complicated, starting in 2016 with Fujifilm suing Sony, then Sony countersuing Fujifilm. By March 2019, US import bans on LTO products from both manufacturers were in place.

In the middle of these legal battles, LTO-8 drive manufacturers announced product availability in late 2017. But what about the LTO-8 tapes? Fujifilm says it is not currently manufacturing LTO-8 tapes and has never sold them. And Sony says its US imports of LTO-8 have been stopped and won’t comment on when shipments will resume while the dispute is ongoing. So no LTO-8 for you!

LTO-8 Ultrium Tape Cost

Note that having only two LTO tape manufacturers is a root cause of this shortage. If there were still six LTO tape manufacturers like there were when LTO was launched in 2000, a dispute between two vendors might not have left the market in the lurch.

Weighing Your Options — LTO-8 Shortage Survival Strategies

If you’re currently using LTO for backup or archive, you have a few options for weathering the LTO-8 shortage.

The first option is to keep using your current LTO generation and wait until the disputes settle out completely before upgrading to LTO-8. The downside here is you’ll have to buy more and more LTO-7 or LTO-6 tapes that don’t offer the capacity you probably need if you’re storing higher resolution video or other capacity-hogging formats. And while you’ll be spending more on tapes than if you were able to use the higher capacity newer generation tapes, you’ll also know that anything you write to old-gen LTO tapes will have to be migrated sooner than planned. LTO’s short two to three year generation cycle doesn’t leave time for legal battles, and remember, manufacturers guarantee at most two generations of backward compatibility.

A second option is to go ahead and buy an LTO-8 library and use LTO-7 tapes that have been specially formatted for higher capacity, a format called LTO Type M (M8). When initialized as Type M media, an LTO-7 cartridge can hold nine TB of data instead of the standard six TB it holds when initialized as Type A. That puts it halfway to the 12 TB capacity of an LTO-8 tape. However, this extra capacity comes with several caveats:

  • Only new, unused LTO-7 cartridges can be initialized as Type M.
  • Once initialized as Type M, they cannot be changed back to LTO-7 Type A.
  • Only LTO-8 drives in libraries can read and write to Type M, not standalone drives.
  • Future LTO generations — LTO-9, LTO-10, etc. — will not be able to read LTO-7 Type M.

So if you go with LTO-7 Type M for greater capacity, realize it’s still LTO-7, not LTO-8, and when you move to LTO-9, you won’t be able to read those tapes.

LTO Cartridge Capacity (TB) vs. LTO Generation Chart

Managing Tape is Complicated

If your brain hurts reading this as much as mine does writing this, it’s because managing tape is complicated. The devil is in the details, and it’s hard to keep them all straight. When you have years or even decades of content stored on LTO tape, you have to keep track of which content is on which generation of LTO, ensure your facility has the drive hardware available to read them, and hope that nothing goes wrong with the tape media, the tape drives, or the libraries.

In general, new drives can read two generations back, but there are exceptions. For example, LTO-8 can’t read LTO-6 because the standard changed from GMR (Giant Magnetoresistance) heads to TMR (Tunnel Magnetoresistance) heads. The new TMR heads can write data more densely, which is what drives the huge increase in capacity. But that means you’ll want to keep an LTO-7 drive available to read LTO-5 and LTO-6 tapes.

Beyond these considerations for managing the tape storage long-term, there are the day-to-day hassles. If you’ve ever been personally responsible for managing backup and archive for your facility, you’ll know that it’s a labor-intensive, never-ending chore that takes time from your real job. And if your setup doesn’t allow users to retrieve data themselves, you’re effectively on-call to pull data off the tapes whenever it’s needed.

A Third Option — Migrate from LTO to Cloud Storage

If neither of these responses to the LTO-8 crisis sounds appealing, there is an alternative: cloud storage. Cloud storage removes the complexity of tape while reducing costs. How much can you save in media and labor costs? We’ve calculated it for you in LTO Versus Cloud Storage Costs — the Math Revealed. And cloud storage makes it easy to give users access to files, either through direct access to the cloud bucket or through one of the integrated applications offered by our technology partners.

At Backblaze, we have a growing number of customers who shifted from tape to our B2 Cloud Storage and never looked back. Customers such as Austin City Limits, who preserved decades of historic concert footage by moving to B2; Fellowship Church, who eliminated Backup Thursdays and freed up staff for other tasks; and American Public Television, who adopted B2 in order to move away from tape distribution to its subscribers. What they’ve found is that B2 made operations simpler and their data more accessible without breaking their budget.

Another consideration: once you migrate your data to B2 cloud storage, you’ll never have to migrate again when LTO generations change or when the media ages. Backblaze takes care of making sure your data is safe and accessible on object storage, and migrates your data to newer disk technologies over time with no disruption to you or your users.

In the end, the problem with tape isn’t the media, it’s the complexity of managing it. It’s a well-known maxim that the time you spend managing how you do your work takes time away from what you do. Having to deal with multiple generations of both tape and tape drives is a good example of an overly complex system. With B2 Cloud Storage, you can get all the economical advantages of tape as well as the disaster recovery advantages of your data being stored away from your facility, without the complexity and the hassles.

With no end in sight to this LTO-8 shortage, now is a good time to make the move from LTO to B2. If you’re ready to start your move to always available cloud storage, Backblaze and our partners are ready to help you.

Migrate or Die, a Webinar Series on Migrating Assets and Archives to the Cloud

If you’re facing challenges managing LTO and contemplating a move to the cloud, don’t miss Migrate or Die, our webinar series on migrating assets and archives to the cloud.

Migrate or Die: Evading Extinction -- Migrating Legacy Archives

The post Out of Stock: How to Survive the LTO-8 Tape Shortage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

An Introduction to NAS for Photo & Video Production

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/an-introduction-to-nas-for-photo-video-production/

NAS for Photo and Video Production

In this post:

  1. What is a NAS?
  2. NAS capabilities
  3. Three examples of common media workflows using a NAS
  4. Top five benefits of using NAS for photography and videography

The camera might be firmly entrenched at the top of the list of essential equipment for photographers and videographers, but a strong contender for next on the list has to be network-attached storage (NAS).

A big reason for the popularity of NAS is that it’s one device that can do so many things that are needed in a media management workflow. Most importantly, NAS systems offer storage larger than any single hard drive, let you centralize photo storage, protect your files with backups and data storage virtualization (e.g. RAID), allow you to access files from anywhere, integrate with many media editing apps, and let you securely share media with coworkers and clients. And that’s just the beginning of the wide range of capabilities of NAS. It’s not surprising that NAS has become a standard and powerful data management hub serving the media professional.

This post is an overview of how NAS can fit into the professional or serious amateur photo and video workflow and some of the benefits you can receive from adding a NAS.

Essential NAS Capabilities

Synology NAS
Synology NAS

Storage Flexibility

Firstly, NAS is a data storage device. It connects to your computer, office, and the internet, and supports loading and retrieving data from multiple computers in both local and remote locations.

The number of drives available for data storage is determined by how many bays the NAS has. As larger and faster disk drives become available, a NAS can be upgraded with larger drives to increase capacity, or multiple NAS can be used together. Solid-state drives (SSDs) can be used in a NAS for primary storage or as a cache to speed up data access.

Data Protection and Redundancy

NAS can be used for either primary or secondary local data storage. Whichever it is, it’s important to have an off-site backup of that data as well, to provide redundancy in case of accident, or in the event of a hardware or software problem. That off-site backup can be drives stored in another location, or more commonly these days, the cloud. The most popular NAS systems typically offer built-in tools to automatically sync files on your NAS to offsite cloud storage, and many also have app stores with backup and other types of applications.

Data is typically stored on the NAS using some form of error checking and virtual storage system, typically RAID 5 or RAID 6, to keep your data available even if one of the internal hard drives fails. However, if the NAS is the only backup you have and a drive fails, it can take quite a while to recover that data from a RAID device, and the delay only gets longer as drives increase in size. Avoiding this delay is the motivation for many to keep a redundant copy in the cloud so that it’s possible to access the files immediately, even before the RAID has completed its recovery.
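To see why that delay grows with drive size, here’s a rough back-of-the-envelope estimate (the 100 MB/s rebuild rate is an assumption; real rebuild speeds vary with the NAS model, RAID level, and workload):

```python
def rebuild_hours(drive_tb: float, rebuild_mb_per_s: float = 100.0) -> float:
    """Rough time to rebuild one failed drive at a sustained rebuild rate."""
    return drive_tb * 1_000_000 / rebuild_mb_per_s / 3600   # TB -> MB, seconds -> hours

for size in (4, 8, 12, 16):
    print(f"{size} TB drive: roughly {rebuild_hours(size):.0f} hours to rebuild")
```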

QNAP NAS
QNAP NAS

If your primary data files are on an editing workstation, the NAS can be your local backup to make sure you keep your originals safe from accidental changes or loss. In some common editing workflows, the raw files are stored on the NAS and lower-resolution, smaller proxies are used for offline editing on the workstation — also called non-destructive or non-linear editing. Once edits are completed, the changes are written back to the NAS. Some applications, including Lightroom, maintain a catalog of files that is separate from the working files and is stored on the editing workstation. This catalog should be routinely backed up locally and remotely to protect it, as well.

The data on the NAS also can be protected with automated data backups or snapshots that protect data in case of loss, or to retrieve an earlier version of a file. A particularly effective plan is to schedule off-hours backups to the cloud to complete the off-site component of the recommended 3-2-1 backup strategy.

Automatic Backup Locally and to the Cloud

Data Accessibility and Sharing

Data can be loaded onto the NAS directly through a USB or SD card slot, if available, or through any device available via the local network or internet. Another possibility is to have a directory/folder on a local computer that automatically syncs any files dropped there to the NAS.

NAS to the cloud

Once on the NAS, files can be shared with coworkers, clients, family, and friends. The NAS can be accessed via the internet from anywhere, so you can easily share work in progress or final media presentations. Access can be configured by file, directory/folder, group, or by settings in the particular application you are using. NAS can be set up with a different user and permission structure than your computer(s), making it easy to grant access to particular folders, and keeping the security separate from however local computers are set up. With proper credentials, a wide range of mobile apps or a web browser can be used to access the data on the NAS.

Media Editing Integration

It’s common for those using applications such as Adobe Lightroom to keep the original media on the NAS and work on a proxy on the local computer. This speeds up the workflow and protects the original media files. Similarly, for video, some NAS devices are fast enough to support NLE (non-linear editing), using the NAS for source and production media while allowing edits without changing the source files. Popular apps that support NLE include Adobe Premiere, Apple Final Cut Pro X, and Avid Media Composer.

Flexibility and Apps

NAS from Synology, QNAP, FreeNAS/TrueNAS, Morro Bay, and others offer a wide range of apps that extend the functionality of the device. You can easily turn a NAS into a media server that streams audio and video content to TVs and other devices on your network. You can set up a NAS to automatically perform backups of your computers, or configure that NAS as a file server, a web server, or even a telephone system. Some home offices and small businesses have even completely replaced office servers with NAS.

Examples of Common Media Workflows Using a NAS

The following are three examples of how a NAS device can fit into a media production workflow.

Example One — A Home Studio

NAS is a great choice for a home studio that needs additional data storage, file sharing, cloud backup, and secure remote access. NAS is a better choice than directly attached storage because it can have security separate from the local computers and is accessible both locally and via the internet even when individual workstations might be turned off or disconnected.

NAS can provide centralized backup using common backup apps, including Time Machine and ChronoSync on Mac, or Backup and Restore and File History on Windows.

To back up to the cloud, major NAS providers, including Synology, QNAP, Morro Data, and FreeNAS/TrueNAS include apps that can automatically back up NAS data to B2 or other destinations on the schedule of your choice.

Example Two — A Distributed Media Company with Remote Staff

The connectivity of NAS makes it an ideal hub for a distributed business. It provides a central location for files that can be reliably protected with RAID, backups, and access security, yet available to any authorized staff person no matter where they are located. Professional presentations are easy to do with a range of apps and integrations available for NAS. Clients can be given controlled access to review drafts and final proofs, as well.

Example Three — Using NAS with Photo/Video Editing Applications

Many media pros have turned to NAS for storing their ever-growing photos and video data files. Frequently, these users will optimize their workstation for the editing or cataloging application of their choice using fast central and graphics processors, SSD drives, and large amounts of RAM, and offload the data files to the NAS.

Adobe Lightroom
Adobe Lightroom

While Adobe Lightroom requires that its catalog be kept on a local or attached drive, the working files can be stored elsewhere. Some users have adopted the digital negative (DNG) for working files, which avoids having to manage sidecar (XMP) files. XMP files are stored alongside the RAW files and record edits for file formats that don’t support saving that information natively, such as proprietary camera RAW files, including CRW, CR2, NEF, ORF, and so on.

With the right software and hardware, NAS also can play well in a shared video editing environment, enabling centralized storage of data with controlled access, file security, and supporting other functions such as video transcoding.

Avid Media Composer
Avid Media Composer

Top 5 Benefits of Using NAS for Photography and Videography

To recap, here are the top five benefits of adding NAS to your media workflow.

  1. Flexible and expandable storage — fast, scalable, and grows with your needs
  2. Data protection — provides local file redundancy as well as an automated backup gateway to the cloud
  3. Data accessibility and sharing — functions as a central media hub with internet connectivity and access control
  4. Integration with media editing tools — works with editing and cataloging apps for photo and video
  5. Flexibility and apps — NAS can perform many of the tasks once reserved for servers, with a wide range of apps to extend its capabilities

To learn more about what NAS can do for you, take a look at the posts on our blog on specific NAS devices from Synology, QNAP, FreeNAS/TrueNAS, and Morro Data, and about how to use NAS for photo and video storage. You’ll also find more information about how to connect NAS to the cloud. You can quickly find all posts on the NAS topic on our blog by following the NAS tag.

Morro Data CacheDrive
Morro Data CacheDrive

Do you have experience using NAS in a photo or video workflow? We’d love to hear about your experiences in the comments.

•  •  •

Note: This post originally appeared on Lensrentals.com on 10/25/18.

The post An Introduction to NAS for Photo & Video Production appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Profound Benefits of Cloud Collaboration for Business Users

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/cloud-collaboration-for-business-users/

The Profound Benefits of Cloud Collaboration for Business Users

Apple’s annual WWDC is highlighting high-end desktop computing, but it’s laptop computers and the cloud that are driving a new wave of business and creative collaboration

WWDC, Apple’s annual megaconference for developers, kicks off this week, and Backblaze has team members on the ground to bring home insights and developments. Yet while everyone is drooling over the powerful new Mac Pro, we know that the majority of business users use a portable computer as their primary system for business and creative use.

The Rise of the Mobile, Always On, Portable Workstation

Analysts confirm this trend towards the use of portable computers and the cloud. IDC’s 2019 Worldwide Quarterly Personal Computing Device Tracker report shows that desktop form-factor systems comprise only 22.6% of new systems, while laptops and portables are chosen almost twice as often, at 42.4%.

After all, these systems are extremely popular with users and the DevOps and IT teams that support them. Small and self-contained, with massive compute power, modern laptops have fast SSD drives and always-connected Wi-Fi, helping users be productive anywhere: in the field, on business trips, and at home. Surprisingly, companies today can deploy massive fleets of these notebooks with extremely lean staff. At the inaugural MacDevOps conference a few years ago Google’s team shared that they managed 65,000 Macs with a team of seven admins!

Laptop Backup is More Important Than Ever

With the trend towards leaner IT staffs, and the dangers of computers in the field being lost, dropped, or damaged, having a reliable backup system that just works is critical. Despite the proliferation of teams using shared cloud documents and email, all of the other files on your laptop that you’re working on — the massive presentation due next week or the project that’s not quite ready to share on Google Drive — have no protection without backup, which is of course why Backblaze exists!

Cloud as a Shared Business Content Hub is Changing Everything

When your company is comfortably backing up users’ files to the cloud, the next natural step is to adopt cloud-based storage like Backblaze B2 for your teams. With over 750 petabytes of customer data under management, Backblaze has worked with businesses of every size as they adopt cloud storage. Each customer and business does so for different reasons.

In the past, a business department typically would get a share of a company’s NAS server and was asked to keep all of the department’s shared documents there. But outside the corporate firewall, it turns out these systems are hard to access remotely from the road. They require VPNs and a constant network connection to mount a corporate shared drive via SMB or NFS. And, of course, running out of space and storing large files were ever-present problems.

Sharing Business Content in the Cloud Can be Transformational for Businesses

When considering a move to cloud-based storage for your team, some benefits seem obvious, but others are more profound and show that cloud storage is emerging as a powerful, organizing platform for team collaboration.

Shifting to cloud storage delivers these well-known benefits:

  • Pay only for storage you actually need
  • Grow as large and as quickly as you might need
  • Service, management, and upgrades are built in to the service
  • Pay for service as you use it out of operating expenses vs. onerous capital expenses

But shifting to shared, cloud storage yields even more profound benefits:

Your Business Content is Easier to Organize and Manage: When your team’s content is in one place, it’s easier to organize and manage, and users can finally let go of stashing content all over the organization or leaving it on their laptops. All of your tools to mine and uncover your business’s content work more efficiently, and your users do as well.

You Get Simple Workflow Management Tools for Free: With cloud storage, storage can be shaped to fit your business processes much more easily, and on the fly. If you ever need to set up separate storage for teams of users, or define read/write rules for specific buckets of content, it’s easy to configure with cloud storage.

You Can Replace External File-Sharing Tools: Since most email services balk at sending large files, it’s common to use a file sharing service to share big files with other users on your team or outside your organization. Typically this means having to download a massive file, re-upload it to a file-sharing service, and publish that file-sharing link. When your files are already in the cloud, sharing one is as simple as retrieving its URL.

In fact, this is exactly how Backblaze organizes and serves PDF content on our website like customer case studies. When you click on a PDF link on the Backblaze website, it’s served directly from one of these links from a B2 bucket!

You Get Instant, Simple Policy Control over Your Business or Shared Content: B2 offers simple-to-use tools to keep every version of a file as it’s created, keep just the most recent version, or choose how many versions you require. Want to have your shared content links time out after a day or so? This and more is all easily done from your B2 account page:

B2 Lifecycle Settings
An example of setting up shared link rules for a time-sensitive download: The file is available for 3 days, then deleted after 10 days
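Under the hood, a rule like the one in the screenshot is a small lifecycle rule attached to the bucket. Here’s a sketch in Python of how that example (hide after three days, delete ten days later) could be expressed; the file name prefix is a hypothetical placeholder:

```python
# One lifecycle rule, roughly as it would appear in the bucket's settings.
# The prefix is a placeholder; the day counts match the caption above.
lifecycle_rule = {
    "fileNamePrefix": "shared-downloads/",   # apply only to files under this prefix
    "daysFromUploadingToHiding": 3,          # stop serving the file after 3 days
    "daysFromHidingToDeleting": 10,          # permanently delete it 10 days later
}
print(lifecycle_rule)
```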

You’re One Step Away from Sharing That Content Globally: As you can see, beyond individual file-sharing, cloud storage like Backblaze B2 can serve as your origin store for your entire website. With the emergence of content delivery networks (CDN), you’re now only a step away from sharing and serving your content globally.

To make this easier, Backblaze joined the Bandwidth Alliance, and offers no-cost egress from your content in Backblaze B2 to Cloudflare’s global content delivery network.

Customers that adopt this strategy can dramatically slash the cost of serving content to their users.

"The combination of Cloudflare and Backblaze B2 Cloud Storage saves Nodecraft almost 85% each month on the data storage and egress costs versus Amazon S3." - James Ross, Nodecraft Co-founder/CTO

Read the Nodecraft/Backblaze case study.

Get Sophisticated Content Discovery and Compliance Tools for Your Business Content: With more and more business content in cloud storage, finding the content you need quickly across millions of files, or surfacing content that needs special storage consideration (for GDPR or HIPAA compliance, for example) is critical.

Ideally, you could have your own private, customized search engine across all of your cloud content, and that’s exactly what a new class of solutions provide.

With Acembly or Aparavi on Backblaze, you can build content indexes and offer deep search across all of your content, and automatically apply policy rules for management and retention.

Where Are You in the Cloud Collaboration Trend?

The trend to mobile, always-on workers building and sharing ever more sophisticated content around cloud storage as a shared hub is only accelerating. Users love the freedom to create, collaborate and share content anywhere. Businesses love the benefits of having all of that content in an easily managed repository that makes their entire business more flexible and less expensive to operate.

So, while device manufacturers like Apple may announce exciting Pro-level workstations, the need for companies and teams to collaborate and be effective on the move is more important and compelling than ever before. The cloud is an essential element of that trend, and one that shouldn’t be underestimated.

•  •  •

Upcoming Free Webinars

Wednesday, June 5, 10am PT
Learn how Nodecraft saved 85% on their cloud storage bill with Backblaze B2 and Cloudflare.
Join the Backblaze/Nodecraft webinar.

Thursday, June 13, 10am PT
Want to learn more about turning content in Backblaze B2 into searchable content with powerful policy rules?
Join the Backblaze/Aparavi webinar.

The post The Profound Benefits of Cloud Collaboration for Business Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

These Aren’t Your Ordinary Data Centers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/these-arent-your-ordinary-data-centers/

Barcelona Supercomputing Center

Many of us would concede that buildings housing data centers are generally pretty ordinary places. They’re often drab and bunker-like with few or no windows, and located in office parks or in rural areas. You usually don’t see signs out front announcing what they are, and, if you’re not in information technology, you might be hard pressed to guess what goes on inside.

If you’re observant, you might notice cooling towers for air conditioning and signs of heavy electrical usage as clues to their purpose. For most people, though, data centers go by unnoticed and out of mind. Data center managers like it that way, because the data stored in and passing through these data centers is the life’s blood of business, research, finance, and our modern, digital-based lives.

That’s why the exceptions to low-key and meh data centers are noteworthy. These unusual centers stand out for their design, their location, what the building was previously used for, or perhaps how they approach energy usage or cooling.

Let’s take a look at a handful of data centers that certainly are outside of the norm.

The Underwater Data Center

Microsoft’s rationale for putting a data center underwater makes sense. Most people live near water, they say, and their submersible data center is quick to deploy, and can take advantage of hydrokinetic energy for power and natural cooling.

Project Natick has produced an experimental, shipping-container-size prototype designed to process data workloads on the seafloor near Scotland’s Orkney Islands. It’s part of a years-long research effort to investigate manufacturing and operating environmentally sustainable, prepackaged datacenter units that can be ordered to size, rapidly deployed, and left to operate independently on the seafloor for years.

Microsoft's Project Natick
Microsoft’s Project Natick at the launch site in the city of Stromness on Orkney Island, Scotland on Sunday May 27, 2018. (Photography by Scott Eklund/Red Box Pictures)
Natick Brest
Microsoft’s Project Natick in Brest, France

The Supercomputing Center in a Former Catholic Church

One might be forgiven for mistaking Torre Girona for any normal church, but this deconsecrated 20th century church currently houses the Barcelona Supercomputing Center, home of the MareNostrum supercomputer. Part of the Polytechnic University of Catalonia, this supercomputer (its name is Latin for our sea, the Roman name for the Mediterranean) is used for a range of research projects, from climate change to cancer research, biomedicine, weather forecasting, and fusion energy simulations.

Torre Girona. a former Catholic church in Barcelona
Torre Girona, a former Catholic church in Barcelona
The Barcelona Supercomputing Center, home of the MareNostrum supercomputer
The Barcelona Supercomputing Center, home of the MareNostrum supercomputer

The Under-a-Mountain Bond Supervillain Data Center

Most data centers don’t have the extreme protection or history of The Bahnhof Data Center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described Bond villain ambiance.

We previously wrote about this extraordinary data center in our post, The Challenges of Opening a Data Center — Part 1.

The Bahnhof Data Center under White Mountain in Stockholm, Sweden
The Bahnhof Data Center under White Mountain in Stockholm, Sweden

The Data Center That Can Survive a Category 5 Hurricane

Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand Category 5 hurricane winds.

The MI1 facility provides access for the Caribbean, South and Central America “to more than 148 countries worldwide,” and is the primary network exchange between Latin America and the U.S., according to Equinix. Any outage in this data center could potentially cripple businesses passing information between these locations.

The center was put to the test in 2017 when Hurricane Irma, a Category 5 hurricane in the Caribbean, made landfall in Florida as a Category 4 hurricane. The storm caused extensive damage in Miami-Dade County, but the Equinix center survived.

Equinix NAP of the Americas Data Center in Miami
Equinix NAP of the Americas Data Center in Miami

The Data Center Cooled by Glacier Water

Located on Norway’s west coast, the Lefdal Mine Datacenter is built 150 meters into a mountain in what was formerly an underground mine for excavating olivine, also known as the gemstone peridot, a green, high-density mineral used in steel production. The data center is powered exclusively by renewable energy produced locally, while being cooled by water from the second largest fjord in Norway, which is 565 meters deep and fed by the water from four glaciers. As it’s in a mine, the data center is located below sea level, eliminating the need for expensive high-capacity pumps to lift the fjord’s water to the cooling system’s heat exchangers, contributing to the center’s power efficiency.

The Lefdal Mine Data Center in Norway
The Lefdal Mine Datacenter in Norway

The World’s Largest Data Center

The Tahoe Reno 1 data center in The Citadel Campus in Northern Nevada, with 7.2 million square feet of data center space, is the world’s largest data center. It’s not only big, it’s powered by 100% renewable energy with up to 650 megawatts of power.

The Switch Core Campus in Nevada
The Switch Core Campus in Nevada
Tahoe Reno Switch Data Center
Tahoe Reno Switch Data Center

An Out of This World Data Center

If the cloud isn’t far enough above us to satisfy your data needs, Cloud Constellation Corporation plans to put your data into orbit. A constellation of eight low earth orbit satellites (LEO), called SpaceBelt, will offer up to five petabytes of space-based secure data storage and services and will use laser communication links between the satellites to transmit data between different locations on Earth.

CCC isn’t the only player talking about space-based data centers, but it is the only one so far with $100 million in funding to make their plan a reality.

Cloud Constellation's SpaceBelt
Cloud Constellation’s SpaceBelt

A Cloud Storage Company’s Modest Beginnings

OK, so our current data centers are not that unusual (with the possible exception of our now iconic Storage Pod design), but Backblaze wasn’t always the profitable and growing cloud services company that it is today. There was a time when Backblaze was just getting started, and before we had almost an exabyte of customer data storage, when we were figuring out how to make data storage work while keeping costs as low as possible for our customers.

The photo below is not exactly a data center, but it is the first data storage structure used by Backblaze to develop its storage infrastructure before going live with customer data. It was on the patio behind the Palo Alto apartment that Backblaze used for its first office.

Shed used for very early (pre-customer) data storage testing
Shed used for very early (pre-customer) data storage testing

The photos below (front and back) are of the very first data center cabinet that Backblaze filled with customer data. This was in 2009 in San Francisco, and just before we moved to a data center in Oakland where there was room to grow. Note the storage pod at the top of the cabinet. Yes, it’s made out of wood. (You have to start somewhere.)

Backblaze's first data storage cabinet to hold customer data (2009) (front)
Backblaze’s first data storage cabinet to hold customer data (2009) (front)
Backblaze's first data storage cabinet to hold customer data (2009) (back)
Backblaze’s first data storage cabinet to hold customer data (2009) (back)

Do You Know of Other Unusual Data Centers?

Do you know of another data center that should be on this list? Please tell us in the comments.

The post These Aren’t Your Ordinary Data Centers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze B2 Copy File Beta is Now Public

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/backblaze-b2-copy-file-beta-is-now-public/

B2 Copy File Beta

Since introducing B2 Cloud Storage nearly four years ago, we’ve been busy adding enhancements and new functionality to the service. We continually look for ways to make B2 more useful for our customers, be it through service level enhancements, partnerships with leading compute providers, or lowering our download price, already the industry’s lowest, to 1¢/GB. Today, we’re pleased to announce the beta release of our newest functionality: Copy File.

What You Can Do With B2 Copy File

This new capability enables you to create a new file (or new part of a large file) that is a copy of an existing file (or range of an existing file). You can either copy over the source file’s metadata or specify new metadata for the new file that is created. This all occurs without having to download or reupload any data.

This has been one of our most requested features, as it unlocks:

  • Rename/Re-organize. The new capabilities give customers the ability to reorganize their files without having to download and reupload. This is especially helpful when trying to mirror the contents of a file system to B2.
  • Synthetic Backup. With the ability to copy ranges of a file, users can now leverage B2 for synthetic backup, which means uploading a full backup once and then uploading only incremental changes (as opposed to reuploading the whole file with every change). This is particularly helpful for uses such as backing up VMs, where reuploading the entirety of the file every time it changes is inefficient.

Where to Learn More About B2 Copy File

The endpoint documentation can be found here:

b2_copy_file:  https://www.backblaze.com/b2/docs/b2_copy_file.html
b2_copy_part:  https://www.backblaze.com/b2/docs/b2_copy_part.html
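To give a feel for the new endpoint, here is a minimal Python sketch that calls it over the native HTTP API. The credentials, source file ID, and destination file name are placeholders; see the documentation linked above for optional parameters such as range and metadataDirective:

```python
import requests

# Step 1: authorize the account (placeholder credentials).
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
    auth=("APPLICATION_KEY_ID", "APPLICATION_KEY"),
).json()

# Step 2: copy an existing file to a new name, server side, with no
# download or re-upload. The source file's metadata is copied by default.
copy = requests.post(
    auth["apiUrl"] + "/b2api/v2/b2_copy_file",
    headers={"Authorization": auth["authorizationToken"]},
    json={
        "sourceFileId": "SOURCE_FILE_ID",                 # placeholder
        "fileName": "archive/2019/report-renamed.pdf",    # new file name
    },
).json()

print(copy.get("fileId"), copy.get("fileName"))
```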

More About the Beta Program

We’re introducing these endpoints as a beta so that developers can provide us feedback before the endpoints go into production. Specifically, this means that the APIs may evolve as a result of the feedback we get. We encourage you to give Copy File a try and, if you have any comments, you can email our B2 beta team at b2beta@backblaze.com. Thanks!

The post Backblaze B2 Copy File Beta is Now Public appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Connect Veeam to the B2 Cloud: Episode 4 — Using Morro Data CloudNAS

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/connect-veeam-to-the-b2-cloud-episode-4-using-morro-data-cloudnas/

Veeam backup to Backblaze B2 Episode 4 of Series

In the fourth post in our series on connecting Veeam with B2, we provide a guide on how to back up your VMs to Backblaze B2 using Veeam and Morro Data’s CloudNAS. In our previous posts, we covered how to connect Veeam to the B2 cloud using OpenDedupe, connect Veeam to the B2 cloud using Synology, and connect Veeam with B2 using StarWind VTL.

VM Backup to B2 Using Veeam Backup & Replication and Morro Data CloudNAS

We are glad to show how Veeam Backup & Replication can work with Morro Data CloudNAS to keep the more recent backups on premises for fast recovery while archiving all backups in B2 Cloud Storage. CloudNAS not only caches the more recent backup files, but also simplifies the management of B2 Cloud Storage with a network share or drive letter interface.

–Paul Tien, Founder & CEO, Morro Data

VM backup and recovery is a critical part of IT operations that supports business continuity. Traditionally, IT has deployed an array of purpose-built backup appliances and applications to protect against server, infrastructure, and security failures. As VMs continue to spread in production, development, and verification environments, the expanding VM backup repository has become a major challenge for system administrators.

Because the VM backup footprint is usually quite large, cloud storage is increasingly being deployed for VM backup. However, cloud storage does not achieve the same performance level as on-premises storage for recovery operations. For this reason, cloud storage has been used as a tiered repository behind on-premises storage.

diagram of Veeam backing up to B2 using Cloudflare and Morro Data CloudNAS

In this best practice guide, VM Backup to B2 Using Veeam Backup & Replication and Morro Data CloudNAS, we will show how Veeam Backup & Replication can work with Morro Data CloudNAS to keep the most recent backups on premises for fast recovery while archiving all backups in the retention window in Backblaze B2 cloud storage. CloudNAS caching not only provides a buffer for the most recent backup files, but also simplifies the management of on-premises storage and cloud storage as an integral backup repository.

Tell Us How You’re Backing Up Your VMs

If you’re backing up VMs to B2 using one of the solutions we’ve written about in this series, we’d like to hear from you in the comments about how it’s going.

View all posts in the Veeam series.

The post Connect Veeam to the B2 Cloud: Episode 4 — Using Morro Data CloudNAS appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.