All posts by Andy Klein

An Inside Look at Data Center Storage Integration: A Complex, Iterative, and Sustained Process

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/firmware-for-data-center-optimization/

Data Center with Backblaze Pod

How and Why Advanced Devices Go Through Evolution in the Field

By Jason Feist, Seagate Senior Director for Technology Strategy and Product Planning

One of the most powerful features in today’s hard drives is the ability to update the firmware of deployed hard drives. Firmware changes can be straightforward, such as changing a power setting, or as delicate as adjusting the height at which a read/write head flies above a spinning platter. By combining customer inputs, drive statistics, and a myriad of engineering talents, Seagate can use firmware updates to optimize the customer experience for the workload at hand.

In today’s guest post we are pleased to have Jason Feist, Senior Director for Technology Strategy and Product Planning at Seagate, describe how the Seagate ecosystem works.

 —Andy Klein

Storage Devices for the Data Center: Both Design-to-Application and In-Field Design Updates Are Important

As data center managers bring new IT architectures online, and as various installed components mature, technology device makers release firmware updates to enhance device operation, add features, and improve interoperability. The same is true for hard drives.

Hardware design takes years; firmware updates let that same hardware platform persist in the field at the best cost structure, with continuous improvement deployed over the product life cycle. In close and constant consultation with data center customers, hard drive engineers release firmware updates to ensure products provide the best experience in the field. Having the latest firmware is critical to optimal drive operation and data center reliability. Likewise, as applications evolve, performance and features can mature over time to more effectively solve customer needs.

Data Center Managers Must Understand the Evolution of Data Center Needs, Architectures, and Solutions

Scientists and engineers at advanced technology companies like Seagate develop solutions based on understanding customers’ applications up front. But the job doesn’t end there; we also continue to assess and tweak devices in the field to fit very specific and evolving customer needs.

Likewise, the data center manager or IT architect must understand many technical considerations when installing new hardware. Integrating storage devices into a data center is never a matter of choosing any random hard drive or SSD that features a certain capacity or a certain IOPS specification. The data center manager must know the ins and outs of each storage device, and how it impacts myriad factors like performance, power, heat, and device interoperability.

But after rolling out new hardware, the job is not done. In fact, the job’s never done. Data center devices continue to evolve, even after integration. The hardware built for data centers is designed to be updated on a regular basis, based on a continuous cycle of feedback from ever-evolving applications and implementations.

As continued quality assurance activities and updates in the field keep the device aligned with the data center’s evolving needs, a device will continue to improve in interoperability and performance until the architecture and the device together reach maturity. Managing these evolving needs and technology updates is a critical factor in achieving the best possible TCO (total cost of ownership) for the data center.

It’s important for data center managers to work closely with device makers to ensure integration is planned and executed correctly, monitoring and feedback is continuous, and updates are developed and deployed. In recent years as cloud and hyperscale data centers have evolved, Seagate has worked hard to develop a powerful support ecosystem for these partners.

The Team of Engineers Behind Storage Integration

The key to creating a successful program is to establish an application engineering and technical customer management team that’s engaged with the customer. Our engineering team meets with large data center customers on an ongoing basis. We work together from the pre-development phase to the time we qualify a new storage device. We collaborate to support in-field system monitoring, and sustaining activities like analyzing the logs on the hard drives, consulting about solutions within the data center, and ensuring the correct firmware updates are in place on the storage devices.

The science and engineering specialties on the team are extensive and varied. Depending on the topics at each meeting, analysis and discussion requires a breadth of engineering expertise. Dozens of engineering degrees and years of experience are on hand, including experts in firmware, servo control systems, mechanical design, tribology, electrical engineering, reliability, and manufacturing. The experts contributing include computer engineers, aerospace engineers, test engineers, statisticians, data analysts, and materials scientists. Within each discipline are unique specializations, such as ASIC engineers, channel technology engineers, and mechanical resonance engineers who understand shock and vibration factors.

The skills each engineer brings are necessary to understand the data customers are collecting and analyzing, how to deploy new products and technologies, and when to develop changes that’ll improve the data center’s architecture. It takes this team of engineering talent to comprehend the intricate interplay of devices, code, and processes needed to keep the architecture humming in harmony from the customer’s point of view.

How a Device Maker Works With a Data Center to Integrate and Sustain Performance and Reliability

After we establish our working team with a customer, and when we’re introducing a new product for integration into a customer data center, we meet weekly to go over qualification status. We do a full design review of new features, consider the differences between the previous and the new design, and discuss how to address particular asks they may have for our next product design.

Traditionally, storage component designers would simply comply with whatever the T10 or T13 interface specification says. These days, many of the cloud data centers are asking for their own special sauce in some form, whether they’re trying to get a certain number of IOPS per terabyte, or trying to match their latency down to a certain number — for example, “I want to achieve four or five 9’s at this latency number; I want to be able to stream data at this rate; I want to have this power consumption.”
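To make the percentile language above concrete, here is a minimal Python sketch of checking a “four nines” latency goal against a set of measured I/O completion times. The sample data and target are entirely hypothetical; it simply shows what that kind of requirement means in practice.

import numpy as np

# Hypothetical sample of I/O completion times, in milliseconds.
rng = np.random.default_rng(0)
latencies_ms = rng.lognormal(mean=1.0, sigma=0.5, size=1_000_000)

# "Four nines" means 99.99% of requests must finish within the target.
target_ms = 50.0
p9999 = np.percentile(latencies_ms, 99.99)

print(f"99.99th percentile latency: {p9999:.2f} ms")
print("meets target" if p9999 <= target_ms else "misses target")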

Recently, working with a customer to solve a specific need they had, we deployed Flex dynamic recording technology, which enables a single hard drive to use both SMR (Shingled Magnetic Recording) and CMR (Conventional Magnetic Recording, for example Perpendicular Recording) methods on the same drive media. This required a very high level of integration with the customer’s team. We spent a great deal of effort going back and forth on what the interface design should be, what the command protocol should be, and what the behavior should be in certain conditions.

Sometimes a drive design is unique to one customer, and sometimes it’s good for all our customers. There’s always a tradeoff; if you want really high performance, you’re probably going to pay for it with power. But when a customer asks for a certain special sauce, that drives us to figure out how to achieve that in balance with other needs. Then — similarly to when an automaker like Chevy or Honda builds race car engines and learns how to achieve new efficiency and performance levels — we can apply those new features to a broader customer set, and ultimately other customers will benefit too.

What Happens When Adjustments Are Needed in the Field?

Once a new product is integrated, we then continue to work closely from a sustaining standpoint. Our engineers interface directly with the customer’s team in the field, often in weekly meetings and sometimes even more frequently. We provide a full rundown on the device’s overall operation, dealing with maintenance and sustaining issues. For any error that comes up in the logs, we bring in an expert specific to that error to pore over the details.

In any given week we’ll have a couple of engineers in the customer’s data center monitoring new features and as needed debugging drive issues or issues with the customer’s system. Any time something seems amiss, we’ve got plans in place that let us do log analysis remotely and in the field.

Let’s take the example of a drive not performing as the customer intended. There are a number of reliability features in our drives that may interact with drive response — perhaps adding latency on the order of tens of milliseconds. We work with the customer on how we can manage those features more effectively. We help analyze the drive’s logs to tell them what’s going on and weigh the options. Is the latency a result of an important operation they can’t do without, and the drive won’t survive if we don’t allow that operation? Or is it something that we can defer or remove, prioritizing the workload goal?

How Storage Architecture and Design Has Changed for Cloud and Hyperscale

The way we work with cloud and data center partners has evolved over the years. Back when IT managers would outfit business data centers with turn-key systems, we were very familiar with the design requirements for traditional OEM systems with transaction-based workloads, RAID rebuild, and things of that nature. Generally, we were simply testing workloads that our customers ran against our drives.

As IT architects in the cloud space moved toward designing their data centers made-to-order, on open standards, they had a different notion of reliability, using replication or erasure coding to create a more reliable environment. Understanding those workloads, and gathering traces and other information back from these customers, was important so we could optimize drive performance under new and different design strategies: not just for performance, but for power consumption as well. The number of drives populating large data centers is mind-boggling, and when you realize what the power consumption is, you realize how important it is to optimize the drive for that particular variable.

Turning Information Into Improvements

We have always executed a highly standardized set of protocols on drives in our lab qualification environment, using racks that are well understood. In these scenarios, the behavior of the drive is well understood. By working directly with our cloud and data center partners we’re constantly learning from their unique environments.

For example, the customer’s architecture may have big fans in the back to help control temperature, and the fans operate with variable levels of cooling: as things warm up, the fans spin faster. At one point we may discover these fan operations are affecting the performance of the hard drive in the servo subsystem. Some of the drive logging our engineers do has been brilliant at solving issues like that. For example, we’d look at our position error signal, and we could actually tell how fast the fan was spinning based on the adjustments the drive was making to compensate for the acoustic noise generated by the fans.

Information like this is provided to our servo engineering team when they’re developing new products or firmware so they can make loop adjustments in our servo controllers to accommodate the range of frequencies we’re seeing from fans in the field. Rather than having the environment throw the drive’s heads off track, our team can provide compensation to keep the heads on track and let the drives perform reliably in environments like that. We can recreate the environmental conditions and measurements in our shop to validate we can control it as expected, and our future products inherit these benefits as we go forward.
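As a rough illustration of that kind of analysis (not Seagate’s actual tooling), the sketch below generates a hypothetical position error signal (PES) log and picks out the dominant disturbance frequency — the sort of input a servo team could feed back into controller compensation. Every number here is invented for the example.

import numpy as np

# Hypothetical PES log: head position error sampled at a fixed rate for one second.
sample_rate_hz = 10_000                 # assumed logging rate
t = np.arange(0, 1.0, 1.0 / sample_rate_hz)
fan_hz = 120.0                          # hypothetical fan disturbance frequency
pes = 0.02 * np.sin(2 * np.pi * fan_hz * t) + 0.005 * np.random.randn(t.size)

# Find the dominant disturbance frequency in the PES spectrum.
spectrum = np.abs(np.fft.rfft(pes))
freqs = np.fft.rfftfreq(pes.size, d=1.0 / sample_rate_hz)
dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

print(f"Dominant disturbance: {dominant_hz:.1f} Hz "
      f"(~{dominant_hz * 60:.0f} RPM, if it tracks fan speed)")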

In another example, we can monitor and work to improve data throughput while also maintaining reliability by understanding how the data center environment is affecting the read/write head’s ability to fly with stability at a certain height above the disk platter while reading bits. Understanding the ambient humidity and the temperature is essential to controlling the head’s fly height. We now have an active fly-height control system, with the controller-firmware system and servo systems operating based on inputs from sensors within the drive. Traditionally a hard drive’s fly-height control was calibrated in the factory — a set-and-forget kind of thing. But with this field-adjustable fly-height capability, the drive is continually monitoring environmental data. When the environment exceeds certain thresholds, the drive will recalculate what that fly height should be, so it’s flying optimally and getting the best error rates, ideally providing the best reliability in the field.
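This is not Seagate’s algorithm, but a minimal sketch of the threshold-triggered recalculation described above. The setpoint model and every constant are invented purely for illustration.

def fly_height_setpoint(temp_c: float, humidity_pct: float) -> float:
    """Hypothetical model: nominal fly height nudged by temperature and humidity."""
    nominal_nm = 10.0  # invented nominal clearance, in nanometers
    return nominal_nm - 0.01 * (temp_c - 25.0) - 0.005 * (humidity_pct - 40.0)

class FlyHeightController:
    """Recalculate the setpoint only when the environment drifts past thresholds."""
    TEMP_STEP_C = 2.0
    HUMIDITY_STEP_PCT = 5.0

    def __init__(self, temp_c: float, humidity_pct: float):
        self.last_temp = temp_c
        self.last_humidity = humidity_pct
        self.setpoint_nm = fly_height_setpoint(temp_c, humidity_pct)

    def update(self, temp_c: float, humidity_pct: float) -> float:
        # Only recompute when temperature or humidity moves past its threshold.
        if (abs(temp_c - self.last_temp) >= self.TEMP_STEP_C or
                abs(humidity_pct - self.last_humidity) >= self.HUMIDITY_STEP_PCT):
            self.last_temp, self.last_humidity = temp_c, humidity_pct
            self.setpoint_nm = fly_height_setpoint(temp_c, humidity_pct)
        return self.setpoint_nm

ctrl = FlyHeightController(temp_c=25.0, humidity_pct=40.0)
print(ctrl.update(temp_c=31.0, humidity_pct=42.0))   # the temperature jump triggers a recalculation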

The Benefits of In-Field Analysis

These days a lot of information can be captured in logs and gathered from a drive to be brought back to our lab to inform design changes. You’re probably familiar with SMART logs that have been traditional in drives; this data provides a static snapshot in time of the status of a drive. In addition, field analysis reliability logs measure environmental factors the drive is experiencing like vibration, shock, and temperature. We can use this information to consider how the drive is responding and how firmware updates might deal with these factors more efficiently. For example, we might use that data to understand how a customer’s data center architecture might need to change a little bit to enable better performance, or reduce heat or power consumption, or lower vibrations.
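Readers who want to look at the same kind of point-in-time SMART snapshot on their own hardware can do so with the smartctl utility. The sketch below parses its attribute table loosely, since the exact output layout varies by drive and smartctl version; the attribute IDs printed at the end are just common examples.

import subprocess

def smart_snapshot(device: str = "/dev/sda") -> dict:
    """Grab a point-in-time SMART attribute snapshot via smartctl -A.

    The parser is intentionally loose: it keeps lines that start with a numeric
    attribute ID and records the attribute name and raw value.
    """
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    snapshot = {}
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0].isdigit():
            attr_id, name, raw = int(parts[0]), parts[1], parts[-1]
            snapshot[attr_id] = {"name": name, "raw": raw}
    return snapshot

# Example: print power-on hours and temperature if the drive reports them.
snap = smart_snapshot()
for attr in (9, 194):   # 9 = Power_On_Hours, 194 = Temperature_Celsius
    if attr in snap:
        print(attr, snap[attr]["name"], snap[attr]["raw"])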

What Does This Mean for the Data Center Manager?

There’s a wealth of information we can derive from the field, including field log data, customers’ direct feedback, and what our failure analysis teams have learned from returned drives. By actively participating in the process, our data center partners maximize the benefit of everything we’ve jointly learned about their environment so they can apply the latest firmware updates with confidence.

Updating firmware is an important part of fleet management that many data center operators struggle with. Some data centers may continue to run older firmware even when an update is available because they don’t have clear policies for managing firmware. Or they may avoid updates because they’re unsure whether an update is right for their drive or their situation.

Would You Upgrade a Live Data Center?

Nobody wants their team to be responsible for allowing a server to go down due to a firmware issue. How will the team know when new firmware is available, and whether it applies to specific components in the installed configuration? One method is for IT architects to set up a regular quarterly schedule to review possible firmware updates for all data center components. At the least, devising a review and upgrade schedule requires maintaining a regular inventory of all critical equipment, and setting up alerts or push/pull communications with each device maker so the team can review the latest release notes and schedule time to install updates as appropriate.

Firmware sent to the field for the purpose of updating in-service drives undergoes the same rigorous testing that the initial code goes through. In addition, the payload is verified to be compatible with the code and drive model that’s being updated. That means you can’t accidentally download firmware that isn’t for the intended drive. There are internal consistency checks to reject invalid code. Also, to help minimize performance impacts, firmware downloads support segmented download; the firmware can be downloaded in small pieces (the user can choose the size) so they can be interleaved with normal system work and have a minimal impact on performance. The host can decide when to activate the new code once the download is completed.
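The segmented download described above maps naturally onto a chunked transfer loop with a separate, host-chosen activation step. The sketch below is illustrative only: send_segment() and activate() are hypothetical stand-ins for whatever vendor utility or pass-through interface a given deployment actually uses.

from typing import Callable

def staged_firmware_update(image: bytes,
                           send_segment: Callable[[int, bytes], None],
                           activate: Callable[[], None],
                           segment_size: int = 64 * 1024,
                           activate_now: bool = False) -> None:
    """Push a firmware image to a drive in small segments, then defer activation.

    send_segment(offset, chunk) and activate() are placeholders for the real
    transport; the point is that each small transfer can be interleaved with
    normal I/O, and the host decides when to activate the downloaded image.
    """
    for offset in range(0, len(image), segment_size):
        send_segment(offset, image[offset:offset + segment_size])
        # Normal system work can run between segments here.
    if activate_now:
        activate()   # otherwise the host activates later, e.g. in a maintenance window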

In closing, working closely with data center managers and architects to glean information from the field is important because it helps bring Seagate’s engineering team closer to our customers. This is the most powerful piece of the equation. Seagate needs to know what our customers are experiencing because it may be new and different for us, too. We intend these tools and processes to help both data center architecture and hard drive science continue to evolve.

The post An Inside Look at Data Center Storage Integration: A Complex, Iterative, and Sustained Process appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Practical Effects of GDPR at Backblaze

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/the-practical-effects-of-gdpr-at-backblaze/

GDPR day, May 25, 2018, is nearly here. On that day, will your inbox explode with update notices, opt-in agreements, and offers from lawyers searching for GDPR violators? Perhaps all the companies on earth that are not GDPR ready will just dissolve into dust. More likely, there will be some changes, but business as usual will continue and we’ll all be more aware of data privacy. Let’s go with the last one.

What’s Different With GDPR at Backblaze

The biggest difference you’ll notice is a completely updated Privacy Policy. Last week we sent out a service email announcing the new Privacy Policy. Some people asked what was different. Basically everything. About 95% of the agreement was rewritten. In the agreement, we added in the appropriate provisions required by GDPR, and hopefully did a better job specifying the data we collect from you, why we collect it, and what we are going to do with it.

As a reminder, at Backblaze your data falls into two categories. The first type of data is the data you store with us — stored data. These are the files and objects you upload and store, and as needed, restore. We do not share this data. We do not process this data, except as requested by you to store and restore the data. We do not analyze this data looking for keywords, tags, images, etc. No one outside of Backblaze has access to this data unless you explicitly share it by giving another person access to one or more files.

The second type of data is your account data. Some of your account data is considered personal data. This is the information we collect from you to provide our Personal Backup, Business Backup and B2 Cloud Storage services. Examples include your email address to provide access to your account, or the name of your computer so we can organize your files like they are arranged on your computer to make restoration easier. We have written a number of Help Articles covering the different ways this information is collected and processed. In addition, these help articles outline the various “rights” granted via GDPR. We will continue to add help articles over the coming weeks to assist in making it easy to work with us to understand and exercise your rights.

What’s New With GDPR at Backblaze

The most obvious addition is the Data Processing Addendum (DPA). This covers how we protect the data you store with us, i.e. stored data. As noted above, we don’t do anything with your data, except store it and keep it safe until you need it. Now we have a separate document saying that.

It is important to note the new Data Processing Addendum is now incorporated by reference into our Terms of Service, which everyone agrees to when they sign up for any of our services. Now all of our customers have a shiny new Data Processing Addendum to go along with the updated Privacy Policy. We promise they are not long or complicated, and we encourage you to read them. If you have any questions, stop by our GDPR help section on our website.

Patience, Please

Every company we have dealt with over the last few months is working hard to comply with GDPR. It has been a tough road, whether you tried to do it yourself or, like Backblaze, hired an EU-based law firm for advice. Over the coming weeks and months, as you reach out to discover and assert your rights, please have a little patience. We are all going through a steep learning curve as GDPR gets put into practice. Along the way there are certain to be some growing pains — give us a chance, we all want to get it right.

Regardless, at Backblaze we’ve been diligently protecting our customers’ data for over 11 years and nothing that will happen on May 25th will change that.

The post The Practical Effects of GDPR at Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Helium Factor and Hard Drive Failure Rates

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/

Seagate Enterprise Capacity 3.5 Helium HDD

In November 2013, the first commercially available helium-filled hard drive was introduced by HGST, a Western Digital subsidiary. The 6 TB drive was not only unique in being helium-filled, it was, for the moment, the highest capacity hard drive available. Fast forward a little over four years, and 12 TB helium-filled drives are readily available, 14 TB drives can be found, and 16 TB helium-filled drives are arriving soon.

Backblaze has been purchasing and deploying helium-filled hard drives over the past year and we thought it was time to start looking at their failure rates compared to traditional air-filled drives. This post will provide an overview, then we’ll continue the comparison on a regular basis over the coming months.

The Promise and Challenge of Helium Filled Drives

We all know that helium is lighter than air — that’s why helium-filled balloons float. Inside an air-filled hard drive there are rapidly spinning disk platters that rotate at a given speed, 7200 rpm for example. The air inside adds an appreciable amount of drag on the platters, which in turn requires additional energy to spin them. Replacing the air inside of a hard drive with helium reduces the amount of drag, thereby reducing the amount of energy needed to spin the platters, typically by 20%.

We also know that after a few days, a helium-filled balloon sinks to the ground. This was one of the key challenges in using helium inside of a hard drive: helium escapes from most containers, even if they are well sealed. It took years for hard drive manufacturers to create containers that could contain helium while still functioning as a hard drive. This container innovation allows helium-filled drives to function at spec over the course of their lifetime.

Checking for Leaks

Three years ago, we identified SMART 22 as the attribute assigned to recording the status of helium inside of a hard drive. We have both HGST and Seagate helium-filled hard drives, but only the HGST drives currently report the SMART 22 attribute. It appears the normalized and raw values for SMART 22 currently report the same value, which starts at 100 and goes down.

To date only one HGST drive has reported a value of less than 100, with multiple readings between 94 and 99. That drive continues to perform fine, with no other errors or any correlating changes in temperature, so we are not sure whether the change in value is trying to tell us something or if it is just a wonky sensor.
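For anyone who wants to run the same check against our public drive stats data, here is a minimal sketch. The local file paths are hypothetical, and the smart_22_raw column name is an assumption based on the dataset’s naming convention, so verify it against your copy of the data.

import glob
import pandas as pd

# Scan daily drive stats CSVs for drives whose SMART 22 raw value has dropped below 100.
frames = []
for path in glob.glob("drive_stats/2018-*.csv"):    # hypothetical local path
    day = pd.read_csv(path, usecols=["date", "model", "serial_number", "smart_22_raw"])
    frames.append(day[day["smart_22_raw"].notna() & (day["smart_22_raw"] < 100)])

if frames:
    low_helium = pd.concat(frames, ignore_index=True)
    print(low_helium.groupby("serial_number")["smart_22_raw"].agg(["min", "max", "count"]))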

Helium versus Air-Filled Hard Drives

There are several different ways to compare these two types of drives. Below, we decided to use just our 8, 10, and 12 TB drives in the comparison, since those are the sizes where we have helium-filled drives. We left out all of the drives that are 6 TB and smaller, as none of the drive models we use at those capacities are helium-filled. We are open to trying different comparisons; this just seemed to be the best place to start.

Lifetime Hard Drive Failure Rates: Helium vs. Air-Filled Hard Drives table

The most obvious observation is that there seems to be little difference in the Annualized Failure Rate (AFR) based on whether the drives contain helium or air. One conclusion, given this evidence, is that helium makes no difference to the AFR of hard drives compared with air-filled drives. My prediction, however, is that the helium drives will eventually prove to have a lower AFR. Why? Drive Days.

Let’s go back in time to Q1 2017 when the air-filled drives listed in the table above had a similar number of Drive Days to the current number of Drive Days for the helium drives. We find that the failure rate for the air-filled drives at the time (Q1 2017) was 1.61%. In other words, when the drives were in use a similar number of hours, the helium drives had a failure rate of 1.06% while the failure rate of the air-filled drives was 1.61%.
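A minimal sketch of the drive-days comparison described here: group models by fill type, sum drive days and failures, and annualize. The model names, drive days, and failure counts below are made up purely to show the arithmetic.

import pandas as pd

# Hypothetical per-model summary of the kind you can build from the public drive stats data.
summary = pd.DataFrame({
    "model":      ["helium_model_a", "helium_model_b", "air_model_a", "air_model_b"],
    "fill":       ["helium", "helium", "air", "air"],
    "drive_days": [1_200_000, 800_000, 2_500_000, 1_900_000],
    "failures":   [35, 23, 110, 84],
})

grouped = summary.groupby("fill")[["drive_days", "failures"]].sum()
# Annualized Failure Rate: failures per drive-year, expressed as a percentage.
grouped["afr_pct"] = grouped["failures"] / grouped["drive_days"] * 365 * 100
print(grouped)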

Helium or Air?

My hypothesis is that after normalizing the data so that the helium and air-filled drives have the same (or similar) usage (Drive Days), the helium-filled drives we use will continue to have a lower Annualized Failure Rate versus the air-filled drives we use. I expect this trend to continue for the next year at least. What side do you come down on? Will the Annualized Failure Rate for helium-filled drives be better than air-filled drives, or vice-versa? Or do you think the two technologies will eventually produce the same AFR over time? Pick a side and we’ll document the results over the next year and see where the data takes us.

The post The Helium Factor and Hard Drive Failure Rates appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Stats for Q1 2018

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-for-q1-2018/

Backblaze Drive Stats Q1 2018

As of March 31, 2018 we had 100,110 spinning hard drives. Of that number, there were 1,922 boot drives and 98,188 data drives. This review looks at the quarterly and lifetime statistics for the data drive models in operation in our data centers. We’ll also take a look at why we are collecting and reporting 10 new SMART attributes, and take a sneak peek at some 8 TB Toshiba drives. Along the way, we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

Background

Since April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. Currently there are about 97 million entries totaling 26 GB of data. You can download this data from our website if you want to do your own research, but for starters here’s what we found.

Hard Drive Reliability Statistics for Q1 2018

At the end of Q1 2018 Backblaze was monitoring 98,188 hard drives used to store data. For our evaluation below we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives. This leaves us with 98,046 hard drives. The table below covers just Q1 2018.

Q1 2018 Hard Drive Failure Rates

Notes and Observations

If a drive model has a failure rate of 0%, it only means there were no drive failures of that model during Q1 2018.

The overall Annualized Failure Rate (AFR) for Q1 is just 1.2%, well below the Q4 2017 AFR of 1.65%. Remember that quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of Drive Days.

There were 142 drives (98,188 minus 98,046) that were not included in the list above because we did not have at least 45 of a given drive model. We use 45 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics.
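For anyone reproducing the table from the raw data, the 45-drive cutoff is a simple per-model filter on the last day of the quarter. The filename and column names below follow the public dataset’s usual conventions, but treat them as assumptions to verify against your copy.

import pandas as pd

# Census of drives on the last day of the quarter, from the public drive stats data.
day = pd.read_csv("2018-03-31.csv", usecols=["model", "serial_number"])

counts = day.groupby("model")["serial_number"].nunique()
reported = counts[counts >= 45]              # models that make the quarterly table
excluded = counts[counts < 45]
print(f"models reported: {len(reported)}, drives excluded: {excluded.sum()}")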

Welcome Toshiba 8TB drives, almost…

We mentioned Toshiba 8 TB drives in the first paragraph, but they don’t show up in the Q1 Stats chart. What gives? We only had 20 of the Toshiba 8 TB drives in operation in Q1, so they were excluded from the chart. Why do we have only 20 drives? When we test out a new drive model we start with the “tome test” and it takes 20 drives to fill one tome. A tome is the same drive model in the same logical position in each of the 20 Storage Pods that make up a Backblaze Vault. There are 60 tomes in each vault.

In this test, we created a Backblaze Vault of 8 TB drives, with 59 of the tomes being Seagate 8 TB drives and 1 tome being the Toshiba drives. Then we monitored the performance of the vault and its member tomes to see if, in this case, the Toshiba drives performed as expected.

Q1 2018 Hard Drive Failure Rate — Toshiba 8TB

So far the Toshiba drives are performing fine, but they have been in place for only 20 days. Next up is the “pod test,” where we fill a Storage Pod with Toshiba drives and integrate it into a Backblaze Vault comprised of like-sized drives. We hope to have a better look at the Toshiba 8 TB drives in our Q2 report — stay tuned.

Lifetime Hard Drive Reliability Statistics

While the quarterly chart presented earlier gets a lot of interest, the real test of any drive model is over time. Below is the lifetime failure rate chart for all the hard drive models which have 45 or more drives in operation as of March 31st, 2018. For each model, we compute their reliability starting from when they were first installed.

Lifetime Hard Drive Failure Rates

Notes and Observations

The failure rates of all of the larger drives (8-, 10- and 12 TB) are very good, 1.2% AFR (Annualized Failure Rate) or less. Many of these drives were deployed in the last year, so there is some volatility in the data, but you can use the Confidence Interval to get a sense of the failure percentage range.

The overall failure rate of 1.84% is the lowest we have ever achieved, besting the previous low of 2.00% from the end of 2017.

Our regular readers and drive stats wonks may have noticed a sizable jump in the number of HGST 8 TB drives (model: HUH728080ALE600), from 45 last quarter to 1,045 this quarter. As the 10 TB and 12 TB drives become more available, the price per terabyte of the 8 TB drives has gone down. This presented an opportunity to purchase the HGST drives at a price in line with our budget.

We purchased and placed into service the 45 original HGST 8 TB drives in Q2 of 2015. They were our first Helium-filled drives and our only ones until the 10 TB and 12 TB Seagate drives arrived in Q3 2017. We’ll take a first look into whether or not Helium makes a difference in drive failure rates in an upcoming blog post.

New SMART Attributes

If you have previously worked with the hard drive stats data, or plan to, you’ll notice that we added 10 more columns of data starting in 2018. There are five new SMART attributes we are tracking, each with a raw and a normalized value:

  • 177 – Wear Range Delta
  • 179 – Used Reserved Block Count Total
  • 181 – Program Fail Count Total or Non-4K Aligned Access Count
  • 182 – Erase Fail Count
  • 235 – Good Block Count AND System (Free) Block Count

These five attributes are all related to SSDs.

Yes, SSDs — but before you jump to any conclusions, we used 10 Samsung 850 EVO SSDs as boot drives for a period of time in Q1. This was an experiment to see if we could reduce boot-up time for the Storage Pods. In our case, the improved boot-up speed wasn’t worth the SSD cost, but it did add 10 new columns to the hard drive stats data.
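If you’re exploring the 2018 files, here is a quick way to spot the ten new columns. The smart_<id>_raw / smart_<id>_normalized naming is an assumption based on how the existing columns are named, and the filename is hypothetical.

import pandas as pd

NEW_IDS = [177, 179, 181, 182, 235]
new_cols = [f"smart_{i}_{kind}" for i in NEW_IDS for kind in ("raw", "normalized")]

df = pd.read_csv("2018-03-31.csv")        # hypothetical local filename
present = [c for c in new_cols if c in df.columns]
print("new columns present:", present)
# These attributes are SSD-related, so expect few (or no) populated rows.
print(df[present].notna().sum())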

Speaking of hard drive stats data, the complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose, all we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone. It is free.

If you just want the summarized data used to create the tables and charts in this blog post, you can download the ZIP file containing the MS Excel spreadsheet.

Good luck and let us know if you find anything interesting.

[Ed: 5/1/2018 – Updated Lifetime chart to fix error in confidence interval for HGST 4TB drive, model: HDS5C4040ALE630]

The post Hard Drive Stats for Q1 2018 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

American Public Television Embraces the Cloud — And the Future

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/american-public-television-embraces-the-cloud-and-the-future/

American Public Television website

American Public Television was like many organizations that have been around for a while. They were entrenched using an older technology — in their case, tape storage and distribution — that once met their needs but was limiting their productivity and preventing them from effectively collaborating with their many media partners. APT’s VP of Technology knew that he needed to move into the future and embrace cloud storage to keep APT ahead of the game.

Since 1961, American Public Television (APT) has been a leading distributor of groundbreaking, high-quality, top-rated programming to the nation’s public television stations. Gerry Field is the Vice President of Technology at APT and is responsible for delivering their extensive program catalog to 350+ public television stations nationwide.

In the time since Gerry joined APT in 2007, the industry has been in digital overdrive. During that time APT has continued to acquire and distribute the best in public television programming to their technically diverse subscribers.

This created two challenges for Gerry. First, new technology and format proliferation were driving dramatic increases in digital storage. Second, many of APT’s subscribers struggled to keep up with the rapidly changing industry. While some subscribers had state-of-the-art satellite systems to receive programming, others had to wait for the post office to drop off programs recorded on tape weeks earlier. With no slowdown on the horizon of innovation in the industry, Gerry knew that his storage and distribution systems would reach a crossroads in no time at all.

American Public Television logo

Living the tape paradigm

The digital media industry is only a few years removed from its film, and later videotape, roots. Tape was the input and the output of the industry for many years. As a consequence, the tools and workflows used by the industry were built and designed to work with tape. Over time, the “file” slowly replaced the tape as the object to be captured, edited, stored, and distributed. Trouble was, many of the systems, and more importantly the workflows, were built around processing tape, and these have proven hard to change.

At APT, Gerry realized the limits of the tape paradigm and began looking for technologies and solutions that enabled workflows based on file- and object-based storage and distribution.

Thinking file-based storage and distribution

For data (digital media) storage, APT, like everyone else, started by installing onsite storage servers. As the amount of digital data grew, more storage was added. In addition, APT was expanding its distribution footprint by creating or partnering with distribution channels such as CreateTV and APT Worldwide. This dramatically increased the number of programming formats and the amount of data that had to be stored. As a consequence, updating, maintaining, and managing the APT storage systems was becoming a major challenge and a major resource hog.

APT Online

Knowing that his in-house storage system was only going to cost more time and money, Gerry decided it was time to look at cloud storage. But that wasn’t the only reason he looked at the cloud. While most people consider cloud storage as just a place to back up and archive files, Gerry was envisioning how the ubiquity of the cloud could help solve his distribution challenges. The trouble was that the price of cloud storage services like Amazon S3 and Microsoft Azure was a non-starter, especially for a non-profit. Then Gerry came across Backblaze. The B2 Cloud Storage service met all of his performance requirements, and at $0.005/GB/month for storage and $0.01/GB for downloads it was nearly 75% less than S3 or Azure.
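Plugging the prices quoted above into a quick back-of-the-envelope estimate shows what they mean at scale. The catalog size and download volume below are hypothetical, not APT’s actual numbers.

storage_price_per_gb_month = 0.005   # $/GB/month, as quoted above
download_price_per_gb = 0.01         # $/GB, as quoted above

catalog_tb = 50                       # hypothetical program catalog size
monthly_downloads_tb = 5              # hypothetical subscriber download volume

storage_cost = catalog_tb * 1000 * storage_price_per_gb_month
download_cost = monthly_downloads_tb * 1000 * download_price_per_gb
print(f"storage: ${storage_cost:,.0f}/month, downloads: ${download_cost:,.0f}/month")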

Gerry did the math and found that he could economically incorporate B2 Cloud Storage into his IT portfolio, using it for both program submission and for active storage and archiving of the APT programs. In addition, B2 now gives him the foundation necessary to receive and distribute programming content over the Internet. This is especially useful for organizations that can’t conveniently access satellite distribution systems. Not to mention downloading from the cloud is much faster than sending a tape through the mail.

Adding B2 Cloud Storage to their infrastructure has helped American Public Television address two key challenges. First, they now have “unlimited” storage in the cloud without having to add any hardware. In addition, with B2 they only pay for the storage they use. That means they don’t have to buy storage upfront trying to match the maximum amount of storage they’ll ever need. Second, by using B2 as a distribution source for their programming, APT subscribers, especially the smaller and remote ones, can get content faster and more reliably without having to perform costly upgrades to their infrastructure.

The road ahead

As APT gets used to their file-based infrastructure and workflow, there are a number of cost-saving and income-generating ideas they are pondering. Here are a few:

Program Submissions — New content can be uploaded from anywhere using a web browser, an Internet connection, and a login. For example, a producer in Cambodia can upload their film to B2. From there the film is downloaded to an in-house system where it is processed and transcoded. The finished film is added to the APT catalog and uploaded back to B2. Once there, the program is instantly available for subscribers to order and download.

“The affordability and performance of Backblaze B2 is what allowed us to make the B2 cloud part of the APT data storage and distribution strategy into the future.” — Gerry Field

Easier Previews — At any time, works in progress or finished programs can be made available for download from the B2 cloud. One place this could be useful is when a subscriber needs to review a program to comply with local policies and practices before airing. In the old system, each “one-off” was a time-consuming manual process.

Instant Subscriptions — There are many organizations such as schools and businesses that want to use just one episode of a desired show. With an e-commerce based website, current or even archived programming kept in B2 could be available to download or stream for a minimal charge.

At APT there were multiple technologies needed to make their file-based infrastructure work, but as Gerry notes, having an affordable, trustworthy, cloud storage service like B2 is one of the critical building blocks needed to make everything work together.

The post American Public Television Embraces the Cloud — And the Future appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze and GDPR

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/gdpr-compliance/

GDPR General Data Protection Regulation

Over the next few months the noise over GDPR will finally reach a crescendo. For the uninitiated, “GDPR” stands for “General Data Protection Regulation” and it goes into effect on May 25th of this year. GDPR is designed to protect how personal information of EU (European Union) citizens is collected, stored, and shared. The regulation should also improve transparency as to how personal information is managed by a business or organization.

Backblaze fully expects to be GDPR compliant when May 25th rolls around and we thought we’d share our experience along the way. We’ll start with this post as an introduction to GDPR. In future posts, we’ll dive into some of the details of the process we went through in meeting the GDPR objectives.

GDPR: A Two Way Street

To ensure we are GDPR compliant, Backblaze has assembled a dedicated internal team, engaged outside counsel in the United Kingdom, and consulted with other tech companies on best practices. While it is a sizable effort on our part, we view this as a waypoint in our ongoing effort to secure and protect our customers’ data and to be transparent in how we work as a company.

In addition to the effort we are putting into complying with the regulation, we think it is important to underscore and promote the idea that data privacy and security is a two-way street. We can spend millions of dollars on protecting the security of our systems, but we can’t stop a bad actor from finding and using your account credentials left on a note stuck to your monitor. We can give our customers tools like two factor authentication and private encryption keys, but it is the partnership with our customers that is the most powerful protection. The same thing goes for your digital privacy — we’ll do our best to protect your information, but we will need your help to do so.

Why GDPR is Important

At the center of GDPR is the protection of Personally Identifiable Information, or “PII.” PII is information that can be used on its own, or in combination with other information, to identify a specific person. This includes obvious data like name, address, and phone number; less obvious data like email address and IP address; and other data such as credit card numbers and unique identifiers that can be decoded back to the person.

How Will GDPR Affect You as an Individual

If you are a citizen in the EU, GDPR is designed to protect your private information from being used or shared without your permission. Technically, this only applies when your data is collected, processed, stored or shared outside of the EU, but it’s a good practice to hold all of your service providers to the same standard. For example, when you are deciding to sign up with a service, you should be able to quickly access and understand what personal information is being collected, why it is being collected, and what the business can do with that information. These terms are typically found in “Terms and Conditions” and “Privacy Policy” documents, or perhaps in a written contract you signed before starting to use a given service or product.

Even if you are not a citizen of the EU, GDPR will still affect you. Why? Because nearly every company you deal with, especially online, will have customers that live in the EU. It makes little sense for Backblaze, or any other service provider or vendor, to create a separate set of rules for just EU citizens. In practice, protection of private information should be more accountable and transparent with GDPR.

How Will GDPR Affect You as a Backblaze Customer

Over the coming months Backblaze customers will see changes to our current “Terms and Conditions,” “Privacy Policy,” and to our Backblaze services. While the changes to the Backblaze services are expected to be minimal, the “terms and privacy” documents will change significantly. The changes will include, among other things, the addition of a group of model clauses and related materials. These clauses will be generally consistent across all GDPR compliant vendors and are meant to be easily understood so that a customer can easily determine how their PII is being collected and used.

Common GDPR Questions:

Here are a few of the more common questions we have heard regarding GDPR.

  1. GDPR will only affect citizens in the EU.
    Answer: The changes that are being made by companies such as Backblaze to comply with GDPR will almost certainly apply to customers from all countries. And that’s a good thing. The protections afforded to EU citizens by GDPR are something all users of our service should benefit from.
  2. After May 25, 2018, a citizen of the EU will not be allowed to use any applications or services that store data outside of the EU.
    Answer: False, no one will stop you as an EU citizen from using the internet-based service you choose. But, you should make sure you know where your data is being collected, processed, and stored. If any of those activities occur outside the EU, make sure the company is following the GDPR guidelines.
  3. My business only has a few EU citizens as customers, so I don’t need to care about GDPR?
    Answer: False, even if you have just one EU citizen as a customer, and you capture, process, or store their PII outside of the EU, you need to comply with GDPR.
  4. Companies can be fined millions of dollars for not complying with GDPR.
    Answer: True, but: the regulation allows for companies to be fined up to €20 million or 4% of global annual revenue (whichever is greater) if they don’t comply with GDPR. In practice, the feeling is that such fines will be reserved (at least initially) for egregious violators that ignore or merely give “lip service” to GDPR.
  5. You’ll be able to tell a company is GDPR compliant because they have a “GDPR Certified” badge on their website.
    Answer: False. There is no official GDPR certification or certification program. Companies that comply with GDPR are expected to follow the articles in the regulation, and it should be clear from the outside looking in that they have done so. For example, their “Terms and Conditions” and “Privacy Policy” should clearly spell out how and why they collect, use, and share your information. At some point a real GDPR certification program may be adopted, but not yet.

For all the hoopla about GDPR, the regulation is reasonably well thought out and addresses a very important issue — people’s privacy online. Creating a best practices document, or in this case a regulation, that companies such as Backblaze can follow is a good idea. The document isn’t perfect, and over the coming years we expect there to be changes. One thing we hope for is that the countries within the EU continue to stand behind one regulation and not fragment the document into multiple versions, each applying to themselves. We believe that having multiple different GDPR versions for different EU countries would lead to less protection overall of EU citizens.

In summary, GDPR changes are coming over the next few months. Backblaze has our internal staff and our EU-based legal counsel working diligently to ensure that we will be GDPR compliant by May 25th. We believe that GDPR will have a positive effect in enhancing the protection of personally identifiable information for not only EU citizens, but all of our Backblaze customers.

The post Backblaze and GDPR appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Hard Drive Stats for 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-for-2017/

Backblaze Drive Stats 2017 Review

Beginning in April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. As of the end of 2017, there are about 88 million entries totaling 23 GB of data. You can download this data from our website if you want to do your own research, but for starters here’s what we found.

Overview

At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We’ll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track. Along the way we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

Hard Drive Reliability Statistics for Q4 2017

At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017.

Hard Drive Annualized Failure Rates for Q4 2017

A few things to remember when viewing this chart:

  • The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
  • There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure (see the worked calculation just after this list).
  • AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only.
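To make the drive-day arithmetic behind that 29.08% figure explicit, annualizing one failure over 1,255 drive days gives:

\[
\text{AFR} \;=\; \frac{\text{drive failures}}{\text{drive days}} \times 365 \times 100\% \;=\; \frac{1}{1255} \times 365 \times 100\% \;\approx\; 29.08\%
\]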

Bulking Up and Adding On Storage

Looking back over 2017, we not only added new drives, we “bulked up” by swapping out functional and smaller 2, 3, and 4TB drives with larger 8, 10, and 12TB drives. The changes in drive quantity by quarter are shown in the chart below:

Backblaze Drive Population by Drive Size

For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.

2017 Hard Drive Failure Stats

Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives.

Hard Drive Annualized Failure Rates

The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.

Let’s take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years.

Annual Hard Drive Failure Rates by Year

The failure rate for each year is calculated for just that year. In looking at the results the following observations can be made:

  • The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
  • While it looks like the failure rates for the 3 TB WDC drives have also decreased, you’ll notice that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, decreasing the data set that dramatically makes trend analysis suspect.
  • The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That’s impressive, but with only 45 drives in use for each model, not statistically useful.
  • The HGST/Hitachi 4 TB models delivered sub 1.0% failure rates for each of the three years. Amazing.

A Few More Numbers

To save you countless hours of looking, we’ve culled through the data to uncover the following tidbits regarding our ever changing hard drive farm.

  • 116,833 — The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 drives (data drives) in operation. This means 25,528 drives have either failed or been removed from service for some other reason — typically migration.
  • 29,844 — The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
  • 81.76 — The average number of hard drives installed each day in 2017. This includes new drives, migrations, and failure replacements.
  • 95,638 — The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
  • 55.41 — The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
  • 1,508 — The number of hard drives that were replaced as failed in 2017.
  • 4.13 — The average number of hard drives that have failed each day in 2017.
  • 6,795 — The number of hard drives that have failed from April 2013 until the end of December 2017.
  • 3.94 — The average number of hard drives that have failed each day from April 2013 until the end of December 2017.

Can’t Get Enough Hard Drive Stats?

We’ll be presenting the webinar “Backblaze Hard Drive Stats for 2017” on Thursday February 9, 2017 at 10:00 Pacific time. The webinar will dig deeper into the quarterly, yearly, and lifetime hard drive stats and include the annual and lifetime stats by drive size and manufacturer. You will need to subscribe to the Backblaze BrightTALK channel to view the webinar. Sign up today.

As a reminder, the complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone — it is free.

Good luck and let us know if you find anything interesting.

The post Backblaze Hard Drive Stats for 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloud Babble: The Jargon of Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/what-is-cloud-computing/

Cloud Babble

One of the things we in the technology business are good at is coming up with names, phrases, euphemisms, and acronyms for the stuff that we create. The Cloud Storage market is no different, and we’d like to help by illuminating some of the cloud storage related terms that you might come across. We know this is just a start, so please feel free to add in your favorites in the comments section below and we’ll update this post accordingly.

Clouds

The cloud is really just a collection of purpose built servers. In a public cloud the servers are shared between multiple unrelated tenants. In a private cloud, the servers are dedicated to a single tenant or sometimes a group of related tenants. A public cloud is off-site, while a private cloud can be on-site or off-site – or on-prem or off-prem, if you prefer.

Both Sides Now: Hybrid Clouds

Speaking of on-prem and off-prem, there are Hybrid Clouds or Hybrid Data Clouds depending on what you need. Both are based on the idea that you extend your local resources (typically on-prem) to the cloud (typically off-prem) as needed. This extension is controlled by software that decides, based on rules you define, what needs to be done where.

A Hybrid Data Cloud is specific to data. For example, you can set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
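A minimal sketch of such a rule, assuming a hypothetical upload_to_cloud() hook standing in for whatever cloud storage client is actually in use:

import time
from pathlib import Path

ONE_YEAR_SECONDS = 365 * 24 * 3600

def tier_stale_files(local_dir: str, upload_to_cloud) -> None:
    """Move files untouched for a year from local storage to cloud storage.

    upload_to_cloud(path) is a placeholder for the real client call (for
    example a B2 or S3-compatible SDK); the rule itself is just an mtime check.
    """
    cutoff = time.time() - ONE_YEAR_SECONDS
    for path in Path(local_dir).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            upload_to_cloud(path)
            path.unlink()          # free local space once the copy is in the cloud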

A Hybrid Cloud is similar to a Hybrid Data Cloud except it also extends compute. For example, at the end of the quarter, you can spin up order processing application instances off-prem as needed to add to your on-prem capacity. Of course, determining where the transactional data used and created by these applications resides can be an interesting systems design challenge.

Clouds in my Coffee: Fog

Typically, public and private clouds live in large buildings called data centers. Full of servers, networking equipment, and clean air, data centers need lots of power, lots of networking bandwidth, and lots of space. This often limits where data centers are located. The further away you are from a data center, the longer it generally takes to get your data to and from there. This is known as latency. That’s where “Fog” comes in.

Fog is often referred to as clouds close to the ground. Fog, in our cloud world, is basically having a “little” data center near you. This can make data storage and even cloud-based processing faster for everyone nearby. Data, and to a lesser extent processing, can be transferred between the Fog and the Cloud when time is less of a factor. Data could also be aggregated in the Fog and sent to the Cloud. For example, your electric meter could report its minute-by-minute status to the Fog for diagnostic purposes. Then once a day the aggregated data could be sent to the power company’s Cloud for billing purposes.
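As a toy illustration of that meter example, an edge node might keep the raw minute-by-minute readings locally and forward only a small daily summary upstream. The function and field names below are invented for the sketch, not a real metering API.

```python
from statistics import mean

def summarize_day(minute_readings_kwh):
    """Collapse ~1,440 minute-level meter readings into one small daily record."""
    return {
        "total_kwh": round(sum(minute_readings_kwh), 3),
        "avg_kwh_per_minute": round(mean(minute_readings_kwh), 5),
        "samples": len(minute_readings_kwh),
    }

# The Fog node keeps the raw minute data for diagnostics and sends only the
# daily summary to the Cloud, e.g. send_to_cloud(summarize_day(readings)).
readings = [0.02] * 1440  # one flat day of readings, purely for illustration
print(summarize_day(readings))
```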

Another term used in place of Fog is Edge, as in computing at the Edge. In either case, a given cloud (data center) usually has multiple Edges (little data centers) connected to it. The connection between the Edge and the Cloud is sometimes known as the middle-mile. The network in the middle-mile can be less robust than that required to support a stand-alone data center. For example, the middle-mile can use 1 Gbps lines, versus a data center, which would require multiple 10 Gbps lines.

Heavy Clouds No Rain: Data

We’re all aware that we are creating, processing, and storing data faster than ever before. All of this data is stored in either a structured or more likely an unstructured way. Databases and data warehouses are structured ways to store data, but a vast amount of data is unstructured – meaning the schema and data access requirements are not known until the data is queried. A large pool of unstructured data in a flat architecture can be referred to as a Data Lake.

A Data Lake is often created so we can perform some type of “big data” analysis. In an oversimplified example, let’s extend the lake metaphor a bit and ask the question: “How many fish are in our lake?” To get an answer, we take a sufficient sample of our lake’s water (data), count the number of fish we find, and extrapolate based on the size of the lake to get an answer within a given confidence interval.
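To make the fish-counting analogy concrete, here is a small sketch of that sample-and-extrapolate estimate, including a rough 95% confidence interval. The fish counts and lake size are invented for the example, and the Poisson-style error bar is an approximation.

```python
import math

def estimate_total(sample_count, sample_size, population_size, z=1.96):
    """Extrapolate a per-unit rate from a sample to the whole 'lake'."""
    rate = sample_count / sample_size              # fish per liter sampled
    estimate = rate * population_size              # estimated fish in the lake
    # Approximate standard error of a Poisson-style count, scaled up.
    stderr = math.sqrt(sample_count) / sample_size * population_size
    return estimate, (estimate - z * stderr, estimate + z * stderr)

# Made-up numbers: 42 fish found in 10,000 liters sampled from a 5,000,000 liter lake.
est, (low, high) = estimate_total(42, 10_000, 5_000_000)
print(f"estimated fish: {est:,.0f} (95% CI roughly {low:,.0f} to {high:,.0f})")
```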

A Data Lake is usually found in the cloud, an excellent place to store large amounts of non-transactional data. Watch out as this can lead to our data having too much Data Gravity or being locked in the Hotel California. This could also create a Data Silo, thereby making a potential data Lift-and-Shift impossible. Let me explain:

  • Data Gravity — Generally, the more data you collect in one spot, the harder it is to move. When you store data in a public cloud, you have to pay egress and/or network charges to download the data to another public cloud or even to your own on-premises systems. Some public cloud vendors charge a lot more than others, meaning that depending on your public cloud provider, your data could have a lot more financial gravity than you expected.
  • Hotel California — This is like Data Gravity, but on a smaller scale. Your data is in the Hotel California if, to paraphrase, “your data can check out any time you want, but it can never leave.” If the cost of downloading your data is limiting the things you want to do with that data, then your data is in the Hotel California. Data is generally most valuable when used, and with cloud storage that can include archived data. This assumes, of course, that the archived data is readily available, and affordable, to download. When considering a cloud storage project, always figure in the cost of using your own data.
  • Data Silo — Over the years, businesses have suffered from organizational silos as information is not shared between different groups, but instead needs to travel up to the top of the silo before it can be transferred to another silo. If your data is “trapped” in a given cloud by the cost it takes to share that data, then you may have a Data Silo, and that’s exactly the opposite of what the cloud should do.
  • Lift-and-Shift — This term is used to define the movement of data or applications from one data center to another, or from on-prem to off-prem systems. The move generally occurs all at once, and once everything is moved, systems are operational and data is available at the new location with few, if any, changes. If your data has too much gravity or is locked in a hotel, a data lift-and-shift may break the bank.

I Can See Clearly Now

Hopefully, the cloudy terms we’ve covered are, well, less cloudy. As we mentioned in the beginning, our compilation is just a start, so please feel free to add your favorite cloud term in the comments section below and we’ll update this post with your contributions. Keep your entries “clean,” and please no words or phrases that are really adverts for your company. Thanks.

The post Cloud Babble: The Jargon of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

HAMR drive illustration

During Q4, Backblaze deployed 100 petabytes worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near-term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain to tell us how Seagate engineers are developing the HAMR technology and making it market ready starting in late 2018.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relatively near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

It sounds almost simple, but the science and engineering expertise required to perfect this technology, spanning research, experimentation, lab development, and product development, has been enormous. Below is an overview of the HAMR technology; you can dig into the details in our technical brief, which provides a point-by-point rundown of several key advances enabling the HAMR design.

For all the time and resources that have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

History of Increasing Storage Capacity

For the last 50 years, areal density in hard disk drives has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping looking for a hard drive just like the one they bought two years ago. The demand for more storage capacity inevitably increases, so the technology constantly evolves.

According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of storage in the area available to store data, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

Why Do We Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies Do the Job?

Why is HAMR’s increased data density so important?

Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and critically, the evolution of data from playing a background role in business to playing a life-critical role.

Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbit, Alexa, and smartphones to home security systems, solar systems, and autonomous cars — is fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure like power grids, water systems, hospitals, road infrastructure, and public transportation all demand and add to the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

All of this entails a significant behind-the-scenes infrastructure cost to feed the insatiable, global appetite for data storage. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60 TB 3.5-inch SSD, for example), high-capacity hard drives serve as the foundation of our interconnected, cloud- and IoT-based dependence on data.

HAMR Hard Drive Technology

Seagate has been working on heat-assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we’ve made many breakthroughs in making reliable near-field transducers, special high-capacity HAMR media, and figuring out a way to put a laser no larger than a grain of salt on each and every head.

The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
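To get a feel for how areal density maps to capacity, here is a rough back-of-the-envelope sketch. The platter geometry (a usable recording band between roughly 0.6 and 1.8 inches of radius) and the surface count are assumptions of mine, not Seagate figures, so treat the output as an order-of-magnitude illustration only.

```python
import math

def drive_capacity_tb(areal_density_tbpsi, surfaces, r_outer_in=1.8, r_inner_in=0.6):
    """Very rough capacity estimate: density (Tb/in^2) x usable area x surfaces."""
    area_sq_in = math.pi * (r_outer_in**2 - r_inner_in**2)   # usable band per surface
    terabits = areal_density_tbpsi * area_sq_in * surfaces
    return terabits / 8                                       # Tb -> TB

# Assumed geometry: 9 platters with both sides used (18 recording surfaces).
for density in (1.0, 5.0, 10.0):
    print(f"{density:4.1f} Tbpsi -> ~{drive_capacity_tb(density, surfaces=18):5.0f} TB")
```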

The major problem with packing bits so closely together is that if you do that on conventional magnetic media, the bits (and the data they represent) become thermally unstable, and may flip. So, to make the grains maintain their stability — their ability to store bits over a long period of time — we need to develop a recording media that has higher coercivity. That means it’s magnetically more stable during storage, but it is more difficult to change the magnetic characteristics of the media when writing (harder to flip a grain from a 0 to a 1 or vice versa).

That’s why HAMR’s first key hardware advance required developing a new recording media that keeps bits stable — using high anisotropy (or “hard”) magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

In fact the new media is so “hard” that conventional recording heads won’t be able to flip the bits, or write new data, under normal temperatures. If you add heat to the tiny spot on which you want to write data, you can make the media’s coercive field lower than the magnetic field provided by the recording head — in other words, enable the write head to flip that bit.

So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region of several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media’s coercive field lower than the write head’s magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit’s magnetic orientation is frozen in place.

Applying this dynamic nano-heating is where HAMR’s famous “laser” comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head, to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.

Moving HAMR Forward

HAMR write head

As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a point-by-point short illustrated summary of HAMR’s key changes.

Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.

The post What is HAMR and How Does It Enable the High-Capacity Needs of the Future? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

B2 Cloud Storage Roundup

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/b2-cloud-storage-roundup/

B2 Integrations
Over the past several months, B2 Cloud Storage has continued to grow like we planted magic beans. During that time we have added a B2 Java SDK, and certified integrations with GoodSync, Arq, Panic, UpdraftPlus, Morro Data, QNAP, Archiware, Restic, and more. In addition, B2 customers like Panna Cooking, Sermon Audio, and Fellowship Church are happy they chose B2 as their cloud storage provider. If any of that sounds interesting, read on.

The B2 Java SDK

While the Backblaze B2 API is well documented and straightforward to implement, we were asked by a few of our Integration Partners if we had an SDK they could use. So we developed one as an open-source project on GitHub, where we hope interested parties will not only use our Java SDK, but also make it better for everyone else.

There are different reasons one might use the Java SDK, but a couple of areas where the SDK can simplify the coding process are:

Expiring Authorization — The authorization token that B2 issues for an account is valid for 24 hours, so it must be renewed at least once a day when using the API. If the token expires while you are in the middle of transferring files or some other B2 activity (listing buckets, etc.), the SDK can detect the expiration and obtain a new token on the fly. Your B2-related activities will continue without incident and without you having to capture and code your own exception case.

Error Handling — There are different types of error codes B2 will return, from expired authorization tokens to malformed requests to command time-outs. The SDK can dramatically simplify the coding needed to capture and account for the various things that can happen.
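To make the first case concrete, here is a rough Python sketch (not the Java SDK itself) of the re-authorize-and-retry pattern the SDK automates, written against the B2 v1 REST endpoints. Treat the exact field and error-code names as assumptions to verify against the current B2 API documentation.

```python
import requests

API_BASE = "https://api.backblazeb2.com/b2api/v1"

def authorize(account_id, application_key):
    """Trade account credentials for a 24-hour authorization token and API URL."""
    resp = requests.get(f"{API_BASE}/b2_authorize_account",
                        auth=(account_id, application_key))
    resp.raise_for_status()
    body = resp.json()
    return body["authorizationToken"], body["apiUrl"]

def list_buckets(account_id, application_key):
    """Call b2_list_buckets, re-authorizing once if the token has expired."""
    token, api_url = authorize(account_id, application_key)
    for _ in range(2):
        resp = requests.post(f"{api_url}/b2api/v1/b2_list_buckets",
                             headers={"Authorization": token},
                             json={"accountId": account_id})
        if resp.status_code == 401 and resp.json().get("code") == "expired_auth_token":
            token, api_url = authorize(account_id, application_key)  # refresh and retry
            continue
        resp.raise_for_status()
        return resp.json()["buckets"]
    raise RuntimeError("could not refresh the B2 authorization token")
```

The same wrapper approach extends naturally to the other error cases mentioned above, for example retrying time-outs with a backoff.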

While Backblaze has created the Java SDK, developers in the GitHub community have also created other SDKs for B2, for example, for PHP (https://github.com/cwhite92/b2-sdk-php) and Go (https://github.com/kurin/blazer). Let us know in the comments about other SDKs you’d like to see, or perhaps start your own GitHub project. We will publish any updates in our next B2 roundup.

What You Can Do with Affordable and Available Cloud Storage

You’re probably aware that B2 is up to 75% less expensive than other similar cloud storage services like Amazon S3 and Microsoft Azure. Businesses and organizations are finding that projects that previously weren’t economically feasible with other Cloud Storage services are now not only possible, but a reality with B2. Here are a few recent examples:

SermonAudio wanted their media files to be readily available, but didn’t want to build and manage their own internal storage farm. Until B2, cloud storage was just too expensive to use. Now they use B2 to store their audio and video files, and also as the primary source of downloads and streaming requests from their subscribers.
Fellowship Church wanted to escape from the ever-increasing amount of time they were spending saving their data to their LTO-based system. Using B2 saved countless hours of personnel time versus LTO, fit easily into their video processing workflow, and provided instant access at any time to their media library.
Panna Cooking replaced their closet full of archive hard drives with a cost-efficient hybrid-storage solution combining 45Drives and Backblaze B2 Cloud Storage. Archived media files that used to take hours to locate are now readily available regardless of whether they reside in local storage or in the B2 Cloud.

B2 Integrations

Leading companies in backup, archive, and sync continue to add B2 Cloud Storage as a storage destination for their customers. These companies realize that by offering B2 as an option, they can dramatically lower the total cost of ownership for their customers — and that’s always a good thing.

If your favorite application is not integrated with B2, you can do something about it. One integration partner told us they received over 200 customer requests for a B2 integration. The partner got the message, and the integration is currently in beta testing.

Below are some of the partner integrations completed in the past few months. You can check the B2 Partner Integrations page for a complete list.

Archiware — Both P5 Archive and P5 Backup can now store data in the B2 Cloud, making your off-site media files readily available while keeping your off-site storage costs predictable and affordable.

Arq — Combine Arq and B2 for amazingly affordable backup of external drives, network drives, NAS devices, Windows PCs, Windows Servers, and Macs to the cloud.

GoodSync — Automatically synchronize and back up all your photos, music, email, and other important files across your desktops, laptops, servers, and external drives, and sync or back up to B2 Cloud Storage for off-site storage.

QNAP — QNAP Hybrid Backup Sync consolidates backup, restoration, and synchronization functions into a single QTS application to easily transfer your data to local, remote, and cloud storage.

Morro Data — Their CloudNAS solution stores files in the cloud, caches them locally as needed, and syncs files globally among other CloudNAS systems in an organization.

Restic – Restic is a fast, secure, multi-platform command line backup program. Files are uploaded to a B2 bucket as de-duplicated, encrypted chunks. Each backup is a snapshot of only the data that has changed, making restores of a specific date or time easy.

Transmit 5 by Panic — Transmit 5, the gold standard for macOS file transfer apps, now supports B2. Upload, download, and manage files on tons of servers with an easy, familiar, and powerful UI.

UpdraftPlus — WordPress developers and admins can now use the UpdraftPlus Premium WordPress plugin to affordably back up their data to the B2 Cloud.

Getting Started with B2 Cloud Storage

If you’re using B2 today, thank you. If you’d like to try B2, but don’t know where to start, here’s a guide to getting started with the B2 Web Interface — no programming or scripting is required. You get 10 gigabytes of free storage and 1 gigabyte a day in free downloads. Give it a try.

The post B2 Cloud Storage Roundup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Stats for Q3 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-failure-rates-q3-2017/

Q3 2017 Hard Drive Stats

In Q3 2017, Backblaze introduced both 10 TB and 12 TB hard drives into our data centers, we continued to retire 3 TB and 4 TB hard drives to increase storage density, and we added over 59 petabytes of data storage to bring our total storage capacity to 400 petabytes.

In this update, we’ll review the Q3 2017 and lifetime hard drive failure rates for all our drive models in use at the end of Q3. We’ll also check in on our 8 TB enterprise versus consumer hard drive comparison, and look at the storage density changes in our data centers over the past couple of years. Along the way, we’ll share our observations and insights, and as always, you can download the hard drive statistics data we use to create these reports.

Q3 2017 Hard Drive Failure Rates

Since our Q2 2017 report, we added 9,599 new hard drives and retired 6,221 hard drives, for a net add of 3,378 drives and a total of 86,529. These numbers are for the drive models of which we have 45 or more drives — with one exception that we’ll get to in a minute.

Let’s look at the Q3 statistics that include our first look at the 10 TB and 12 TB hard drives we added in Q3. The chart below is for activity that occurred just in Q3 2017.

Hard Drive Failure Rates for Q3 2017

Observations

  1. The hard drive failure rate for the quarter was 1.84%, our lowest quarterly rate ever. There are several factors that contribute to this, but one that stands out is the average age of the hard drives in use. Only the 4 TB HGST drives (model: HDS5C4040ALE630) have an average age over 4 years — 51.3 months to be precise. The average age of all the other drive models is less than 4 years, with nearly 80% of all of the drives being less than 3 years old.
  2. The 10- and 12 TB drive models are new. With a combined 13,000 drive days in operation, they’ve had zero failures. While all of these drives passed through formatting and load testing without incident, it is a little too early to reach any conclusions.

Testing Drives

Normally, we list only those drive models where we have 45 drives or more, as it formerly took 45 drives (currently 60) to fill a Storage Pod. We consider a Storage Pod to be the base unit for drive testing. Yet, we listed the 12 TB drives even though we only have 20 of them in operation. What gives? It’s the first step in testing drives.

A Backblaze Vault consists of 20 Storage Pods logically grouped together. Twenty 12 TB drives are deployed in the same drive position in each of the 20 Storage Pods and grouped together into a storage unit we call a “tome.” An incoming file is stored in one tome, and is spread out across the 20 storage pods in the tome for reliability and availability. The remaining 59 tomes, in this case, use 8 TB drives. This allows us to see the performance and reliability of a 12 TB hard drive model in an operational environment without having to buy 1,200 of them to start.

Breaking news: Our first Backblaze Vault filled with 1,200 Seagate 12 TB hard drives (model: ST12000NM0007) went into production on October 20th.

Storage Density Continues to Increase

As noted earlier, we retired 6,221 hard drives in Q3: all 3- or 4 TB hard drives. The retired drives have been replaced by 8-, 10-, and 12 TB drive models. This dramatic increase in storage density added 59 petabytes of storage in Q3. The following chart shows that change since the beginning of 2016.

Hard Drive Count by Drive Size

You clearly can see the retirement of the 2 TB and 3 TB drives, each being replaced predominantly by 8 TB drives. You also can see the beginning of the retirement curve for the 4 TB drives, which will most likely be replaced by 12 TB drives over the coming months. A subset of the 4 TB drives, about 10,000 that were installed within the past year, will most likely stay in service for at least the next couple of years.

Lifetime Hard Drive Stats

The table below shows the failure rates for the hard drive models we had in service as of September 30, 2017. This is over the period beginning in April 2013 and ending September 30, 2017. If you are interested in the hard drive failure rates for all the hard drives we’ve used over the years, please refer to our 2016 hard drive review.

Cumulative Hard Drive Failure Rates

Note 1: The “+ / – Change” column reflects the change in the annualized failure rate from the previous quarter. Down is good, up is bad.
Note 2: You can download the data for this chart and the data from the “Hard Drive Failure Rates for Q3 2017” chart shown earlier in this review. The downloaded ZIP file contains one MS Excel spreadsheet.

The annualized failure rate for all of the drive models listed above is 2.07%; this is higher than the 1.97% for the previous quarter. The primary driver behind this was the retirement of all of the HGST 3 TB drives (model: HDS5C3030ALA630) in Q3. Those drives had over 6 million drive days and an annualized failure rate of 0.82% — well below the average for the entire set of drives. Those drives are now gone and no longer part of the results.
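For readers who want to work with the downloadable data themselves, the annualized failure rate used in these reports is failures per drive-year of operation (drive days divided by 365), expressed as a percentage. A minimal sketch, with a failure count chosen only to illustrate how a figure like the 0.82% above falls out of the formula:

```python
def annualized_failure_rate(drive_days, failures):
    """Failures per drive-year of operation, expressed as a percentage."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Hypothetical example: 135 failures spread over 6,000,000 drive days.
print(f"{annualized_failure_rate(6_000_000, 135):.2f}%")  # -> 0.82%
```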

Consumer Versus Enterprise Drives

The comparison of the consumer and enterprise Seagate 8 TB drives continues. Both of the drive models, Enterprise: ST8000NM0055 and Consumer: ST8000DM002, saw their annualized failure rates decrease from the previous quarter. In the case of the enterprise drives, this occurred even though we added 8,350 new drives in Q3. This brings the total number of Seagate 8 TB enterprise drives to 14,404, which have accumulated nearly 1.4 million drive days.

A comparison of the two drive models shows the annualized failure rates being very similar:

  • 8 TB Consumer Drives: 1.1% Annualized Failure Rate
  • 8 TB Enterprise Drives: 1.2% Annualized Failure Rate

Given that the failure rates for the two drive models appear to be similar, are the Seagate 8 TB enterprise drives worth any premium you might have to pay for them? As we have previously documented, the Seagate enterprise drives load data faster and have a number of features, such as the PowerChoice™ technology, that can be very useful. In addition, enterprise drives typically have a 5-year warranty versus a 2-year warranty for the consumer drives. While drive price and availability are our primary considerations, you may decide other factors are more important.

We will continue to follow these drives, especially as they age past 2 years, the warranty point for the consumer drives.

Join the Drive Stats Webinar on Friday, November 3

We will be doing a deeper dive on this review in a webinar: “Q3 2017 Hard Drive Failure Stats” being held on Friday, November 3rd at 10:00 am Pacific Time. We’ll dig into what’s behind the numbers, including the enterprise vs consumer drive comparison. To sign up for the webinar, you will need to subscribe to the Backblaze BrightTALK channel if you haven’t already done so.

Wrapping Up

Our next drive stats post will be in January, when we’ll review the data for Q4 and all of 2017, and we’ll update our lifetime stats for all of the drives we have ever used. In addition, we’ll get our first real look at the 12 TB drives.

As a reminder, the hard drive data we use is available on our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone: it is free.

Good luck and let us know if you find anything interesting.

The post Hard Drive Stats for Q3 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Yes, Backblaze Just Ordered 100 Petabytes of Hard Drives

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/400-petabytes-cloud-storage/

10 Petabyte vault, 100 Petabytes ordered, 400 Petabytes stored

Backblaze just ordered 100 petabytes’ worth of hard drives, and yes, we’ll use nearly all of them in Q4. In fact, we’ll begin the process of sourcing the Q1 hard drive order in the next few weeks.

What are we doing with all those hard drives? Let’s take a look.

Our First 10 Petabyte Backblaze Vault

Ken clicked the submit button, and 10 Petabytes of Backblaze Cloud Storage came online, ready to accept customer data. Ken (aka the Pod Whisperer) is one of our Datacenter Operations Managers at Backblaze, and with that one click he activated Backblaze Vault 1093, which was built with 1,200 Seagate 10 TB drives (model: ST10000NM0086). After formatting and configuration of the disks, 10.12 Petabytes of free space remain for customer data. Back in 2011, when Ken started at Backblaze, he was amazed that we had amassed as much as 10 Petabytes of data storage.

The Seagate 10 TB drives we deployed in vault 1093 are helium-filled drives. We had previously deployed 45 HGST 8 TB helium-filled drives, where we learned one of the benefits of using helium drives — they consume less power than traditional air-filled drives. Here’s a quick comparison of the power consumption of several high-density drive models we deploy:

MFR Model Fill Size Idle (1) Operating (2)
Seagate ST8000DM002 Air 8 TB 7.2 watts 9.0 watts
Seagate ST8000NM0055 Air 8 TB 7.6 watts 8.6 watts
HGST HUH728080ALE600 Helium 8 TB 5.1 watts 7.4 watts
Seagate ST10000NM0086 Helium 10 TB 4.8 watts 8.6 watts
(1) Idle: Average Idle in watts as reported by the manufacturer.
(2) Operating: The maximum operational consumption in watts as reported by the manufacturer — typically for read operations.
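Using just the idle figures in the table, here is a rough sense of what the helium difference means at vault scale. This is only an illustration comparing the two Seagate models listed; actual savings depend on duty cycle, cooling overhead, and which drives you are actually replacing.

```python
DRIVES_PER_VAULT = 1200
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts_per_drive, drives=DRIVES_PER_VAULT):
    """Energy for a vault's worth of drives at a constant per-drive draw."""
    return watts_per_drive * drives * HOURS_PER_YEAR / 1000.0

# Idle figures from the table above: Seagate 8 TB air vs. Seagate 10 TB helium.
air_kwh = annual_kwh(7.2)
helium_kwh = annual_kwh(4.8)
print(f"idle savings per vault: ~{air_kwh - helium_kwh:,.0f} kWh/year")
# -> roughly 25,000 kWh/year if the drives simply sat idle all year.
```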

I’d like 100 Petabytes of Hard Drives To Go, Please

“100 Petabytes should get us through Q4.” — Tim Nufire, Chief Cloud Officer, Backblaze

The 1,200 Seagate 10 TB drives are just the beginning. The next Backblaze Vault will be configured with 12 TB drives which will give us 12.2 petabytes of storage in one vault. We are currently building and adding two to three Backblaze Vaults a month to our cloud storage system, so we are going to need more drives. When we did all of our “drive math,” we decided to place an order for 100 petabytes of hard drives comprised of 10 and 12 TB models. Gleb, our CEO and occasional blogger, exhaled mightily as he signed the biggest purchase order in company history. Wait until he sees the one for Q1.

Enough drives for a 10 petabyte vault

400 Petabytes of Cloud Storage

When we added Backblaze Vault 1093, we crossed over 400 Petabytes of total available storage. For those of you keeping score at home, we reached 350 Petabytes about 3 months ago as you can see in the chart below.

Petabytes of data stored by Backblaze

Backblaze Vault Primer

All of the storage capacity we’ve added in the last two years has been on our Backblaze Vault architecture, with vault 1093 being the 60th one we have placed into service. Each Backblaze Vault is comprised of 20 Backblaze Storage Pods logically grouped together into one storage system. Today, each Storage Pod contains sixty 3 ½” hard drives, giving each vault 1,200 drives. Early vaults were built on Storage Pods with 45 hard drives, for a total of 900 drives in a vault.

A Backblaze Vault accepts data directly from an authenticated user. Each data blob (object, file, group of files) is divided into 20 shards (17 data shards and 3 parity shards) using our erasure coding library. Each of the 20 shards is stored on a different Storage Pod in the vault. At any given time, several vaults stand ready to receive data storage requests.
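One way to see why this layout is attractive: a file remains readable as long as any 17 of its 20 shards are reachable. Here is a toy model that treats each Storage Pod as independently available with some probability; this is a simplification of my own (it ignores correlated failures and repair), not Backblaze’s durability math.

```python
from math import comb

def shard_read_availability(p_pod, data_shards=17, total_shards=20):
    """Probability that at least 'data_shards' of 'total_shards' pods are up,
    treating each pod as independently available with probability p_pod."""
    return sum(comb(total_shards, k) * p_pod**k * (1 - p_pod)**(total_shards - k)
               for k in range(data_shards, total_shards + 1))

for p in (0.95, 0.99, 0.999):
    print(f"pod availability {p:.3f} -> file readable {shard_read_availability(p):.9f}")
```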

Drive Stats for the New Drives

In our Q3 2017 Drive Stats report, due out in late October, we’ll start reporting on the 10 TB drives we are adding. It looks like the 12 TB drives will come online in Q4. We’ll also get a better look at the 8 TB consumer and enterprise drives we’ve been following. Stay tuned.

Other Big Data Clouds

We have always been transparent here at Backblaze, including about how much data we store, how we store it, even how much it costs to do so. Very few others do the same. But, if you have information on how much data a company or organization stores in the cloud, let us know in the comments. Please include the source and make sure the data is not considered proprietary. If we get enough tidbits we’ll publish a “big cloud” list.

The post Yes, Backblaze Just Ordered 100 Petabytes of Hard Drives appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Stats for Q2 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-failure-stats-q2-2017/

Backblaze Drive Stats Q2 2017

In this update, we’ll review the Q2 2017 and lifetime hard drive failure rates for all our current drive models. We also look at how our drive migration strategy is changing the drives we use, and we’ll check in on our enterprise-class drives to see how they are doing. Along the way, we’ll share our observations and insights, and as always, we welcome your comments and critiques.

Since our last report for Q1 2017, we have added 635 additional hard drives to bring us to the 83,151 drives we’ll focus on. In Q1 we added over 10,000 new drives to the mix, so adding just 635 in Q2 seems “odd.” In fact, we added 4,921 new drives and retired 4,286 old drives as we migrated from lower density drives to higher density drives. We cover more about migrations later on, but first let’s look at the Q2 quarterly stats.

Hard Drive Stats for Q2 2017

We’ll begin our review by looking at the statistics for the period of April 1, 2017 through June 30, 2017 (Q2 2017). This table includes 17 different 3 ½” drive models that were operational during the indicated period, ranging in size from 3 to 8 TB.

Quarterly Hard Drive Failure Rates for Q2 2017

When looking at the quarterly numbers, remember to look for those drive models with at least 50,000 drive days for the quarter. That works out to about 550 drives running the entire quarter, which is a good sample size. If the sample size is below that, the failure rates can be skewed by a small change in the number of drive failures.

As noted previously, we use the quarterly numbers to look for trends. So this time we’ve included a trend indicator in the table. The “Q2Q Trend” column is short for quarter-to-quarter trend, i.e. last quarter to this quarter. We can add, change, or delete trend columns depending on community interest. Let us know what you think in the comments.

Good Migrations

In Q2 we continued with our data migration program. For us, a drive migration means we intentionally remove a good drive from service and replace it with another drive. Drives that are removed via migrations are not counted as failed. Once they are removed they stop accumulating drive hours and other stats in our system.

There are three primary drivers for our migration program.

  1. Increase Storage Density – For example, in Q3 we replaced 3 TB drives with 8 TB drives, more than doubling the amount of storage in a given Storage Pod for the same footprint. The cost of electricity was nominally more with the 8 TB drives, but the increase in density more than offset the additional cost. For those interested you can read more about the cost of cloud storage here.
  2. Backblaze Vaults – Our Vault architecture has proven to be more cost effective over the past two years than using stand-alone Storage Pods. A major goal of the migration program is to have the entire Backblaze cloud deployed on the highly efficient and resilient Backblaze Vault architecture.
  3. Balancing the Load – With our Phoenix data center online and accepting data, we have migrated some systems to the Phoenix DC. Don’t worry, we didn’t put your data on a truck and drive it to Phoenix. We simply built new systems there and transferred the data from our Northern California DC. In the process, we are gaining valuable insights as we move towards being able to replicate data between the two data centers.

During Q2 we migrated the data on 155 systems, giving nearly 30 petabytes of data a new, more durable, place to call home. There are still 644 individual Storage Pods (Storage Pod Classics, as we call them) left to migrate to the Backblaze Vault architecture.

Just in case you don’t know, a Backblaze Vault is a logical collection of 20 beefy Storage Pods (not Classics). Using our own Reed-Solomon erasure coding library, data is spread out across the 20 Pods into 17 data shards and 3 parity shards. The data and parity shards of each arriving data blob can be stored on different Storage Pods in a given Backblaze Vault.

Lifetime Hard Drive Failure Rates for Current Drives

The table below shows the failure rates for the hard drive models we had in service as of June 30, 2017. This is over the period beginning in April 2013 and ending June 30, 2017. If you are interested in the hard drive failure rates for all the hard drives we’ve used over the years, please refer to our 2016 hard drive review.

Cumulative Hard Drive Failure Rates

Enterprise vs Consumer Drives

We added 3,595 enterprise-class 8 TB drives in Q2, bringing our total to 6,054 drives. You may be tempted to compare the failure rates of the 8 TB enterprise drive (model: ST8000NM0055) to the consumer 8 TB drive (model: ST8000DM002), and conclude the enterprise drives fail at a higher rate. Let’s not jump to that conclusion yet, as the average operational age of the enterprise drives is only 2.11 months.

There are some insights we can gain from the current data. The enterprise drives have 363,282 drive days and an annualized failure rate of 1.61%. If we look back at our data, we find that as of Q3 2016, the 8 TB consumer drives had 422,263 drive days with an annualized failure rate of 1.60%. That means that when both drive models had a similar number of drive days, they had nearly the same annualized failure rate. There are no conclusions to be made here, but the observation is worth considering as we gather data for our comparison.

Next quarter, we should have enough data to compare the 8 TB drives, but by then the 8 TB drives could be “antiques.” In the next week or so, we’ll be installing 12 TB hard drives in a Backblaze Vault. Each 60-drive Storage Pod in the Vault would have 720 TB of storage available, and a 20-pod Backblaze Vault would have 14.4 petabytes of raw storage.
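Here is the arithmetic behind those figures, assuming (my assumption, based on the 17 data shards and 3 parity shards described above) that usable capacity is 17/20 of raw. On that assumption, the result lines up with the roughly 12.2 petabytes per 12 TB vault quoted in the 100-petabyte order post earlier in this collection.

```python
DRIVE_TB = 12
DRIVES_PER_POD = 60
PODS_PER_VAULT = 20
DATA_SHARDS, TOTAL_SHARDS = 17, 20

raw_tb = DRIVE_TB * DRIVES_PER_POD * PODS_PER_VAULT   # 14,400 TB = 14.4 PB raw
usable_tb = raw_tb * DATA_SHARDS / TOTAL_SHARDS       # capacity left after parity
print(f"raw:    {raw_tb / 1000:.1f} PB")
print(f"usable: {usable_tb / 1000:.2f} PB")            # ~12.24 PB
```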

Better Late Than Never

Sorry for being a bit late with the hard drive stats report this quarter. We were ready to go last week, then this happened. Some folks here thought that was more important than our Q2 Hard Drive Stats. Go figure.

Drive Stats at the Storage Developers Conference

We will be presenting at the Storage Developers Conference in Santa Clara on Monday September 11th at 8:30am. We’ll be reviewing our drive stats along with some interesting observations from the SMART stats we also collect. The conference is the leading event for technical discussions and education on the latest storage technologies and standards. Come join us.

The Data For This Review

If you are interested in the data from the two tables in this review, you can download an Excel spreadsheet containing the two tables. Note: the domain for this download will be f001.backblazeb2.com.

You also can download the entire data set we use for these reports from our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone. It is free.

Good luck, and let us know if you find anything interesting.

The post Hard Drive Stats for Q2 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Migrate All of Your Data from CrashPlan

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/how-to-migrate-your-data-from-crashplan/

Migrating from Crashplan

With CrashPlan deciding to leave the consumer backup space, ex-customers are faced with having to migrate their data to a new cloud backup service. Uploading your data from your computer to a new service is onerous enough, but one thing that seems to be getting overlooked is the potential for the files that reside in CrashPlan Central, but not on your computer, to be lost during the migration to a new provider. Here’s an overview of the migration process to make sure you don’t lose data you wish to keep.

Why would you lose files?

By default, CrashPlan for Home does not delete files from CrashPlan Central (their cloud storage servers) even after you delete them from your computer. Unless you changed your CrashPlan “Frequency and versions” settings, all of the files you uploaded are still there. This includes all the files you deleted from your computer. For example, you may have a folder of old videos that you uploaded to CrashPlan and then deleted from your computer because of space concerns. This folder of old video files is still in your CrashPlan archive. It is very likely you have files stored in CrashPlan Central that are not on your computer. Such files are now in migration limbo, and we’ll get to those files in a minute, but first…

Get Started Now

CrashPlan was kind enough to make sure that everyone will have at least 60 days from August 22nd, 2017 to transfer their data. Most people will have more time, but everyone must be migrated by the end of October 2018.

Regardless, it’s better to get started now as it can take some time to upload your data to another backup provider. The first step in migrating your files is to choose a new cloud backup provider. Let’s assume you choose Backblaze Personal Backup.

Crashplan Migration Steps

The first step is to migrate all the data that is currently on your computer to Backblaze. Once you install Backblaze on your computer, it will automatically scan your system to locate the data to upload to Backblaze. The upload will continue automatically. You can speed up or slow down how quickly Backblaze will upload files by adjusting your performance settings for your Mac or for your Windows PC. In addition, any changes and new files are automatically uploaded as well. Backblaze keeps up to 30 days’ worth of file versions and always keeps the most recent version of every data file currently on your computer.

Question — Should you remove CrashPlan from your computer before migrating to Backblaze?
Answer — No.

If your computer fails during the upload to Backblaze, you’ll still have a full backup with CrashPlan. During the upload period you may want to decrease the resources (CPU and Network) used by CrashPlan and increase the resources available to Backblaze. You can “pause” CrashPlan for up to 24 hours, but that is a manual operation and may not be practical. In any case, you’ll also need to have CrashPlan around to recover those files in migration limbo.

Saving the Files in Migration Limbo

Let’s divide this process into two major parts: recovering the files and getting them stored somewhere else.

    Recovering Files in Limbo

    1. Choose a recovery device — Right now you don’t know how many files you will need to recover, but once you know that information, you’ll need a device to hold them. We recommend that you use an external USB hard drive as your recovery device. If you believe you will only have a small number of limbo files, then a thumb drive will work.
    2. Locate the Limbo files — Open the CrashPlan App on your computer and select the “Restore” menu item on the left. As an example, you can navigate to a given folder and see the files in that folder as shown below:

    Restore files from Crashplan

    3. Click on the “Show deleted files” box as shown below to display all the files, including those that are deleted. As an example, the same files listed above are shown below, and the list now includes the deleted file IMG_6533.JPG.

    Finding deleted files in CrashPlan Central

    4. Deleted files can be visually identified via the different icon and the grayed-out text. Navigate through your folder/directory structure and select the files you wish to recover. Yes, this can take a while. You only need to click on the deleted files, as the other files are currently still on your computer and being backed up directly to Backblaze.
    5. Make sure you change the restore location. By default this is set to “Desktop.” Click on the word “Desktop” to toggle through your options, and you’ll be able to change your restore destination to any mounted device connected to your system. As an example, we’ve chosen to restore the deleted files to the USB external drive named “Backblaze.”
    6. Click “Restore” to restore the files you have selected.

    Storing the Restored Limbo Files

    Now that you have an external USB hard drive with the recovered Limbo files, let’s get them saved to the cloud. With Backblaze you have two options. The first option is to make the Limbo files part of your Backblaze backup. You can do this in two ways.

    1. Copy the Limbo files to your computer and they will be automatically backed up to Backblaze with the rest of your files.
    – or –

    2. Connect the external USB hard drive to your computer and configure Backblaze to back up that device. This device should remain connected to the computer while the backup occurs, and then once every couple of weeks to make sure that nothing has changed on the hard drive.

    If neither of the above solutions works for you, the other option is to use the Backblaze B2 Cloud Storage service.

What is Backblaze B2 Cloud Storage?

B2 Cloud Storage is a service for storing files in the cloud. Files are available for download at any time, either through the API or through a browser-compatible URL. Files stored in the B2 cloud are not deleted unless you explicitly delete them. In that way it is very similar to CrashPlan. Here’s some help, if you are unsure about the difference between Backblaze Personal Backup and Backblaze B2.

There are four ways to access B2: 1) a Web GUI, 2) a Command-line interface (CLI), 3) an API, and 4) via partner integrations, such as CloudBerry, Synology, Arq, QNAP, GoodSync, and many more you can find on our B2 integrations page. Most CrashPlan users will find either the Web GUI or a partner integration to be the way to go. Note: There is an additional cost to use the B2 service, and we’ll get to that shortly.

  1. Since you already have a Backblaze account, you just have to log in to your account. Click on “My Settings” in the left-hand navigation and enable B2 Cloud Storage. If you haven’t already done so, you will be asked to provide a mobile number for contact and authentication purposes.
  2. To use the B2 Web GUI, you create a B2 “bucket” and then drag-and-drop the files into the B2 bucket.
  3. You can also choose to use a B2 partner integration to store your data into B2.

If you use B2 to store your Limbo files rescued from CrashPlan and you use Backblaze to back up your computer, you will be able to access and manage all of your data from your one Backblaze account.

What does all this cost?

If you are only going to use Backblaze Personal Backup to back up your computer, then you will pay $50/year per computer.

If you decide to combine the use of Backblaze Personal Backup and Backblaze B2, let’s assume you have 500 GB of data to back up from your computer to Backblaze. Let’s also assume you have to store 100 GB of data in Backblaze B2 that you rescued from CrashPlan limbo. Your annual cost would be:

    To back up 500 GB:

    — Backblaze Personal Backup — 1 year/1 computer — $50.00

    To archive 100 GB:

    — Backblaze B2 — 100 GB @ $0.005/GB/month for 12 months — $6.00

    The Total Annual Cost to store your CrashPlan data in Backblaze, including your recovered deleted files, is $56.00.
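Spelling out that arithmetic with the rates quoted in this post ($50 per computer per year for Personal Backup, $0.005/GB/month for B2 storage). This sketch ignores any B2 download or transaction charges you might also incur.

```python
def annual_cost(computers, b2_archive_gb,
                backup_per_computer=50.00, b2_storage_per_gb_month=0.005):
    """Combined yearly cost of Personal Backup plus B2 archive storage."""
    backup = computers * backup_per_computer
    archive = b2_archive_gb * b2_storage_per_gb_month * 12
    return backup + archive

print(f"${annual_cost(1, 100):.2f}")  # 1 computer + 100 GB in B2 -> $56.00
```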

Migrating from CrashPlan to Carbonite

If you are considering migrating your CrashPlan for Home account to Carbonite, you will still have to upload your data to Carbonite. There is no automatic process to copy the files from CrashPlan to Carbonite. You will also have to recover the Limbo files we’ve been speaking about, using the process we’ve outlined above. In summary, when moving from CrashPlan for Home to any other vendor, you will have to re-upload your data to the new vendor.

One More Option

There is one more option you can use when you move your data from CrashPlan to another cloud service. You can download all of your data from CrashPlan, including the active and deleted files, to a local computer or device such as an external USB hard drive. Then you can upload all that data to the new cloud backup provider. Of course, this will mean all that data makes two trips through your local network — down and then back up. This will take time and could be very taxing on any bandwidth limits you may have in place from your network provider.

If you have the bandwidth and the time, this can be a good option, as all your files stored in CrashPlan Central are included in your backup. But, if you have a lot of data and/or a slow internet connection, this can take a really, really long time.

Join Our Webinar for More Information

You can sign up for our upcoming webinar, “Migrating from CrashPlan for Home to Backblaze” on September 7th at 10:00 am PDT if you’d like to learn more about the migration methods we covered today. Please note, you will need to register for this webinar by either signing up for a Backblaze BrightTALK channel account or using your existing BrightTALK account.

CrashPlan Replacement

Now that you are faced with replacing your CrashPlan for Home account, don’t wait until your contract is about to run out. Give yourself at least a couple of months to make sure all the data, including the Limbo data, is safely migrated somewhere else.

Also, regardless of which option you choose for migrating your data from CrashPlan to a new cloud backup service, once everything is moved and you’ve checked to make sure you got everything, then and only then should you turn off your CrashPlan account and uninstall CrashPlan.

An Invitation

If you are a CrashPlan for Home user going through the migration to a new cloud backup service, and have ideas to help other users through the migration process, let us know in the comments. We’ll update this post with any relevant ideas from the community.

The post How to Migrate All of Your Data from CrashPlan appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Getting Your Data into the Cloud is Just the Beginning

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cost-data-of-transfer-cloud-storage/

Total Cloud Storage Cost

Organizations should consider not just the cost of getting their data into the cloud, but also long-term costs for storage and retrieval when deciding which cloud storage solution meets their needs.

As cloud storage has become ubiquitous, organizations large and small are joining in. For larger organizations the lure of reducing capital expenses and their associated operational costs is enticing. For smaller organizations, cloud storage often replaces an unmanageable closet full of external hard drives, thumb drives, SD cards, and other devices. With terabytes or even petabytes of data, the common challenge facing organizations, large and small, is how to get their data up to the cloud.

Transferring Data to the Cloud

The obvious solution for getting your data to the cloud is to upload your data from your internal network through the internet to the cloud storage vendor you’ve selected. Cloud storage vendors don’t charge you for uploading your data to their cloud, but you, of course, have to pay your network provider and that’s where things start to get interesting. Here are a few things to consider.

  • The initial upload: Unless you are just starting out, you will have a large amount of data you want to upload to the cloud. This could be data you wish to archive or have archived previously, for example, data stored on LTO tapes or on external hard drives.
  • Pipe size: This is the amount of upload bandwidth of your network connection. This is measured in Mbps (megabits per second). Remember, your data is stored in MB (megabytes), so an upload connection of 80 Mbps will transfer no more than 10 MB of data per second and most likely a lot less.
  • Cost and caps: In some places, organizations pay a flat monthly rate for a specified level of service (speed) for internet access. In other locations, internet access is metered, or pay as you go. In either case, there can be internet service caps that limit or completely stop data transfer once you reach your contracted threshold.

One or more of these challenges has the potential to make the initial upload of your data expensive and potentially impossible. You could wait until cloud storage companies start buying up internet providers and make data upload cheap (or free with Amazon Prime!), but there is another option.

Data Transfer Devices

Given the potential challenges of using your network for the initial upload of your data to the cloud, a handful of cloud storage companies have introduced data transfer or data ingest services. Backblaze has the B2 Fireball, Amazon has Snowball (and other similar devices), and Google recently introduced their Transfer Appliance.

KLRU-TV Austin PBS uploaded their Austin City Limits musical anthology series to Backblaze using a B2 Fireball.

These services work as follows:

  • The provider sends you a portable (or somewhat portable) storage device.
  • You connect the device to your network and load some amount of data on the device over your internal network connection.
  • You return the device, loaded with your data, to the provider, who uploads your data to your cloud storage account from inside their own data center.

Data Transfer Devices Save Time

Assuming your internet connection is a flat-rate service that has no caps or limits and your organizational operations can withstand the traffic, you still may want to opt to use a data transfer service to move your data to the cloud. Why? Time. For example, if your initial data upload is 100 TB, here’s how long it would take using different network upload connection speeds:

Network Speed Upload Time
10 Mbps 3 years
100 Mbps 124 days
500 Mbps 25 days
1 Gbps 12 days

This assumes you are using most of your upload connection to upload your data, which is probably not realistic if you want to stay in business. You could potentially rent a better connection or upgrade your connection permanently, both of which add to the cost of running your business.
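For those who want to run their own numbers, a quick sketch reproduces the table above. The roughly 75% effective-throughput factor is an assumption on my part to account for protocol overhead and competing traffic; it is not a figure from this post.

```python
def upload_days(data_tb, link_mbps, efficiency=0.75):
    """Days to push 'data_tb' terabytes over a link of 'link_mbps' megabits/second."""
    bits = data_tb * 1e12 * 8                        # TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # assume ~75% effective throughput
    return seconds / 86_400

for mbps in (10, 100, 500, 1000):
    print(f"{mbps:>5} Mbps -> {upload_days(100, mbps):7.0f} days")
# -> about 1,235 / 123 / 25 / 12 days, close to the table above.
```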

Speaking of cost, there is of course a charge for the data transfer service that can be summarized as follows:

  • Backblaze B2 Fireball — up to 40 TB of data per trip for $550.00 for 30 days of use at your site.
  • Amazon Snowball — up to 50 TB of data per trip for $200.00 for 10 days of use at your site, plus $15/day for each additional day at your site.
  • Google Transfer Appliance — up to 100 TB of data per trip for $300.00 for 10 days of use at your site, plus $10/day for each additional day at your site.

These prices do not include shipping, which can range from $100 to $900 depending on shipping method, location, etc.

Both Amazon and Google have transfer devices that are larger and cost more. For comparison purposes below we’ll use the three device versions listed above.

The Real Cost of Uploading Your Data

If we stopped our review at the previous paragraph and we were prepared to load up our transfer device in 10 days or less, the clear winner would be Google. But this leaves out two very important components of any cloud storage project: the cost of storing your data and the cost of downloading your data.

Let’s look at two examples:

Example 1 — Archive 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.

Service          Transfer Cost       Cloud Storage    Total
Backblaze B2     $1,650 (3 trips)    $6,000           $7,650
Google Cloud     $300 (1 trip)       $24,000          $24,300
Amazon S3        $400 (2 trips)      $25,200          $25,600

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $16,650 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 Fireball versus a Google Transfer Appliance is less than 1 month.
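
For reference, here is how the Example 1 totals can be reproduced. The Backblaze B2 storage rate ($0.005/GB/month) is the published price; the Google and Amazon per-gigabyte rates below are back-calculated from the table and should be read as approximations of their list prices at the time, not quotes:

```python
# Example 1: park 100 TB in the cloud for one year (no growth, no downloads).
GB_PER_TB = 1_000  # decimal gigabytes per terabyte

services = {
    # name:          (cost per device trip, trips needed, storage $/GB/month)
    "Backblaze B2": (550, 3, 0.005),   # published B2 storage rate
    "Google Cloud": (300, 1, 0.020),   # assumed rate, back-calculated from the table
    "Amazon S3":    (200, 2, 0.021),   # assumed rate, back-calculated from the table
}

for name, (trip_cost, trips, rate) in services.items():
    transfer = trip_cost * trips
    storage = 100 * GB_PER_TB * rate * 12
    print(f"{name:<12}  transfer ${transfer:>5,}  storage ${storage:>8,.0f}  total ${transfer + storage:>8,.0f}")
```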

Example 2 — Store and use 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
  • Add 5 TB a month (on average) to the total stored.
  • Delete 2 TB a month (on average) from the total stored.
  • Download 10 TB a month (on average) from the total stored.

Service          Transfer Cost       Cloud Storage    Total
Backblaze B2     $1,650 (3 trips)    $9,570           $11,220
Google Cloud     $300 (1 trip)       $39,684          $39,984
Amazon S3        $400 (2 trips)      $36,114          $36,514

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $28,764 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 Fireball versus a Google Transfer Appliance is less than 1 month.
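
Example 2 takes a little more bookkeeping because the amount stored changes every month. The sketch below reproduces the Backblaze B2 column under one assumption: that the net 3 TB monthly increase is counted from the start of each month (a different proration convention will shift the storage figure slightly):

```python
# Example 2, Backblaze B2: start at 100 TB, add 5 TB and delete 2 TB each month
# (net +3 TB), download 10 TB each month, for 12 months.
GB_PER_TB = 1_000
STORAGE_RATE = 0.005   # $/GB/month, published B2 price
DOWNLOAD_RATE = 0.02   # $/GB, published B2 price
TRANSFER = 1_650       # three B2 Fireball trips at $550 each

stored_tb, storage_cost, download_cost = 100, 0.0, 0.0
for month in range(12):
    stored_tb += 3     # assumption: the net monthly addition lands at the start of the month
    storage_cost += stored_tb * GB_PER_TB * STORAGE_RATE
    download_cost += 10 * GB_PER_TB * DOWNLOAD_RATE

total = storage_cost + download_cost + TRANSFER
print(f"storage ${storage_cost:,.0f} + downloads ${download_cost:,.0f} + transfer ${TRANSFER:,} = ${total:,.0f}")
```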

Notes:

  • All prices listed are based on list prices from the vendor websites as of the date of this blog post.
  • We are accomplishing the transfer of your data to the device within the 10 day “free” period specified by Amazon and Google.
  • We are comparing cloud storage services that have similar performance. For example, once the data is uploaded, it is readily available for download. The data is also available for access via a Web GUI, CLI, API, and/or various applications integrated with the cloud storage service. Multiple versions of files can be kept as desired. Files can be deleted any time.

To be fair, it takes Backblaze three trips to move 100 TB, while the Google Transfer Appliance needs only one. This adds some cost to prepare, monitor, and ship three B2 Fireballs versus one Transfer Appliance. Even with that added cost, the Backblaze B2 solution is still significantly less expensive over the one-year period and beyond.

Have a Data Transfer Device Owner

Before you run out and order a transfer device, make sure the transfer process is someone’s job once the device arrives at your organization. Filling a transfer device should only take a few days, but if it is forgotten, you’ll find you’ve had the device for 2 or 3 weeks. While that’s not much of a problem with a B2 Fireball, it could start to get expensive otherwise.

Just the Beginning

As with most “new” technologies and services, you can expect other companies to jump in and provide their own data ingest services. Ingest will get cheaper or even free as cloud storage companies race to capture and lock up the data you have kept locally all these years. When you are evaluating cloud storage solutions, it’s best to look past the data ingest loss-leader price and spend a few minutes calculating the long-term cost of storing and using your data.

The post Getting Your Data into the Cloud is Just the Beginning appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A new twist on data backup: CloudNAS

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cloudnas-backup/

Morro CacheDrive

There are many ways for SMBs, professionals, and advanced users to back up their data. The process can be as simple as copying files to a flash drive or an external drive, or as sophisticated as using a Synology or QNAP NAS device as your primary storage device and syncing the files to a cloud storage service such as Backblaze B2.

A recent entry into the backup arena is Morro Data and their CloudNAS solution, where files are stored in the cloud, cached locally as needed, and synced globally among the other CloudNAS systems in a given organization. There are three components to the solution:

  • A Morro CacheDrive — This resides on your internal network like a NAS device and stores from 1 TB to 8 TB of data depending on the model
  • The CloudNAS service — This software runs on the Morro CacheDrive to keep track of and manage the data
  • Backblaze B2 Cloud Storage — Where the data is stored in the cloud

The Morro CacheDrive is installed on your local network and looks like a network share. On Windows, the share can be mounted with a drive letter, M: for example. On the Mac, the device is mounted as a Shared device (Databank in the example below).

CloudNAS software dashboard

In either case, the device works like a folder/directory, typically on your desktop. You then either drag-and-drop or save a file to the folder/directory. This places the file on the CacheDrive. Once there, the file is automatically backed up to the cloud. In the case of the CloudNAS solution, that cloud is Backblaze B2.

All that sounds pretty straightforward, but what makes the CloudNAS solution unique is that it allows you to have unlimited storage space. For example, you can access 5 TB of data from a 1 TB CacheDrive. Confused? Let me explain. All 5 TB of the data is stored in B2, having been uploaded to B2 each time you saved data to the CacheDrive. The 1 TB CacheDrive keeps (caches) the most recent or most often used files locally. When you need a file not currently on the CacheDrive, the CloudNAS software automatically downloads it from B2 to the CacheDrive and makes it available to use as desired.
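
To make the caching idea concrete, here is a toy least-recently-used (LRU) cache in Python. It is purely illustrative of the general technique; Morro has not published their eviction policy, so don't read this as their actual algorithm:

```python
from collections import OrderedDict

class ToyFileCache:
    """Keep at most capacity_gb of files locally; evict the least recently used."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.files = OrderedDict()            # file name -> size in GB, ordered by recency

    def access(self, name, size_gb, fetch_from_cloud):
        if name in self.files:
            self.files.move_to_end(name)      # cache hit: mark as most recently used
        else:
            fetch_from_cloud(name)            # cache miss: pull the file down from B2
            self.files[name] = size_gb
            while sum(self.files.values()) > self.capacity_gb and len(self.files) > 1:
                self.files.popitem(last=False)   # evict the coldest file; it still lives in B2

# Example: a 1 TB (1,000 GB) cache fronting a larger amount of cloud data.
cache = ToyFileCache(capacity_gb=1_000)
cache.access("interview.mov", 42, fetch_from_cloud=lambda n: print(f"downloading {n} from B2"))
```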

Things to know about the CloudNAS solution

  • Sharing Systems: Multiple users can mount the same CacheDrive with each being able to update and share the files.
  • Synced Systems: If you have two or more CloudNAS systems on your network, they will keep the B2 directory of files synced between all of the systems. Everyone on the network sees the same file list.
  • Unlimited Data: Regardless of the size of the CacheDrive device you purchase, you will not run out of space as Backblaze B2 will contain all of your data. That said, you should choose the size of your CacheDrive that fits your operational environment.
  • Network Speed: Files are initially stored on the CacheDrive, then copied to B2. Local network connections are typically much faster than internet connections. This means your files are uploaded to the CacheDrive quickly, then transferred to B2 as time allows at the speed of your internet connection, all without slowing you down. This should be especially interesting to those of you with slower internet connections.
  • Access: The files stored using the CloudNAS solution can be accessed through the shared folder/directory on your desktop as well as through a web-based Team Portal.

Getting Started

To start, you purchase a Morro CacheDrive. The price starts at $499.00 for a unit with 1 TB of cache storage. Next you choose a CloudNAS subscription. This starts at $10/month for the Standard plan, and lets you manage up to 10 TB of data. Finally, you connect Backblaze B2 to the Morro system to finish the set-up process. You pay Backblaze each month for the data you store in and download from B2 while using the Morro solution.

The CloudNAS solution is certainly a different approach to storing your data. You get the ability to store a nearly unlimited amount of data without having to upgrade your hardware as you go, and all of your data is readily available with just a few clicks. For users who need to store terabytes of data that needs to be available anytime, the CloudNAS solution is worth a look.

The post A new twist on data backup: CloudNAS appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Cost Per Gigabyte

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-cost-per-gigabyte/

Hard Drive Cost

For hard drive prices, the race to zero is over: nobody won. For the past 35+ years or so, hard drive prices have dropped, from around $500,000 per gigabyte in 1981 to less than $0.03 per gigabyte today. This includes the period of the Thailand drive crisis in 2012 that spiked hard drive prices. Matthew Komorowski has done an admirable job of documenting the hard drive price curve through March 2014, and we’d like to fill in the blanks with our own drive purchase data to complete the picture. As you’ll see, the hard drive pricing curve has flattened out.

75,000 New Hard Drives

We first looked at the cost per gigabyte of a hard drive in 2013 when we examined the effects of the Thailand drive crisis on our business. When we wrote that post, the cost per gigabyte for a 4 TB hard drive was about $0.04 per gigabyte. Since then, 5 TB, 6 TB, 8 TB, and recently 10 TB hard drives have been introduced, and during that period we have purchased nearly 75,000 drives. Below is a chart, by drive size, of the drives we purchased since that last report in 2013.

Hard Drive Cost Per GB by drive size

Observations

  1. We purchase drives in bulk, thousands at a time. The price you might get at Costco, Best Buy, or Amazon will most likely be higher.
  2. The effect of the Thailand Drive crisis is clearly seen from October 2011 through mid-2013.

The 4 TB Drive Enigma

Up through the 4 TB drive models, the cost per gigabyte of a larger drive was always lower than that of the smaller drives. In other words, the cost per gigabyte of a 2 TB drive was less than that of a 1 TB drive, resulting in higher density at a lower cost per gigabyte. This changed with the introduction of the 6 TB and 8 TB drives, especially as it relates to the 4 TB drives. As you can see in the chart above, the cost per gigabyte of the 6 TB drives did not fall below that of the 4 TB drives. You can also observe that the 8 TB drives are just approaching the cost per gigabyte of the 4 TB drives. The 4 TB drives are the price king, as seen in the chart below of the current cost of Seagate consumer drives by size.

Seagate Hard Drive Prices By Size

Drive Size    Model          Price      Cost/GB
1 TB          ST1000DM010    $49.99     $0.050
2 TB          ST2000DM006    $66.99     $0.033
3 TB          ST3000DM008    $83.72     $0.028
4 TB          ST4000DM005    $99.99     $0.025
6 TB          ST6000DM004    $240.00    $0.040
8 TB          ST8000DM005    $307.34    $0.038

The data on this chart was sourced from the current price of these drives on Amazon. The drive models selected were “consumer” drives, like those we typically use in our data centers.
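
The Cost/GB column is simply the street price divided by the capacity in decimal gigabytes. A quick sketch using the prices in the table above:

```python
# Cost per gigabyte for the Seagate consumer drives listed above (1 TB = 1,000 GB).
drives = [
    ("ST1000DM010", 1, 49.99), ("ST2000DM006", 2, 66.99), ("ST3000DM008", 3, 83.72),
    ("ST4000DM005", 4, 99.99), ("ST6000DM004", 6, 240.00), ("ST8000DM005", 8, 307.34),
]

for model, size_tb, price in drives:
    print(f"{size_tb} TB  {model}  ${price / (size_tb * 1_000):.3f}/GB")
```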

The manufacturing and marketing efficiencies that drive the pricing of hard drives seem to have changed over time. For example, the 6 TB drives have been in the market at least three years, but are not even close to the cost per gigabyte of the 4 TB drives. Meanwhile, back in 2011, the 3 TB drive models fell below the cost per gigabyte of the 2 TB drives they “replaced” within a few months. Have we as consumers decided that 4 TB drives are “big enough” for our needs, so that we are not demanding (by purchasing) larger drives in the quantities needed to push down the unit cost?

Approaching Zero: There’s a Limit

The important aspect is the trend of the cost over time. While it has continued to move downward, the rate of change has slowed dramatically, as observed in the chart below, which represents our average quarterly cost per gigabyte over time.

Hard Drive Cost per GB over time

The rate at which the cost per gigabyte of a hard drive declines is slowing. For example, from January 2009 to January 2011, our average cost for a hard drive decreased 45%, from $0.11 to $0.06 per gigabyte – a drop of $0.05 per gigabyte. From January 2015 to January 2017, the average cost decreased 26%, from $0.038 to $0.028 per gigabyte – a drop of just $0.01 per gigabyte. This means that the declining price of hard drives will play a smaller and smaller role in reducing the cost of providing storage.

Back in 2011, IDC predicted that the amount of data overall would grow 50 times by 2020, and in 2014, EMC estimated that by 2020 we will be creating 44 trillion gigabytes of data annually. That’s quite a challenge for the storage industry, especially as the cost per gigabyte curve for hard drives is flattening out. Improvements in existing storage technologies (Helium, HAMR) along with future technologies (Quantum Storage, DNA) are on the way – we can’t wait. Of course, we’d like these new storage devices to be 50% less expensive per gigabyte than today’s hard drives. That would be a good start.

The post Hard Drive Cost Per Gigabyte appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Desert To Data in 7 Days – Our New Phoenix Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/data-center-design/

We are pleased to announce that Backblaze is now storing some of our customers’ data in our newest data center in Phoenix. Our Sacramento facility was slated to store about 500 petabytes of data and was starting to fill up so it was time to expand. After visiting multiple locations in the US and Canada, we selected Phoenix as it had the right combination of power, networking, price and more that we were seeking. Let’s take you through the process of getting the Phoenix data center up and running.

Day 0 – Designing the Data Center

After we selected the Phoenix location as our next DC (data center), we had to negotiate the contract. We’re going to skip that part of the process because, unless you’re a lawyer, it’s a long, boring process. Let’s just say we wanted to be ready to move in once the contract was signed. That meant we had to gather up everything we needed and order a bunch of other things like networking equipment, racks, storage pods, cables, etc. We decided to use our Sacramento DC as the staging point and started gathering what was going to be needed in Phoenix.

In actuality, for some items we started the process several months earlier, as lead times for things like network switches, Storage Pods, and even hard drives can be measured in months, and delays are normal. For example, depending on our move-in date, the network providers we wanted would only be able to provide limited bandwidth, so we had to prepare for that possibility. It helps to have a procurement person who knows what they are doing, can work the schedule, and is creatively flexible – thanks, Amanda.

So by Day 0, we had amassed multiple pallets of cabinets, network gear, PDUs, tools, hard drives, carts, Guido, and more. And yes, for all you Guido fans he is still with us and he now resides in Phoenix. Everything was wrapped and loaded into a 53-foot semi-truck that was driven the 755 miles (1,215 km) from Sacramento, California to Phoenix, Arizona.

Day 1 – Move In Day

We sent a crew of five people to Phoenix with the goal of going from empty space to being ready to accept data in one week. The truck from Sacramento arrived mid-morning, and work began: unloading and marshaling the pallets and boxes into one area while the racks were placed near their permanent locations on the DC floor.

Day 2 – Building the Racks

Day 2 was spent primarily working with the racks. First they were positioned in their precise locations on the data center floor. They were then anchored down and tied together. We started with two rows of twenty-two racks each, with twenty racks for Storage Pods and two for networking equipment. By the end of the week, four rows of racks would be installed.

Day 3 – Networking and Power, Part 1

While one team continued to work on the racks, another team began the process of getting the racks connected to electricity and running the network cables to the network distribution racks. Once that was done, networking gear and rack-based PDUs (Power Distribution Units) were installed in the racks.

Day 4 – Rack Storage Pods

The truck from Sacramento brought 100 Storage Pods, a combination of 45 drive and 60 drive systems. Why did we use 45 drive units here? It has to do with the size (in racks and power) of the initial installation commitment and the ramp (increase) of installations over time. Contract stuff: boring, yes; important, yes. Basically, to optimize our spend, we wanted to use as much of the initial space we were allotted as possible. Since we had a number of empty 45 drive chassis available in Sacramento, we decided to put them to use.

Day 5 – Drive Day

Our initial set-up goal was to build out five Backblaze Vaults. Each Vault comprises twenty Storage Pods. Four of the Vaults were filled with 45 drive Storage Pods and one was filled with 60 drive Storage Pods. That’s 4,800 hard drives to install – thank goodness we don’t use those rubber bands around the drives anymore.

Day 6 – Networking and Power, Part 2

With the Storage Pods in place, Day 6 was spent routing network and power cables to the individual pods. A critical part of the process is to label every wire so you know where it comes from and where it goes. Once labeled, wires are bundled together and secured to the racks in a standard pattern. Not only does this make things look neat, it standardizes where you’ll find each cable across the hundreds of racks in the DC.

Day 7 – Test, Repair, Test, Ready

With all the power and networking finished, it was time to test the installation. Most of the Storage Pods lit up with no problem, but a few failed. Those failures were quickly dealt with, and one by one each Backblaze Vault was registered into our monitoring and administration systems. By the end of the day, all five Vaults were ready.

Moving Forward

The Phoenix data center was ready for operation except that the network carriers we wanted to use could only provide a limited amount of bandwidth to start. It would take a few more weeks before the final network lines would be provisioned and operational. Even with the limited bandwidth we kicked off the migration of customer data from Sacramento to Phoenix to help balance out the workload. A few weeks later, once the networking was sorted out, we started accepting external customer data.

We’d like to thank our data center build team for documenting their work in pictures and allowing us to share some of them with our readers.

Questions About Our New Data Center

Now that we have a second DC, you might have a few questions, such as whether you can store your data there, and so on. Here’s the status of things today…

    Q: Does the new DC mean Backblaze has multi-region storage?
    A: Not yet. Right now we consider the Phoenix DC and the Sacramento DC to be in the same region.

    Q: Will you ever provide multi-region support?
    A: Yes, we expect to provide multi-region support in the future, but we don’t have a date for that capability yet.

    Q: Can I pick which data center will store my data?
    A: Not yet. This capability is part of our plans when we provide multi-region support.

    Q: Which data center is my data being stored in?
    A: Chances are that your data is in the Sacramento data center, given it currently stores about 90% of our customers’ data.

    Q: Will my data be split across the two data centers?
    A: It is possible that one portion of your data will be stored in the Sacramento DC and another portion of your data will be stored in the Phoenix DC. This will be completely invisible to you and you should see no difference in storage or data retrieval times.

    Q: Can my data be replicated from one DC to the other?
    A: Not today. As noted above, your data will be in one DC or the other. That said, files uploaded to the Backblaze Vaults in either DC are stored redundantly across 20 Backblaze Storage Pods within that DC. This translates to 99.999999% durability for the data stored this way.

    Q: Do you plan on opening more data centers?
    A: Yes. We are actively looking for new locations.

If you have any additional questions, please let us know in the comments or on social media. Thanks.

The post Desert To Data in 7 Days – Our New Phoenix Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze B2, Cloud Storage on a Budget: One Year Later

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-cloud-storage-on-a-budget-one-year-later/

B2 Cloud Storage Review

A year ago, Backblaze B2 Cloud Storage came out of beta and became available for everyone to use. We were pretty excited, even though it seemed like everyone and their brother had a cloud storage offering. Now that we are a year down the road let’s see how B2 has fared in the real world of tight budgets, maxed-out engineering schedules, insanely funded competition, and more. Spoiler alert: We’re still pretty excited…

Cloud Storage on a Budget

There are dozens of companies offering cloud storage and the landscape is cluttered with incomprehensible pricing models, cleverly disguised transfer and download charges, and differing levels of service that seem to be driven more by marketing departments than customer needs.

Backblaze B2 keeps things simple: A single performant level of service, a single affordable price for storage ($0.005/GB/month), a single affordable price for downloads ($0.02/GB), and a single list of transaction charges – all on a single pricing page.

Who’s Using B2?

By making cloud storage affordable, companies and organizations now have a way to store their data in the cloud and still be able to access and restore it as quickly as needed. You don’t have to choose between price and performance. Here are a few examples:

  • Media & Entertainment: KLRU-TV, Austin PBS, is using B2 to preserve their video catalog of the world-renowned musical anthology series, Austin City Limits.
  • LTO Migration: The Girl Scouts San Diego were able to move their daily incremental backups from LTO tape to the cloud, saving money and time while helping automate their entire backup process.
  • Cloud Migration: Vintage Aerial found it cost effective to discard their internal data server and store their unique high-resolution images in B2 Cloud Storage.
  • Backup: Ahuja and Clark, a boutique accounting firm, was able to save over 80% on the cost to back up all their corporate and client data.

How is B2 Being Used?

B2 Cloud Storage can be accessed in four ways: using the Web GUI, using the CLI, using the API library, and using a product or service integrated with B2. While many customers use the Web GUI, CLI, and API to store and retrieve data, the most prolific use of B2 occurs via our integration partners. Each integration partner has certified that they have met our best practices for integrating with B2, and we’ve tested each of the integrations submitted to us. Here are a few of the highlights.

  • NAS Devices – Synology and QNAP have integrations which allow their NAS devices to sync their data to/from B2.
  • Backup and Sync – CloudBerry, GoodSync, and Retrospect are just a few of the services that can backup and/or sync data to/from B2.
  • Hybrid Cloud – 45 Drives and OpenIO are solutions that allow you to set up and operate a hybrid data storage cloud environment.
  • Desktop Apps – CyberDuck, MountainDuck, Dropshare, and more allow users an easy way to store and use data in B2 right from your desktop.
  • Digital Asset Management – Cantemo, Cubix, CatDV, and axle Video, let you catalog your digital assets and then store them in B2 for fast retrieval when they are needed.

If you have an application or service that stores data in the cloud and it isn’t integrated with Backblaze B2, then your customers are probably paying too much for cloud storage.

What’s New in B2?

B2 Fireball – Our rapid data ingest service. We send you a storage device, you load it with up to 40 TB of data and send it back, and then we load the data into your B2 account. The cost is $550 per trip plus shipping. Save your network bandwidth with the B2 Fireball.

Lowered the download price – When we introduced B2, we set the price to download a gigabyte of data to be $0.05/GB – the same as most competitors. A year in, we reevaluated the price based on usage and decided to lower the price to $0.02/GB.

B2 User Groups – Backblaze Groups functionality is now available in B2. An administrator can invite users to a B2 centric Group to centralize the storage location for that group of users. For example, multiple members of a department working on a project will be able to archive their work-in-process activities into a single B2 bucket.

Time Machine backup – You may know that you can use your Synology NAS as the destination for your Time Machine backup. With B2 you can also sync your Synology NAS to B2 for a true 3-2-1 backup solution. If your system crashes or is lost, you can restore your Time Machine image directly from B2 to your new machine.

Life Cycle Rules – Create rules that allow you to manage the length of time deleted files will remain in your B2 bucket before they are deleted. A great option for managing the cleanup of outdated file versions to save on storage costs.

Large Files – In the B2 Web GUI you can upload files as large as 500 MB using either the upload or drag-and-drop functionality. The B2 CLI and API support the ability to upload/download files as large as 10 TB.

5 MB file part size – When working with large files, the minimum file part size can now be set as low as 5 MB, versus the previous minimum of 100 MB. The range for a file part when working with large files is now 5 MB to 5 GB. This increases the throughput of your data uploads and downloads.

SHA-1 at the end – This feature allows you to compute the SHA-1 checksum and append it to the end of the request body versus doing the computation before the file is sent. This is especially useful for those applications which stream data to/from B2.
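
Conceptually, this lets you hash the file while you stream it and then tack the 40-character hex digest onto the end of the request body. The Python generator below sketches that pattern only; the headers and exact request format B2 expects (for example, accounting for the extra 40 characters in the content length) are described in the B2 API documentation, so treat anything beyond the streaming idea as an assumption:

```python
import hashlib

def body_with_trailing_sha1(path, chunk_size=1024 * 1024):
    """Yield a file's bytes while hashing them, then yield the SHA-1 hex digest last."""
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            sha1.update(chunk)
            yield chunk
    yield sha1.hexdigest().encode("ascii")   # 40 hex characters appended after the file data
```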

Cache-Control – When data is downloaded from B2 into a browser, the length of time the file remains in the browser cache can be set at the bucket level using the b2_create_bucket and b2_update_bucket API calls. Setting this policy is optional.

Customized delimiters – Used in the API, this allows you to specify a delimiter character for a given purpose. A common use is to place a delimiter in the file name string and then use that delimiter to detect a folder name within the string.

Looking Ahead

Over the past year we added nearly 30,000 new B2 customers to the fold and are welcoming more and more each day as B2 continues to grow. We have plans to expand our storage footprint by adding more data centers as we look forward to moving towards a multi-region environment.

For those of you who are B2 customers – thank you for helping build B2. If you have an interesting way you are using B2, tell us in the comments below.

The post Backblaze B2, Cloud Storage on a Budget: One Year Later appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.