The holiday season is upon us, and here at Backblaze, we always love to see what cool, new things are out there for us to give to our friends and family. Every year, we ask our team members what gifts they’ll be giving (or treating themselves to) and share their ideas on our blog.
This year, our team’s gift ideas range from the latest in fitness trackers for the person who’s always on the go, to weighted blankets that keep you cozy, and helpful plant-rearing guides for blossoming green thumbs. Read on for some gift ideas that are sure to bring some holiday joy!
WHOOP is a fitness and health membership that offers a fitness tracker and an app that analyzes fitness, health, and sleep data, as well as a way to connect with other WHOOP members. The adjustable device can be worn under a range of garments or on one of WHOOP’s handy wristbands. And it’s stylish!
If you or someone you know spent the early months of the pandemic transforming their home into a greenhouse (that also smelled of sourdough), this book provides all the houseplant tips and tricks they might need to keep their indoor garden thriving.
As many of us spend our days staring at screens that strain our eyes over time, this eye massager is a great way to reduce tension and headaches. It even connects to your phone via Bluetooth so you can choose music to play as it works. Not to be confused with an Oculus Quest 2.
QNAP produces a popular line of networking products, including NAS units that can work with Macintosh, Windows, Linux, and other operating systems. QNAP’s NAS products are used in office, home, and professional environments for storage and a variety of applications, including business, development, home automation, security, and entertainment.
Data stored on a QNAP NAS can be backed up or synced to Backblaze B2 Cloud Storage using QNAP’s Hybrid Backup Sync application, which consolidates backup, restoration, and synchronization functions into a single QTS application. With the latest releases of QTS and Hybrid Backup Sync, you can sync the data on your QNAP NAS to and from Backblaze B2 Cloud Storage.
We’re sharing a step-by-step guide on connecting your QNAP NAS to Backblaze B2. The first step is to set up your Backblaze B2 account; from there, you can set up QNAP Hybrid Backup Sync to work with it. Read on for the full walkthrough.
How to Set Up Your Backblaze B2 Account
If you already have a Backblaze B2 account, feel free to skip ahead to the next section. Otherwise, you can sign up for an account here and get started with 10GB of storage for free.
How to Set Up a Bucket, Application Key ID, and Application Key
Once you’ve signed up for a Backblaze B2 account, you’ll need to create a bucket, an Application Key ID, and an Application Key. This may sound like a lot, but all it takes is a few clicks, a couple of names, and less than a minute!
On the Buckets page of your account, click the Create a Bucket button.
Give your bucket a name and enable features like encryption and Object Lock for added security. (Object Lock is a backup protection feature that prevents a file from being altered or deleted until a given date—learn more about it here.)
Click the Create a Bucket button and you should see your new bucket on the Buckets page.
Navigate to the App Keys page of your account and click the Add Application Key button.
Name your Application Key and click the Create New Key button—make sure that your key has both Read and Write permissions (the default option).
Your Application Key ID and Application Key will appear on your App Keys page. Make sure to copy these somewhere secure as the Application Key will not appear again!
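Once you have the key pair, any B2-integrated tool can use it to authenticate. Under the hood, B2’s native b2_authorize_account call accepts the pair as a standard HTTP Basic auth header. Here’s a minimal sketch of how that header is built, using hypothetical placeholder credentials (never hardcode real keys):

```python
import base64

def b2_basic_auth_header(key_id: str, app_key: str) -> str:
    """Build the HTTP Basic auth header that b2_authorize_account expects:
    base64("<applicationKeyId>:<applicationKey>")."""
    token = base64.b64encode(f"{key_id}:{app_key}".encode()).decode()
    return f"Basic {token}"

# Placeholder credentials for illustration only:
header = b2_basic_auth_header("myKeyId", "myAppKey")
print(header)
```

A real client would send this header with a GET request to https://api.backblazeb2.com/b2api/v2/b2_authorize_account, then use the authorization token in the response for subsequent API calls.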
How to Set Up QNAP’s Hybrid Backup Sync to Work With B2 Cloud Storage
To set up your QNAP with Backblaze B2 sync support, you’ll need access to your B2 Cloud Storage account, along with your account ID, Application Key, and bucket name, all of which are available after you log in to your Backblaze account. You’ll also need the Hybrid Backup Sync application installed in QTS: specifically, QTS 4.3.3 or later and Hybrid Backup Sync v2.1.170615 or later. Follow these steps to complete the process:
Open the QTS desktop by entering the IP address of your NAS in your web browser. Alternatively, you can download the Qfinder Pro app which will discover the NAS on your network and open the webpage for you automatically.
If it’s not already installed, install the latest Hybrid Backup Sync from the App Center.
Click the Hybrid Backup Sync icon to launch the app.
Select Sync in the menu in the left column, then pull down the disclosure triangle in the purple Sync Now button and select One-way Sync Job to start configuring your sync job.
Scroll down to the Cloud Server section and select Backblaze B2.
Next, select an existing Account (QNAP’s name for a storage destination) or simply select Backblaze B2 to create a new one.
Next, select an existing bucket in your Backblaze B2 account, or create a new one.
Enter a name for the Backblaze B2 connection, then enter the Application Key ID and Application Key for the Backblaze B2 account. Be sure that the Application Key allows full read/write permissions for the bucket or section of the bucket the QNAP will access. Click Create to move to the next step.
Now that your storage destination is set up, it’s time to select what you will sync to Backblaze. Select and add the source folders on the NAS on the left.
Then on the right side, pair the sync source with a destination in Backblaze B2. You can select the parent folder /, an existing folder in the bucket, or click the icon of a folder with a plus sign to create a new one.
If you’d like this sync job to run on a schedule, you can set it now, or select the Sync Now checkbox to run the job immediately.
You now have the option to fine-tune the behavior of the sync job. By selecting Methods in the column on the left, you can enable or disable data compression.
In Policies, you can enable client-side encryption, set rate limiting, and other advanced settings.
In Options, scroll to the bottom of the page to choose how many files to sync concurrently—you can increase this if you have more upload bandwidth.
Click Next to see the job creation summary, then Create to complete your sync job creation and see a summary page. From this page, you can click Sync Now to start the sync process, or create another sync job definition.
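Conceptually, the one-way sync job you just configured compares the source against the destination and copies anything new or changed. The sketch below illustrates that idea with a local directory and an in-memory index standing in for a bucket listing; it is a conceptual illustration, not QNAP’s actual implementation:

```python
import hashlib
import tempfile
from pathlib import Path

def files_to_sync(source_dir: Path, dest_index: dict) -> list:
    """Return relative paths of source files that are new or changed
    versus dest_index, a mapping of relative path -> SHA-1 hex digest
    (standing in for a bucket listing)."""
    pending = []
    for f in sorted(source_dir.rglob("*")):
        if f.is_file():
            rel = f.relative_to(source_dir).as_posix()
            if dest_index.get(rel) != hashlib.sha1(f.read_bytes()).hexdigest():
                pending.append(rel)
    return pending

with tempfile.TemporaryDirectory() as d:
    src = Path(d)
    (src / "a.txt").write_text("hello")
    (src / "b.txt").write_text("world")
    # Pretend a.txt was already synced on a previous run:
    index = {"a.txt": hashlib.sha1(b"hello").hexdigest()}
    pending = files_to_sync(src, index)

print(pending)  # ['b.txt']
```

Only the file that is missing from (or differs in) the destination gets queued for upload, which is why incremental sync runs are much faster than the first one.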
What Can You Do With Backblaze B2 and QNAP Hybrid Backup Sync?
With QNAP’s Hybrid Backup Sync software, you can easily sync data to the cloud. Here’s some more information on what you can do to make the most of your setup.
Hybrid Backup Sync 3.0
QNAP and Backblaze B2 users can take advantage of Hybrid Backup Sync, as explained above. Hybrid Backup Sync is a powerful tool that provides true backup capability with features like version control, client-side encryption, and block-level deduplication. QNAP’s operating system, QTS, continues to add new features: for example, the QuDedup Extract Tool, first released in QTS 4.4.1, lets QNAP users preview backed up files and save on bandwidth costs.
You can download the latest QTS update here and Hybrid Backup Sync is available in the App Center on your QNAP device.
Hybrid Mount and VJBOD Cloud
The Hybrid Mount and VJBOD Cloud apps allow QNAP users to designate a drive in their system to function as a cache while accessing B2 Cloud Storage. This allows users to interact with Backblaze B2 just as they would a folder on their QNAP device while using Backblaze B2 as an active storage location.
Hybrid Mount and VJBOD Cloud are both included in QTS 4.4.1 and higher and function as storage gateways on a file-based or block-based level, respectively. Hybrid Mount enables Backblaze B2 to be used as a file server and is ideal for online collaboration and file-level data analysis. VJBOD Cloud is ideal for a large number of small files or single, massive files (think databases!) since it’s able to update and change files on a block-level basis. Both apps can connect to B2 Cloud Storage via popular protocols to fit any environment, including SMB, AFP, NFS, FTP, and WebDAV.
QuDedup introduces client-side deduplication to the QNAP ecosystem. This helps users at all levels save on space on their NAS by avoiding redundant copies in storage. Backblaze B2 users have something to look forward to as well since these savings carry over to cloud storage via the HBS 3.0 update.
Why Backblaze B2?
QNAP continues to innovate and unlock the potential of B2 Cloud Storage in the NAS ecosystem. If you haven’t given B2 Cloud Storage a try yet, now is the time. You can get started with Backblaze B2 and your QNAP NAS right now, and make sure your NAS is synced securely and automatically to the cloud.
Do you remember when “Pokémon Go” came out in 2016? Suddenly it was everywhere. It was a worldwide obsession, with over 10 million downloads in its first week and 500 million downloads in six months. System load rapidly escalated to 50 times the anticipated demand. How could the game’s architecture support such out-of-control hypergrowth? In large part, the answer was Kubernetes.
In this post, we’ll take a look at what Kubernetes does, how it works, and how it could be applicable in your environment.
What Is Kubernetes?
You may be familiar with containers. They’re conceptually similar to lightweight virtual machines. Instead of simulating computer hardware and running an entire operating system (OS) on that simulated computer, the container runs applications under a parent OS with almost no overhead. Containers allow developers and system administrators to develop, test, and deploy software and applications much faster than VMs, and most applications today are built with them.
But what happens if one of your containers goes down, or your ecommerce store experiences high demand, or if you release a viral sensation like “Pokémon Go”? You don’t want your application to crash, and you definitely don’t want your store to go down during the Christmas crush. Unfortunately, containers don’t solve those problems. You could implement intelligence in your application to scale as needed, but that would make your application a lot more complex and expensive to implement. It would be simpler and faster if you could use a drop-in layer of management—a “fleet manager” of sorts—to coordinate your swarm of containers. That’s Kubernetes.
Kubernetes Architecture: How Does Kubernetes Work?
Kubernetes implements a fairly straightforward hierarchy of components and concepts:
Containers: Virtualized environments where the application code runs.
Pods: “Logical hosts” that contain and manage containers, and potentially local storage.
Nodes: The physical or virtual compute resources that run the container code.
Cluster: A grouping of one or more nodes.
Control Plane: Manages the worker nodes and Pods in the cluster.
You have a few options to run Kubernetes. The minikube utility launches and runs a small single-node cluster locally for testing purposes. And you can control Kubernetes with any of several control interfaces: the kubectl command provides a command-line interface, and library APIs and REST endpoints provide programmable interfaces.
What Does Kubernetes Do?
Modern web-based applications are commonly implemented with “microservices,” each of which embodies one part of the desired application behavior. Kubernetes distributes the microservices across Pods. Pods can be used two ways—to run a single container (the most common use case) or to run multiple containers (like a pod of peas or a pod of whales—a more advanced use case). Kubernetes operates on the Pods, which act as a sort of wrapper around the container(s) rather than the containers themselves. As the microservices run, Kubernetes is responsible for managing the application’s execution. Kubernetes “orchestrates” the Pods, including:
Autoscaling: As more users connect to the application’s website, Kubernetes can start up additional Pods to handle the load.
Self-healing: If the code in a Pod crashes, or if there is a hardware failure, Kubernetes will detect it and restart the code in a new Pod.
Parallel worker processes: Kubernetes distributes the Pods across multiple nodes to benefit from parallelism.
Load balancing: If one server gets overloaded, Kubernetes can balance the load by migrating Pods to other nodes.
Storage orchestration: Kubernetes lets you automatically mount persistent storage, say a local device or cloud-based object storage.
The beauty of this model is that the applications don’t have to know about the Kubernetes management. You don’t have to write load-balancing functionality into every application, or autoscaling, or other orchestration logic. The applications just run simplified microservices in a simple environment, and Kubernetes handles all the management complexity.
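The pattern behind most of these behaviors is a “reconcile loop”: you declare a desired state, and a controller repeatedly compares it to the actual state and corrects the difference. Here’s a toy sketch of the idea in Python; a real controller watches the cluster API rather than an in-memory list, so treat this purely as an illustration:

```python
def reconcile(desired_replicas: int, running_pods: list) -> list:
    """Drive actual state toward desired state, the way a Kubernetes
    controller does for a ReplicaSet."""
    # Self-healing: drop pods whose code has crashed...
    pods = [p for p in running_pods if p["healthy"]]
    # ...and start replacements (or scale up) until we hit the target.
    while len(pods) < desired_replicas:
        pods.append({"name": f"pod-{len(pods)}", "healthy": True})
    # Scale down if demand dropped.
    while len(pods) > desired_replicas:
        pods.pop()
    return pods

state = [{"name": "pod-a", "healthy": True}, {"name": "pod-b", "healthy": False}]
state = reconcile(3, state)
print(len(state))  # 3 -- the crashed pod was replaced and a third was added
```

Because the loop runs continuously, the same small piece of logic handles crashes, scale-up, and scale-down without the application needing to know any of it is happening.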
As an example: You write a small reusable application (say, a simple database) on a Debian Linux system. Then you could transfer that code to an Ubuntu system and run it, without any changes, in a Debian container. (Or, maybe you just download a database container from the Docker library.) Then you create a new application that calls the database application. When you wrote the original database on Debian, you might not have anticipated it would be used on an Ubuntu system. You might not have known that the database would be interacting with other application components. Fortunately, you didn’t have to anticipate the new usage paradigm. Kubernetes and containers isolate your code from the messy details.
Keep in mind, Kubernetes is not the only orchestration solution—there’s Docker Swarm, Hashicorp’s Nomad, and others. Cycle.io, for example, offers a simple container orchestration solution that focuses on ease for the most common container use cases.
Kubernetes spins up and spins down Pods as needed. Each Pod can host its own internal storage (as shown in the diagram above), but that’s not often used. A Pod might get discarded because the load has dropped, or the process crashed, or for other reasons. The Pods (and their enclosed containers and volumes) are ephemeral, meaning that their state is lost when they are destroyed. But most applications are stateful. They couldn’t function in a transitory environment like this. In order to work in a Kubernetes environment, the application must store its state information externally, outside the Pod. A new instance (a new Pod) must fetch the current state from the external storage when it starts up, and update the external storage as it executes.
You can specify the external storage when you create the Pod, essentially mounting the external volume in the container. The container running in the Pod accesses the external storage transparently, like any other local storage. Unlike local storage, though, cloud-based object storage is designed to scale almost infinitely right alongside your Kubernetes deployment. That’s what makes object storage an ideal match for applications running Kubernetes.
When you start up a Pod, you can specify the location of the external storage. Any container in the Pod can then access the external storage like any other mounted file system.
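The sketch below shows why externalized state matters. The worker keeps its counter in external storage (a JSON file here, standing in for cloud object storage), so a replacement “Pod” resumes exactly where the old one stopped:

```python
import json
import os
import tempfile

class Worker:
    """All state lives outside the process, so any replacement instance
    can pick up where the last one left off."""
    def __init__(self, store_path: str):
        self.store_path = store_path

    def increment(self) -> int:
        state = {"count": 0}
        if os.path.exists(self.store_path):
            with open(self.store_path) as f:
                state = json.load(f)
        state["count"] += 1
        with open(self.store_path, "w") as f:
            json.dump(state, f)
        return state["count"]

store = os.path.join(tempfile.mkdtemp(), "state.json")
pod1 = Worker(store)
pod1.increment()
pod1.increment()        # "Pod" 1 handles two requests, then is destroyed.
pod2 = Worker(store)    # A new Pod spins up against the same storage...
count = pod2.increment()
print(count)  # 3 -- it resumes at 3, not 1
```

If the counter had lived inside the first Pod, the replacement would have restarted at one; because the state lives externally, the application survives Pod churn.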
Kubernetes in Your Environment
While there’s no doubt a learning curve involved (Kubernetes has sometimes been described as “not for normal humans”), container orchestrators like Kubernetes, Cycle.io, and others can greatly simplify the management of your applications. If you use a microservice model, or if you work with similar cloud-based architectures, a container orchestrator can help you prepare for success from day one by setting your application up to scale seamlessly.
Today is a big day for Backblaze—we became a public company listed on the Nasdaq Stock Exchange under the ticker symbol BLZE!
Before I explain what this means for us and for you, I want to give my thanks. Going public is an important milestone and one we couldn’t have accomplished without your support. Thank you.
Whether you have believed in us from the beginning and have been a customer for over a decade, or joined us yesterday; whether you entrust us to back up a single computer or to run your entire company’s infrastructure on the Backblaze Storage Cloud; whether you’ve partnered with us to bring our services to one individual or thousands of companies, whether you’re a first-time visitor to our site or you’ve been a reader all along: Thank you. We really appreciate you working with us and supporting us.
What Does Becoming a Public Company Mean for Backblaze?
It means that, with our IPO proceeds, we have more resources to increase investment in the development of our Storage Cloud platform and the B2 Cloud Storage and Computer Backup services that run on it.
The future is being built on independent cloud platforms, and ours has been 14 years in the making. Today, we take the next big step toward being the leading independent cloud for data storage.
Additionally, while we already help about 500,000 customers, we plan to expand our sales and marketing efforts to bring Backblaze to more businesses, developers, and individuals who would benefit from easy and affordable data storage they can trust.
Finally, we have built Backblaze with not only a focus on the products we provide, but with a deep care for what it is like to work here. With these proceeds, we plan to continue to significantly grow our team, and are looking for many more kind, smart, talented people to join us. (Is that you? We’re hiring!)
And Most Importantly, What Does It Mean for You?
My short answer is: It means more of the good things you’ve come to expect from us at Backblaze.
I want to emphasize that while we’ll be doing “more” for you, today’s events don’t mean that we’re “different” on any fundamental level. We’re still guided by the same principles and the same team. As a reminder, here’s the core of the values that we’ve been committed to since our founding (as written by Brian Wilson, Co-founder and CTO):
“At Backblaze, we want to provide a quality product for a fair price. We want to be honest and up front with our customers as to what we can and cannot do, and we want to be paid only the money honestly owed to us, and never engage in sleazy or misleading business practices where customers are misled in any way or pay for a service they do not receive. We are the ‘good guys,’ and we act like it.”
The only thing that’s changing today is we now have a more robust structure and additional funding to deliver on these values for more customers and partners.
If you’d like to share your thoughts, we’d love to hear from you in the comments section below. In the coming weeks, I’ll share more about where we started, why we decided to go public, how we did it, and more. Stay tuned and for now…
While the first half of 2021 saw disruptive, high-profile attacks, Q3 saw attention and intervention at the highest levels. Last quarter, cybercriminals found themselves in the sights of government and law enforcement agencies as they responded to the vulnerabilities the earlier attacks revealed. Despite these increased efforts, the ransomware threat remains, simply because the rewards continue to outweigh the risks for bad actors.
If you’re responsible for protecting company data, ransomware news is certainly on your radar. In this series of posts, we aim to keep you updated on evolving trends as we see them to help inform your IT decision-making. Here are five key takeaways from our monitoring over Q3 2021.
This post is a part of our ongoing series on ransomware. Take a look at our other posts for more information on how businesses can defend themselves against a ransomware attack, and more.
1. Ransomware Attacks Continued
No surprises here. Ransomware operators continued to carry out attacks—against Howard University, Accenture, and the fashion brand Guess, to name a few. In August, the FBI’s Cyber Division and the Cybersecurity and Infrastructure Security Agency (CISA) reported an increase in attacks on holidays and weekends and alerted businesses to be more vigilant as major holidays approach. Then, in early September, the FBI also noted an uptick in attacks on the food and agriculture sector. The warnings proved out: in late September, we saw a number of attacks against farming cooperatives in Iowa and Minnesota. While these attacks were smaller in scale than those earlier in the year, the reporting makes clear that ransomware is definitely not a fad on a downswing.
2. More Top-down Government Intervention
Heads of state and government agencies took action in response to the ransomware threat last quarter. In September, the U.S. Treasury Department updated an Advisory that discourages private companies from making ransomware payments, and outlines mitigating factors it would consider when determining a response to sanctions violations. The Advisory makes clear that the Treasury will expect companies to do more to proactively protect themselves, and may be less forgiving to those who pay ransoms without doing so.
Earlier in July, the TSA also issued a Security Directive that requires pipeline owners and operators to implement specific security measures against ransomware, develop recovery plans, and conduct a cybersecurity architecture review. The moves demonstrate all the more that the government doesn’t take the ransomware threat lightly, and may continue to escalate actions.
3. Increased Scrutiny on Key Players Within the Ransomware Economy
Two major ransomware syndicates, REvil and Darkside, went dark days after President Joe Biden’s July warning to Russian President Vladimir Putin to rein in ransomware operations. We now see this was but a pause. However, the rapid shuttering does suggest executive branch action can make a difference, in one country or another.
Keep in mind, though, that the ransomware operators themselves are just one part of the larger ransomware economy (detailed in the infographic at the bottom of the post). Two other players within the ransomware economy faced increased pressure this past quarter—currency exchanges and cyber insurance carriers.
Currency Exchanges: In addition to guidance for private businesses, the Treasury Department’s September Advisory specifically added the virtual currency exchange, SUEX, to the Specially Designated Nationals and Blocked Persons List, after it found that more than 40% of the exchange’s transactions were likely related to ransomware payments. The Advisory imposed sanctions that prohibit any U.S. individual or entity from engaging in transactions with SUEX.
Cyber Insurance Carriers: It makes sense the cyber insurance industry is booming—the economics of risk make it lucrative for certain providers. Interestingly, though, we’re starting to see more discussion of how cyber insurance providers and the victim-side vendors they engage with—brokers, negotiators, and currency platforms like SUEX—are complicit in perpetuating the ransomware cycle. Further, the Treasury Department’s September Advisory also included a recommendation to these victim-side vendors to implement sanctions compliance programs that account for the risk that payments may be made to sanctioned entities.
4. An Emerging Moral Compass?
In messages with Bloomberg News, the BlackMatter syndicate pointed out its rules of engagement, saying hospitals, defense, and governments are off limits. But, sectors that are off limits to some are targets for others. While some syndicates work to define a code of conduct for criminality, victims continue to suffer. According to a Ponemon survey of 597 health care organizations, ransomware attacks have a significant impact on patient care. Respondents reported longer length of stay (71%), delays in procedures and tests (70%), increase in patient transfers or facility diversions (65%), and an increase in complications from medical procedures (36%) and mortality rates (22%).
5. Karma Is a Boomerang
It’s not surprising that ransomware operators would steal from their own, but that doesn’t make it any less comical to hear low-level ransomware affiliates complaining of “lousy partner programs” hawked by ransomware gangs “you cannot trust.” ZDNet reports that the REvil group has been accused of coding a “backdoor” into their affiliate product that allowed the group to barge into negotiations and keep the ransom all for themselves. It’s a dog-eat-dog world out there.
The Good News
This quarter, the good news is that ransomware has caught the attention of the people who can take steps to curb it. Government recommendations to strengthen ransomware protection make investing the time and effort easier to justify, especially when it comes to your cloud strategy. If there’s anything this quarter taught us, it’s that ransomware protection should be priority number one.
If you want to share this infographic on your site, copy the code below and paste into a Custom HTML block.
<div><div><strong>The Ransomware Economy</strong></div><a href="https://www.backblaze.com/blog/ransomware-takeaways-q3-2021/"><img src="https://www.backblaze.com/blog/wp-content/uploads/2021/11/The-Ransomware-Economy-Q3-2021-scaled.jpg" border="0" alt="diagram of the players and elements involved in spreading ransomware" title="diagram of the players and elements involved in spreading ransomware" /></a></div>
Every business faces an ongoing IT question—when to manage some or all IT services or projects in-house and when to outsource them. Maybe you’re facing a new challenge, be it safeguarding against next-gen threats or deploying a new tech stack. Maybe a windfall of growth makes small IT problems bigger. Maybe your IT manager leaves suddenly, and you’re left in the lurch (true story). Or it may just be a desire to focus headcount elsewhere, difficulty finding the right talent, or a push for more efficiency.
If you’re nodding your head yes to any of the above, the answer may be to consider outsourcing a part of the project, or all of it, to a managed service provider. Especially as technologies and threats evolve, how you manage IT resources matters.
In this post, we explain why businesses should be thinking about IT management early on, and when and why hiring a managed service provider (MSP) makes sense when you don’t want to resource IT in-house.
What Is an MSP?
MSPs are companies that provide outsourced IT services to businesses. These services range from offering light support as needed to installing and running new workflows and scalable systems on an ongoing basis. They can even help by leading technical build-outs as companies grow and move into new facilities.
A business can hire an MSP to handle one task that it would prefer not to manage in-house, like data backup or disaster recovery, or it can outsource its entire IT infrastructure to an MSP.
When You Need More Than a Band-aid to Fix the Problem
Back to that true story I hinted at above: here’s a personal example from my past, when I decided to hire an MSP. Many years ago, I was director of strategy and operations for a boutique management consulting firm when our sole IT manager rather abruptly decided to exit the organization. Before leaving, he emailed me—a fairly non-technical person at the time—instructions for maintaining on-premises servers and laptops in various states of readiness, along with advice that I shouldn’t let company leadership switch from PCs to Macs because it would wreak havoc. At the time, we had also recently deployed Microsoft SharePoint for document management and storage, but the team hadn’t gotten used to it yet—they still relied on hard drives and emailing copies of important documents to themselves as backups. What could we do?
My first thought had been to backfill IT management. Yet the team and I didn’t feel we had the knowledge to effectively assess candidates’ skills. We also saw the need and skillset evolving over time, so calling upon a trusted advisor to help vet candidates likely wasn’t the solution. Here were our key criteria:
Competence to solve immediate problems.
Vision to plan and execute for the future.
Internal customer orientation.
Willingness to be called upon nights and weekends.
It was a big ask.
And we also weren’t sure if we needed a full-time resource forever. So instead of going that route, I started to explore outsourcing our IT infrastructure management and was happy to find MSPs that could effectively handle the organization’s requirements. The MSP that we ultimately chose brought executional excellence, strategic thinking, and high-quality service. I heard nothing but positive feedback from the greater consulting team—team members felt more supported and confident in using technology solutions. As a bonus, choosing an MSP to handle our IT management yielded around 25% savings on our IT budget compared to hiring a full-time employee and buying or deploying tools ourselves.
The MSP support model is a great choice in both the short and long term, depending on a company’s needs, but it might not be right for every business. How do you know if hiring an MSP is right for you?
What to Consider When Hiring an MSP
There are a number of reasons that a company might outsource its IT management to an MSP. When weighing the options, consider the following:
What services do you need?
What skills do you have or wish to have in-house?
How important are the services and skills you need (e.g. security versus less consequential services)?
How long will you need support for these services and skills (e.g. ongoing versus one time)?
What are your other considerations (e.g. budget, headcount, etc.)?
Services and Skills
MSPs offer a wide range of services and specialties, from isolated tasks like disaster recovery to ongoing projects like IT infrastructure management. The scope of your needs can help you decide whether hiring or relying on internal support will give you appropriate coverage, or whether outsourcing to an MSP will provide the necessary expertise. Some MSPs also specialize in specific industries with specific IT needs.
Data security has never been more important, and recovering from a cybersecurity attack is costly. If you already have ransomware protection and disaster recovery covered in-house, then you’re all set. On the other hand, if you’re not entirely confident that there is a system in place protecting and backing up your company data, or if you feel that you or your team can’t keep up with threats as they evolve, an MSP can take over that effort for you.
An MSP can identify preventative or maintenance issues and address them before any data loss occurs. MSPs can also offer ongoing security monitoring and scan for vulnerabilities in your network, keeping your business ahead of a possible attack. Additionally, MSPs can help with regularly maintaining a company’s network so these important security measures don’t fall by the wayside.
MSPs in Action
Continuity Centers is a New York area-based MSP specializing in business continuity and disaster recovery.
In 2020, Continuity Centers implemented Veeam backup software to offer their customers added security and recovery support. They chose to implement Backblaze’s immutable backups feature with Veeam, so they are able to protect data in Backblaze B2 Cloud Storage from ransomware attacks or data loss. The savings that Continuity Centers gained from choosing Backblaze B2 as their cloud provider allowed them to offer enhanced data protection services without raising prices for their customers.
A MSP can provide one-time assistance or setup for a specific service you need, or longer-term management depending on the scope of the project. If your business requires 24/7 support, some remote MSP services are available for continuous assistance. Many MSPs offer real-time monitoring and management to ensure that any issues can be identified and fixed before they pose a threat to business operations.
Hiring an expert to handle IT management in-house can be costly—not to mention building and maintaining a team. Hiring an MSP can free up resources and save money in the long run with predictable, fixed prices.
Another important budgetary factor to consider is the cost of downtime in the case of a ransomware attack. While ransom payments continue to be one of the highest costs to businesses, the true cost of ransomware includes downtime, people hours, device costs, network costs, lost opportunities, and more. MSPs that provide business continuity services can help minimize these costs and reduce the risk of incurring them in the future.
MSPs in Action
Clicpomme is a Montréal, Québec-based MSP specializing in IT services and solutions for Apple products.
Their solutions range from device and IT infrastructure management to server deployment and off-site backup. Clicpomme uses the Backblaze mass deployment feature to easily deploy Backblaze software on customers’ endpoints at scale, so customers don’t have to handle deployment or backup management themselves.
Is an MSP Right for Your Business?
Are you considering getting help from an MSP with your IT management, or have you turned to one in the past? Comment below with your questions or experience working with an MSP.
As of September 30, 2021, Backblaze had 194,749 drives spread across four data centers on two continents. Of that number, there were 3,537 boot drives and 191,212 data drives. The boot drives consisted of 1,557 hard drives and 1,980 SSDs. This report will review the quarterly and lifetime failure rates for our data drives, as well as compare failure rates for our SSD and HDD boot drives. Along the way, we’ll share our observations and insights on the data presented and, as always, we look forward to your comments below.
Q3 2021 Hard Drive Failure Rates
At the end of September 2021, Backblaze was monitoring 191,212 hard drives used to store data. For our evaluation, we removed from consideration 386 drives that were either used for testing purposes or belonged to drive models for which we did not have at least 60 drives. This leaves us with 190,826 hard drives for the Q3 2021 quarterly report, as shown below.
Notes and Observations on the Q3 2021 Stats
The data for all of the drives in our data centers, including the 386 drives not included in the list above, is available for download on the Hard Drive Test Data webpage.
Five drive models recorded one drive failure during the quarter:
HGST 12TB drive (model: HUH728080ALE600).
Seagate 6TB drive (model: ST6000DX000).
Toshiba 4TB drive (model: MD04ABA400V).
Toshiba 14TB drive (model: MG07ACA14TEY).
WDC 16TB drive (model: WUH721816ALE6L0).
While one failure is good, the number of drive days for each of these drives is 100,256 or less for the quarter. This leads to a wide confidence interval for the annualized failure rate (AFR) for these drives. Still, kudos to the Seagate 6TB drives (average age 77.8 months) and Toshiba 4TB drives (average age 75.6 months) as they have been good for a long time.
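For reference, the AFR figures in these reports come from the drive-days method, which annualizes failures by converting drive days into drive years. Here’s a minimal sketch of that calculation, using the 100,256 drive-day figure cited above:

```python
# A sketch of the drive-days AFR calculation used throughout these
# reports: annualize by converting drive days to drive years.
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    drive_years = drive_days / 365
    return failures / drive_years * 100

# One failure over 100,256 drive days (the largest of the five
# single-failure models above) works out to a modest AFR:
print(round(annualized_failure_rate(1, 100_256), 2))  # 0.36 (%)
```

With so few drive days, a single failure more or less swings the result from 0% to 0.36% or beyond, which is why the confidence intervals for these models are so wide.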
We added a new Toshiba 16TB drive this quarter (model: MG08ACA16TE). There were a couple of early drive failures, but they’ve only been installed a little over a month. This drive is similar to model MG08ACA16TEY, with the difference purportedly being the latter having the Sanitize Instant Erase (SIE) feature, which shouldn’t be in play in our environment. It will be interesting to see how they compare over time.
There are two drives in the quarterly results which require additional information beyond the raw numbers presented. Let’s start with the Seagate 12TB drive (model: ST12000NM0007). Back in January of 2020, we noted that these drives were not working optimally in our environment, and higher failure rates were predicted. Together with Seagate, we decided to remove these drives from service over the coming months. COVID-19 delayed the project somewhat, and the result is the predicted higher failure rates. We expect all of the remaining drives to be removed during Q4.
The second outlier is the Seagate 14TB drive (model: ST14000NM0138). As noted in the Q2 Drive Stats report, these drives, while manufactured by Seagate, were provisioned in Dell storage servers. As noted, both Seagate and Dell were looking into the possible causes for the unexpected failure rate. The limited number of failures, 26 this quarter, has made failure analysis challenging. As we learn more, we will let you know.
HDDs versus SSDs
As a reminder, we use both SSDs and HDDs as boot drives in our storage servers. The workload for a boot drive includes regular reading, writing, and deleting of files (typically log files), along with booting the server when needed. In short, the workload for each type of drive is similar.
In our recent post, “Are SSDs Really More Reliable Than Hard Drives?” we compared the failure rates of our HDD and SSD boot drives using data through Q2 2021. In that post, we found that if we controlled for the average age and drive days for each cohort, we were able to compare failure rates over time.
We’ll continue that comparison, and we have updated the chart below through Q3 2021 to reflect the latest data.
The first four points of each drive type create lines that are very similar, although the SSD failure rates are slightly lower. The HDD failure rates began to spike in year five (2018) as the HDD drive fleet started to age. Given what we know about drive failure over time, it is reasonable to assume that the failure rates of the SSDs will rise as they get older. The question to answer is: Will it be higher, lower, or the same? Stay tuned.
Data Storage Changes
Over the last year, we’ve added 40,129 new hard drives. Actually, we installed 67,990 new drives and removed 27,861 old drives. The removed drives included failed drives (1,674) and migrations (26,187). That works out to installing about 187 drives a day, which, over the course of the last year, totaled just over 600PB of new data storage.
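The arithmetic behind those counts is easy to reproduce:

```python
# Reproducing the drive-count arithmetic from the paragraph above.
installed = 67_990
removed_failed = 1_674
removed_migrated = 26_187

removed = removed_failed + removed_migrated  # drives retired this year
net_added = installed - removed              # net growth of the fleet
per_day = installed / 365                    # average installs per day

print(removed, net_added, round(per_day))  # 27861 40129 186
```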
The following chart breaks down the efforts of our intrepid data center teams.
Lifetime Hard Drive Stats
The chart below shows the lifetime AFRs of all the hard drive models in production as of September 30, 2021.
Notes and Observations on the Lifetime Stats
The lifetime AFR for all of the drives in our farm continues to decrease. The 1.43% AFR is the lowest recorded value since we started back in 2013. The drive population spans drive models from 4TB to 16TB and varies in average age from one month (Toshiba 16TB) to over six years (Seagate 6TB).
Our best performing drive models in our environment by drive size are listed in the table below.
The WDC 16TB drive (model: WUH721816ALE6L0) does not appear to be available in the U.S. through retail channels. It is available in Europe for €549.00.
Status is based on what is stated on the website. Further investigation may be required to ensure you are purchasing a new drive versus a refurbished drive marked as new.
The source and price columns were as of 10/23/2021.
Interested in learning more? Join our webinar on November 4th at 10 a.m. PT with Drive Stats author, Andy Klein, to gain unique and valuable insights into why drives fail, how often they fail, and which models work best in our environment of 190,000+ drives. Register today.
The Hard Drive Stats Data
The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.
If you just want the summarized data used to create the tables and charts in this blog post, you can download the ZIP file containing the Excel XLSX files for each chart.
Good luck and let us know if you find anything interesting.
Using Synology and Backblaze together lets you get the most out of your DiskStation while accessing the benefits of the cloud. Setting it up is a simple, straightforward process: just a few minutes and you’re all set. And, at a quarter of the price of other cloud providers, Backblaze B2 Cloud Storage is one of the most affordable ways to add cloud sync and backup to your NAS.
To help you take advantage of these benefits, we’re sharing a step-by-step guide on connecting your Synology NAS to Backblaze B2. Big picture, there are two steps you need to take to set up your DiskStation with Backblaze B2:
1. How to Create a Backblaze B2 Account
If you already have a Backblaze B2 account, feel free to skip ahead to the next section. Otherwise, you can sign up for an account here and get started with 10GB of storage for free.
2. How to Set Up a Bucket, Application Key ID, and Application Key
Once you’ve signed up for a Backblaze B2 account, you’ll need to create a bucket, Application Key ID, and Application Key. This may sound like a lot, but all you need are a few clicks, a couple of names, and less than a minute!
1. On the Buckets page of your account, click the Create a Bucket button.
2. Give your bucket a name and enable features like encryption and Object Lock for added security. (Object Lock is a backup protection feature that prevents a file from being altered or deleted until a given date—learn more about it here.)
3. Click the Create a Bucket button and you should see your new Bucket on the Buckets page.
4. Navigate to the App Keys page of your account and click the Add Application Key button.
5. Name your Application Key and click the Create New Key button—make sure that your key has both Read and Write permissions (the default option).
6. Your Application Key ID and Application Key will appear on your App Keys page. Make sure to copy these somewhere secure as the Application Key will not appear again!
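If you’d rather script this step than click through the web UI, the same credentials work against B2’s native `b2_authorize_account` call. Here’s a sketch using only the Python standard library; the key values shown are placeholders, and the `authorize()` call makes a real network request, so it needs valid credentials:

```python
import base64
import json
import urllib.request

API_URL = "https://api.backblazeb2.com/b2api/v2/b2_authorize_account"

def basic_auth_header(key_id: str, app_key: str) -> str:
    # b2_authorize_account takes the Application Key ID and Application
    # Key as standard HTTP Basic credentials.
    token = base64.b64encode(f"{key_id}:{app_key}".encode()).decode()
    return f"Basic {token}"

def authorize(key_id: str, app_key: str) -> dict:
    # Returns the account's API URL and auth token on success.
    req = urllib.request.Request(
        API_URL, headers={"Authorization": basic_auth_header(key_id, app_key)}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With placeholder credentials, the header looks like this:
print(basic_auth_header("0012345abcdef0000000001", "K001examplekeyvalue"))
```

The response from `authorize()` includes the API URL and authorization token you would use for subsequent calls; for Cloud Sync, though, all you need are the Key ID and Key themselves.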
How to Set Up Synology Cloud Sync With Backblaze B2
Now you’re ready to set up your DiskStation with Backblaze B2 sync support. You’ll need to make sure the Cloud Sync application is installed on your DiskStation. Once you’re ready, you can follow these steps:
1. Open the DiskStation Manager (DSM) interface in your web browser.
2. Install Cloud Sync v2.1.0 or later from the Package Center, if it’s not already installed.
3. Click on the Main Menu.
4. Click on Cloud Sync.
5. Click the + button to create a new connection to Backblaze B2.
6. Select Backblaze B2 from the list of cloud providers.
7. Enter the Application Key ID, Application Key, and bucket name you’ll be using for the sync. This Application Key needs to have full read/write permissions for the bucket or section of the bucket the Synology will access. The bucket is the container that holds the DiskStation files you’d like to sync.
8. Select the folder on the DiskStation you’d like to sync.
9. Click Next, then confirm the Task Settings, and the DiskStation will begin to sync to Backblaze B2.
10. Task Settings let you specify the folder you’re uploading and prevent the DiskStation and Backblaze B2 from syncing specific files or subdirectories. You can create exclusions for specific file types and file names, too.
As a part of our Cloud University series, we teamed up with Synology to host a webinar that walks users through configuring Cloud Sync and Hyper Backup in multi-office scenarios. The webinar is available on-demand here.
You’re probably familiar with containers if you’re even remotely involved in software development or systems administration. In their 2020 survey, the Cloud Native Computing Foundation found that the use of containers in production has increased to 92%, up 300% since their first survey in 2016.
But, even if containers are a regular part of your day-to-day life, or if you, like me, are a worthy student just trying to understand what an operating system kernel is, it helps to have an understanding of some container basics.
Today, we’re explaining what containers are, how they’re used, and how cloud storage fits into the container picture—all in one neat and tidy containerized blog post package. (And, yes, the kernel is important, so we’ll get to that, too).
What Are Containers?
Containers are packaged units of software that contain all of the dependencies (e.g. binaries, libraries, programming language versions, etc.) they need to run no matter where they live—on a laptop, in the cloud, or in an on-premises data center. That’s a fairly technical definition, so you might be wondering, “OK, but what are they really?”
Well, typically, when it comes to naming things, the tech world is second only to the world of pharmaceuticals in terms of absurdity (I see your Comirnaty, and I’ll raise you a Kubernetes, good sir… More on that later.). But sometimes, we get it right, and containers are one example.
The generally accepted definition of the term applies almost exactly to what the technology does.
A container, generally = a receptacle for holding goods; a portable compartment in which freight is placed (as on a train or ship) for convenience of movement.
A container in software development = a figurative “receptacle” for holding software. The second part of the definition applies even better—shipping containers are often used as a metaphor to describe what containers do. In shipping, instead of stacking goods all higgledy piggledy, goods are packed into standard-sized containers that fit on whatever is hauling them—a ship, a train, or a trailer.
Likewise, instead of “shipping” an unwieldy mess of code, including the required operating system, containers package software into lightweight units that share the same operating system (OS) kernel and can run anywhere—on a laptop, on a server, in the cloud, etc.
What’s an OS Kernel?
As promised, here’s where the OS kernel becomes important. The kernel is the core programming at the center of the OS that controls all other parts of the OS. In yet another example of naming things in tech, the term makes sense if you consider the definition of “kernel” as “the central or essential part” as in “a kernel of truth,” but also begs the question, “Why didn’t they just call it a colonel? Because now all I can envision is a popcorn seed inside a computer.” …But that’s neither here nor there, and now you know what an OS kernel does.
Compared to older virtualization technology, namely virtual machines which are measured in gigabytes, containers are only megabytes in size. That means you can run quite a few of them on a given computer or server much like you can stack many containers onto a ship.
Indeed, the founders of Docker, the software that sparked widespread container adoption, looked to the port of Oakland for inspiration. Former Docker CEO, Ben Golub, explained in an interview with InfoWorld, “We could see all the container ships coming into the port of Oakland, and we were talking about the value of the container in the world of shipping. The fact it was easier to ship a car from one side of the world than to take an app from one server to another, that seemed like a problem ripe for solving.” In fact, it’s right there in their logo.
And that is exactly what containers, mainly via Docker’s popularity, did—they solved the problem of environment inconsistency for developers. Before containers became widely used, moving software between environments meant things broke, a lot. If a developer wrote an app on their laptop, then moved it into a testing environment on a server, for example, everything had to be the same—same versions of the programming language, same permissions, same database access, etc. If not, you would have had a very sad app.
Containers work their magic by way of virtualization. Virtualization is the process of creating a simulated computing environment that’s abstracted from the physical computing hardware—essentially a computer-generated computer, also referred to as a software-defined computer.
The first virtualization technology to really take off was the virtual machine (VM). A VM sits atop a hypervisor—a lightweight software layer that allows multiple operating systems to run in tandem on the same hardware. VMs allow developers and system administrators to make the most of computing hardware. Before VMs, each application had to run on its own server, and it probably didn’t use the server’s full capacity. After VMs, you could use the same server to run multiple applications, increasing efficiency and lowering costs.
Containers vs. Virtual Machines
While VMs increase hardware efficiency, each VM requires its own OS and a virtualized copy of the underlying hardware. Because of this, VMs can take up a lot of system resources, and they’re slow to start up.
Containers, on the other hand, virtualize the OS rather than the underlying hardware. Because they don’t include their own OS, containers are much smaller and faster to spin up than VMs. Want to know more? Check out our deep dive into the differences between VMs and containers.
The Benefits of Containers
Containers allow developers and system administrators to develop, test, and deploy software and applications faster and more efficiently than older virtualization technologies like VMs. The benefits of containers include:
Portability: Containers include all of the dependencies they need to run in any environment, provided that environment includes the appropriate OS. This reduces the errors and bugs that arise when moving applications between different environments, increasing portability.
Size: Containers share OS resources and don’t include their own OS image, making them lightweight—megabytes compared to VMs’ gigabytes. As such, one machine or server can support many containers.
Speed: Again, because they share OS resources and don’t include their own OS image, containers can be spun up in seconds compared to VMs which can take minutes to spin up.
Resource efficiency: Similar to VMs, containers allow developers to make the best use of hardware and software resources.
Isolation: Also similar to VMs, with containers, different applications or even component parts of a singular application can be isolated such that issues like excessive load or bugs on one don’t impact others.
Container Use Cases
Containers are nothing if not versatile, so they can be used for a wide variety of use cases. However, there are a few instances where containers are especially useful:
Enabling microservices architectures: Before containers, applications were typically built as all-in-one units or “monoliths.” With their portability and small size, containers changed that, ushering in the era of microservices architecture. Applications could be broken down into their component “services,” and each of those services could be built in its own container and run independently of the other parts of the application. For example, the code for your application’s search bar can be built separately from the code for your application’s shopping cart, then loosely coupled to work as one application.
Supporting modern development practices: Containers and the microservices architectures they enable paved the way for modern software development practices. With the ability to split applications into their component parts, each part could be developed, tested, and deployed independently. Thus, developers can build and deploy applications using modern development approaches like DevOps, continuous integration/continuous deployment (CI/CD), and agile development.
Facilitating hybrid cloud and multi-cloud approaches: Because of their portability, containers enable developers to utilize hybrid cloud and/or multi-cloud approaches. Containers allow applications to move easily between environments—from on-premises to the cloud or between different clouds.
Accelerating cloud migration or cloud-native development: Existing applications can be refactored using containers to make them easier to migrate to modern cloud environments. Containers also enable cloud-native development and deployment.
The two most widely recognized container tools are Docker and Kubernetes. They’re not the only options out there, but in their 2021 survey, Stack Overflow found that nearly 50% of 76,000+ respondents use Docker and 17% use Kubernetes. But what do they do?
1. What Is Docker?
Container technology has been around for a while in the form of Linux containers or LXC, but the widespread adoption of containers happened only in the past five to seven years with the introduction of Docker.
Docker was launched in 2013 as a project to build single-application LXC containers, introducing several changes to LXC that make containers more portable and flexible to use. It later morphed into its own container runtime environment. At a high level, Docker is a Linux utility that can efficiently create, ship, and run containers.
Docker introduced more standardization to containers than previous technologies and focused on developers, specifically, making it the de facto standard in the developer world for application development.
2. What Is Kubernetes?
As containerization took off, many early adopters found themselves facing a new problem: how to manage a whole bunch of containers. Enter: Kubernetes. Kubernetes is an open-source container orchestrator. It was developed at Google (deploying billions of containers per week is no small task) as a “minimum viable product” version of their original cluster orchestrator, ominously named Borg. Today, it is managed by the Cloud Native Computing Foundation, and it helps automate management of containers including provisioning, load balancing, basic health checks, and scheduling.
Kubernetes allows developers to describe the desired state of a container deployment using YAML files. (YAML originally stood for Yet Another Markup Language and was later backronymed to “YAML Ain’t Markup Language,” which is yet another winning example of naming things in tech.) The YAML file uses declarative language to tell Kubernetes, “This is what this container deployment should look like,” and Kubernetes does all the grunt work of creating and maintaining that state.
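To make “declarative” concrete, here’s a sketch of what a minimal Deployment spec contains, written as the Python-dict equivalent of the YAML file (the app name and image are placeholders):

```python
# The Python-dict equivalent of a minimal Deployment YAML file. The
# "replicas: 3" line is the declared desired state: Kubernetes works
# to create and maintain three running copies of this container.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

print(deployment["kind"], deployment["spec"]["replicas"])  # Deployment 3
```

You never tell Kubernetes *how* to reach that state; if a pod dies, the orchestrator notices the actual state no longer matches the declared one and starts a replacement.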
Containers + Storage: What You Need to Know
Containers are inherently ephemeral or stateless. They get spun up, and they do their thing. When they get spun down, any data that was created while they were running is destroyed with them. But most applications are stateful, and need data to live on even after a given container goes away.
Object storage fills that gap. It’s inherently scalable, enabling the storage of massive amounts of unstructured data while still keeping that data easily accessible. For containerized applications that depend on data scalability and accessibility, it’s an ideal solution for keeping stateful data stateful.
There are three essential use cases where object storage works hand in hand with containerized applications:
Backup and Disaster Recovery: Tools like Docker and Kubernetes enable easy replication of containers, but replication doesn’t replace traditional backup and disaster recovery just as sync services aren’t a good replacement for backing up the data on your laptop, for example. With object storage, you can replicate your entire environment and back it up to the cloud. There’s just one catch: some object storage providers have retention minimums, sometimes up to 90 days. If you’re experimenting and iterating on your container architecture, or if you use CI/CD methods, your environment is constantly changing. With retention minimums, that means you might be paying for previous iterations much longer than you want to. (Shameless plug: Backblaze B2 Cloud Storage is calculated hourly, with no minimum retention requirement.)
Primary Storage: You can use a cloud object storage repository to store your container images, then when you want to deploy them, you can pull them into the compute service of your choice.
Origin Storage: If you’re serving out high volumes of media, or even if you’re just hosting a simple website, object storage can serve as your origin store coupled with a CDN for serving out content globally. For example, CloudSpot, a SaaS platform that serves professional photographers, moved to a Kubernetes cluster environment and connected it to their origin store in Backblaze B2, where they now keep 120+ million files readily accessible for their customers.
Need Object Storage for Your Containerized Application?
Now that you have a handle on what containers are and what they can do, you can make decisions about how to build your applications or structure your internal systems. Whether you’re contemplating moving your application to the cloud, adopting a hybrid or multi-cloud approach, or going completely cloud native, containers can help you get there. And with object storage, you have a data repository that can keep up with your containerized workloads.
Ready to connect your application to scalable, S3-compatible object storage? You can get started today, and the first 10GB are free.
From time to time, we will reference the “bathtub curve” when talking about hard drive and SSD failure rates. This normally includes a reference or link back to a post we did in 2013 which discusses the topic. It’s time for an update. Not because the bathtub curve itself has changed, but because we have nearly seven times the number of drives and eight more years of data than we did in 2013.
In today’s post, we’ll take an updated look at how well hard drive failure rates fit the bathtub curve, and in a few weeks we’ll delve into the specifics for different drive models and even do a little drive life expectancy analysis.
Once Upon a Time, There Was a Bathtub Curve
Here is the classic version of the bathtub curve.
The curve is divided into three sections: decreasing failure rate, constant failure rate, and increasing failure rate. Using our 2013 drive stats data, we computed a failure rate and a timeframe for each of the three sections as follows:
2013 Drive Failure Rates (by drive age)
0 to 18 months
18 months to 3 years
3 to 4 years
Furthermore, we computed that about 80% of the hard drives we installed would still be alive at four years and, forecasting that out, about 50% at six years. In other words, we would expect a hard drive we installed to have a 50% chance of being alive after six years.
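That 50%-at-six-years figure implies roughly an 11% annualized failure rate if you assume the rate is constant (an assumption the bathtub curve exists to challenge, but useful as a back-of-envelope check):

```python
# Back-of-envelope: if the annual survival rate s were constant, the
# fraction of drives alive after t years would be s**t. Solving
# s**6 = 0.5 for s gives the implied annualized failure rate.
s_annual = 0.5 ** (1 / 6)
afr_percent = (1 - s_annual) * 100

print(round(s_annual, 3), round(afr_percent, 1))  # 0.891 10.9
```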
Drive Failure and the Bathtub Curve Today
Let’s begin by comparing the drive failure rates over time based on the data available to us in 2013 and the data available to us today in 2021.
Observations and Thoughts
Let’s start with an easy one: We have six years’ worth of data for 2021 versus four years for 2013. We have a wider bathtub. In reality, it is even wider, as we have more than six years of data available to us, but after six years the number of data points (drive failures) is small, less than 10 failures per quarter.
The left side of the bathtub, the area of “decreasing failure rate,” is dramatically lower in 2021 than in 2013. In fact, for our 2021 curve, there is almost no left side of the bathtub, making it hard to take a bath, to say the least. We have reported how Seagate breaks in and tests their newly manufactured hard drives before shipping in an effort to lower the failure rates of their drives. Assuming all manufacturers do the same, that may explain some or all of this observation.
The right side of the bathtub, the area of “increasing failure rate,” moves right in 2021. Obviously, drives installed after 2013 are not failing as often in years three and four, or most of year five for that matter. We think this may have something to do with the aftermath of the Thailand drive crisis back in 2011. Drives got expensive, and quality (in the form of reduced warranty periods) went down. In addition, there was a fair amount of manufacturer consolidation as well.
It is interesting that for year two, the two curves, 2013 and 2021, line up very well. We think this is so because there really is a period in the middle in which the drives just work. It was just shorter in 2013 due to the factors noted above.
The Life Expectancy of Drives Today
As noted earlier, back in 2013, 80% of the drives installed were expected to survive four years. That fell to 50% after six years. In 2021, the percentage of drives expected to still be alive at six years is 88%. That’s a substantial increase, but it basically comes down to the fact that hard drives are failing less in our system. We think it is a combination of better drives, better storage servers, and better practices by our data center teams.
For 2021, our bathtub curve looks more like a hockey stick, although saying, “When you review our hockey stick curve…” doesn’t sound quite right. We’ll try to figure out something by our next post on the topic. One thing we also want to do in that next post is to break down the drive failure data by model and see if the different drive models follow the bathtub curve, the hockey stick curve, or some other unnamed curve. We’ll also chart out the life expectancy curves for all the drives as a whole and by drive model as well.
Well, time to get back to the data, our next Drive Stats report is coming up soon.
Object Lock is a powerful backup protection tool that makes data immutable. It allows you to store objects using a Write Once, Read Many (WORM) model, meaning after it’s written, data cannot be modified or deleted for a defined period of time. Any attempts to manipulate, copy, encrypt, change, or delete the file will fail during that time. The files may be accessed, but no one can change them, including the file owner or whoever set the Object Lock.
This makes Object Lock a great tool as part of a robust cybersecurity program. However, when Object Lock is used inconsistently, it can consume unnecessary storage resources. For example, if you set a retention period of one year, but you don’t end up needing to keep the data that long, you’re out of luck. Once the file is locked, it cannot be deleted. That’s why it’s important to develop a consistent approach.
In this post, we’ll outline five different use cases for Object Lock and explain how to add Object Lock to your IT security policies to ensure your company gets all the protection Object Lock offers while managing your storage footprint.
When to Use Object Lock: Five Use Cases
There are at least five situations where Object Lock is helpful. Keep in mind that these requirements may change over time. Compliance requirements, for example, might be relatively simple today. However, those requirements may become more complex if your company onboards customers in a highly regulated sector like finance or health care.
1. Reducing Cybersecurity Risk
Cybersecurity threats are increasing. In 2015, there were approximately 1,000 ransomware attacks per day, but this figure has increased to more than 4,000 per day since 2016, according to the U.S. government. To be clear, using Object Lock does not prevent a ransomware attack. Instead, data protected by Object Lock is immutable. In the event of a ransomware attack, it cannot be altered by malicious software. Ultimately, your organization may be able to recover from a cyber attack more quickly by restoring data protected by Object Lock.
2. Meeting Compliance Requirements With Object Lock
Some industries have extensive record retention requirements. Preserving digital records with Object Lock is one way to fulfill those expectations. Several regulatory and legal requirements direct companies to retain records for a certain period of time.
Banks insured by the FDIC generally must retain account records for five years after the “account is closed or becomes dormant.” Beyond the FDIC, there are many other state and federal compliance requirements for the financial industry. Preserving data with Object Lock can be helpful in these situations.
You may also have to retain data for tax purposes. The IRS generally suggests keeping tax-related records for up to seven years. However, there are nuances to these requirements (e.g., shorter retention periods in some cases and potentially longer retention periods for property records).
3. Fulfilling a Legal Hold
When a company is sued, preserving all relevant records is wise. An article published by the American Bar Association points out that failing to preserve records may “undermine a litigant’s claims and defenses.” Given that many companies keep many (if not all) of their records in digital form, preserving digital records is essential. In this situation, using Object Lock to preserve records may be beneficial.
4. Meeting a Retention Period for Other Needs
Higher-risk business activities may benefit from preserving data with Object Lock. For example, an engineering company working on designing a bridge might use Object Lock to maintain records during the project. In software development, new versions of software may become unstable. Restoring to a previous version of the software, preserved from tampering or accidental deletion with Object Lock, can be valuable.
5. Replacing an LTO Tape System
In an LTO tape system, data immutability is conferred by a physical “air gap,” meaning there’s a literal gap of air between production data and backups stored on tape—the two are not physically connected in any way. Object Lock creates a virtual air gap, replacing the need for expensive physical infrastructure.
How to Add Object Lock to Your Security Policy
No matter the reason for implementing Object Lock, consistent usage is key. To encourage consistent usage, consider adding Object Lock as an option in your company’s security policy. Use the following tips as a guide on when and how to use Object Lock.
Set Up Object Lock Governance: Assign responsibility to a single manager in IT or IT security to develop Object Lock governance policies. Then, periodically review Object Lock governance and update retention policies as necessary as the security landscape evolves.
Evaluate the Application of Object Lock in Your Context: Are you subject to retention regulations? Do you have certain data you need to keep for an extended period of time? Take an inventory of your data and any specific retention considerations you may want to keep in mind when implementing Object Lock.
Document Object Lock Requirements: There are different ways to explain and communicate Object Lock guidelines. If your IT security policy focuses on high-level principles, consider adding Object Lock to a data management procedure instead.
Add Object Lock to Your Policy for Cloud Tools: Review your cloud solutions to see which providers support Object Lock. Only a few storage platforms currently offer the feature, but if your provider is one of them, you can enable Object Lock and specify the length of time an object should be locked in the storage provider’s user interface, via your backup software, or by using API calls.
Use Change Management to Promote the Change to the Policy Internally: Writing Object Lock into your policy is a good step, but it is not the end of the process. You also need to communicate the change internally and ensure employees who need to use Object Lock are trained on the Object Lock policies and procedures.
Test and Monitor: Periodically review whether Object Lock is being used per the established policies and whether data is properly protected as outlined. As a starting point, review Object Lock usage quarterly and spot-check data to ensure it’s locked.
Adding Object Lock to Your Security Tool Kit
Object Lock is a helpful way to protect data from being changed. It can help your organization meet records retention requirements and make it easier to recover from a cyber attack. It’s one tool that can strengthen a robust IT security practice, but you first need a well-developed backup program to keep your company operating in the event of a disruption. To find out more about emerging backup strategies, check out our explainer, “What’s the Diff: 3-2-1 vs. 4-3-2-1-0 vs. 4-3-2” to keep your valuable company data safe. And, for a comprehensive ransomware prevention playbook, check out our Complete Guide to Ransomware.
The dreaded CORS error is the bane of many a web developer’s day-to-day life. Even for experts, it can be eternally frustrating. Today, we’re digging into CORS, starting with the basics and working up to sharing specific code samples to help you enable CORS for your web development project. What is CORS? Why do you need it? And how can you enable it to save time and resources?
We’ll answer those questions so you can put the CORS bane behind you.
What Is CORS?
CORS stands for cross-origin resource sharing. To define CORS, we first need to explain its counterpart—the same-origin policy, or SOP. The SOP is a policy that all modern web browsers use for security purposes, and it dictates that a web page with a given origin can only request data from that same origin. CORS is a mechanism that allows you to bypass the SOP so you can request data from websites with different origins.
Let’s break that down piece by piece, then we’ll get into why you’d want to bypass the same-origin policy in the first place.
What Is an Origin?
Every website has an origin, defined by the protocol, domain, and port of its URL.
You’re probably familiar with the working parts of a URL, but each one has a different function. The protocol, also known as the scheme, identifies the method for exchanging data; typical protocols include http, https, and mailto. The domain, also known as the hostname, is a specific website’s unique identifier. You may not be as familiar with the port, as it’s not normally visible in a typical web address. Just like a port on the water, it’s the connection point where information comes in and out of a server. Different port numbers specify the types of information the port handles.
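As a quick sketch, the browser’s built-in URL class breaks a web address into exactly these parts (the URL here is made up):

```javascript
// Hypothetical URL, just to show how the parts break down.
const url = new URL("https://www.catblaze.com:8080/videos/cats.mp4");

console.log(url.protocol); // "https:"
console.log(url.hostname); // "www.catblaze.com"
console.log(url.port);     // "8080"
console.log(url.origin);   // "https://www.catblaze.com:8080"
```

Note that the origin is just those three parts combined—the path (/videos/cats.mp4) plays no role in it.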
When you understand what an origin is, the “cross-origin” part of CORS makes a bit more sense. It simply means web addresses with different origins. In web addresses with the same origin, the protocol, domain, and port all match.
What Is the Same-origin Policy?
Let’s say you log in to Netflix on your laptop to add Ridley Scott’s 1982 classic, “Blade Runner,” to your queue, as one does. You click “Remember Me” so you don’t have to log in every time, and your browser keeps your credentials stored in a cookie so that the Netflix site knows you are logged in no matter where you navigate within their site.
Afterwards, you’re bored, so you fall down an internet rabbit hole wondering why “Blade Runner” is called “Blade Runner” when there are few blades and little running. You end up on a site about samurai swords that happens to be malicious—it has a script in its code that uses your authentication credentials stored in that cookie to make a request to Netflix that can change your address and add a bunch of DVDs to your queue (also, it’s 2006, and this actually happened). You’ve become a victim of cross-site request forgery.
To thwart this threat, browsers enabled the same-origin policy to prohibit requests from one origin to another.
Why Do You Need CORS?
While the same-origin policy helped stop bad actors from nefariously accessing websites, it posed a problem—sometimes you need or want assets and data from different origins. This is where the “resource sharing” part of cross-origin resource sharing comes in.
CORS allows you to set rules governing which origins are allowed to bypass the same-origin policy so you can share resources from those origins.
For example, you might host your website’s front end at www.catblaze.com, but you host your back-end API at api.catblaze.com. Or, you might need to display a bunch of cat videos stored in Backblaze B2 Cloud Storage on your website, www.catblaze.com (more on that below).
Do I Need CORS?
Let’s say you have a website, and you dabble in some coding. You’re probably thinking, “I can already use images and stuff from other websites. Do I need CORS?” And you’re right to ask. Most browsers allow simple http requests like get, head, and post without requiring CORS rules to be set in advance. Embedding images from other sites, for example, typically requires a get request to grab that data from a different origin. If that’s all you need, you’re good to go. You can use simple requests to embed images and other data from other websites without worrying about CORS.
Some assets, like fonts or iframes, might not work without it, though. In those cases, you can use CORS to allow them. If you’re a casual user, you can probably stop here.
Coding Explainer: What Is an http Request?
An http request is the way you, the client, use code to talk to a server. A complete http request includes a request line, request headers, and a message body if necessary. The request line specifies the method of the request. There are generally eight types:
get: “I want something from the server.”
head: “I want something from the server, but only give me the headers, not the content.”
post: “I want to send something to the server.”
put: “I want to send something to the server that replaces something else.”
delete: “I want to remove something from the server.”
patch: “I want to change something on the server.”
options: “Is this request going to go through?” (This one is important for CORS!)
trace: “Show me what you just did.” Useful for debugging.
After the method, the request line also specifies the path of the URL the method applies to as well as the http version.
The request headers communicate specifics about how you want to receive the information. There are usually a whole bunch of these. (Wikipedia has a good list.) They’re typically formatted as name:value. For example:
accept:text/plain means you want the response to be in text format.
Finally, the message body contains anything you might want to send. For example, if you use the method post, the message body would contain what you want to post.
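Putting those pieces together, here’s a sketch of the raw text an http request sends over the wire—request line, headers, a blank line, then the body (all values are illustrative):

```javascript
// The three parts of an http request, assembled as the raw text
// actually sent over the wire (URL and values are made up).
const requestLine = "POST /api/videos HTTP/1.1";
const headers = [
  "host: www.catblaze.com",
  "accept: text/plain",
  "content-type: application/json",
];
const body = JSON.stringify({ title: "Blade Runner" });

// A blank line separates the headers from the message body.
const rawRequest = [requestLine, ...headers, "", body].join("\r\n");
console.log(rawRequest);
```

The request line names the method (post) and path, the headers describe how to handle the exchange, and the body carries what you want to post.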
Do I Need CORS With Cloud Storage?
People use cloud storage for all manner of data storage purposes, and most do not need to use CORS to access resources stored in a cloud instance. For example, you can make API calls to Backblaze B2 from any computer to use the resources you have stored in your storage buckets. If you’re running a mobile application and transferring data back and forth to Backblaze B2, for instance, you don’t need CORS. The mobile application doesn’t rely on a web browser.
You only need CORS if you’re specifically running code inside of a web browser and you need to make API calls from the browser to Backblaze B2. For example, if you’re using an in-browser video player and want to play videos stored in Backblaze B2.
Fortunately, if you do need CORS, Backblaze B2 allows you to configure CORS to your exact specifications, while some other cloud providers only offer completely open CORS policies. Why is that important? An open CORS policy makes you vulnerable to CSRF attacks. To continue with the video example, let’s say you’re storing a bunch of videos that you want to make available on your website. If they’re stored with a cloud provider that only offers all-or-nothing CORS, you have two choices: completely open or completely closed. You pick open so that your website visitors can call up those videos on demand, but that leaves you vulnerable to a CSRF attack that could allow a bad actor to download your videos. With Backblaze, you can specify the exact CORS rules you need.
If you are using Backblaze B2 to store data that will be displayed in a browser, or you’re just curious, read on to learn more about using CORS. CORS has saved developers lots of time and money by reducing maintenance effort and code complexity.
How Does CORS Work?
Unlike simple get, head, and post requests, some types of requests can alter the origin’s data. These include requests like delete, put, and patch. Any type of request that could alter the origin’s data will trigger CORS, as will simple requests that carry non-standard http headers or requests made with certain techniques, like AJAX. When CORS is triggered, the browser sends what’s called a preflight request to see if the CORS rules allow the request.
What Is a Preflight Request?
A preflight request, also known as an options request, asks the server if it’s okay to make the CORS request. If the preflight request comes back successfully, then the browser will complete the actual request. Few other systems in computing do this by default, so it’s an important behavior to understand when using CORS.
A preflight request has the following headers:
origin: Identifies the origin from which the CORS request originates.
access-control-request-method: Identifies the method of the CORS request.
access-control-request-headers: Lists the headers that will be included in the CORS request.
The web server then responds with the following headers:
access-control-allow-origin: Confirms the origin is allowed.
access-control-allow-methods: Confirms the methods are allowed.
access-control-allow-headers: Confirms the headers are allowed.
The values that follow these headers must match the values specified in the preflight request. If so, the browser will permit the actual CORS request to come through.
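To make that matching concrete, here’s a minimal sketch of the check a server performs on a preflight request (the rule values below are hypothetical, not any provider’s defaults):

```javascript
// Does a preflight request pass the server's CORS rules?
// All three checks must succeed for the browser to send the real request.
function preflightAllowed(preflight, rule) {
  const originOk = rule.allowedOrigins.includes("*") ||
                   rule.allowedOrigins.includes(preflight.origin);
  const methodOk = rule.allowedMethods.includes(preflight.method);
  // Header names are case-insensitive, so compare in lowercase.
  const allowed = rule.allowedHeaders.map(h => h.toLowerCase());
  const headersOk = preflight.headers.every(h => allowed.includes(h.toLowerCase()));
  return originOk && methodOk && headersOk;
}

// Hypothetical rule: one allowed origin, two methods, two headers.
const rule = {
  allowedOrigins: ["https://www.catblaze.com"],
  allowedMethods: ["GET", "PUT"],
  allowedHeaders: ["authorization", "range"],
};

// A request from the allowed origin with an allowed header passes...
console.log(preflightAllowed(
  { origin: "https://www.catblaze.com", method: "GET", headers: ["range"] },
  rule)); // true

// ...but an unlisted header makes the preflight fail.
console.log(preflightAllowed(
  { origin: "https://www.catblaze.com", method: "GET", headers: ["x-fake-header"] },
  rule)); // false
```

Real servers implement this logic internally; the sketch just shows why every value in the preflight must match the configured rules.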
Setting CORS Up: An Example
To provide an example for setting CORS up, we’ll use Backblaze B2. By default, the Backblaze B2 servers will say “no” to preflight requests. Adding CORS rules to your bucket tells Backblaze B2 which preflight requests to approve. You can enable CORS in the Backblaze B2 UI if you only need to allow one, specific origin or if you want to be able to share the bucket with all origins.
If you need more specificity than that, you can select the option for custom rules and use the Backblaze B2 command line tool.
When a CORS preflight or cross-origin download is requested, Backblaze B2 evaluates the CORS rules on the file’s bucket. Rules may be set at the time you create the bucket with b2_create_bucket or updated on an existing bucket using b2_update_bucket.
CORS rules only affect the Backblaze B2 operations in their “allowedOperations” list. Every rule must specify at least one operation in its allowedOperations.
CORS Rule Structure
Each CORS rule may have the following parameters:
corsRuleName: A human-readable name that identifies the rule.
allowedOrigins: A list of the origins you want to allow.
allowedOperations: A list that specifies the operations you want to allow, drawn from the B2 Native API operations and the S3 Compatible API operations.
allowedHeaders: A list of headers that are allowed in a preflight request’s Access-Control-Request-Headers value.
exposeHeaders: A list of headers that may be exposed to an application inside the client.
maxAgeSeconds: The maximum number of seconds that a browser can cache the response to a preflight request.
The following sample configuration allows downloads, including range requests, from any https origin and will tell browsers that it’s okay to expose the ‘x-bz-content-sha1’ header to the web page.
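A rule along those lines, in the JSON form the Backblaze B2 command line tool accepts, looks roughly like this (adapted from Backblaze’s documentation; check the current docs for the exact operation names):

```json
[
  {
    "corsRuleName": "downloadFromAnyOrigin",
    "allowedOrigins": ["https"],
    "allowedHeaders": ["range"],
    "allowedOperations": [
      "b2_download_file_by_id",
      "b2_download_file_by_name"
    ],
    "exposeHeaders": ["x-bz-content-sha1"],
    "maxAgeSeconds": 3600
  }
]
```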
You may add up to 100 CORS rules to each of your buckets. Backblaze B2 uses the first rule that matches the request. A CORS preflight request matches a rule if the origin header matches one of the rule’s allowedOrigins, if the operation is in the rule’s allowedOperations, and if every value in the Access-Control-Request-Headers is in the rule’s allowedHeaders.
Using CORS: Examples in Action
Using your browser’s console, you can copy and paste the following examples to see CORS requests succeed or fail. As a handy guide for you, the text files we’ll be requesting include the bucket configuration of the Backblaze B2 buckets we’re calling.
In the first example, we’ll make a request to get the text file bucket_info.txt from a bucket named “cors-allow-none” that does not allow CORS requests. When you run the code, the browser blocks the request and reports a CORS error in the console. Next, we’ll try the same request on a bucket that allows CORS with all origins, but only specific headers. We didn’t include any headers in our request, so this time the request is successful, and the text file we wanted—bucket_info.txt—appears below the text output in the console. As you can see in the text output, the bucket is configured with an asterisk “*,” also known as a “wildcard,” to allow all origins (more on that later).
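A browser-console snippet along these lines makes that kind of request (the download URL and endpoint here are hypothetical; yours will depend on your bucket):

```javascript
// Hypothetical download URL for a file in a B2 bucket.
const fileUrl = "https://f004.backblazeb2.com/file/cors-allow-none/bucket_info.txt";

// Run from a page on a different origin, this get request is subject to
// the bucket's CORS rules; with no rule allowing it, the browser blocks it.
fetch(fileUrl)
  .then(response => response.text())
  .then(text => console.log(text))
  .catch(error => console.error("CORS request failed:", error));
```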
Next, we’ll try the same thing on the bucket that allows CORS with all origins, but this time we’ll trigger a preflight check for a header that is not allowed:
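The disallowed-header request might look like this in the console (again, the URL and bucket name are hypothetical):

```javascript
// Same request, but with a custom header that is not in the bucket's
// allowedHeaders list. X-Fake-Header is deliberately bogus.
const fileUrl = "https://f004.backblazeb2.com/file/cors-allow-some/bucket_info.txt";
const extraHeaders = { "X-Fake-Header": "breaking-cors-for-fun" };

// The non-standard header forces the browser to send an options
// preflight first; if the header isn't allowed, the preflight fails.
fetch(fileUrl, { headers: extraHeaders })
  .then(response => response.text())
  .then(text => console.log(text))
  .catch(error => console.error("Preflight failed:", error));
```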
Our bucket is configured to only allow the headers authorization and range, but we’ve included the header X-Fake-Header with the value breaking-cors-for-fun—definitely not allowed—in the request.
When we run this request, we can see another type of failure:
Below the request, but above the CORS errors, you’ll see that the browser sent an options request. As we mentioned earlier, this is the preflight request that asks the server if it’s okay to make the get request. In this case, the preflight request failed.
However, this request will succeed if we change our bucket settings to allow all headers.
Below, you can see the text output “This bucket allows CORS with all origins and any header values.”
The request was successful, and the text file we requested appears in the console.
At this point, it’s important to note that when configuring your own buckets, you should be cautious with the wildcard “*” that allows any origin or header. It’s probably best to avoid the wildcard if possible. It’s okay to allow any origin to access your bucket, but if you do, you’ll probably want to enumerate the headers that matter to avoid CSRF attacks.
For more information on using CORS with Backblaze B2, including some tips on using CORS with the Backblaze S3 Compatible API, check out our documentation here.
Stay on CORS
Ah, another inevitable CORS pun. Did you see it coming? I hope so. In conclusion, here are a few things to remember about CORS and how you can use it to avoid CORS errors in the future:
The same-origin policy was developed to make websites less vulnerable to threats, and it prevents requests between websites with different origins.
CORS bypasses the same-origin policy so that you can share and use data from different origins.
You only need to configure CORS rules for your Backblaze B2 data if you are making calls to Backblaze B2 from code within a web browser.
By setting CORS rules, you can specify which origins are allowed to request data from your Backblaze B2 buckets.
Are you using CORS? Do you have any other questions? Let us know in the comments.
Containers have changed the way development teams build, deploy, and scale their applications. Unfortunately, the adoption of containers often brings with it ever-growing complexity and fragility that leads to developers spending more time managing their deployment rather than developing applications.
Today’s announcement offers developers a path to easier container orchestration. The Cycle and Backblaze B2 Cloud Storage integration enables companies that utilize containers to seamlessly automate and control stateful data across multiple providers—all from one dashboard.
This partnership empowers developers to:
Easily deploy containers without dealing with complex solutions like Kubernetes.
Unify their application management, including automating or scheduling backups via an API-connected portal.
Choose the microservices and technologies they need without compromising on functionality.
Getting started with Cycle and Backblaze is simple:
Associate your Backblaze Application Key with your Cycle hub via the Integrations menu.
Utilize the backup settings within a container config.
For more in-depth information on the integration, check out the documentation.
We recently sat down with Jake Warner, Co-founder and CEO of Cycle, to dig a little deeper into the major cloud orchestration challenges today’s developers face. You can find Jake’s responses below, or you can check out a conversation between Jake and Elton Carneiro, Backblaze Director of Partnerships, on Jake’s podcast.
Cycle set out to become the most developer-friendly container orchestration platform. By simplifying the processes around container and infrastructure deployments, Cycle enables developers to spend more time building and less time managing. With automatic platform updates, standardized deployments, a powerful API, and bottleneck crushing automation—the platform empowers organizations to have the capabilities of an elite DevOps team at one-tenth the cost. Founded in 2015, the company is headquartered in Reno, NV. To learn more, please visit https://cycle.io.
A Q&A With Cycle
Q: Give us some background on Cycle. What problem are you trying to solve? What is the problem with existing infrastructure like Kubernetes and similar platforms?
A: We believe that cloud orchestration, and more specifically, container orchestration, is broken. Instead of focusing on the most common use cases, too many companies in this space end up chasing a never-ending list of edge cases and features. This lack of focus yields overly complex, messy, and fragile deployments that cause more stress than peace of mind.
Cycle takes a bold approach to container orchestration: focus on the 80%. Most companies don’t require or need a majority of the features and capabilities that come with platforms like Kubernetes. By staying hyper-focused on where we spend our time, and prioritizing quality over quantity of features, the Cycle platform has become an incredibly stable and powerful foundation for companies of all sizes.
The goal for Cycle is simple: Be the most developer-friendly container platform for businesses.
We believe that developers should be able to easily utilize containers on their own infrastructure without having to deal with the mundane tasks commonly involved with managing infrastructure and orchestration platforms. Cycle makes that a reality.
Q: What are the major challenges developers face today with container orchestration?
A: Complexity. Most deployments today require piecing together a wide variety of tools and technologies for even the most basic use cases. Instead of adopting solutions that empower development teams without increasing bloat, too many organizations are chasing hype and adding to their ever-growing pile of technical debt.
Additionally, most of today’s platforms require constant hand-holding. While there are many tools that help reduce the amount of time to get a first deployment online, we see a major drop-off when it comes to “day two operations.” What happens when there’s a new update, or security patch? How often are these updates released? How many developers/DevOps personnel are needed to apply these updates? How much downtime should be expected?
With Cycle, we reduce complexity by providing a single turnkey solution for developers—no extra tools required. Additionally, our platform is built around the idea that everything should be capable of automatically updating. On average, we deploy platform updates to our customer infrastructure once every 10-14 days.
Q: Before announcing our integration, had you implemented Backblaze internally? Could you expand on that?
A: Absolutely! In the early days of Cycle, we made the decision to standardize and flatten all container images into raw OCI images. This was our way of hedging against different technologies going through hype waves. At the time, Docker was the “top dog” in the container space, but there was also CoreOS and a number of others.
In an effort to control as much of the vertical stack as possible, we decided that, beyond flattening images, we should also store the resulting images ourselves. This way, if Docker Hub or another container registry unexpectedly changed their APIs or pricing, our platform and users would be insulated from those changes. As you can see, we put a lot of thought into limiting external variables.
Given the above, we knew that having an infinitely scalable storage solution was critical for the platform. After testing a number of providers, Backblaze B2 was the perfect fit for our needs.
Fast-forward to today, where all base images are stored on Backblaze.
Q: As alluded to above, you’re currently building a customer-facing integration. What’s the new feature? Have customers been asking for this?
A: We’re excited to announce that Cycle now supports automatic backups for stateful containers. A number of customers have been requesting this feature for a while, and we’re thrilled to finally release it.
At Cycle, data ownership is very important to us—our platform was built specifically to empower developers while ensuring they, and their employers, retain full ownership and control of their data. This automated backups feature is no different. By associating a Backblaze B2 Application Key with Cycle, organizations maintain ownership of their backups.
Q: What sparked the decision to partner and integrate with Backblaze specifically?
A: While there are a number of reasons this partnership makes a ton of sense, our top three would be:
Performance: As we were testing different Object Storage providers, Backblaze B2 routinely was one of the most reliable while also offering solid upload and download speeds. We also liked that Backblaze B2 wasn’t overly bloated with features—it had exactly what we needed.
Cost: As Cycle continues to grow, and our storage needs increase, it’s incredibly important to keep costs in check. Beyond predictable and transparent pricing, the base cost per terabyte of data is impressive.
Team: Working with the Backblaze team has been incredible. From our early conversations with Nilay, Backblaze’s VP of Sales, to the expanded conversations with much of the Backblaze team today, everyone has been super eager to help.
Q: Backblaze and Cycle share a similar vision in making life easier for developers. It goes beyond just dollars saved, though that is a bonus, but what is it about “simple” that is so important? Infrastructure of the future?
A: Good question! There are a number of different ways to answer this, but for the sake of not turning this into an essay, let’s focus purely on what we, the Cycle team, refer to as “think long term.”
Anyone can make a process complex, but it takes a truly focused effort to keep things simple. Rules and guidelines are needed. You need to be able to say “No” to certain feature requests and customer demands. To provide a polished and clean experience, you have to be purposeful in what you’re building. Far too often, companies chase short-term wins while sacrificing long-term gains, but true innovation takes time. In a world where most tech companies are built off venture capital, long-term gambles and innovations are deprioritized.
From the way both Cycle and Backblaze have been funded, to a number of other aspects, we’ve both positioned our companies to take those long-term risks and focus on simplifying otherwise complex processes. It’s part of our culture; it’s who we are as teams and organizations.
As we talk about developers, we see a common pattern. Developers always love testing new technologies, they enjoy getting into the weeds and tweaking all of the variables. But, as time goes on, developers shift away from “I want to control every variable” into more of a “I just want something that works and gets out of the way.” This is where Cycle and Backblaze both excel.
Q: What can we look forward to as this partnership matures? Anything exciting on the horizon for Cycle that you’d like to share?
A: We’re really looking forward to expanding our partnership with Backblaze as the Cycle platform continues to grow. Combining powerful container orchestration with a seamless object storage solution can empower developers to more easily build the next generation of products and services.
While we now host our base container images and customer backups on Backblaze, this is still just the start. We have a number of really exciting features launching in 2022 that’ll further strengthen the partnership between our companies and the value we can provide to developers around the world.
Interested in learning more about the developer solutions available with Backblaze B2 Cloud Storage? Join us, free, for Developer Day on October 21 for announcements, tech talks, lessons, SWAG, and more to help you understand how B2 Cloud Storage can work for you. Register today.
When we first published our Drive Stats data back in February 2015, we did it because it seemed like the Backblaze thing to do. We had previously open-sourced our Storage Pod designs, so publishing the Drive Stats data made sense for two reasons. The first was transparency. We were publishing our Drive Stats reports based on that data and we wanted people to trust the accuracy of those reports.
The second reason was that it gave people, many of whom are much more clever than us, the ability to play with the data, and that’s what they did. Over the years, the Drive Stats data has been used in projects ranging from training sets for college engineering and statistics students to being a source for scientific and academic papers and articles. In fact, using Google Scholar, you will find 105 papers and articles since 2018 where the Drive Stats data was cited as a source.
One of those papers is “Interpretable predictive maintenance for hard drives” by Maxime Amram et al., which describes methodology for predicting hard drive failure using various machine learning techniques. At the center of the research team’s analysis is the Drive Stats data, but before we dive into that paper, let’s take a minute or two to understand the data itself.
Join us for a live webinar on October 14th at 10 a.m. PST to hear directly from Daisy Zhuo, PhD, co-founding partner of Interpretable AI, and Drive Stats author, Andy Klein, as they discuss the findings from the research paper. Register today.
The Hard Drive Stats Data
Each day, we collect operational data from all of the drives in our data centers worldwide. There is one record per drive per day. As of September 30, 2021, there were over 191,000 drives reporting each day. In total, there are over 266 million records going back to April 2013, and each data record consists of basic drive information (model, serial number, etc.) and any SMART attributes reported by a given drive. There are 255 pairs of SMART attributes possible, but only 20 to 40 are reported by any given drive model.
The data reported comes from each drive itself and the only fields we’ve added manually are the date and the “failure” status: zero for operational and one if the drive has failed and been removed from service.
Predicting Drive Failure
Using the SMART data to predict drive failure is not new. Drive manufacturers have been trying to use various drive-reported statistics to identify pending failure since the early 1990s. The challenge is multifaceted, as you need to:
Determine a SMART attribute, or group of attributes, that predicts imminent failure for a given drive model.
Keep the false positive rate to a minimum.
Test the hypothesis on a large enough data set with both failed and operational drives.
Even if you can find a combination of SMART attributes which fit the bill, you are faced with two other realities:
By the time you determine and test your hypothesis, is the drive model being tested still being manufactured?
Is the prediction really useful? For example, what is the value of an attribute that is 95% accurate at predicting failure, but only occurs in 1% of all failures?
The paper sets out to examine whether the application of algorithms for interpretable machine learning would provide meaningful insights about short-term and long-term drive health and accurately predict drive failure in the process.
Analysis of Backblaze and Google Methodologies
Backblaze (Beach/2014 and Klein/2016) and Google (Pinheiro et al./2007) analyzed the SMART data they collected to determine drive failure in a population of hard drives. Each identified similar SMART attributes that correlated to some degree with drive failure:
5 (Reallocated Sectors Count).
187 (Reported Uncorrectable Errors).
188 (Command Timeout)—Backblaze only.
197 (Current Pending Sectors Count).
198 (Offline Uncorrectable Sectors Count).
Given that both were univariate analyses (i.e., only considering the correlation between drive failure and a single metric at a time), the results, while useful, left open the opportunity for validation using more advanced methods. That’s where machine learning comes in.
Predicting Long-term Drive Health
For their analysis in the paper, Interpretable AI focused on the Seagate 12TB drive, model ST12000NM0007, for the period ending Q1 2020, and analyzed the daily records of over 35,000 drives.
An overview of the methodology used by Interpretable AI to predict long-term drive health is as follows:
Compute, for each drive on each day, the remaining useful life until failure.
Use that data combined with the daily SMART data to train an Optimal Survival Tree that models how the remaining life of each drive is affected by the SMART values.
Each SMART attribute is represented by one node in the Optimal Survival Tree. Each node splits into two child nodes as determined by the analysis. Hard drives are recursively routed down the tree based on their SMART values, with the node value and hierarchy adjusting for each drive that passes through. The model learns the best values for each node as more drives pass through, until all the drives are divided into collections which best represent their collective data. Below is the Optimal Survival Tree for predicting long-term health (“Interpretable predictive maintenance for hard drives,” Figure 3).
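To illustrate how drives are routed down such a tree, here is a toy sketch. The structure, thresholds, and leaf labels are invented for illustration; they are not the fitted values from the paper:

```python
# A toy two-level tree in the spirit of an Optimal Survival Tree.
# Thresholds and labels here are illustrative, not the paper's results.
def route(drive):
    """Route one drive's SMART readings to a leaf label."""
    if drive["smart_5_raw"] <= 0:
        if drive["smart_197_raw"] <= 0:
            return "healthy"        # no reallocated or pending sectors
        return "watch"              # pending sectors but none reallocated yet
    if drive["smart_187_raw"] > 10:
        return "failing-soon"       # reallocations plus many reported errors
    return "degraded"

print(route({"smart_5_raw": 0, "smart_197_raw": 0, "smart_187_raw": 0}))
```

In the real model, the split attributes and threshold values at each node are learned from the training data rather than hand-written.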
At the top of the tree is SMART 5 (raw value), which is deemed the most important SMART value to determine drive failure in this case, but it is not alone. Traveling down the branches of the tree, other SMART attributes become part of a given branch, adding or subtracting their value towards predicting drive health along the way. The analysis leads to some interesting results that univariate analysis cannot see:
Poor Drive Health: The path to Node 11 is the set of conditions (SMART attribute values) that if present, predicts the failure of the drive within 50 days.
Healthy Drives: The path to Node 18 is the set of conditions (SMART attribute values) that predicts that at least half of the drives that meet those conditions will not fail within two years.
Predicting Short-term Drive Health
The same methodology used for predicting long-term drive health is used for predicting short-term drive health as well. The difference is that for the short-term use case, only data from a 90-day period is used. In this case, this is the data from Q1 2020 for the same Seagate drives analyzed in the previous section. The goal is to determine the ability to predict hard drive failures 30, 60, and 90 days out.
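One way to frame the 30-, 60-, and 90-day targets is as binary labels computed from each drive’s observed time to failure. The sketch below is an illustration of that framing, not code from the paper; drives that never failed in the window are treated as right-censored:

```python
def short_term_labels(days_to_failure):
    """Binary labels: does the drive fail within each horizon?
    days_to_failure is None for drives still operational at the end
    of the observation window (right-censored)."""
    horizons = (30, 60, 90)
    if days_to_failure is None:
        return {h: False for h in horizons}
    return {h: days_to_failure <= h for h in horizons}

print(short_term_labels(45))  # {30: False, 60: True, 90: True}
```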
The paper also discussed a second methodology which treats the analysis as a classification problem that occurs in a specific time window. The results are similar to the Optimal Survival Tree methodology for the period and as such, that methodology is not discussed here. Please refer to the paper for additional details.
Applying the Optimal Survival Tree methodology to the Q1 2020 data, we find that while SMART 5 is still the primary factor, the contribution of other SMART attributes has changed versus the long-term health analysis. For example, SMART 187 is more important, while SMART 197 has diminished in value so much that it is not considered important in assessing the short-term health of the drives. Below is the Optimal Survival Tree for predicting short-term health (“Interpretable predictive maintenance for hard drives,” Figure 6).
Traveling down the branches of the tree, we can once again see some interesting results that univariate analysis cannot see:
Poor Drive Health: Nodes 21 and 24 identify a set of conditions (SMART attribute values) that, if present, predict almost certain failure within 90 days.
Healthy Drives: Nodes 12 and 15 identify a set of conditions (SMART attribute values) that, if present, identify healthy drives with little chance of failure within 90 days.
How Much Data Do You Need?
One of the challenges we noted earlier with predicting drive failure was the amount of data needed to achieve the results. In predicting the long-term health of the drives, the Interpretable AI researchers first used three years of drive data. Once they determined their results, they reduced the data used to one year, 557,936 observations, and then randomly resampled 50,000 observations from that initial data set to train their model with the remainder used for testing.
The resulting Optimal Survival Tree was similar to the long-term health survival tree in that they were still able to identify nodes where accelerated failure was evident.
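The resample-then-hold-out step can be sketched in a few lines. The counts match those cited above, but the records themselves are stand-ins for the real daily observations:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

observations = list(range(557_936))          # stand-ins for daily records
train = random.sample(observations, 50_000)  # resample without replacement
held_out = set(observations) - set(train)    # remainder used for testing

print(len(train), len(held_out))
```

Sampling without replacement keeps the training and testing sets disjoint, so the model is always evaluated on observations it never saw.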
There have been many other papers attempting to apply machine learning techniques to predicting hard drive failure. The Interpretable paper was the focus of this post as I found their paper to be approachable and transparent, two traits I admire in writing. Those two traits are also defining characteristics of the word, “interpretable,” so there’s that. As for the other papers, a majority are able to predict drive failure at various levels of accuracy and confidence using a wide variety of techniques.
Hopefully it is obvious that predicting drive failure is possible, but will never be perfect. We at Backblaze don’t need it to be. If a drive fails in our environment, there are a multitude of backup strategies in place. We manage failure every day, but tools like those described in the Interpretable paper make our lives a little easier. On the flip side, if you trust your digital life to one hard drive or SSD, forget about predicting drive failure—assume it will happen today and back up your data somewhere, somehow before it does.
92% time savings. 71% storage cost savings. 3.7 times lower total cost than the competition.
These are just some of the findings Enterprise Strategy Group (ESG) reported in a proprietary, economic validation analysis of Backblaze B2 Cloud Storage. To develop these findings, the ESG analysts did their proverbial research. They talked to customers. They validated use cases. They used our product and verified the accuracy of our listed pricing and cost calculator. And then, they took those results along with the knowledge they’ve gathered over 20 years of experience to quantify the bona fide benefits that organizations can expect by using the Backblaze B2 Cloud Storage platform.
Their findings are now available to the public in the new ESG Economic Validation report, “Analyzing the Economic Benefits of the Backblaze B2 Cloud Storage Platform.”
If you’re already a B2 Cloud Storage customer—first, thank you! You can feel even more confident in your choice to work with Backblaze. Have a colleague or contact who you think would benefit from working with Backblaze, too? Feel free to share the report with your network.
Backblaze B2 Cloud Storage enables thousands of developers—from video streaming applications like Kanopy to gaming platforms like Nodecraft—to easily store and use data in the cloud. Those files are always available for download either through a browser-compatible URL or APIs.
Backblaze supports two different suites of APIs—the Backblaze B2 Native API and the Backblaze S3 Compatible API. Sometimes, folks come to our platform knowing exactly which API they need to use, but the differences between the two are not always immediately apparent. If you’re not sure which API is best for you and your project, we’re explaining the difference today.
B2 Native API vs. S3 Compatible API: What’s the Diff?
Put simply, an application programming interface, or API, is a set of protocols that lets one application or service talk to another. They typically include a list of operations or calls developers can use to interact with said application (inputs) and a description of what happens when those calls are used (outputs). Both the B2 Native and S3 Compatible APIs handle the same basic functions:
Bucket Management: Creating and managing the buckets that hold files.
Upload: Sending files to the cloud.
Download: Retrieving files from the cloud.
List: Listing buckets and files for checking, selection, and comparison.
The main difference between the two APIs is that they use different syntax for the various calls. We’ll dig into other key differences in more detail below.
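As a concrete illustration of the syntax difference, here is a rough mapping of the four basic operations to representative call names in each suite (each API has many more calls than shown here):

```python
# Representative call names for the same four operations.
CALLS = {
    "create bucket": {"b2_native": "b2_create_bucket", "s3": "CreateBucket"},
    "upload":        {"b2_native": "b2_upload_file", "s3": "PutObject"},
    "download":      {"b2_native": "b2_download_file_by_name", "s3": "GetObject"},
    "list":          {"b2_native": "b2_list_file_names", "s3": "ListObjectsV2"},
}

for operation, names in CALLS.items():
    print(f"{operation}: {names['b2_native']} <-> {names['s3']}")
```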
The Backblaze B2 Native API
The B2 Native API is Backblaze’s custom API that enables you to interact with Backblaze B2 Cloud Storage. We’ve written in detail about why we developed our own custom API instead of just implementing an S3-compatible interface from the beginning. In a nutshell, we developed it so our customers could easily interact with our cloud while enabling Backblaze to offer cloud storage at a quarter of the price of S3.
To get started, simply create a Backblaze account and enable Backblaze B2. You’ll then get access to your Application Key and Application Key ID. These let you call the B2 Native API.
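As a sketch of that first step, the B2 Native API’s b2_authorize_account call takes your Application Key ID and Application Key via HTTP Basic auth. The credentials below are placeholders, and the actual network call is left commented out:

```python
import base64

API_URL = "https://api.backblazeb2.com/b2api/v2/b2_authorize_account"

def basic_auth_header(key_id, app_key):
    """b2_authorize_account expects HTTP Basic auth built from the
    Application Key ID and the Application Key."""
    token = base64.b64encode(f"{key_id}:{app_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder credentials -- substitute your own from the App Keys page.
headers = basic_auth_header("yourKeyId", "yourApplicationKey")

# Uncomment to perform the actual call (requires the `requests` package):
# import requests
# auth = requests.get(API_URL, headers=headers).json()
# print(auth["apiUrl"], auth["authorizationToken"])
```

The response includes an authorization token and the API URL to use for subsequent B2 Native API calls.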
The Backblaze S3 Compatible API
Over the years since we launched Backblaze B2 and the B2 Native API, S3 compatibility was one of our most requested features. When S3 launched in 2006, it solved a persistent problem for developers—provisioning and maintaining storage hardware. Prior to S3, developers had to estimate how much storage hardware they would need for their applications very accurately or risk crashes from having too little or, on the flip side, paying too much as a result of over-provisioning storage for their needs. S3 gave them unlimited, scalable storage that eliminated the need to provision and buy hardware. For developers, the service was a game-changer, and in the years that followed, the S3 API essentially became the industry standard for object storage.
In those years as well, other brands (That’s us!) entered the market. AWS was no longer the only game in town. Many customers wanted to move from Amazon S3 to Backblaze B2, but didn’t want to rewrite code that already worked with the S3 API.
The Backblaze S3 Compatible API does the same thing as the B2 Native API—it allows you to interact with B2 Cloud Storage—but it follows the S3 syntax. With the S3 Compatible API, if your application is already written to use the S3 API, B2 Cloud Storage will just work, with minimal code changes on your end. The launch of the S3 Compatible API provides developers with a number of benefits:
You don’t have to learn a new API.
You can use your existing tools that are written to the S3 API.
Performance will be just as good and you’ll get all the benefits of B2 Cloud Storage.
To get started, create a Backblaze account and head to the App Keys page. The Master Application Key will not be S3 compatible, so you’ll want to create a new key and key ID by clicking the “Add a New Application Key” button. Your new key and key ID will work with both the B2 Native API and S3 Compatible API. Just plug this information into your application to connect it to Backblaze B2.
If your existing tools are written to the S3 API—for example, tools like Cohesity, rclone, and Veeam—they’ll work automatically once you enter this information. Additionally, many tools—like Arq, Synology, and MSP360—were already integrated with Backblaze B2 via the B2 Native API, but now customers can choose to connect with them via either API suite.
B2 Native and S3 Compatible APIs: How Do They Compare?
Beyond the syntax, there are a few key differences between the B2 Native and Backblaze S3 Compatible APIs, including:
Key management is unique to the B2 Native API. The S3 Compatible API does not support key management. With key management, you can create, delete, and list keys using the following calls: b2_create_key, b2_delete_key, and b2_list_keys.
Some of our Alliance Partners asked us if we had an SDK they could use. To answer that request, we developed an official Java SDK and Python SDK on GitHub so you can manage and configure your cloud resources via the B2 Native API.
What Is an SDK?
SDK stands for software development kit. It is a set of software development tools, documentation, libraries, code snippets, and guides that come in one package developers can install. Developers use SDKs to build applications for the specific platform, programming language, or system the SDK serves.
By default, access to private buckets is restricted to the account owner. If you want to grant access to a specific object in that bucket to anyone else—for example, a user or a different application or service—they need proper authorization. The S3 Compatible API and the B2 Native API handle access to private buckets differently.
The S3 Compatible API handles authorization using pre-signed URLs. It requires the user to calculate a signature (code that says you are who you say you are) before sending the request. Using the URL, a user can either read an object, write an object, or update an existing object. The URL also contains specific parameters like limitations or expiration dates to manage their usage.
The S3 Compatible API supports pre-signed URLs for downloading and uploading. Pre-signed URLs are built into AWS SDKs. They can also be generated in a number of other ways including the AWS CLI and AWS Tools for PowerShell. You can find guides for configuring those tools here. Many integrations, for example, Cyberduck, also offer a simple share functionality that makes providing temporary access possible utilizing the underlying pre-signed URL.
The B2 Native API figures out the signature for you. Instead of a pre-signed URL, the B2 Native API requires an authorization token to be part of the API request itself. The b2_authorize_account request gets the authorization token that you can then use for account-level operations. If you only want to authorize downloads instead of all account-level operations, you can use the request b2_get_download_authorization to generate an authorization token, which can then be used in the URL to authenticate the request.
With the S3 Compatible API, you upload files to a static URL that never changes. Our servers automatically pick the best route for you that delivers the best possible performance on our backend.
The B2 Native API requires a separate call to get an upload URL. This URL can be used until it goes stale (i.e., returns a 50X error), at which point another upload URL must be requested. In the event of a 50X error, you simply retry the request with the new upload URL. The S3 Compatible API does this for you in the background on our servers, which makes the experience of using the S3 Compatible API smoother.
This difference in the upload process is what enabled Backblaze B2 to offer substantially lower prices at the expense of a little bit more complexity. You can read more about that here.
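The get-URL-then-retry pattern described above can be sketched with the network calls stubbed out. The stub functions below stand in for b2_get_upload_url and the actual upload request; only the retry logic is the point:

```python
# Sketch of the B2 Native upload pattern, with network calls stubbed
# out so the retry logic is easy to follow.
class StaleURL(Exception):
    """Raised when an upload URL returns a 50X and must be replaced."""

def make_stub_server(failures_before_success):
    state = {"remaining": failures_before_success, "urls_issued": 0}

    def get_upload_url():               # stands in for b2_get_upload_url
        state["urls_issued"] += 1
        return f"https://pod-000.example/upload/{state['urls_issued']}"

    def upload(url, data):              # stands in for the upload request
        if state["remaining"] > 0:
            state["remaining"] -= 1
            raise StaleURL(url)         # simulate a 503 from a busy pod
        return {"fileName": data["name"], "uploadedVia": url}

    return get_upload_url, upload

def upload_with_retry(get_upload_url, upload, data, max_attempts=5):
    for _ in range(max_attempts):
        url = get_upload_url()          # fetch a fresh URL each attempt
        try:
            return upload(url, data)
        except StaleURL:
            continue                    # 50X: get a new URL and retry
    raise RuntimeError("upload failed after retries")

get_url, do_upload = make_stub_server(failures_before_success=2)
result = upload_with_retry(get_url, do_upload, {"name": "photo.jpg"})
print(result["uploadedVia"])  # the third URL succeeds
```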
Try It Out for Yourself
So, which API should you use? In a nutshell, if your app is already written to work with S3, if you’re using tools that are written to S3, or if you’re just unsure, the S3 Compatible API is a good choice. If you’re looking for more control over access and key management, the B2 Native API is the way to go. Either way, now that you understand the differences between the two APIs you can use to work with B2 Cloud Storage, you can align your use cases to the functionality that best suits them and get started with the API that works best for you.
If you’re ready to try out the B2 Native or S3 Compatible APIs for yourself, check out our documentation:
Solid-state drives (SSDs) continue to become more and more a part of the data storage landscape. And while our SSD 101 series has covered topics like upgrading, troubleshooting, and recycling your SSDs, we’d like to test one of the more popular declarations from SSD proponents: that SSDs fail much less often than our old friend, the hard disk drive (HDD). This statement is generally attributed to SSDs having no moving parts and is supported by vendor proclamations and murky mean time between failure (MTBF) computations. All of that is fine for SSD marketing purposes, but for comparing failure rates, we prefer the Drive Stats way: direct comparison. Let’s get started.
What Does Drive Failure Look Like for SSDs and HDDs?
In our quarterly Drive Stats reports, we define hard drive failure as either reactive, meaning the drive is no longer operational, or proactive, meaning we believe that drive failure is imminent. For hard drives, much of the data we use to determine a proactive failure comes from the SMART stats we monitor that are reported by the drive.
SMART, or S.M.A.R.T., stands for Self-monitoring, Analysis, and Reporting Technology and is a monitoring system included in HDDs and SSDs. The primary function of SMART is to report on various indicators related to drive reliability with the intent being to anticipate drive failures. Backblaze records the SMART attributes for every data and boot drive in operation each day.
As with HDDs, we also record and monitor SMART stats for SSDs. Different SSD models report different SMART stats, with some overlap. To date, we record 31 SMART attributes related to SSDs. 25 are listed below.
Read Error Rate
Reallocated Sectors Count
Hardware ECC Recovered
Uncorrectable Sector Count
Power Cycle Count
UltraDMA CRC Error Count
Soft Read Error Rate
Soft Read Error Rate
SSD Wear Leveling Count
Data Address Mark Errors
Unexpected Power Loss Count
Wear Range Delta
Used Reserved Block Count Total
Media Wearout Indicator
Unused Reserved Block Count Total
Good Block Count
Program Fail Count Total
Total LBAs Written
Erase Fail Count
Total LBAs Read
Unsafe Shutdown Count
For the remaining six (16, 17, 168, 170, 218, and 245), we are unable to find their definitions. Please reach out in the comments if you can shed any light on the missing attributes.
All that said, we are just at the beginning of using SMART stats to proactively fail an SSD. Many of the attributes cited are drive model or vendor dependent. In addition, as you’ll see, there have been a limited number of SSD failures, which limits the amount of data we have for research. As we add more SSDs to our farm and monitor them, we intend to build out our rules for proactive SSD drive failure. In the meantime, all of the SSDs which have failed to date are reactive failures, that is: They just stopped working.
Comparing Apples to Apples
In the Backblaze data centers, we use both SSDs and HDDs as boot drives in our storage servers. In our case, describing these drives as boot drives is a bit of a misnomer, as they are also used to store log files for system access, diagnostics, and more. In other words, these boot drives are regularly reading, writing, and deleting files in addition to their named function of booting a server at startup.
In our first storage servers, we used hard drives as boot drives as they were inexpensive and served the purpose. This continued until mid-2018 when we were able to buy 200GB SSDs for about $50, which was our top-end price point for boot drives for each storage server. It was an experiment, but things worked out so well that beginning in mid-2018 we switched to only using SSDs in new storage servers and replaced failed HDD boot drives with SSDs as well.
What we have are two groups of drives, SSDs and HDDs, which perform the same functions, have the same workload, and operate in the same environment over time. So naturally, we decided to compare the failure rates of the SSD and HDD boot drives. Below are the lifetime failure rates for each cohort as of Q2 2021.
SSDs Win… Wait, Not So Fast!
It’s over, SSDs win. It’s time to turn your hard drives into bookends and doorstops and buy SSDs. But before you start playing dominoes with your hard drives, there are a couple of things to consider which go beyond the face value of the table above: average age and drive days.
The average age of the SSD drives is 14.2 months, and the average age of the HDD drives is 52.4 months.
The oldest SSD drives are about 33 months old and the youngest HDD drives are 27 months old.
Basically, the timelines for the average age of the SSDs and HDDs don’t overlap very much. The HDDs are, on average, more than three years older than the SSDs. This places each cohort at very different points in their lifecycle. If you subscribe to the idea that drives fail more often as they get older, you might want to delay your HDD shredding party for just a bit.
By the way, we’ll be publishing a post in a couple of weeks on how well drive failure rates fit the bathtub curve; SPOILER ALERT: old drives fail a lot.
The other factor we listed was drive days, the number of days all the drives in each cohort have been in operation without failing. The wide disparity in drive days causes a big difference in the confidence intervals of the two cohorts as the number of observations (i.e. drive days) varies significantly.
To create a more accurate comparison, we can attempt to control for the average age and drive days in our analysis. To do this, we can take the HDD cohort back in time in our records to see where the average age and drive days are similar to those of the SSDs from Q2 2021. That would allow us to compare each cohort at the same point in their life cycles.
Turning back the clock on the HDDs, we find that using the HDD data from Q4 2016, we were able to create the following comparison.
Suddenly, the annualized failure rate (AFR) difference between SSDs and HDDs is not so massive. In fact, each drive type is within the other’s 95% confidence interval window. That window is fairly wide (plus or minus 0.5%) because of the relatively small number of drive days.
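For reference, Backblaze computes AFR as failures per drive-year, and the width of the confidence interval shrinks as drive days accumulate. The sketch below uses a simple normal approximation to the Poisson count and illustrative numbers, not the actual Drive Stats figures:

```python
import math

def annualized_failure_rate(failures, drive_days):
    """AFR: failures per drive-year, expressed as a percentage."""
    drive_years = drive_days / 365
    return failures / drive_years * 100

def poisson_ci_95(failures, drive_days):
    """Rough 95% confidence half-width via a normal approximation
    to the Poisson failure count -- a simplification for the sketch."""
    drive_years = drive_days / 365
    return 1.96 * math.sqrt(failures) / drive_years * 100

# Illustrative numbers, not the actual Drive Stats figures.
afr = annualized_failure_rate(failures=17, drive_days=600_000)
print(f"AFR: {afr:.2f}% +/- {poisson_ci_95(17, 600_000):.2f}%")
```

With only a couple of thousand drive-years of observation, the half-width works out to roughly half a percentage point, which is why the young-SSD and turned-back-HDD intervals overlap.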
Where does that leave us? We have some evidence that when both types of drives are young (14 months on average in this case), the SSDs fail less often, but not by much. But you don’t buy a drive to last 14 months, you want it to last years. What do we know about that?
Failure Rates Over Time
We have data for HDD boot drives that go back to 2013 and for SSD boot drives going back to 2018. The chart below is the lifetime AFR for each drive type through Q2 2021.
As the graph shows, beginning in 2018, the HDD boot drive failure rate accelerated. This continued in 2019 and 2020 before leveling off in 2021 (so far). To state the obvious, as the age of the HDD boot drive fleet increased, so did the failure rate.
One point of interest is the similarity in the two curves through their first four data points. For the HDD cohort, year five (2018) was where the failure rate acceleration began. Is the same fate awaiting our SSDs as they age? While we can expect some increase in the AFR as the SSDs age, will it be as dramatic as the HDD line?
Decision Time: SSD or HDD
Where does that leave us in choosing between buying an SSD or an HDD? Given what we know to date, using the failure rate as a factor in your decision is questionable. Once we controlled for age and drive days, the two drive types were similar, and the difference was certainly not enough by itself to justify the extra cost of purchasing an SSD versus an HDD. At this point, you are better off deciding based on other factors: cost, speed required, electricity, form factor requirements, and so on.
Over the next couple of years, as we get a better idea of SSD failure rates, we will be able to decide whether or not to add the AFR to the SSD versus HDD buying guide checklist. Until then, we look forward to continued debate.
Join us for our inaugural Backblaze Developer Day on October 21st. This event is jam-packed with announcements, tech talks, lessons, SWAG, and more to help you understand how Backblaze B2 Cloud Storage can work for you. And it’s free: the good news just keeps coming.
What’s New: Learn about brand new and recent partner alliances and integrations to serve more of your development needs.
Tour With Some Legends: Join Co-founder and CTO, Brian Wilson, and our Director of Evangelism, Andy Klein (of Drive Stats fame), for a decidedly unscripted, sure-to-be unexpected tour through the B2 Cloud Storage architecture, including APIs, SDKs, and CLI.
How to Put It Together: Get a rapid demo of one of our popular B2 Cloud Storage + compute + CDN combinations, delivering functionality that will free your budget and your tech to do more.
A Panel on Tomorrow’s Development: The sunset of monolithic, closed ecosystems is here, so join us to discuss the future of microservices and interoperability.
What Comes Next: Finally, hear what’s next on the B2 Cloud Storage roadmap—and tell our head of product what you think should come next.
And so much more: We’ll be posting updates on partners and friends that will be joining us, as well as information about getting SWAG from the inaugural Backblaze Developer Day. Keep an eye on this space, and register today for free to grab your spot. We’ll see you on October 21st.
Just before 5 a.m. on May 7, 2021, a ransom note from the DarkSide ransomware syndicate flashed across a Colonial Pipeline Co. employee’s computer screen. More than a month later, Joseph Blount, Colonial’s CEO, testified before Congress that recovery from the attack was still not complete.
Like most companies, Colonial maintained backups they could use to recover. They eventually did rely on those backups, but not before paying the ransom and attempting to decrypt files using a tool provided by the cybercriminals that was too painfully slow to wait for. The lesson here: Backup is vital as part of a disaster recovery plan, but the actual “recovery”—how you get your business back online using that backup data—is just as important. Few businesses can survive the hit of weeks or months spent offline.
Maybe your backup system is well-established, but the lift of recovery planning is overshadowed by more immediate demands on your team when you’re already stretched too thin. Or maybe you’ve looked into disaster recovery solutions, but they’re not right-sized for your business. Either way, if you’re using Veeam to manage backups, you can now consider your disaster recovery playbook complete.
Announcing Backblaze Instant Recovery in Any Cloud
We’re excited to announce Backblaze Instant Recovery in Any Cloud—an infrastructure as code (IaC) package that makes ransomware recovery into a VMware/Hyper-V based cloud easy to plan for and execute for any IT team.
“Most businesses know that backing up is critical for disaster recovery. But we see time and again that organizations under duress struggle with getting their systems back online, and that’s why Backblaze’s new solution can be a game-changer.”
—Mark Potter, CISO, Backblaze
When you discover a ransomware attack on your network, scrambling to get servers up and running is the last thing you want to worry about. Instant Recovery in Any Cloud provides businesses an easy and flexible path to as-soon-as-possible disaster recovery in the event of a ransomware attack by spinning up servers in a VMware/Hyper-V based cloud.
Teams can use an industry-standard automation tool to run a single command that quickly brings up an orchestrated combination of on-demand servers, firewalls, networking, storage, and other infrastructure in phoenixNAP. Data for your VMware/Hyper-V based environment is drawn over immediately from Veeam® Backup & Replication backups, so businesses can get back online with minimal disruption or expense. Put simply, it’s an on-demand path to a rock-solid disaster recovery plan.
Below, we’ll explain the why and how of this solution, but if you’d like to learn more or ask questions, we’re offering a webinar on October 20 at 10 a.m. PST and have a full Knowledge Base article available here.
From 3-2-1 to Immutable Backups to Disaster Recovery
For many years, the 3-2-1 backup strategy was the gold standard for data protection. However, bad actors have become much more sophisticated. It’s a bad day for cybercriminals when victims can just restore and move on with their lives, so they started targeting backups alongside production data.
The introduction of Object Lock functionality allowed businesses to protect their backups from ransomware by making them immutable, meaning they are safe from encryption or deletion in case of attack. With immutable backups, you can access a working, uncorrupted copy of your data. But that is only the first step. The critical second step is making that data useful. The time to get back to business after an attack often depends on how quickly backup data can be brought online—more than any other factor. Few businesses can afford to go weeks or months offline, but even with immutable backups, they may be forced to if they don’t have a plan for putting those backups to work.
“For more than 400,000 Veeam customers, flexibility around disaster recovery options is essential,” said Andreas Neufert, Vice President of Product Management, Alliances at Veeam. “They need to know not only that their backups are safe, but that they’re practically usable in their time of need. We’re very happy to see Backblaze offering instant restore for all backups to VMware and Hyper-V based cloud offerings to help our joint customers thrive during challenging times.”
Disaster Recovery That Fits Your Needs
The most robust disaster recovery plans are built for enterprise customers and enterprise budgets. They typically involve paying for compute functionality on an ongoing basis as an “insurance policy” to have the ability to quickly spin up a server in case of an attack. Instant Recovery in Any Cloud opens disaster recovery to a huge number of businesses that were left without affordable solutions.
For savvy IT teams, this is essentially a cut and paste setup—an incredibly small amount of work to architect a recovery plan. The solution is written to work with phoenixNAP, and can be customized for other compute providers without difficulty.
Just as virtualized environments allow you to pay for processing power as needed rather than provisioning dedicated servers for every part of your business, Backblaze Instant Recovery in Any Cloud allows you to provision compute power on demand in a VMware and Hyper-V based cloud. The capacity is always there from Backblaze and phoenixNAP, but you don’t pay for it until you need it.
The code also allows you to implement a multi-cloud, vendor-agnostic disaster recovery approach rather than relying on just one platform—you can spin up a server in any compute environment you prefer. And because the recovery is entirely cloud based, you can execute this recovery plan from anywhere you’re able to access your accounts. Even if your whole network is down, you can still get your recovery plan rolling.
How It Works and What You Need
Instant Recovery in Any Cloud works through a pre-built code package IT staff can use to create a digital mirror image of the infrastructure they have deployed on-premises in a VMware or Hyper-V based cloud. The code package is built in Ansible, an open-source tool which enables IaC. Running an Ansible playbook allows you to provision and configure infrastructure and deploy applications as needed. All components are pre-configured within the script. In order to get started, you can find the appropriate instructions on our GitHub page.
If you haven’t already, you also need to set up Backblaze B2 Cloud Storage as part of a Scale-Out Backup Repository with Immutability in Veeam using the Backblaze S3 Compatible API, and your data needs to be backed up securely before deploying the command.
With ransomware on the rise, disaster recovery is more important than ever. With tools like Object Lock and Backblaze Instant Recovery in Any Cloud, it doesn’t have to be complicated and costly. Make sure your backups are immutable with Object Lock, and keep the Ansible playbook and instructions on hand as part of a bigger ransomware recovery plan so that you’re ready in the event of an attack. Simply spin up servers and restore backups in a safe environment to minimize disruption to your business.
If you already have Veeam, you can create a Backblaze B2 account to get started. It’s free, easy, and quick, and you can stay one step ahead of cybercriminals.
Growing up, a common conversation I overheard between my mom and grandma went like this: “Do you have that recipe from our great aunt?”
“Sure, I do. Let me email it to you. Also, I have some funny jokes to forward along.”
My mom, and I’m guessing many others too, have kept every email they’ve ever received from their parents, family, and friends because they don’t want to lose the funny jokes, family recipes, announcements, and more that they’ve sent back and forth over the years. In the moment, our email accounts can feel like a day-to-day concern, or worse, a repository of spam. But for most of us, every email account holds some amount of treasured memories.
Nowadays, my mom has many different email accounts. But, she wanted to find a way to keep all of those emails she loved without having to keep the accounts themselves. She also found that she had so many emails in her inbox that she was running out of storage space.
Buying more storage can become expensive and doesn’t guarantee that those emails are safely backed up and remain accessible. One option is to download the emails, back them up somewhere reliable and accessible for the long term, and then delete them in the client.
If you’re looking for a way to keep old emails or just want to clean up your inbox storage because you’re running out of space, this post walks you through the steps of how to download your data from various email platforms.
We’ve gathered a handful of guides to help you protect content across many different platforms—including social media, sync services, and more. We’re working on developing this list—please comment below if you’d like to see another platform covered.
If you know the exact email you want to make sure you have a copy of, it’s very easy to download it from any client.
For this example, we are going to use Gmail, but this should work for most email clients. If you run into an email client where it doesn’t work, feel free to note it in the comments below and we’ll update the guidance.
Log in to the email address you would like to download a copy of the email from. (I’m using Gmail.)
Find the email you would like to download. For this example, I will be downloading a family recipe sent by my mom.
Select “Print” in the top right corner.
When the print screen appears, save the email as a PDF on to your computer.
And presto, you have a copy of that email you would like to save forever.
This process can be a bit tedious, as you have to download each email one at a time. It can also be tough if you don’t remember how to find the email you’d like to save. If that’s the case, there are also ways to download all of your email data.
While there are other file formats you can download individual emails in, we strongly recommend that—if you want to be able to manage or search your old emails—you download all of your emails (which we explain how to do below). This provides the data in easily manageable formats and is far more time efficient.
Getting Serious: How to Download All of Your Emails
Below, I explain how to download your email data from two top free email websites. Don’t see the email platform you use? Leave a comment below and we’ll work to add material to help you!
How to Download Outlook Emails
A lot of people use Outlook for various reasons, often for work or school. If you downloaded Microsoft 365, then you also have access to Outlook email. To export your email from Outlook and save it as a PST file (don’t worry about what a PST file is quite yet, we’ll explain below), do the following:
Sign in to your Outlook account.
Click the gear button in the upper right corner.
Scroll down on the settings panel to “View all Outlook settings.”
Click on the button with a gear symbol labeled “General.”
Select “Privacy and data” on the second panel that appears.
On the right side, there will be a button labeled “Export mailbox.” Select this button.
The button will grey out and a status update will appear to let you know the download is in progress.
When the export is complete, we’ve found that Outlook may not send a notification to your inbox. If that’s the case, repeat steps one through five and navigate to the “Download here” button, which only appears once your emails are ready to download.
Click “Download here” to download your PST file with all of your email data. (Scroll past the section on downloading Gmail data to learn what to do with this file type.)
How to Download Gmail Emails
In a previous post, we explained how to download all of your data from Google Drive. But, if you are just looking to download your Gmail data, here’s how to do exactly that.
Log in to the Google Account you’d like to download your emails from.
Once signed in, you will want to go to: myaccount.google.com.
Go to the “Privacy & personalization” section and select “Manage your data & privacy.”
On the next screen it takes you to, you’ll want to scroll down to a section labeled “Data from apps and services you use.” Here, you’ll select “Download your data” in the “Download or delete your data” section.
From here, it’ll take you to the Google Takeout page. On this page, you’ll be given the option to select to download all of your Gmail emails and also your Google Chrome bookmarks, transactions from various Google services, locations stored in Google Maps, Google Drive contents, and other Google-related products you may use.
If you want to download all your Google data, keep everything selected. If you just want a copy of your emails, deselect all and only select Google Mail to be downloaded.
Click “Next step” at the bottom of the page.
On the next page, you’ll choose the file type you would like your data delivered in, how often you would like this export to happen (for example, if you would like your data exported every six months, this is where you can set that up), and the destination you would like your data sent to. For this example, I picked a one-time export.
Select “Create export” and you’ll see an export in progress page.
An email will arrive in a few minutes, hours, or a couple of days (depending on how much data you are downloading), informing you that your Google data is ready to download. Once you have this email in your inbox, you have one week to download the data. Click the “Download your files” button in the email and you will have a ZIP file or a TGZ file (depending on which type you picked) on your computer with your Google data.
When you open the ZIP, you will have all of your emails (including spam and trash) in an MBOX file.
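If you’d rather not dig through the archive by hand, a few lines of Python (using only the standard library) can pull the MBOX files out of a Takeout ZIP for you. This is a minimal sketch; the function name and file paths are placeholders, so point them at your own download:

```python
import zipfile

def extract_mbox_files(takeout_zip, dest_dir):
    """Extract only the .mbox files from a Google Takeout ZIP archive.

    Returns the archive paths of the files that were extracted.
    """
    extracted = []
    with zipfile.ZipFile(takeout_zip) as archive:
        for name in archive.namelist():
            # Takeout archives mix in HTML indexes and metadata;
            # keep only the mailbox files themselves.
            if name.endswith(".mbox"):
                archive.extract(name, dest_dir)
                extracted.append(name)
    return extracted

# Example (placeholder paths):
# extract_mbox_files("takeout-20240101.zip", "email-archive")
```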
What Is a PST File? What Is an MBOX File? How Do I Open Them?
A PST file is used by Microsoft programs to store data and items such as email messages, calendar events, and contacts. By moving items to an Outlook Data File (also known as a PST file) saved to your computer, you can free up storage space in the mailbox on your mail server. If you would like to make this file usable by other email clients, here’s a guide on how to convert your newly downloaded PST file to an MBOX file.
An MBOX file is an email mailbox saved in a mail storage format used for organizing email messages in a single text file. It saves messages in a connected order where each message is stored after another, starting with the “From” header.
To open an MBOX file, you will need a third-party email program, such as Apple Mail or Mozilla Thunderbird. We recommend Mozilla Thunderbird, as it’s a free email client available on both Macs and PCs.
This step is helpful if you would like to view the emails you downloaded. It also helps if you were looking to take the emails you downloaded and move them to a new inbox. For example, if you are afraid the email account you’ve used to sign up for everything over the past 10 years is vulnerable, you can download the emails from that inbox and move them to a new inbox using Apple Mail or Mozilla Thunderbird.
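If you’re comfortable with a little scripting, you can also peek inside an MBOX file without a mail client at all: Python’s standard-library `mailbox` module reads the format directly. This is a minimal sketch (the filename in the example is a placeholder for whatever your export is called):

```python
import mailbox

def summarize_mbox(mbox_path):
    """Return a (sender, subject) pair for every message in an MBOX archive."""
    archive = mailbox.mbox(mbox_path)
    return [(message["From"], message["Subject"]) for message in archive]

# Example (placeholder filename):
# for sender, subject in summarize_mbox("All mail Including Spam and Trash.mbox"):
#     print(sender, "-", subject)
```

This can be a quick way to confirm your export actually contains the emails you care about before you archive it.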
Great, now you’ve downloaded your emails. You’re not done yet! Read on to learn how to safely back up your emails so that you can hold on to them forever.
Use Backblaze B2 Cloud Storage Buckets to Keep an Organized Archive of Your Emails
Once you have your email data downloaded to your computer, it’s best practice to make sure that you have at least one copy of your data stored off-site in the cloud. Storing it in the cloud alongside two local copies ensures you never lose all those important emails.
A simple way to do this is with Backblaze B2, where you can upload and organize your files in buckets. To upload your files to a bucket, follow the steps below.
Sign in to your Backblaze account.
In the left-hand column, select “Buckets” under the section “B2 Cloud Storage.”
Click on the button “Create a bucket.”
In the next step, you will need to create a unique name for your bucket and choose some settings, such as whether it will be public or private and whether you would like to enable encryption.
Once the bucket is created, you’ll be taken to a page where you can upload your files. Drag and drop the email files you want to upload. If the MBOX file is too large to drag and drop into the bucket, you can use a third-party integration like Cyberduck to facilitate the upload. You can read the guide to using Cyberduck for Backblaze B2 bucket uploads here.
Alternatively, if you’re not worried about organizing or working with your email archives and just want to know they’re stored away safely, you can keep your downloaded files on your computer. If you follow this route, remember to sign up for a backup service that makes a copy of all of your computer’s files in the cloud. In the case of any data loss, a service like Backblaze Computer Backup would have a copy of all of your data ready for you to restore. If your email applications are locally stored on your computer, Backblaze will automatically back up your emails. You can learn more about how this works here. This approach will take up more room on your computer, but it’s a simple path to peace of mind.
From here, your MBOX file with all your emails from your family, friends, and reminders to yourself (We all have those!) will be safe in the cloud. If you ever want to pull out the archive and read the emails you saved, remember to use the third-party tools mentioned above. What’s important is that you have all your memories stored safely with a provider who will ensure their redundancy and reliability.
Have questions or want to see a guide for an email client we didn’t mention above? Feel free to let us know in the comments!