
Raising Prices is Hard

Post Syndicated from Yev original https://www.backblaze.com/blog/raising-prices-is-hard/

computer and a crossed out $5 replaced by 6

Raising prices should not be an 18-month project.

Everyone is faced with the decision to raise prices at some point. It sucks, but in some cases you have to do it. Most companies, especially SaaS businesses, will look at their revenue forecasts, see a dip, run a calculation predicting the difference between the revenue increase and how many customers might leave, and then raise prices if the math looks favorable. Backblaze is not most companies — here’s how we did it.

In February of 2019, we made the announcement that one month later, the prices for our Personal Backup and Business Backup services would be going up by $1: our first price increase for our Computer Backup service since launching it over a decade earlier. What was announced in February 2019 actually started in December 2016, more than two years before the price increase would take effect. Why the long wait? We wanted to make sure that we did it right, not just mechanically (there's a lot of billing code that has to change), but also in how we communicated with our customers and took them through the process. Oh, and a big reason for the delay was our main competitor leaving the consumer space, but more on that later.

In this post I'll dive into our process for how we wanted the price increase to go, why we decided to build the extension program for existing customers, what went into our communication strategy, and how customers reacted to the price increase, including a look at churn numbers.

Is Raising Prices a Smart Move?

Raising prices, especially on a SaaS product where you've built a following, is never an easy decision. A ton of factors come into play when weighing the best course of action, if any. Each factor needs to be considered individually and then as a whole to determine whether the price increase will actually benefit the business long term.

Why Raise Prices?

There are many reasons why companies raise prices. Typically it’s to either increase revenue or adjust to the market costs (the total cost associated with providing goods or services) in their sector. In our case it was the latter. In the price increase announcement, we discussed our reasoning in-depth, but it boiled down to two things: 1) adjusting to the market cost of storage (it was no longer decreasing at the rate it was when we first launched the product), and 2) we had spent years enhancing the service and making it easier for people to store more and more data with us, thereby increasing our costs.

Backblaze Average Cost per Drive Size
Cost Per Drive (2009-2017) — No Longer Decreasing Rapidly

One of the core values of Backblaze is to make backup astonishingly easy and affordable. Maintaining a service that is easy to use, has predictable pricing, and takes care of the heavy lifting for our customers was and is very important to us. When we started considering increasing prices we knew that we were going to be messing with the affordable part of that equation, but it was time for us to adjust to the market.

How to Raise Prices?

Most companies say that they love their customers, and many actually do. When we first started discussing exactly how we were going to raise prices we rejected the easiest path, which was to create a pricing table, update the website, and flip a switch. That was the easy way, but it was important for us to do something for the customers who have trusted us with their important files and memories throughout the years. We would still need to build out the pricing table (fun fact: from 2008 to 2017 our prices were hard-coded) but we started thinking about creating an extension program for our existing customers and fans.

The Extension Program

The extension program was a way for existing Backblaze users to prepay for one year of service, essentially delaying their price increase. They would buy 12 months of backup credits for $50 for each computer on their account, and after those credits were used up, the new prices would go into effect on their next renewal. It was a way to say thank you to our existing customers, but there was just one problem — it didn’t exist.

Building the extension program became a six-month project in and of itself. First we needed to build a crediting system. Then we needed to build the mechanism for our customers to actually buy that block of credits and have them applied to their account. After that, we'd need FAQs, confirmation emails, and website changes to help explain the program to our customers. The work became a full-time job for a handful of our most senior engineers before we were ready to put it through QA testing. The long development time was a large point of consideration, but there were also financial implications to weigh.

The extension program was great for customers, but a mixed bag for Backblaze. Why? By allowing folks to sign up for an extension we were essentially delaying their price increase, and therefore delaying our ability to collect the additional revenue. While that was not ideal, the extension program brought in additional revenue from people purchasing those extensions, which was good. However, since those purchases were for credits, that additional revenue was deferred, and we still had to provide the service. So, while good from a cash flow perspective (we moved up about $2M in cash), we had to be very careful about how we accounted for and earmarked that money.
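For the accounting-minded, here's a toy sketch of how that deferred revenue behaves: cash arrives up front, but revenue is only "earned" as the service is delivered month by month. The numbers below are just the $50 extension price spread over 12 months, nothing from our actual books:

```python
# Toy model of deferred revenue recognition for a $50, 12-month
# extension credit. Illustrative only; not actual accounting figures.
cash_collected = 50.0      # paid up front, per computer
months_of_service = 12

recognized_per_month = cash_collected / months_of_service  # ~$4.17/month

def deferred_after(months):
    """Revenue still deferred (a liability) after `months` of service delivered."""
    return cash_collected - months * recognized_per_month

print(round(recognized_per_month, 2))  # 4.17 recognized each month
print(deferred_after(3))               # 37.5 still deferred after 3 months
```

This is why cash flow and revenue tell two different stories: all $50 shows up as cash on day one, but most of it sits on the books as an obligation to keep the service running.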

Continuing to Provide Value

Extensions were only part of the puzzle. We didn’t want customers to feel like we were simply raising prices to line our pockets. Our goal is to continue making backup easy and affordable, and we wanted to show our fans that we were still actively developing the service. The simplest way to show forward progress is to make…forward progress. We decided that before the announcement date we needed to have a product release that substantially improved the backup service, and that’s when we started to plan Backblaze Version 5.0, what we dubbed the Rapid Access Release.

Adding to the development time of creating extensions were the projects to speed up both the backup and restore functions of the Backblaze app (those changes were good for customers, but actually increased our cost of providing the service). In addition, customers could now preview, access, and share backed up files by leveraging our B2 Cloud Storage product. To top it off, we strengthened our security by adding TOTP (time-based one-time passwords) as a two-factor verification method. All those features were rolled up into version 5.0, which shipped a few weeks before the price increase announcement, scheduled for August 22nd, 2017.

Oversharing

Another of our core values is open communication, which we equate to being as open as possible. If you have followed Backblaze over the years, you know that we’ve open sourced our storage pod design, shared our hard drive failure statistics, and have told entertaining stories about how we survived the Thailand drive crisis, and the time we were almost acquired. Most companies would never talk about topics like these, but we don’t shy away from hard conversations. In keeping with that tradition, we made the decision to be honest in our announcement about why we were raising prices (market costs and our own enhancements). We also made the decision to not mention one valid reason: inflation.

Our price back in 2008 was $5/month. Adjusted for inflation, in 2019 that would be around $5.96, so our price increase to $6 was right in line with the inflation rate. So why not talk about it? We wanted the conversation to be about our business and the benefits we provide for our customers in building a service that they feel is a good value. Bringing up global economics seemed like an odd tactic, especially considering that our new price wasn't even keeping up with inflation, and ultimately customers did that math on their own.
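The inflation math above is easy to sanity check. Here's a back-of-the-envelope version, assuming an average US inflation rate of about 1.6% per year over 2008 to 2019 (that exact rate is an assumption for illustration, not pulled from official CPI tables):

```python
# Sanity check of the inflation-adjusted price quoted above.
# The ~1.6%/year average inflation rate is an assumption.
base_price = 5.00        # monthly price at launch, 2008
annual_inflation = 0.016
years = 2019 - 2008

adjusted = base_price * (1 + annual_inflation) ** years
print(f"${adjusted:.2f}")  # roughly $5.95, in line with the ~$5.96 quoted
```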

Disaster and Opportunity Strike

We started down the increase path in 2016. In 2017, we designed and released version 5.0, we built and tested our extension program, we lined up our blog post, we wrote up FAQs, and we created customer service emails to let people know what was happening. After all that, we were ready to announce the following month’s price increase at 10am Pacific Time on August 22nd, 2017.

On August 22nd, at 8am, we pulled the plug and cancelled the announcement.

What Happened?

Early that morning, news broke that our main competitor, Crashplan, was leaving the consumer backup space. You may be saying: Wait a minute, a main competitor is leaving the market and you have a mechanism to increase your prices in place; that sounds like the perfect day to raise prices! Nope. Another one of our values is to be fair and good. Raising prices on the day consumers found out that there were fewer choices in the market felt predatory and ultimately gross. Once we saw the news, we got in a room, quickly decided that we couldn't raise prices for at least six months, and instead wrote a quick blog post inviting orphaned customers to give us a try.

The year following Crashplan's announcement we saw a huge increase in customers, which was simultaneously good and bad. It was good because of the increased revenue from our newfound customers, but less ideal from an operations perspective, as we had not been anticipating an influx of customers. In fact, we had been planning for an increase in churn to coincide with the price increase announcement we had just cancelled. That meant we had to scramble to deploy enough storage to house all of the new incoming data.

We wouldn’t revisit the price increase until a year after the Crashplan announcement.

That decision was not without financial repercussions. Put simply, we gave up $10 per customer per year. And, the decision affected not only our existing customers on August 22nd, but also all of those we would gain over the coming months and years. While this doesn’t factor in potential churn and other variables, when the size of our customer base is fully accounted for, the revenue left on the table was significant. In purely financial terms, raising prices on the day when the industry started having fewer options would have been the right financial decision, but not the right Backblaze decision.
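To get a feel for the scale of that decision, here's a purely hypothetical version of the math. The customer count and delay length below are made up for illustration; the post deliberately doesn't disclose the real figures:

```python
# Hypothetical illustration of the forgone-revenue math above.
# The customer count is invented; only the $10/year figure is from the post.
delayed_per_customer_per_year = 10  # $60/yr new price minus $50/yr old price
customers = 500_000                  # purely illustrative, not a real figure
years_delayed = 1.5                  # Aug 2017 plan slipped to early 2019

forgone = delayed_per_customer_per_year * customers * years_delayed
print(f"${forgone:,.0f}")  # $7,500,000 under these made-up assumptions
```

Even with a modest assumed customer base, the number gets large quickly, which is what the post means by "significant."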

Hindsight Is 20/20

Looking back, releasing version 5.0 earlier that month was a happy accident. What originally was intended to show forward progress to our existing customers was now being looked at by a lot of new customers and prospects as well. The speed increase that we built into the app as part of the release made it possible for people exiting Crashplan’s service to transition to us and get fully backed up more quickly. Because these were people who understood the importance of keeping a backup, having no downtime in their coverage was a huge benefit.

Picking Up Where We Left Off — The Price Increase

Around August of 2018, we decided that enough time had passed and we were comfortable dusting off our price increase playbook. The process proved harder than we thought: we uncovered edge cases that we had missed the first time around, which, in hindsight, was another happy accident.

The Problem With Long Development Gaps

The new plan was to announce the price increase in December and raise prices in January 2019. When we started unpacking our playbook and going over the plan, we realized that some of the simple decisions we had made over a year earlier were either flawed or outdated. A good example was how we would treat two-year licenses. At one point in the original project spec, we decided that we would simply slide the renewal date out by one year for anyone with a two-year license who purchased an extension. Upon thinking about it again, we realized this would cause a lot of customer issues, and we had to redo the entire plan for two-year customers, a large part of our install base.

While we did have project sheets and spec documents, we realized that we had lost a lot of the in-the-moment knowledge that comes with active project development. We found ourselves constantly asking things like, "Why did we make this choice?" and "Are we sure that's what we meant here?" The long gap between the original project start date and the day we picked it back up meant that the ramp-up time for the extension program was a lot longer than we expected. We realized that we wouldn't be able to announce the price increase in December with prices going up at the start of the year: we needed more time, both to QA the extension program and to create version 6.0.

Version 6.0

Part of the original playbook was to provide value for customers by releasing version 5.0, and we wanted to stick to that plan. We started thinking about what it would take to have another meaningful release, and version 6.0, the Larger Longer Faster Better release, was born.

First, we doubled the size of physical media restores, allowing people to get back more of their data more quickly and affordably (this was an oft-requested change, and an example of a good-for-the-customer feature that incurs extra costs for Backblaze). We leveraged B2 Cloud Storage again and built in functionality that would allow people to save their backed up data to B2, building off of the previous year's preview and share capabilities. We made the service more efficient, increased backup speeds again, and added network management tools. Looking past the Mac and PC apps, we also revamped our mobile offerings by refreshing our iOS and Android apps. All of that added development time again, and our new timetable for the price increase was a February 2019 announcement, with the price increase going into effect in March.

Wait a Minute…

You might be saying: you released version 5.0 in the run-up to a price increase, scrapped that increase, and then released version 6.0 in the run-up to another one. Does that mean every new version number will be followed by a price increase? Absolutely not. The first five versions of Backblaze didn't precipitate a price increase, and we're already hard at work on version 7.0 with no price increases planned on the horizon.

Price Increase Announcement

We've all been subjected to price increases that were clandestine, abruptly announced and put into effect the same day, or simply not well explained. That never feels great, and we really wanted to give customers one month of warning before the prices actually increased. That would give people time to buy the extensions we had worked so hard to build. On the flip side, if people were on monthly licenses, or had a renewal date coming up after the price increase went into effect, it would give them an opportunity to cancel their service ahead of the increase. Of course we didn't want anyone to leave, but we realized that any change to our subscription plans would cause a stir, and people who were more price-sensitive would likely have second thoughts about renewing.

Another goal was to be as communicative as possible. We wanted our customers to know exactly what we were doing, why we were doing it, and we didn’t want anyone to fall through the cracks of not knowing that this was happening. That meant writing a blog post, creating emails for all Personal Backup customers and Group administrators, and even briefing some members of the press and reviews sites who’d need to update their pricing tables. It might seem silly to pitch the press on a price increase (something that is usually a negative event), but we’ve had some wonderful relationships develop with journalists over the years and it felt like the right thing to do to let them know ahead of time.

Once all of those things were back in place, it was time to press go, this time for real. The price increase was announced on February 12th and went into effect March 12th.

The Reaction & Churn Analysis

Customer Reaction — Plan for the Worst, Hope for the Best

We didn't expect the response to be positive. Planning is great, but you never know exactly what's going to happen until it's actually happening. We were ready with support responses, FAQs, and a communications plan in case the response was overwhelmingly negative, but we were lucky, and that didn't turn out to be the case.

Customers wrote to us and said, "Finally." Some went out of their way to express how relieved they were that we were finally raising prices, having been concerned that we had been burning cash over the years. Other responses made it clear that we had communicated the necessity for the increase and priced it correctly, saying that a $1 increase after 12 years is more than fair.

When the press picked up the story, they had similar sentiments. Yes, it was news that Backblaze was increasing prices, but the reports were positive and very fair. One of the press members that we sent the news to early responded with: “Seems reasonable…”

There were of course some people who were angry and annoyed, and while some of our customers did come to our defense, we did see an increase in churn.

Churn Rate Analysis

Over the next few months we monitored churn carefully to see the true impact on our existing customers from the price increase.

Every time a person leaves Backblaze we send one final email thanking them for their time with us, wishing them well, and asking if they have any feedback. Those emails go directly into our ticketing system, where I read all of them every month to get a picture of why people are leaving Backblaze. Sometimes the reasons are ones we cannot address, but when we can, they go on our roadmap. After the price increase, we saw about a 30 percent increase in people saying that they were leaving for billing reasons. It makes sense that more people would cite the price increase as they leave Backblaze, but we also had a lot of positive feedback about the issues we addressed in versions 5.0 and 6.0.

What about the people who didn't write back to our email? We dove deep into the analytics and found that our typical consumer backup churn rate in the six months before announcing the price increase was about 5.38 percent. The six months after the announcement saw a churn rate of 5.75 percent, which indicates a relative increase in churn of about 7 percent. In our estimates, we anticipated that number being a bit higher for the first year and then coming back down to historical averages after the bulk of our customers had their first renewal at the new price.

Annualized churn rate
Increase in churn of about 7 percent from Jan 2018 to July 2019
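The "about 7 percent" figure above is the relative change between the two churn rates, not a 7-point jump. The arithmetic:

```python
# Relative change between pre- and post-announcement churn rates.
before = 5.38  # churn %, six months before the announcement
after = 5.75   # churn %, six months after the announcement

relative_increase = (after - before) / before * 100
print(f"{relative_increase:.1f}%")  # ~6.9%, i.e. "about 7 percent"
```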

New Customer Acquisition

People leaving the service after you increase prices is only half of the equation. The other half lies in new customer acquisition. In a competitive market, raising prices can cause prospective customers to look elsewhere when comparing products. This number was a bit hard for us to calculate, since the year prior our biggest consumer-service competitor had left the market. The best comparable we had was to look at 2017 versus 2019. We went back to the historical data and found that, even with the increase and six months afterwards, the two-year growth rate of our Personal Backup service was a healthy 42 percent.

Lessons Learned From Raising Prices

We learned a lot during this whole process. One of the most important lessons is to treat your customers well and not take them for granted. At the outset we'd sometimes say things like, "It's only a dollar, who is going to care?" and we'd quickly nip those remarks in the bud and take the process seriously. A dollar may not seem like much, but to a lot of people, especially our global customers, it was an increase that they felt, as evidenced by churn going up by about 7 percent.

Some might think, "Well, a 7 percent increase in churn isn't so bad, you could have raised prices even more," but that's the wrong lesson to take away. Any changes to the plan we had in place could have yielded very different results.

Extensions

The extension program was a hit with our existing customers and a welcome option for many. Taking the time to build it resulted in over 30,000 Backblaze Personal Backup accounts buying extensions, which brought in about $1.8M in revenue. There is a flip side to this: if those accounts had simply renewed at the increased price, we would have made $2.2M, meaning roughly $366,000 of forgone revenue. But that assumes all of those customers would have renewed. Some may have churned, and by buying an extension they signaled to us that they were willing to stay with us even after the price increase went into effect for them.
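The tradeoff above can be roughly reconstructed. The per-computer count below is an assumption, inferred by working backwards from the rounded revenue figures (the post says 30,000+ accounts, and some accounts have multiple computers; extensions and renewals are priced per computer):

```python
# Rough reconstruction of the extension-program tradeoff.
# `computers` is an inferred assumption, not a figure from the post.
extension_price = 50   # one-year extension, per computer
renewal_price = 60     # new yearly price, per computer
computers = 36_600     # assumed: 30,000+ accounts, some multi-computer

extension_revenue = computers * extension_price  # ~$1.8M actually collected
renewal_revenue = computers * renewal_price      # ~$2.2M if all had renewed
print(renewal_revenue - extension_revenue)       # 366000 "left on the table"
```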

Being Engaged Helps

Having a good foundation of community and an open dialog with your customers is helpful. When we made the announcement, we weren’t met with the anger that we were somewhat anticipating. In large part this was due to our customers trusting us, and knowing that this was not something we were doing because we simply wanted to make a few extra bucks.

When your community trusts you, they are willing to hear you out even when the news is not great. Build a good rapport with your customers and it will hopefully buy you the benefit of the doubt once or twice, but be careful not to abuse that privilege.

Over-Sharing Helps

Similar to having a good community relationship, explaining the why of what is happening helps educate customers and continues to strengthen your connection with them. When I discussed the price increase on reddit and in the blog post comments, people who had grown accustomed to our answering questions were comfortable asking some pretty hard ones, and they appreciated thoughtful, long-form responses. I cannot stress enough how much we enjoy the conversations we have on these platforms. We learn a lot about who is using Backblaze, what their pain points are, and whether there's something we can do to help them. These conversations really do affect how we create and consider our product roadmap.

Final Thoughts on Raising Prices

Rarely does anyone want to increase their prices — especially when it affects customers who have been with them for a decade. Many companies don’t want to discuss their decision making process or playbooks, but there are a lot of organizations that face the need to raise prices. Unfortunately, there are few resources to help them thread the needle between something they have to do, and something that their current and future customers will understand and accept.

I wanted to share our journey through the price increase process in hopes that people find it both informative and interesting. Thinking about your customers first may sound like a trope, but if you take the time to really consider their reactions, and what you can do to thank your existing customers or clients, you can be successful, or at the very least mitigate some of the downside.

If you’ve ever raised prices at your company, or have examples of companies that have done a great job (or a bad job), we’d love to hear those examples in the comments below!

The post Raising Prices is Hard appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Stranger in a Tech Land

Post Syndicated from Nicole Perry original https://www.backblaze.com/blog/stranger-in-a-tech-land/

Silicon Valley Passport Stamp

I never considered myself to be extremely techy. My family and friends would occasionally come to me with computer problems that I could solve with the help of Google and FAQ pages, but I would not go much further than that.

When I came across Backblaze's job posting for a marketing position, I applied mostly on a lark. My background looked similar to the job description, but I never expected to hear anything back. When I received the email that I had gotten an interview with Backblaze, my initial thought was: how? Backblaze was the type of company I feared when I first arrived in the Bay Area from Ohio. Worry began to bubble up in me about being in a room filled with people who were all smarter or more experienced than me. My family teased me that I would walk into my first day and it would be an episode of Punk'd.

Silicon Valley carries a stigma for almost everyone who isn't located in the Bay Area. We assume it will be filled with competitive geniuses and be too expensive to survive in. "You may be Ohio smart, but that's a different kind of smart," is something I have heard in actual conversations. And I, too, had similar thoughts as I considered trying to fit in at a startup.

Having watched the HBO show Silicon Valley, my expectations of how my future coworkers would act could not have been more different from reality. The show portrays Silicon Valley workers as smug, arrogant, anti-social coders who are ready to backstab their coworkers on the way to the top of the industry. At Backblaze, I have found the opposite to be true: everyone has been supportive, fun to be around, and team-oriented.

Now that I live in Silicon Valley, rather than watching it, I have to say I let the intimidation get to me. One of my favorite quotes that helps me during times of high stress is by the Co-Founder of Lumi Labs, Marissa Mayer, in reference to how she’s succeeded in her career, “I always did something I was not ready to do. I think that’s how you grow.” That’s an important thing to remember when you are starting a new job, adventure, or experience: On the other side of the challenge, no matter how it goes, you’ll have grown. Here are some of the things that I have learned during my first few weeks of growth at Backblaze and living in the Bay Area. Hopefully, they’ll help you to try something you’re not ready for, too.

Nine Lessons Learned

Don’t be Thrown by Big Words

Write them down. Google is your best friend. There may be words, companies, software, acronyms, and a bunch of other things that come up in meetings that you have never heard before. Take notes. Research them, and look into how they apply to your company or role. Most of the time it's something you already knew about but didn't know the correct word or phrase for.

No One Understands Your Thought Process

Show your work. Something that's hard when it comes to talking to your boss or your team is that they cannot see inside your brain. Talk them through how you arrived at your thoughts and conclusions. There are plenty of times when I have had to remind myself to over-explain an idea so the people around me could understand and help.

You Don’t Have to Know Everything

Own up to your lack of knowledge. This one is tough, because when you are new to a position you have an inclination not to lift the veil and reveal yourself as someone who doesn't know something. It could be something as big as not knowing how a core feature works or as small as not knowing how the coffee machine works. When you are new to a company you are never going to walk in knowing exactly how everything works. The moment you don't understand something, admit it, and most people you work with will help, or at least point you in the direction of where and how to learn.

Living in Someone’s Backyard in an In-Law Suite is Normal

Look everywhere before choosing where to live. Moving to Silicon Valley while trying to establish a stable income sounds impossible, and indeed it is very hard. When talking to people before my move, everyone would say, "Ugh, the housing payments!" This was not encouraging to hear. But that doesn't mean there aren't creative ways to lower your housing costs. While living with roommates to drive housing costs down, I found a family that wanted to make a little extra money and had an unused in-law suite. While it's not owning your own home or having a full-size apartment to yourself, it's different, and that can be fun! Plus, like with roommates, you never know what connections you will make.

Not Understanding the Software Doesn’t Mean You Don’t Get It

You have the experience, use it. I came to Backblaze with a very surface-level idea of coding, no idea about the different ways to back up my computer, and no knowledge of how the cloud actually works, but I did understand that it was important to have backups. Just because you don’t understand how something works initially doesn’t mean you don’t understand the value it has. You can use that understanding to pitch ideas and bring an outside perspective to the group.

Talk to People with Important Titles

They have all been in your shoes. The CEOs, presidents, directors, and managers of the world were all in your position at one point. Now they hold those titles, so obviously they did something right. Get to know them and what they enjoy. They are human, and they would love to share their wisdom with you, whether it's about the company, their favorite food places nearby, or where they go to relax.

Don’t Let Things Slip

Follow up. If someone said they were going to show you something in a meeting or in the hallway, send them a note and see if you should schedule a chat. Have a question during an important meeting that you didn’t want to ask? Follow up! Someone mentioned they knew of a class that could teach something you wanted to learn? Make sure they send you a link! All work environments can feel busy but most people would rather you follow up with them rather than let them forget about something that might be important later on.

Soak In the Environment

Be a fly on the wall. Watch how the office operates and how people talk to each other. Get an idea of when people leave for lunch, when to put your headphones on, and what’s normal to wear around the office. Also, pay attention to who talks in meetings and what it is like to pitch an idea. Observing before fully immersing yourself helps you figure out where your experience fits in and how you can best contribute.

Know Yourself and Know Your Worth

You can figure it out. It may take time, patience, research, and understanding to stand confidently in a room full of experts in the field and pitch ideas. You’ve done it before. Maybe when you were little and asked your parents to take the training wheels off your bicycle? It took a few falls but you figured it out and you can do it again.

We hope that this was a little bit helpful or informative or at least entertaining to read! Have you ever joined a company in an industry you weren’t familiar with? What are some tips or hints that you wish you had known? Share them in the comments below!

The post Stranger in a Tech Land appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Announcing Our First European Data Center

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/announcing-our-first-european-data-center/

city view of Amsterdam, Netherlands

Big news: Our first European data center, in Amsterdam, is open and accepting customer data!

This is our fourth data center (DC) location and the first outside of the western United States. As longtime readers know, we have two DCs in the Sacramento, California area and one in the Phoenix, Arizona area. As part of this launch, we are also introducing the concept of regions.

When creating a Backblaze account, customers can choose whether that account’s data will be stored in the EU Central or US West region. The choice made at account creation time will dictate where all of that account’s data is stored, regardless of product choice (Computer Backup or B2 Cloud Storage). For customers wanting to store data in multiple regions, please read this knowledge base article on how to control multiple Backblaze accounts using our (free) Groups feature.

Whether you choose EU Central or US West, your pricing for our products will be unchanged:

  • For B2 Cloud Storage — it’s $0.005/GB/Month. For comparison, storing your data in Amazon S3’s Ireland region will cost ~4.5x more
  • For Computer Backup — $60/Year/Computer for our industry leading, unlimited data backup for desktops/laptops
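To see where the ~4.5x comparison comes from, here’s a quick back-of-the-envelope calculation. The S3 Ireland rate below is an illustrative assumption for the standard storage tier at the time of this post, not an official rate sheet:

```python
# Back-of-the-envelope storage cost comparison.
# Prices are per GB per month; the S3 figure is an assumed value.
B2_PRICE_PER_GB_MONTH = 0.005    # Backblaze B2
S3_PRICE_PER_GB_MONTH = 0.0225   # assumed S3 Ireland standard tier

def monthly_cost(gigabytes, price_per_gb):
    """Monthly storage cost in USD for a given number of gigabytes."""
    return gigabytes * price_per_gb

data_gb = 10_000  # e.g. 10 TB of backups
b2 = monthly_cost(data_gb, B2_PRICE_PER_GB_MONTH)
s3 = monthly_cost(data_gb, S3_PRICE_PER_GB_MONTH)
print(f"B2: ${b2:.2f}/month, S3: ${s3:.2f}/month, ratio: {s3 / b2:.1f}x")
```

With these assumed rates, 10 TB costs $50/month on B2 versus $225/month on S3, a 4.5x difference.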

Later this week we will be publishing more details on the process we undertook to get to this launch. Here’s a sneak preview:

  • Wednesday, August 28: Getting Ready to Go (to Europe). How do you even begin to think about opening a DC that isn’t within any definition of driving distance? For the vast majority of companies on the planet, simply figuring out how to get started is a massive undertaking. We’ll be sharing a little more on how we thought about our requirements, gathered information, and the importance of NATO in the whole equation.
  • Thursday, August 29: The Great European (Non) Vacation. With all the requirements done, research gathered, and preliminary negotiations held, there comes a time when you need to jump on a plane and go meet your potential partners. For John & Chris, that meant 10 data center tours in 72 hours across three countries — not exactly a relaxing summer holiday, but vitally important!
  • Friday, August 30: Making a Decision. After an extensive search, we are very pleased to have found our partner in Interxion! We’ll share a little more about the process of narrowing down the final group of candidates and selecting our newest partner.
If you’re interested in learning more about the physical process of opening up a data center, check out our post on the seven days prior to opening our Phoenix DC.

New Data Center FAQs:

Q: Does the new DC mean Backblaze has multi-region storage?
A: Yes, by leveraging our Groups functionality. When creating an account, users choose where their data will be stored. The default option will store data in US West, but to choose EU Central, simply select that option in the pull-down menu.

Region selector
Choose EU Central for data storage

If you create a new account with EU Central selected and have an existing account that’s in US West, you can put both of them in a Group, and manage them from there! Learn more about that in our Knowledge Base article.

Q: I’m an existing customer and want to move my data to Europe. How do I do that?
A: At this time, we do not support moving existing data between Backblaze regions. While it is on our roadmap, we do not have an estimated release date for that functionality. However, any existing customer can create a new account in the EU Central region and upload data to it, then either keep or delete the previous Backblaze account in US West. Customers with multiple accounts can administer those accounts via our Groups feature. For more details on how to do that, please see this Knowledge Base article.

Q: Finally! I’ve been waiting for this and am ready to get started. Can I use your rapid ingest device, the B2 Fireball?
A: Yes! However, as of the publication of this post, all Fireballs will ship back to one of our U.S. facilities for secure upload (regardless of account location). By the end of the year, we hope to offer Fireball support natively in Europe (so a Fireball with a European customer’s data will never leave the EU).

Q: Does this mean that my data will never leave the EU?
A: Any data uploaded by the customer does not leave the region it was uploaded to unless at the explicit direction of the customer. For example, restores and snapshots of data stored in Europe can be downloaded directly from Europe. However, customers requesting an encrypted hard drive with their data on it will have that drive prepared from a secure U.S. location. In addition, certain metadata about customer accounts (e.g. email address for your account) reside in the U.S. For more information on our privacy practices, please read our Privacy Policy.

Q: What are my payment options?
A: All payments to Backblaze are made in U.S. dollars. To get started, you can enter your credit card within your account.

Q: What’s next?
A: We’re actively working on region selection for individual B2 Buckets (instead of region selection on an account basis), which should open up a lot more interesting workflows! For example, customers who want to can create geographic redundancy for data within one B2 account (and those who don’t can sleep well knowing they have 11 nines of durability).

We like to develop the features and functionality that our customers want. The decision to open up a data center in Europe is directly related to customer interest. If you have requests or questions, please feel free to put them in the comment section below.

The post Announcing Our First European Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Interview With Filmmaker Laura D’Antoni

Post Syndicated from Yev original https://www.backblaze.com/blog/interview-with-filmmaker-laura-dantoni/

Laura D'Antoni, filmmaker

I first met Laura D’Antoni when we were shooting B2 Cloud Storage customer videos for Youngevity and Austin City Limits. I enjoyed talking about her filmmaking background and was fascinated by her journey as a director, editor, and all around filmmaker. When she came to the Backblaze office to shoot our Who We Are and What We Do video, I floated the idea of doing an interview with her to highlight her journey and educate our blog readers who may be starting out or are already established in the filmmaking world. We’ve finally gotten around to doing the interview, and I hope you enjoy the Q&A with Laura below!

Q: How did you get involved in visual storytelling?
My interest in directing films began when I was 10 years old. Back then I used my father’s Hi8 camera to make short films in my backyard using my friends as actors. My passion for filmmaking continued through my teens and I ended up studying film and television at New York University.

Q: Do you have a specialty or favorite subject area for your films?
I’ve always been drawn to dramatic films, especially those based on real life events. My latest short is a glimpse into a difficult time in my childhood, told in reverse Memento-style from a little girl’s perspective.

Most of my filmmaking career I actually spent in the documentary world. I’ve directed a few feature documentaries about social justice and many more short docs for non-profit organizations like the SPCA.

Q: Who are your visual storyteller inspirations? What motivates you to tell your stories?
The film that inspired me the most when I was just starting out was The Godfather: Part II. The visuals and the performances are incredible, and probably my father being from Sicily really drew me in (the culture, not the Mafia, ha!). Lately I’ve been fascinated by the look of The Handmaid’s Tale, and tried to create a similar feel for my film on a much, much tinier budget.
As far as what motivates me, it’s the love for directing. Collaborating with a team to make your vision on paper a reality is an incredible feeling. It’s a ton of work that involves a lot of blood, sweat, and tears, but in the end you’ve made a movie! And that’s pretty cool.

Q: What kind of equipment do you take on shoots? Favorite camera, favorite lens?
For shoots I bring lights, cameras, tripods, a slider and my gimbal. I use my Panasonic EVA-1 as my main camera and also just purchased the Panasonic GH5 as B-cam to match. Most of my lenses are Canon photo lenses; the L-glass is fantastic quality and I like the look of them. My favorite lens is the Canon 70-200mm f2.8.

Q: How much data per day does a typical shoot create?
If I’m shooting in 4K, around 150GB.

Q: How do you back up your daily shoots? Copy to a disk? Bunch of disks?
I bring a portable hard drive and transfer all of the footage from the cards to that drive.

Q: Tell us a bit about your workflow from shooting to editing.
Generally, if the whole project fits onto a drive, I’ll use that drive to transfer the footage and then edit from it as well. If I’ve shot in 4K then the first step before editing is creating proxies in Adobe Premiere Pro of all of the video files so it’s not so taxing on my computer. Once that’s done I can start the edit!

Q: How do you maintain your data?
If it’s a personal project, I have two copies of everything on separate hard drives. For clients, they usually have a backup of the footage on a drive at their office. The data doesn’t really get maintained, it just stays on the drive and may or may not get used again.

Q: What are some best practices for keeping track of all your videos and assets?
I think having a Google Docs spreadsheet and numbering your drives is helpful so you know what footage/project is where.

Q: How has having a good backup and archive strategy helped in your filmmaking?
Well, I learned the hard way to always back up your footage. Years ago while editing a feature doc, I had an unfortunate incident with PluralEyes software and it ate the audio of one of my interview subjects. We ended up having to use the bad camera audio and nobody was happy. Now I know. I think the best possible strategy really is to have it backed up in the cloud. Hard drives fail, and if you didn’t back that drive up, you’re in trouble. I learned about a great cloud storage solution called Backblaze when I created a few videos for them. For the price it’s absolutely the best option and I plan on dusting off my ancient drives and getting them into the cloud, where they can rest safely until someday someone wants to watch a few of my very first black and white films!

Q: What advice do you have for filmmakers and videographers just starting out?
Know what you want to specialize in early on so you can focus on that instead of spreading yourself across many different specialties, and then market yourself accordingly.

It also seems that the easiest way into the film world (unless you’re related to Steven Spielberg or any other famous person in Hollywood) is to start from the bottom and work your way up.

Also, remember to always be nice to the people you work with, because in this industry that PA you worked with might be a big time producer before you know it.

Q: What might our readers find surprising about challenges you face in your work?
In terms of my directing career, the most challenging thing is to simply be seen. There is so much competition, even among women directors, and getting your film in front of the right person that could bring your career to the next level is nearly impossible. Hollywood is all about who you know, not what you know, unfortunately. So I just keep on making my films and refuse to give up on my dream of winning an Academy Award for best director!

Q: How has your workflow changed since you started working with video?
I only worked with film during my college years. It definitely teaches you to take your time and set up that shot perfectly before you hit record, or to triple check where you’re going to cut your film before it ends up on the floor and you have to crawl around and find it to splice it back in. Nowadays that’s all gone. A simple command-Z shortcut and you can go back several edits on your timeline, or you can record countless hours on your video camera because you don’t have to pay to have it developed. My workflow is much easier, but I definitely miss the look of film.

Q: Where can we see your work?
The trailer for my latest film Cycle can be viewed here: https://vimeo.com/335909934
And my website is: www.leprika.com

Trailer from Cycle, Leprika Productions

The post Interview With Filmmaker Laura D’Antoni appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Hard Drive Stats Q2 2019

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-q2-2019/

Backblaze Drive Stats Q2 2019
In this report we’ll review drive models that have been around for several years, take a look at how our 14 TB Toshiba drives are doing (spoiler alert: great), and along the way we’ll provide a handful of insights and observations from inside our storage cloud. As always, we’ll publish the data we use in these reports on our Hard Drive Test Data web page and we look forward to your comments.

Hard Drive Failure Stats for Q2 2019

At the end of Q2 2019, Backblaze was using 108,660 hard drives to store data. For our evaluation we remove from consideration those drives that were used for testing purposes and those drive models for which we did not have at least 60 drives (see why below). This leaves us with 108,461 hard drives. The table below covers what happened in Q2 2019.

Backblaze Q2 2019 Hard Drive Failure Rates

Notes and Observations

If a drive model has a failure rate of 0 percent, it means there were no drive failures of that model during Q2 2019 — lifetime failure rates are later in this report. The two drives listed with zero failures in Q2 were the 4 TB and 14 TB Toshiba models. The Toshiba 4 TB drive doesn’t have a large enough number of drives or drive days to be statistically reliable, but only one drive of that model has failed in the last three years. We’ll dig into the 14 TB Toshiba drive stats a little later in the report.

There were 199 drives (108,660 minus 108,461) that were not included in the list above because they were used as testing drives or we did not have at least 60 of a given drive model. We now use 60 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics as there are 60 drives in all newly deployed Storage Pods — older Storage Pod models had a minimum of 45.
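The annualized failure rates in these tables normalize raw failure counts by drive days, the total number of days all drives of a model were in service. A minimal sketch of the calculation, with hypothetical example numbers:

```python
# Annualized failure rate (AFR) as used in these reports:
# AFR = failures / (drive_days / 365) * 100
def annualized_failure_rate(failures, drive_days):
    """Percent of drives of a model expected to fail in a year of service."""
    return failures / (drive_days / 365) * 100

# Hypothetical example: 7 failures over 330,000 drive days
afr = annualized_failure_rate(7, 330_000)
print(f"AFR: {afr:.2f}%")
```

This is why a model with zero failures in a single quarter isn’t necessarily a better drive: with too few drive days, the estimate isn’t statistically reliable, which is also the motivation for the 60-drive minimum above.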

2,000 Backblaze Storage Pods? Almost…

We currently have 1,980 Storage Pods in operation. All are version 5 or version 6, as we recently gave away nearly all of the older Storage Pods to folks who stopped by our Sacramento storage facility. Nearly all, as we have a couple in our Storage Pod museum. There are currently 544 version 5 pods each containing 45 data drives, and 1,436 version 6 pods each containing 60 data drives. The next time we add a Backblaze Vault, which consists of 20 Storage Pods, we will have 2,000 Backblaze Storage Pods in operation.

Goodbye Western Digital

In Q2 2019, the last of the Western Digital 6 TB drives were retired from service. The average age of the drives was 50 months. These were the last of our Western Digital branded data drives. When Backblaze was first starting out, the first data drives we deployed en masse were Western Digital Green 1 TB drives. So, it is with a bit of sadness to see our Western Digital data drive count go to zero. We hope to see them again in the future.

WD Ultrastar 14 TB DC HC530

Hello “Western Digital”

While the Western Digital brand is gone, the HGST brand (owned by Western Digital) is going strong as we still have plenty of the HGST branded drives, about 20 percent of our farm, ranging in size from 4 to 12 TB. In fact, we added over 4,700 HGST 12 TB drives in this quarter.

This just in: rumor has it there are twenty 14 TB Western Digital Ultrastar drives being readied for deployment and testing in one of our data centers. It appears Western Digital has returned: stay tuned.

Goodbye 5 TB Drives

Back in Q1 2015, we deployed 45 Toshiba 5 TB drives. They were the only 5 TB drives we deployed as the manufacturers quickly moved on to larger capacity drives, and so did we. Yet, during their four plus years of deployment only two failed, with no failures since Q2 of 2016 — three years ago. This made it hard to say goodbye, but buying, stocking, and keeping track of a couple of 5 TB spare drives was not optimal, especially since these spares could not be used anywhere else. So yes, the Toshiba 5 TB drives were the odd ducks on our farm, but they were so good they got to stay for over four years.

Hello Again, Toshiba 14 TB Drives

We’ve mentioned the Toshiba 14 TB drives in previous reports, now we can dig in a little deeper given that they have been deployed almost nine months and we have some experience working with them. These drives got off to a bit of a rocky start, with six failures in the first three months of being deployed. Since then, there has been only one additional failure, with no failures reported in Q2 2019. The result is that the lifetime annualized failure rate for the Toshiba 14 TB drives has decreased to a very respectable 0.78% as shown in the lifetime table in the following section.

Lifetime Hard Drive Stats

The table below shows the lifetime failure rates for the hard drive models we had in service as of June 30, 2019. This is over the period beginning in April 2013 and ending June 30, 2019.

Backblaze Lifetime Hard Drive Annualized Failure Rates

The Hard Drive Stats Data

The complete data set used to create the information used in this review is available on our Hard Drive Test Data web page. You can download and use this data for free for your own purpose. All we ask are three things: 1) You cite Backblaze as the source if you use the data, 2) You accept that you are solely responsible for how you use the data, and, 3) You do not sell this data to anyone; it is free. Good luck and let us know if you find anything interesting.

If you just want the tables we used to create the charts in this blog post you can download the ZIP file containing the MS Excel spreadsheet.

The post Backblaze Hard Drive Stats Q2 2019 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Creating Great Content Marketing

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/creating-great-content-marketing/

In Cinema | Coming Soon: Content Marketing

Once the hot new marketing strategy, content marketing has lost some of its luster. If you follow marketing newsletters and blogs, you’ve likely even seen the claim that content marketing is dead. Some say it’s no longer effective because consumers are oversaturated with content. Others feel that much of content marketing is too broad a strategy and it’s more effective to target those who can directly affect the behavior of others using influencer marketing. Still others think that the hoopla over content marketing is over and money is better spent on keyword purchases, social media, SEO, and other techniques to direct customers into the top of the marketing funnel.

Backblaze has had its own journey of discovery in figuring out which kind of marketing would help it grow from a small backup and cloud storage business into a serious competitor to Amazon, Google, Microsoft, and other storage and cloud companies. Backblaze’s story provides a useful example of how a company came to content marketing after rejecting, or not finding success with, a number of other marketing approaches. Content marketing worked for Backblaze in large part because of the culture of the company, which reinforces an argument we’ll make a little later: content marketing is largely about your company culture. But first things first: what exactly is content marketing?

What is Content Marketing?

Content marketing is the practice of creating, publishing, and sharing content with the goal of building the reputation and visibility of your brand.

The goal of content marketing is to get customers to come to you by providing them with something they need or enjoy. Once you have their attention, you can promote (overtly or covertly) whatever it is you wish to sell to them.

Conceptually, content marketing is similar to running a movie theatre. The movie gets people into the theatre where they can be sold soft drinks, popcorn, Mike & Ikes and Raisinets, which is how theatre owners make most of their money, not from ticket sales. Now you know why movie theatre snacks and drinks are so expensive; they have to cover the cost of the loss leader, the movie itself, as well as give the owner some profit.

Movie theatre concession stand
The movie gets the audience in the theater, but the theater owner’s profit comes from the popcorn.
Movie theatre snack concession. Image from Wikipedia.

The Growth of Content Marketing

Marketing in recent years has increasingly become a game of metrics. Marketers today have access to a wealth of data about customer and marketing behavior and an ever growing number of apps and tools to quantify and interpret that data. We have all this data because marketing has become largely an online game and it’s fairly easy to collect behavioral data when users interact with websites, emails, webinars, videos, and podcasts. Metrics existed before for conventional mail campaigns and the like, and focus groups provided some confirmation of what marketers guessed was true, but it was generally a matter of manually counting heads, responses, and sales. Now that we’re online, just adding snippets of code to websites, apps, and emails can provide a wealth of information about consumers’ behavior. Conversion, funnel, nurturing, and keyword ranking are in the daily lexicon of marketers who look to numbers to demystify consumer behavior and justify the funding of their programs.

A trend contrary to marketing metrics grew in importance alongside the metrics binge and that trend is modern content marketing. While modern content marketing takes advantage of the immediacy and delivery vehicles of the internet, content marketing itself is as old as any marketing technique. It isn’t close to being the world’s oldest profession, but it does go back to the first attempts by humans to lure consumers to products and services with a better or more polished pitch than the next guy.

Benjamin Franklin used his annual Poor Richard’s Almanack as early as 1732 to promote his printing business and made sure readers knew where his printing shop was located. Farming equipment manufacturer John Deere put out the first issue of The Furrow in 1895. Today it has a circulation of 1.5 million in 40 countries and 12 different languages.

Benjamin Franklin’s Poor Richard’s Almanack from 1739
Ben’s conversion pitch — The location of his printing office “near the Market”

John Deere’s The Furrow, started in 1895

One might argue that long before these examples, stained glass windows in medieval cathedrals were another example of content marketing. They presented stories that entertained and educated and were an enticement to bring people to services.

Much later, the arrival of the internet and the web, and along with them, fast and easy content creation and easy consumer targeting, fueled the rapid growth of content marketing. We now have many more types of media beyond print suitable for content marketing, including social media, blogs, video, photos, podcasts and the like, which enabled content marketing to gain even more power and importance.

What’s the Problem With So Much Content Marketing?

If content marketing is so great, why are we hearing so many statements about content marketing being dead? My view is that content marketing isn’t any more dead now than it was in Benjamin Franklin’s time, and people aren’t going to stop buying popcorn at movie theaters. The problem is that so much content marketing fails to reach its potential because it is empty and meaningless.

Unfortunately, too many people running content marketing programs have the same mindset as the people running poor metrics marketing programs. They look at what’s worked in the past for themselves or others and assume that repeating an earlier campaign will be as successful as the original. The approach that’s deadly for content marketing is to think that since a little is good, more must be better, and that “more” means more of the very same thing.

When content marketing isn’t working, it’s usually not the marketing vehicle that’s to blame, it’s the content itself. Hollywood produces some great and creative content that gets people into theaters, but it also produces a lot of formulaic, repetitive garbage that falls flat. If a content marketing campaign is just following a formula and counting on repeating a past success, no amount of obscure performance metric optimization is going to make the content itself any better. That applies just as much to marketing technology products as it does to marketing Hollywood movies.

When content marketing isn’t working, it’s usually not the marketing vehicle that’s to blame, it’s the content itself.

The screenwriter William Goldman (Butch Cassidy and the Sundance Kid, All the President’s Men, Marathon Man, The Princess Bride) once famously said, “In Hollywood, no one knows anything.” He meant that no matter how much experience a producer or studio might have, it’s hard to predict what’s going to resonate with an audience because what always resonates is what is fresh and authentic, which are the hardest qualities to judge in any content and eludes simple formulas. Movie remakes sometimes work, but more often they fail to capture something that audiences responded to in the original: a fresh concept, great performances by engaged actors, an inspired director, and a great script. Just reproducing the elements in a previous success doesn’t guarantee success. The experience in the new version has to capture the magic in the original that appealed to the audience.

The Dissatisfaction With So Much Content

A lot of content just dangles an attractive hook to entice content consumers to click, and that’s all it does. Anyone can post a cute animal video or a suggestive or revealing photo, but it doesn’t do anything to help your audience understand who you are or help solve their problems.

Unfortunately for media consumers, clickbait works in simply getting users to click, which is the reason it hasn’t disappeared. As long as people click on the enticing image, celebrity reference, or promised secret revelation, we’ll have to suffer with clickbait. Even worse, clickbait is often used to tip the scales of value from the reader, where it belongs, to the publisher. Many viral tests, quizzes and celebrity slideshows plant advertising cookies that benefit the publisher by increasing the cost and perceived value of advertising on their site, leaving the consumer feeling that they’ve been used, which of course is exactly what has happened.

Another, and I think more important, reason that content marketing isn’t succeeding for many is not that the content isn’t interesting or useful, but that it isn’t connected in a meaningful way with the content publisher. Just posting memes, how-tos, thought pieces, and stories unrelated to who you are as a business, or not reflecting who your employees are and the values you hold as a company, doesn’t do anything to connect your visitors to you. Empty content is like empty calories in junk food; it doesn’t nourish and strengthen the relationship you should be building with your audience.

Is SEO the Enemy?

SEO is not the enemy, but focusing on a few superficial SEO tactics above other approaches is not going to create a long term bond with your visitors. Keyword stuffing and optimization can damage the user experience if the user feels manipulated. Google might still bring people to your content as a result of these techniques, but it’s a hollow relationship that has no staying power. When you create quality content that your audience will like and will recommend to others, you produce backlinks and social signals that will improve your search rankings, which is the way to win in SEO.

Despite all the supposed secret formulas and tricks to get high search engine ranking, the real secret is that Google loves quality content and will reward it, so that’s the smart SEO strategy to follow.

What is Good Content Marketing?

Similar to coming up with an idea for the next movie blockbuster to get people into theaters, content marketing is about creating good and useful content that entertains, educates, creates interest, or is useful in some way. It works best when it is the kind of content that people want to share with others. The viral effect will compound the audience you earn. That’s why content marketing has really taken off in the age of social media. Word-of-mouth and good write-ups have always propelled good content, but they are nothing compared to the effect viral online sharing can have on a good blog post, video, photograph, meme or other content.

How do you create this great content? We’re going to cover three steps that will take you from ho-hum content marketing to good and possibly even great content marketing. If you follow these three steps, you’ll be ahead of 90 percent of the businesses out there that are trying to crack the how-to of content marketing.

First — Start with Why You Do What You Do

Simon Sinek in his book, Start with Why, and in his presentations, especially his TED Talk, How Great Leaders Inspire Action, argues that people don’t base their purchasing decisions primarily on what a company does, but on why they do it. This might be hard to envision for some products, like toothpaste or laundry detergent, but I think it does apply to every purchase we make, even if in some cases it’s to a small degree. For some things it’s much more apparent. People identify with iOS or Android, Ford or Chevy, Ducati or Suzuki, based on much more than practical considerations of price, effectiveness, and other qualities. People want to use products and services that bolster their image of who they are, or who they want to be. Some companies are great at using this desire (Apple, BMW, Nike, Sephora, Ikea, Whole Foods, REI) and have a distinct identity that is the foundation for every message they put out.

Golden Circle: Why? How? What?
From Simon Sinek, Start With Why

To communicate the why of your products and services, you can’t just put out generic content that works for anyone. You have to produce content that shows specifically who you are. The best content marketing is cultural. The content you deliver tells your audience what kind of company you are, what your values are, who are the people in the company, and why they work there and do the things they do. That means you must be authentic and transparent. That takes courage, and isn’t easy, which is why so few companies are good at it. It takes vision, leadership, and a constant reminder from company leaders of what you’re doing and why it matters.

Unfortunately, this is hard to maintain as companies grow. The organizations that have grown dramatically and yet successfully maintained the core company values have either had a charismatic leader who represented and reiterated the company’s values at every opportunity (Apple), or have built them into every communication, event, and presentation by the company, no matter who is delivering them (Salesforce).

If your company isn’t good at this, don’t despair. These skills can be learned, so if your company needs to get better at understanding and communicating the why of who it is, there’s still hope that, with some effort, it can happen.

Second — Put Yourself in Your Customers’ Shoes

You not only need to understand yourself and your company and present yourself authentically, you have to really understand your customer — really, really understand your customer. That takes time, research, and empathy to walk a mile in their shoes. You need to visit your customers, spend a day fielding support calls or working customer service, go places, do things, and ask questions that you’ve never asked. Are they well off with cash to burn, or do they count every penny? Do they live for themselves, their parents, their children, their community, their church, their livelihood? How could your company help them solve their problems or make their lives better?

The best marketers have imagination and empathy. They, like novelists, playwrights, and poets, are able to imagine what it would be like to live like someone else. Some marketing organizations formalize this function by having one person who is assigned to represent the customers and always advocate for their interests. This can help prevent falling into the mindset of thinking of the customer only as a source of revenue or problems that have to be solved.

One common marketing technique is to create a persona or personas that represent your ideal customer(s). What is their age, sex, occupation? What are their interests, fears, etc.? This can help make sure that the customer is never just an unknown face or potential revenue source, but instead is a real person whom you need to be close to and understand as deeply as possible.

Once you’ve made the commitment to understand your customers, you’re ready to help solve their problems.

Third — Focus on Solving Your Customers’ Problems

Once you have your authentic voice down and you really know who your customer is and how they think, the third thing you need to do is focus on providing useful content. Useful content for your customers is content that solves a real problem they have. What’s causing them pain, or what’s impeding them from doing what they need or want to do? The customer may or may not know they have this pain. You might be creating a new need or desire for them by telling a story about how their life will be if they only had this thing, service, or experience. Help them dream of being on a riverboat in Europe, enjoying the pool in their backyard on a summer’s day, or showing off a new mobile phone to their friends at work.

By speaking to the needs of your customers, you’re not only helping them solve problems, you’re also forging a bond of trust and usefulness that will carry your relationship with them forward.

Mastering Blogging for Content Marketing

There are many ways to create and deliver content that is authentic and serves a need. Podcasts, vlogs, events, publications, words, pictures, music, and videos can all be effective delivery vehicles for quality content. Let’s focus on one vehicle that can return exceptional results when done right: blogging, which has worked well for Backblaze.

Backblaze didn’t just create a blog that then turned into an overnight success. Backblaze tried a number of marketing approaches that didn’t perform as the company hoped. The company wrote about these efforts on its blog, which is a major reason why the blog became a marketing success — it showed that the company was willing to talk about both its successes and its failures. You can read about some of these marketing adventures at As Seen on Ellen and How to Save Marketing Money by Being Nice. Forbes wrote about Backblaze’s marketing history in an article in 2013, One Startup Tried Every Marketing Ploy From ‘Ellen’ To Twitter: Here’s What Worked.

Backblaze on the Ellen Show

Backblaze billboard on Highway 101 in Silicon Valley, 2011

Backblaze decided early on that it would be as transparent as possible in its business practices. That meant that if there were no good reason not to release information, the company should release it, and the blog became the place where the company made that information public. Backblaze’s CEO Gleb Budman wrote about this commitment to transparency, and the results from it, in a blog post in 2017, The Decision on Transparency. An early example of this transparency is a 2010 post in which Backblaze analyzed why a proposed acquisition of the company failed, Backblaze online backup almost acquired — Breaking down the breakup. Companies rarely write about acquisitions that fall through.

Backblaze’s blog really took off in 2015 when the company decided to publish the statistics it had collected on the failure rate of hard drives in its data centers, Reliability Data Set For 41,000 Hard Drives Now Open Source. While many cloud companies routinely collected this kind of data, including Amazon, Google, and Microsoft, none had ever made it public. It turned out that readers were tremendously hungry for data on how hard drives performed, and Backblaze’s blog readership subsequently increased by hundreds of thousands of readers. Readers analyzed the drive failure data and debated which drives were the best for their own purposes. This was despite Backblaze’s disclaimer that how Backblaze used hard drives in its data centers didn’t really reflect how the drives would perform in other applications, including homes and businesses. Customers didn’t care. They were starved for the information and waited anxiously for the release of each new Drive Stats post.

It Turns Out That Blogging with Authenticity and Transparency is Rewarded

As Gilmore and Pine wrote in their book, Authenticity, “People increasingly see the world in terms of real and fake, and want to buy something real from someone genuine, not a fake from some phony.” How do you convince your customers that you’re real and genuine? The simple answer is to communicate honestly about who you are, which means sometimes telling them about your failures and mistakes and owning up to less than stellar performances by yourself or your company. Consider lifting the veil occasionally to reveal who you are. If you put the customer first, that shouldn’t be too hard even when you fall short. If your intentions are good, being transparent will almost always be rewarded with forgiveness and greater trust and loyalty from your customers.

Many companies created blogs because they thought they had to, since everyone else was, and started posting articles by their executives and product marketers going on about how great their products were. Then they were surprised when they got little traffic. These companies missed the message that content should be used to help customers with their problems and to build a relationship with them through authenticity and transparency.

If you have a blog, you could use that as a place to write about how you do business, the lessons you’ve learned, and yes, even the mistakes you’ve made. Don’t assume that all your company information needs to be protected. If at all possible, write about the tough issues and why you made the decisions you did. Your customers will respond because they don’t see that kind of frankness elsewhere and because they appreciate understanding the kind of company they’re buying a product or service from.

Your Blog Isn’t One Audience of Thousands or Millions, But Many Audiences of One

Don’t be afraid to write to a specific audience or group on your blog. You might have multiple audiences, including some highly specialized ones. When you’re writing to an audience with specialized vocabulary or acronyms, don’t be afraid to use them. Other readers will recognize that the post is not for them and skip it, or use it as an entry point to a new area that interests them. If you try to make every post suitable for a homogeneous reader, many readers will leave because you’re not speaking directly to them in their language.

If the piece is aimed at a novice or general audience, definitely take the time to explain unfamiliar concepts and spell out abbreviations and acronyms that might not be familiar to the reader. However, if the piece is aimed at professionals, avoid over-explaining: readers might conclude the post isn’t written for them and dismiss both it and the blog.

Strive to match the content, vocabulary, graphics, technical argot, and level of reading to the intended market segment. The goal is to make each reader feel that the piece was written specifically for him or her.

Taking Just OK Content Marketing and Making It Great

Authenticity, honesty, frankness, and sincerity are all qualities that to some degree or other are present in the best content. Unfortunately, marketers have the reputation for producing content that’s at the opposite end of the spectrum. Comedian George Burns could have been parodying a modern marketing course when he wrote, “To be a fine actor, you’ve got to be honest. And if you can fake that, you’ve got it made.”

There’s a reason that the recommendation to be authentic sounds like advice you’d get from your mom or dad about how to behave on your first date. We all learn sooner or later that if someone doesn’t like you for who you are, there’s no amount of faking being someone else that will make them like you for more than just a little while.

Be yourself, say something that really means something to you, and tell a story that connects with your audience and gives them some value. Those intangibles are hard to measure in metrics, but, when done well, might earn you an honest response, some respect, and perhaps a repeat visit.

The post Creating Great Content Marketing appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Reading, Writing, and Backing Up — Are You Ready to Go Back to School?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/back-to-school-backup-plan/

It's That Time of the Year

Dear students,

We’re very sorry to interrupt your time enjoying the beach, pool, and other fun outdoor and urban places.

We’ve got some important advice you need to hear so that you can be responsible students when you go back to school this fall.

Now that all the students have stopped listening and it’s likely just us, I’d like to address the parents of students who are starting or returning to school in the fall.

You’re likely spending a large amount of money on your children’s education. That money is well spent, as it will help your children succeed and become good adults and citizens. We’d like to help by highlighting something you can do to protect your investment: ensuring the safety of your students’ data.

Where did summer go?

Our Lives Are Digital Now — Students’ Especially

We don’t have to tell you how everything in our lives has become digital. That’s true as well of schools and universities. Students now take notes, write papers, read, communicate, and record everything on digital devices.

You don’t want data damage or loss to happen to the important school or university files and records your child (and possible future U.S. president) has on his or her digital device.

Think about it.

  • Has your child ever forgotten a digital device in a vehicle, restaurant, or friend’s house?

We thought so.

  • How about water damage?

Yes, us too.

  • Did you ever figure out what that substance was clogging the laptop keyboard?

We’ve learned that parenting is full of unanswered questions, as well.

Maybe your student is ahead of the game and already has a plan for backing up their data while at school. That’s great, and a good sign that your student will succeed in life and maybe even solve some of the many challenges we’re leaving to their generation.

Parents Can Help

If not, you can be an exceptional parent by giving your student the gift of automatic and unlimited backup. Before they start school, you can install Backblaze Computer Backup on their Windows or Mac computer. It takes just a couple of minutes. Once that’s done, every time they’re connected to the internet all their important data will be automatically backed up to the cloud.

If anything happens to the computer, their files are safe and ready to be restored. It could also prevent that late-night frantic call asking you to somehow magically find their lost data. Who needs that?

Let’s Hear From the Students Themselves

You don’t have to take our word for it. We asked two bona fide high school students who interned at Backblaze this summer for the advice they’d give to their fellow students.

Marina

My friends do not normally back up their data other than a few of them putting their important school work on Microsoft’s OneDrive.

With college essays, applications, important school projects and documents, there is little I am willing to lose.

I will be backing up my data when I get home for sure. Next year I will ensure that all of my data is backed up in two places.

Andrea

After spending a week at Backblaze, I realized how important it is to keep your data safe.

Always save multiple copies of your data. Accidents happen and data gets lost, but it is much easier to recover if there is another copy saved somewhere reliable. Backblaze helps with this by keeping a regularly updated copy of your files in one of their secure data centers.

When backing up data, use programs that make sense and are easy to follow. Stress runs high when files are lost. Having a program like Backblaze that is simple and has live support certainly makes the recovery process more enjoyable.

Relax! The pressures of performing well at school are high. Knowing your files are safe and secure can take a little bit of the weight off your shoulders during such a stressful time.

I definitely plan on using Backblaze in the future and I think all students should.

We couldn’t have said it better. Having a solid backup plan is a great idea for both parents and students. We suggest using Backblaze Personal Backup, but the important thing is to have a backup plan for your data and act on it no matter what solution you’re using.

Learning to Back Up is a Good Life Lesson

Students have a lot to think about these days, and with all the responsibilities and new challenges they’re going to face in school, it’s easy for them to forget some of the basics. We hope this light reminder will be just enough to set them on the right backup track.

Have a great school year everyone!

P.S. If you know a student or the parent of a student going to school in the fall, why not share this post with them? You can use the Email or other sharing buttons to the left or at the bottom of this post.

The post Reading, Writing, and Backing Up — Are You Ready to Go Back to School? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

More From Our Annual Survey: Choosing the Best Cloud for Backing Up

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/choosing-the-best-cloud-for-backing-up/

plugging a cord into the cloud

Which cloud is best for backing up?

This is one of the most common questions we get asked at Backblaze, and we’ve addressed it many times on this blog, on our website, and at trade shows and conferences.

There are many uses for the cloud, and many services that provide storage drives, sync, backup, and sharing. It’s hard for computer users to know which service is best for which use.

Every spring for the past twelve years we’ve commissioned an online survey conducted by The Harris Poll to help us understand if and how computer users are backing up. We’ve asked the same question, “How often do you backup all the data on your computer?” every year. We just published the results of the latest poll, which showed that more surveyed computer owners are backing up in 2019 than when we conducted our first poll in 2008. We’re heartened that more people are protecting their valuable files, photos, financial records, and personal documents.

This year we decided to ask a second question that would help us understand how the cloud compares to other backup destinations, such as external drives and NAS, and which cloud services are being used for backing up.

This was the question we asked:

What is the primary method you use to backup all of the data on your computer?

1 Cloud backup (e.g., Backblaze, Carbonite, iDrive)
2 Cloud drive (e.g., Google Drive, Microsoft OneDrive)
3 Cloud sync (e.g., Dropbox, iCloud)
4 External hard drive (e.g., Time Machine, Windows Backup and Restore)
5 Network Attached Storage (NAS) (e.g., QNAP, Synology)
6 Other
7 Not sure

Where Computer Users are Backing Up

More than half of those who have ever backed up all the data on their computer (58 percent) indicated that they are using the cloud as one of the primary methods to back up all of the data on their computer. Nearly two in five (38 percent) use an external hard drive, and just 5 percent use network attached storage (NAS). (The total is greater than 100 percent because respondents were able to select multiple destinations.)

Backup Destinations
(Among Those Who Have Ever Backed Up All Data on Their Computer)

2019 survey backing up destinations
Among Those Who Have Ever Backed Up All Data On Computer — Primary Method Used

What Type of Cloud is Being Used?

The survey results tell us that the cloud has become a popular destination for backing up data.
Among those who have ever backed up all the data on their computer, respondents indicated the following types of cloud service:

  • 38 percent are using cloud drive (such as Google Drive or Microsoft OneDrive)
  • 21 percent are using cloud sync (such as Dropbox or Apple iCloud)
  • 11 percent are using cloud backup (such as Backblaze Computer Backup)

Cloud Destinations
(Among Those Who Have Ever Backed Up All Data on Their Computer)

2019 survey cloud destinations

Choosing the Best Cloud for Backups

Backblaze customers or regular readers of this blog will immediately recognize the issue in these responses. There’s a big difference in what type of cloud service you select for cloud backup. Both cloud drive and cloud sync services can store data in the cloud, but they’re not the same as having a real backup. We’ve written about these differences in our blog post, What’s the Diff: Sync vs Backup vs Storage, and in our guide, Online Storage vs. Online Backup.

Put simply, those who use cloud drive or cloud sync are missing the benefits of a real cloud backup: automatic backup of all the data on your computer rather than just selected folders or directories, the ability to go back to earlier versions of files, and protection against losing files when syncing, such as when someone else deletes a shared folder.

Cloud backup is specifically designed to protect your files, while the purpose of cloud drives and sync is to make it easy to access your files from different computers and share them when desired. While there is overlap in what these services offer and how they can be used, obtaining the best results requires selecting the right cloud service for your needs. If your goal is to back up your files, you want the service to seamlessly protect your files and make sure they’re available when and if you need to restore them due to data loss on your computer.

As users gain more time and experience with their chosen cloud services, it will be interesting to discover how happy they are with those services and how well their needs are being met. We plan to cover this topic in future polls.

•  •  •

Survey Method
This survey was conducted online within the United States by The Harris Poll on behalf of Backblaze from June 6-10, 2019 among 2,010 U.S. adults ages 18 and older, among whom 1,858 own a computer and 1,484 have ever backed up all data on their computer. This online survey is not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated. For complete survey methodology, including weighting variables and subgroup sample sizes, please contact Backblaze.

The post More From Our Annual Survey: Choosing the Best Cloud for Backing Up appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Shocking Truth — Managing for Hard Drive Failure and Data Corruption

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/managing-for-hard-drive-failures-data-corruption/

hard disk drive covered in 0s, 1s, ?s

Ah, the iconic 3.5″ hard drive, now approaching a massive 16TB of storage capacity. A Backblaze Storage Pod fits 60 of these drives, and with well over 750 petabytes of customer data in our data centers, we have a lot of hard drives under management.

Yet most of us have just one, or only a few of these massive drives at a time storing our most valuable data. Just how safe are those hard drives in your office or studio? Have you ever thought about all the awful, terrible things that can happen to a hard drive? And what are they, exactly?

It turns out there are a host of obvious physical dangers, but also other, less obvious, errors that can affect the data stored on your hard drives, as well.

Dividing by One

It’s tempting to store all of your content on a single hard drive. After all, the capacity of these drives gets larger and larger, and they offer great performance of up to 150 MB/s. Flash-based drives are far faster, but their price per gigabyte is also higher, so for now the traditional 3.5″ hard drive holds most of the world’s data.

However, having all of your precious content on a single, spinning hard drive is a true tightrope without a net experience. Here’s why.

DriveSavers Failure Analysis by the Numbers

Drive failures by possible external force

I asked our friends at DriveSavers, specialists in recovering data from drives and other storage devices, for some analysis of the hard drives brought into their labs for recovery. What were the primary causes of failure?

Reason One: Media Damage

The number one reason, accounting for 70 percent of failures, is media damage, including full head crashes.

Modern hard drives stuff multiple ultra-thin platters inside that 3.5 inch metal package. These platters spin furiously at 5400 or 7200 revolutions per minute — that’s 90 or 120 revolutions per second! The heads that read and write magnetic data on them sweep back and forth only 6.3 micrometers above the surface of those platters. That gap is about 1/12th the width of a human hair and a miracle of modern technology to be sure. As you can imagine, a system with such close tolerances is vulnerable to sudden shock, as evidenced by DriveSavers’ results.
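The arithmetic here is easy to check for yourself. This quick sketch (a hypothetical illustration, not Backblaze code; the hair width is an assumed typical value of 75 micrometers) converts spindle speeds to revolutions per second and compares the flying height to a human hair:

```python
# Convert hard drive spindle speeds to revolutions per second.
for rpm in (5400, 7200):
    print(f"{rpm} RPM = {rpm / 60:.0f} revolutions per second")

# Compare the head's flying height to the width of a human hair.
flying_height_um = 6.3  # head-to-platter gap, micrometers
hair_width_um = 75.0    # typical human hair, micrometers (assumed)
ratio = hair_width_um / flying_height_um
print(f"The gap is about 1/{ratio:.0f}th the width of a human hair")
```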

This damage occurs when the platters receive shock, i.e. physical damage from impact to the drive itself. Platters have been known to shatter, or have damage to their surfaces, including a phenomenon called head crash, where the flying heads slam into the surface of the platters. Whatever the cause, the thin platters holding 1s and 0s can’t be read.

It takes a surprisingly small amount of force to generate a lot of shock energy to a hard drive. I’ve seen drives fail after simply tipping over when stood on end. More typically, drives are accidentally pushed off of a desktop, or dropped while being carried around.

A drive might look fine after a drop, but the damage may already have been done. Because drives are rigid, heavy, and often land on hard, unforgiving surfaces, a drop can easily subject a drive’s delicate internals to the equivalent of hundreds of g-forces.

To paraphrase an old (and morbid) parachutist joke, it’s not the fall that gets you, it’s the sudden stop!

Reason Two: PCB Failure

The next largest cause is circuit board failure, accounting for 18 percent of failed drives. Printed circuit boards (PCBs), those tiny green boards seen on the underside of hard drives, can fail in the presence of moisture or static electric discharge like any other circuit board.

Reason Three: Stiction

Next up is stiction (a portmanteau of friction and sticking), which occurs when the armatures that drive those flying heads actually get stuck in place and refuse to operate, usually after a long period of disuse. Drivesavers found that stuck armatures accounted for 11 percent of hard drive failures.

It seems counterintuitive that sitting quietly in a dark drawer might actually contribute to a hard drive’s failure, but I’ve seen many older hard drives pulled from a drawer and popped into a drive carrier or connected to power just go thunk. It does appear that hard drives like to be connected to power and constantly spinning, and the numbers seem to bear this out.

Reason Four: Motor Failure

The last and least common cause of hard drive failure is motor failure, accounting for only 1 percent of failures, a testament again to modern manufacturing precision and reliability.

Mitigating Hard Drive Failure Risk

So now that you’ve seen the gory numbers, here are a few recommendations to guard against the physical causes of hard drive failure.

1. Have a physical drive handling plan and follow it rigorously

If you must keep content on single hard drives in your location, make sure your team follows a few guidelines to protect against moisture, static electricity, and drops during drive handling. Keeping the drives in a dry location, storing the drives in static bags, using static discharge mats and wristbands, and putting rubber mats under areas where you’re likely to accidentally drop drives can all help.

It’s worth reviewing how you physically store drives, as well. DriveSavers tells us that the sudden impact of a heavy drawer of hard drives slamming home or being yanked open quickly can damage the drives inside!

2. Spread failure risk across more drives and systems

Improving physical hard drive handling procedures is only a small part of a good risk-reducing strategy. You can immediately reduce the exposure of a single hard drive failure by simply keeping a copy of that valuable content on another drive. This is a common approach for videographers moving content from cameras shooting in the field back to their editing environment. By copying content from one fast drive to another, you make it far less likely that both copies will be lost at once. This is certainly better than keeping content on only a single drive, but it’s not a great long-term solution.

Multiple drive NAS and RAID systems reduce the impact of failing drives even further. A RAID 6 system composed of eight drives not only has much faster read and write performance than a single drive, but two of its drives can fail and the system will still serve your files, giving you time to replace the failed drives.
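The core trick behind RAID redundancy can be sketched in a few lines. RAID 6 itself uses two independent parity calculations (typically Reed-Solomon based) so it can survive two simultaneous failures; the toy example below is a hypothetical, single-parity, RAID 5 style illustration showing how XOR parity lets you rebuild any one lost block from the survivors:

```python
from functools import reduce

# Three "drives" each hold one data block; a fourth holds XOR parity.
data_blocks = [b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_blocks))

# Simulate losing drive 1, then rebuild its block by XORing all survivors.
lost = 1
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost] + [parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

assert rebuilt == data_blocks[lost]  # the lost block is fully recovered
```

Because XOR is its own inverse, XORing together everything that survives cancels out the known blocks and leaves exactly the missing one; a second, independent parity (as in RAID 6) extends the same idea to two failures.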

Mitigating Data Corruption Risk

The Risk of Bit Flips

Beyond physical damage, there’s another threat to the files stored on hard disks: small, silent bit flip errors often called data corruption or bit rot.

Bit rot errors occur when individual bits in a stream of data in files change from one state to another (positive or negative, 0 to 1, and vice versa). These errors can happen to hard drive and flash storage systems at rest, or be introduced as a file is copied from one hard drive to another.

While hard drives automatically correct single-bit flips on the fly, multi-bit errors can slip through. These can cause the program accessing the file to halt or throw an error, or, perhaps worse, lead you to think that a corrupted file is fine!
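To see how little it takes, consider that flipping a single bit changes a byte into an entirely different value. A tiny, hypothetical illustration using XOR to flip one bit:

```python
# A single flipped bit turns one character into another entirely.
original = b"A"                        # 0x41, binary 01000001
flipped = bytes([original[0] ^ 0b10])  # XOR flips bit 1: 01000011
print(original, "->", flipped)         # b'A' -> b'C'
```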

Bit Flip Errors by the Book

In a landmark study of data failures in large systems, Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?, Bianca Schroeder and Garth A. Gibson reported that “a large number of the problems attributed to CPU and memory failures were triggered by parity errors, i.e. the number of errors is too large for the embedded error correcting code to correct them.”

Flash drives are not immune either. Bianca Schroeder recently published a similar study of flash drives, Flash Reliability in Production: The Expected and the Unexpected, and found that “…between 20-63% of drives experienced at least one of the (unrecoverable read errors) during the time it was in production. In addition, between 2-6 out of 1,000 drive days were affected.”

“These UREs are almost exclusively due to bit corruptions that ECC cannot correct. If a drive encounters a URE, the stored data cannot be read. This either results in a failed read in the user’s code, or if the drives are in a RAID group that has replication, then the data is read from a different drive.”

Exactly how prevalent bit flips are is a controversial subject, but if you’ve ever retrieved a file from an old hard drive or RAID system and see sparkles in video, corrupt document files, or lines or distortions in pictures, you’ve seen the results of these errors.

Protecting Against Bit Flip Errors

There are many approaches to catching and correcting bit flip errors. From a system designer standpoint they usually involve some combination of multiple disk storage systems, multiple copies of content, data integrity checks and corrections, including error-correcting code memory, physical component redundancy, and a file system that can tie it all together.

Backblaze has built such a system, and uses a number of techniques to detect and correct file degradation due to bit flips and deliver extremely high data durability and integrity, often in conjunction with Reed-Solomon erasure codes.

Thanks to the way object storage and Backblaze B2 works, files written to B2 are always retrieved exactly as you originally wrote them. If a file ever changes from the time you’ve written it, say, due to bit flip errors, it will either be reproduced from a redundant copy of your file, or even mathematically reconstructed with erasure codes.

So the simplest, and certainly least expensive way to get bit flip protection for the content sitting on your hard drives is to simply have another copy on cloud storage.
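You can also detect bit rot yourself by recording a cryptographic checksum when you store a file and re-checking it later. Here is a minimal sketch using Python’s standard library (the file name and contents are hypothetical, purely for illustration):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the checksum at backup time (hypothetical file)...
original = Path(tempfile.gettempdir()) / "project_video.mov"
original.write_bytes(b"pretend this is video data")
stored_digest = sha256_of(original)

# ...and verify it later. Any flipped bit changes the digest completely.
if sha256_of(original) != stored_digest:
    print("Warning: file changed on disk -- restore from a good copy!")
else:
    print("Checksum matches: no corruption detected.")
```

Object storage systems like B2 do this kind of verification for you automatically; the point of the sketch is that the same detect-and-restore idea works at any scale.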

The Ideal Solution — Performance and Protection

With some thought, you can apply these protection steps to your environment and get the best of both worlds: the performance of your content on fast, local hard drives, and the protection of having a copy on object storage offsite with the ultimate data integrity.

The post The Shocking Truth — Managing for Hard Drive Failure and Data Corruption appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

12 Power Tips for Backing Up Business Data

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/12-power-tips-for-backing-up-business-data/

Business Backup Power Tips

In this, the fourth post in our Power Tips series, we provide some blazingly useful tips that we feel would benefit business users. Some of the tips apply to our Backblaze Business Backup product and some to B2 Cloud Storage.

Don’t miss our earlier posts on Power Tips for Backblaze Computer Backup, 12 B2 Power Tips for New Users, and 12 B2 Power Tips for Experts and Developers.

12 Power Tips for Business Users of Backblaze Business Backup and B2

1 Manage All Users of Backblaze Business Backup or B2

Backblaze Groups can be used for both Backblaze Business Backup and B2 to manage accounts and users. See the status of all accounts and produce reports using the admin console.

2 Restore For Free via Web or USB

Admins can restore data from endpoints using the web-based admin console. USB drives can be shipped worldwide to facilitate the management of a remote workforce.

3 Back Up Your VMs

Backblaze Business Backup can handle virtual machines, such as those created by Parallels, VMware Fusion, and VirtualBox; and B2 integrates with StarWind, OpenDedupe, and CloudBerry to back up enterprise-level VMs.

4 Mass Deploy Backblaze Remotely to Many Computers

Companies, organizations, schools, non-profits, and others can use the Backblaze Business Backup MSI installer, Jamf, Munki, and other tools to deploy Backblaze Computer Backup remotely across all their computers without any end-user interaction.

5 Save Money with Free Data Exchange with B2’s Compute Partners

Spin up compute applications with high speed and no egress charges using our partners Packet and Server Central.

6 Speed up Access to Your Content With Free Egress to Cloudflare

Backblaze offers free egress from B2 to Cloudflare’s content delivery network, speeding up access to your data worldwide.


7 Get Your Data Into the Cloud Fast

You can use Backblaze’s Fireball hard disk array to load large volumes of data without saturating your network. We ship a Fireball to you and once you load your data onto it, you ship it back to us and we load it directly into your B2 account.


8 Use Single Sign-On (SSO) and Two Factor Verification for Enhanced Security

Single sign-on (Google and Microsoft) improves security and speeds signing into your Backblaze account for authorized users. With Backblaze Business Backup, all data is automatically encrypted client-side prior to upload, protected during transfer, and stored encrypted in our secure data centers. Adding Two Factor Verification augments account safety with another layer of security.


9 Get Quick Answers to Your Backup Questions

Refer to an extensive library of FAQs, how-tos, and help articles for Business Backup and B2 in our online help library.


10 Application Keys Enable Controlled Sharing of Data for Users and Apps

Take control of your cloud data and share files or permit API access using configurable Backblaze application keys.


11 Manage Your Server Backups with CloudBerry MBS and B2

Automate and centrally manage server backups using CloudBerry Managed Backup Service (MBS) and B2. It’s easy to set up and once configured, you have a true set-it-and-forget-it backup solution in place.


12 Protect your NAS Data Using Built-in Sync Applications and B2

B2 is integrated with the leading tools and devices in the market for NAS backup. Native integrations from Synology, QNAP, FreeNAS, TrueNAS and more ensure that setups are simple and backups are automated.

Want to Learn More About Backblaze Business Backup and B2?

You can find more information on Backblaze Business Backup (including a free trial) on our website, and more tips about backing up in our help pages and in our Backup Guide.

The post 12 Power Tips for Backing Up Business Data appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

When Ransomware Strikes

Post Syndicated from Natasha Rabinov original https://www.backblaze.com/blog/how-to-deal-with-ransomware/

Ransomware Prevention & Survival

Does this sound familiar? An employee walks over with panic and confusion written all over their face. They approach holding their laptop and say that they’re not sure what happened. You open their computer to find that there is a single message displayed:

You want your files?
Your computer has been infected with ransomware and you will need to pay us to get them back.

They may not know what just happened, but the sinking feeling in your stomach has a name you know well. Your company has been hit with ransomware, which is, unfortunately, a growing trend. The business of ransomware is a booming one, bringing productivity and growth to a dead stop.

As ransomware attacks on businesses of all sizes increase, ransomware may prove to be the single biggest destructive force for business data, surpassing even hard drive failure as the leading cause of data loss.

When Ransomware Strikes

It’s a situation that most IT managers will face at some point in their careers. Per Security Magazine, “Eighty-six percent of Small to Medium Business (SMB) clients were recently victimized by ransomware.” In fact, it happened to us at Backblaze. Cybersecurity company Ice Cybersecurity published that ransomware attacks occur every 40 seconds (that’s over 2,000 times per day!). Coveware’s Ransomware Marketplace Report says that the average ransom cost has increased by 89% to $12,762, as compared to $6,733 in Q4 of 2018. The downtime resulting from ransomware is also on the rise. The average ransomware incident now lasts just over a week at 7.3 days, which should be factored in when calculating the true cost of ransomware. The estimated downtime cost per ransomware attack per company averaged $65,645. The increasing financial impact on businesses of all sizes has proven that the business of ransomware is booming, with no signs of slowing down.

How Has Ransomware Grown So Quickly?

Ransomware has taken advantage of multiple developments in technology, much like other high-growth industries. The first attacks occurred in 1989, with floppy disks distributed to organizations under the guise of raising money for AIDS research. Victims were asked to pay $189 to get their files back.

Since then, ransomware has grown significantly due to the advent of multiple facilitators. Sophisticated RSA encryption with ever-larger key sizes makes encrypted files more difficult to decrypt. Per the Carbon Black report, ransomware kits are now relatively easy to access on the dark web and cost only $10, on average. With cryptocurrency in place, payment is both virtually untraceable and irreversible. As recovery becomes more difficult, the cost to businesses rises alongside it. Per the Atlantic, ransomware now costs businesses more than $75 billion per year.

If Your Job is Protecting Company Data, What Happens After Your Ransomware Attack?

Isolate, Assess, Restore

Your first thought will probably be that you need to isolate any infected computers and get them off the network. Next, you may begin to assess the damage by determining the origins of the infected file and locating others that were affected. You can check our guide for recovering from ransomware or call in a specialized team to assist you. Once you prevent the malware from spreading, your thoughts will surely turn to the backup strategy you have in place. If you have used either a backup or sync solution to get your data offsite, you are more prepared than most. Unfortunately, even for this Eagle Scout level of preparedness, too often the backup solution hasn’t been tested against the exact scenario it’s needed for.

Both backup and sync solutions help get your data offsite. However, sync solutions vary greatly in their process for backup. Some require saving data to a specific folder. Others provide versions of files. Most offer varying pricing tiers for storage space. Backup solutions also have a multitude of features, some of which prove vital at the time of restore.

If you are in IT, you are constantly looking for points of failure. When it comes time to restore your data after a ransomware attack, three weak points immediately come to mind:

1. Your Security Breach Has Affected Your Backups

Redundancy is key in workflows. However, if you are syncing your data and get hit with ransomware on your local machine, your newly infected files will automatically sync to the cloud and thereby infect your backup set.

This can be mitigated with backup software that offers multiple versions of your files. Backup software, such as Backblaze Business Backup, saves your original file as is and creates a new backup file with every change made. If you accidentally delete a file or if your files are encrypted by ransomware and you are backed up with Backblaze Business Backup, you can simply restore a prior version of a file — one that has not been encrypted by the ransomware. The capability of your backup software to restore a prior version is the difference between usable and unusable data.
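The version-based recovery described above boils down to picking the newest version saved before the infection. Here is a minimal sketch; the version records and field names are hypothetical, not an actual Backblaze API:

```python
from datetime import datetime

def latest_clean_version(versions, infection_time):
    """Return the newest file version saved before the ransomware hit,
    or None if every version is already encrypted."""
    clean = [v for v in versions if v["timestamp"] < infection_time]
    return max(clean, key=lambda v: v["timestamp"], default=None)

versions = [
    {"id": "v1", "timestamp": datetime(2019, 6, 1)},
    {"id": "v2", "timestamp": datetime(2019, 6, 20)},
    {"id": "v3", "timestamp": datetime(2019, 7, 2)},  # encrypted by the attack
]
good = latest_clean_version(versions, infection_time=datetime(2019, 7, 1))
# good is the last version untouched by the ransomware
```

The key point is that the encrypted copy becomes just another version in the history, not a replacement for the clean one.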

2. Restoring Data will be Cumbersome and Time-Consuming

Depending on the size of your dataset, restoring from the cloud can be a drawn-out process. Moreover, for those who need to restore gigabytes of data, the restore process may prove not only lengthy, but also tedious.

Snapshots allow you to restore all of your data from a specific point in time. When dealing with ransomware, this capability is crucial. Without this functionality, each file needs to be rolled back individually to a prior version and downloaded one at a time. At Backblaze, you can easily create a snapshot of your data and archive those snapshots into cloud storage to give you the appropriate amount of time to recover.

You can download the files that your employees need immediately and request the rest of their data to be shipped to you overnight on a USB drive. You can then either keep the drive or send it back for a full refund.

3. All Critical Data Didn’t Get Backed Up

Unfortunately, human error is the second leading cause of data loss. As humans, we all make mistakes, and some of those may have a large impact on company data. Although there is no way to prevent employees from spilling drinks on computers or leaving laptops on planes, other mistakes are easier to avoid. Some solutions require users to save their data to a specific folder to enable backups. When thinking about the files on your average employee's desktop, are there any that may prove critical to your business? If so, they need to be backed up. Relying on those employees to change their work habits and save files only to specific, backed-up locations is neither the easiest nor the most reliable method of data protection.

In fact, it is the responsibility of the backup solution to protect business data, regardless of where the end user saves it. To that end, Backblaze backs up all user-generated data by default. The most effective backup solutions are ones that are easiest for the end users and require the least amount of user intervention.

Are you interested in assessing the risk to your business? Would you like to learn how to protect your business from ransomware? To better understand innovative ways that you can protect business data, we invite you to attend our Ransomware: Prevention and Survival webinar on July 17th. Join Steven Rahseparian, Chief Technical Officer at Ice CyberSecurity and industry expert on cybersecurity, to hear stories of ransomware and to learn how to take a proactive approach to protect your business data.

The post When Ransomware Strikes appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Vaults: Zettabyte-Scale Cloud Storage Architecture

Post Syndicated from Brian Beach original https://www.backblaze.com/blog/vault-cloud-storage-architecture/

A lot has changed in the four years since Brian Beach wrote a post announcing Backblaze Vaults, our software architecture for cloud data storage. Just looking at how the major statistics have changed, we now have over 100,000 hard drives in our data centers instead of the 41,000 mentioned in the original post's video. We have three data centers (soon four) instead of one. We're approaching one exabyte of data stored for our customers (almost seven times the 150 petabytes back then), and we've recovered over 41 billion files for our customers, up from the 10 billion in the 2015 post.

In the original post, we discussed having durability of seven nines. Shortly thereafter, it was upped to eight nines. In July of 2018, we took a deep dive into the calculation and found our durability closer to eleven nines (and went into detail on the calculations used to arrive at that number). And, as followers of our Hard Drive Stats reports will be interested in knowing, we’ve just started using our first 16 TB drives, which are twice the size of the biggest drives we used back at the time of this post — then a whopping eight TB.

We’ve updated the details here and there in the text from the original post that was published on our blog on March 11, 2015. We’ve left the original 135 comments intact, although some of them might be non sequiturs after the changes to the post. We trust that you will be able to sort out the old from the new and make sense of what’s changed. If not, please add a comment and we’ll be happy to address your questions.

— Editor

Storage Vaults form the core of Backblaze’s cloud services. Backblaze Vaults are not only incredibly durable, scalable, and performant, but they dramatically improve availability and operability, while still being incredibly cost-efficient at storing data. Back in 2009, we shared the design of the original Storage Pod hardware we developed; here we’ll share the architecture and approach of the cloud storage software that makes up a Backblaze Vault.

Backblaze Vault Architecture for Cloud Storage

The Vault design follows the overriding design principle that Backblaze has always followed: keep it simple. As with the Storage Pods themselves, the new Vault storage software relies on tried and true technologies used in a straightforward way to build a simple, reliable, and inexpensive system.

A Backblaze Vault is the combination of the Backblaze Vault cloud storage software and the Backblaze Storage Pod hardware.

Putting The Intelligence in the Software

Another design principle for Backblaze is to anticipate that all hardware will fail and build intelligence into our cloud storage management software so that customer data is protected from hardware failure. The original Storage Pod systems provided good protection for data and Vaults continue that tradition while adding another layer of protection. In addition to leveraging our low-cost Storage Pods, Vaults take advantage of the cost advantage of consumer-grade hard drives and cleanly handle their common failure modes.

Distributing Data Across 20 Storage Pods

A Backblaze Vault comprises 20 Storage Pods, with the data spread evenly across all 20 pods. Each Storage Pod in a given vault has the same number of drives, and the drives are all the same size.

Drives in the same drive position in each of the 20 Storage Pods are grouped together into a storage unit we call a tome. Each file is stored in one tome and is spread out across the tome for reliability and availability.

20 hard drives create one tome that shares parts of a file.

Every file uploaded to a Vault is divided into pieces before being stored. Each of those pieces is called a shard. Parity shards are computed to add redundancy, so that a file can be fetched from a vault even if some of the pieces are not available.

Each file is stored as 20 shards: 17 data shards and three parity shards. Because those shards are distributed across 20 Storage Pods, the Vault is resilient to the failure of a Storage Pod.

Files can be written to the Vault when one pod is down and still have two parity shards to protect the data. Even in the extreme and unlikely case where three Storage Pods in a Vault lose power, the files in the vault are still available because they can be reconstructed from the 17 pods that remain available.

Storing Shards

Each of the drives in a Vault has a standard Linux file system, ext4, on it. This is where the shards are stored. There are fancier file systems out there, but we don’t need them for Vaults. All that is needed is a way to write files to disk and read them back. Ext4 is good at handling power failure on a single drive cleanly without losing any files. It’s also good at storing lots of files on a single drive and providing efficient access to them.

Compared to a conventional RAID, we have swapped the layers here by putting the file systems under the replication. Usually, RAID puts the file system on top of the replication, which means that a file system corruption can lose data. With the file system below the replication, a Vault can recover from a file system corruption because a single corrupt file system can lose at most one shard of each file.

Creating Flexible and Optimized Reed-Solomon Erasure Coding

Just like RAID implementations, the Vault software uses Reed-Solomon erasure coding to create the parity shards. But, unlike Linux software RAID, which offers just one or two parity blocks, our Vault software allows for an arbitrary mix of data and parity. We are currently using 17 data shards plus three parity shards, but this could be changed on new vaults in the future with a simple configuration update.

Vault Row of Storage Pods

For Backblaze Vaults, we threw out the Linux RAID software we had been using and wrote a Reed-Solomon implementation from scratch, which we wrote about in Backblaze Open Sources Reed-Solomon Erasure Coding Source Code. It was exciting to be able to use our group theory and matrix algebra from college.

The beauty of Reed-Solomon is that we can then re-create the original file from any 17 of the shards. If one of the original data shards is unavailable, it can be re-computed from the other 16 original shards, plus one of the parity shards. Even if three of the original data shards are not available, they can be re-created from the other 17 data and parity shards. Matrix algebra is awesome!
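To make the "any 17 of 20" property concrete, here is a toy erasure-coding sketch. It is not Backblaze's implementation (the real Vault code works in the Galois field GF(2^8)); this version uses the prime field GF(257) and tiny parameters, three data values and five shards, purely to show that any k of n shards recover the original data:

```python
P = 257  # prime field large enough to hold byte values 0..255

def encode(data, n):
    """Treat the data values as polynomial coefficients; each shard is
    the polynomial evaluated at a distinct point x = 1..n."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shards, k):
    """Recover the k data values from ANY k surviving (x, value) shards
    by solving the Vandermonde system with Gauss-Jordan elimination mod P."""
    pts = shards[:k]
    A = [[pow(x, i, P) for i in range(k)] for x, _ in pts]
    b = [v for _, v in pts]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])   # find a pivot row
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = pow(A[col][col], P - 2, P)                     # modular inverse
        A[col] = [a * inv % P for a in A[col]]
        b[col] = b[col] * inv % P
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * c) % P for a, c in zip(A[r], A[col])]
                b[r] = (b[r] - f * b[col]) % P
    return b

data = [10, 20, 30]        # k = 3 data values
shards = encode(data, 5)   # n = 5 shards in total
# lose the first two shards; any 3 of the 5 still reconstruct the data
assert decode(shards[2:], 3) == data
```

Scaling the same idea to 17 data shards plus three parity shards is only a change of parameters; the recovery math is identical.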

Handling Drive Failures

The reason for distributing the data across multiple Storage Pods and using erasure coding to compute parity is to keep the data safe and available. How are different failures handled?

If a disk drive just up and dies, refusing to read or write any data, the Vault will continue to work. Data can be written to the other 19 drives in the tome, because the policy setting allows files to be written as long as there are two parity shards. All of the files that were on the dead drive are still available and can be read from the other 19 drives in the tome.

Building a Backblaze Vault Storage Pod

When a dead drive is replaced, the Vault software will automatically populate the new drive with the shards that should be there; they can be recomputed from the contents of the other 19 drives.

A Vault can lose up to three drives in the same tome at the same moment without losing any data, and the contents of the drives will be re-created when the drives are replaced.

Handling Data Corruption

Disk drives try hard to correctly return the data stored on them, but once in a while they return the wrong data, or are just unable to read a given sector.

Every shard stored in a Vault has a checksum, so that the software can tell if it has been corrupted. When that happens, the bad shard is recomputed from the other shards and then re-written to disk. Similarly, if a shard just can’t be read from a drive, it is recomputed and re-written.
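That detect-and-repair loop can be sketched in a few lines. This is illustrative only, not Backblaze's code, and SHA-1 here merely stands in for whatever checksum the Vault software actually uses:

```python
import hashlib
from typing import Optional

def checksum(shard: bytes) -> str:
    """Checksum recorded alongside the shard when it is written."""
    return hashlib.sha1(shard).hexdigest()

def read_shard(stored: bytes, expected: str) -> Optional[bytes]:
    """Return the shard only if it matches its recorded checksum.
    None signals corruption: the caller would recompute the shard from
    the other shards in the tome and rewrite it to disk."""
    return stored if checksum(stored) == expected else None

good = b"shard contents"
saved = checksum(good)
assert read_shard(good, saved) == good
assert read_shard(b"shard c0ntents", saved) is None  # bit flip detected
```

The checksum turns silent corruption into a detectable event, which is what lets the Vault repair bad shards instead of serving them.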

Conventional RAID can reconstruct a drive that dies, but does not deal well with corrupted data because it doesn’t checksum the data.

Scaling Horizontally

Each vault is assigned a number. We carefully designed the numbering scheme to allow for a lot of vaults to be deployed, and designed the management software to handle scaling up to that level in the Backblaze data centers.

The overall design scales very well because file uploads (and downloads) go straight to a vault, without having to go through a central point that could become a bottleneck.

There is an authority server that assigns incoming files to specific Vaults. Once that assignment has been made, the client then uploads data directly to the Vault. As the data center scales out and adds more Vaults, the capacity to handle incoming traffic keeps going up. This is horizontal scaling at its best.

We could deploy a new data center with 10,000 Vaults holding 16TB drives and it could accept uploads fast enough to reach its full capacity of 160 exabytes in about two months!

Backblaze Vault Benefits

The Backblaze Vault architecture has six benefits:

1. Extremely Durable

The Vault architecture is designed for 99.999999% (eight nines) annual durability (now 11 nines — Editor). At cloud-scale, you have to assume hard drives die on a regular basis, and we replace about 10 drives every day. We have published a variety of articles sharing our hard drive failure rates.

The beauty with Vaults is that not only does the software protect against hard drive failures, it also protects against the loss of entire Storage Pods or even entire racks. A single Vault can have three Storage Pods — a full 180 hard drives — die at the exact same moment without a single byte of data being lost or even becoming unavailable.

2. Infinitely Scalable

A Backblaze Vault comprises 20 Storage Pods, each with 60 disk drives, for a total of 1,200 drives. Depending on the size of the hard drive, each vault will hold:

12TB hard drives => 12.1 petabytes/vault (Deploying today.)
14TB hard drives => 14.2 petabytes/vault (Deploying today.)
16TB hard drives => 16.2 petabytes/vault (Small-scale testing.)
18TB hard drives => 18.2 petabytes/vault (Announced by WD & Toshiba)
20TB hard drives => 20.2 petabytes/vault (Announced by Seagate)
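The per-vault figures above follow from a short calculation: 20 pods times 60 drives, with 17 of every 20 shards holding data. A rough sketch (it lands slightly above the table's numbers because filesystem and formatting overhead are ignored):

```python
def vault_usable_pb(drive_tb: float, pods: int = 20, drives_per_pod: int = 60,
                    data_shards: int = 17, total_shards: int = 20) -> float:
    """Approximate usable capacity of one vault, in petabytes."""
    raw_tb = pods * drives_per_pod * drive_tb           # 1,200 drives of raw space
    usable_tb = raw_tb * data_shards / total_shards     # the other 3/20 is parity
    return usable_tb / 1000

# vault_usable_pb(12) is about 12.2 PB, close to the 12.1 PB quoted above
```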

Backblaze Data Center

At our current growth rate, Backblaze deploys one to three Vaults each month. As the growth rate increases, the deployment rate will also increase. We can incrementally add more storage by adding more and more Vaults. Without changing a line of code, the current implementation supports deploying 10,000 Vaults per location. That’s 160 exabytes of data in each location. The implementation also supports up to 1,000 locations, which enables storing a total of 160 zettabytes! (Also known as 160,000,000,000,000 GB.)

3. Always Available

Data backups have always been highly available: if a Storage Pod was in maintenance, the Backblaze online backup application would contact another Storage Pod to store data. Previously, however, if a Storage Pod was unavailable, some restores would pause. For large restores this was not an issue since the software would simply skip the Storage Pod that was unavailable, prepare the rest of the restore, and come back later. However, for individual file restores and remote access via the Backblaze iPhone and Android apps, it became increasingly important to have all data be highly available at all times.

The Backblaze Vault architecture enables both data backups and restores to be highly available.

With the Vault arrangement of 17 data shards plus three parity shards for each file, all of the data is available as long as 17 of the 20 Storage Pods in the Vault are available. This keeps the data available while allowing for normal maintenance and rare expected failures.

4. Highly Performant

The original Backblaze Storage Pods could individually accept 950 Mbps (megabits per second) of data for storage.

The new Vault pods have more overhead, because they must break each file into pieces, distribute the pieces across the local network to the other Storage Pods in the vault, and then write them to disk. In spite of this extra overhead, the Vault is able to achieve 1,000 Mbps of data arriving at each of the 20 pods.

Backblaze Vault Networking

This capacity required a new type of Storage Pod that could handle this volume. The net of this: a single Vault can accept a whopping 20 Gbps of data.

Because there is no central bottleneck, adding more Vaults linearly adds more bandwidth.

5. Operationally Easier

When Backblaze launched in 2008 with a single Storage Pod, many of the operational analyses (e.g. how to balance load) could be done on a simple spreadsheet and manual tasks (e.g. swapping a hard drive) could be done by a single person. As Backblaze grew to nearly 1,000 Storage Pods and over 40,000 hard drives, the systems we developed to streamline and operationalize the cloud storage became more and more advanced. However, because our system relied on Linux RAID, there were certain things we simply could not control.

With the new Vault software, we have direct access to all of the drives and can monitor their individual performance and any indications of upcoming failure. And, when those indications say that maintenance is needed, we can shut down one of the pods in the Vault without interrupting any service.

6. Astoundingly Cost Efficient

Even with all of these wonderful benefits that Backblaze Vaults provide, if they raised costs significantly, it would be nearly impossible for us to deploy them since we are committed to keeping our online backup service affordable for completely unlimited data. However, the Vault architecture is nearly cost neutral while providing all these benefits.

Backblaze Vault Cloud Storage

When we were running on Linux RAID, we used RAID6 over 15 drives: 13 data drives plus two parity. That’s 15.4% storage overhead for parity.

With Backblaze Vaults, we wanted to be able to do maintenance on one pod in a vault and still have it be fully available, both for reading and writing. And, for safety, we weren’t willing to have fewer than two parity shards for every file uploaded. Using 17 data plus three parity drives raises the storage overhead just a little bit, to 17.6%, but still gives us two parity drives even in the infrequent times when one of the pods is in maintenance. In the normal case when all 20 pods in the Vault are running, we have three parity drives, which adds even more reliability.
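Both overhead figures are simply the ratio of parity drives to data drives:

```python
def parity_overhead_pct(data_drives: int, parity_drives: int) -> float:
    """Storage overhead: parity space as a percentage of data space."""
    return round(parity_drives / data_drives * 100, 1)

# RAID6 as we ran it: 13 data + 2 parity  -> 15.4%
# Backblaze Vaults:   17 data + 3 parity  -> 17.6%
```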

Summary

Backblaze’s cloud storage Vaults deliver 99.999999% (eight nines) annual durability (now 11 nines — Editor), horizontal scalability, and 20 Gbps of per-Vault performance, while being operationally efficient and extremely cost effective. Driven from the same mindset that we brought to the storage market with Backblaze Storage Pods, Backblaze Vaults continue our singular focus of building the most cost-efficient cloud storage available anywhere.

•  •  •

Note: This post was updated from the original version posted on March 11, 2015.

The post Backblaze Vaults: Zettabyte-Scale Cloud Storage Architecture appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Out of Stock: How to Survive the LTO-8 Tape Shortage

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/how-to-survive-the-lto-8-tape-shortage/

Not Available - LTO-8 Tapes

Eighteen months ago, the few remaining LTO tape drive manufacturers announced the availability of LTO-8, the latest generation of the Linear Tape-Open storage technology. Yet today, almost no one is actually writing data to LTO-8 tapes. It’s not that people aren’t interested in upgrading to the denser LTO-8 format that offers 12 TB per cartridge, twice LTO-7’s six TB capacity. It’s simply that the two remaining LTO tape manufacturers are locked in a patent infringement battle. And that means LTO-8 tapes are off the market indefinitely.

The pain of this delay is most acute for media professionals, who are always quick to adopt higher capacity storage media for video and audio files that are notorious storage hogs. As cameras get more sophisticated, capturing higher resolutions and higher frame rates, the storage capacity required per hour of content shoots through the roof. For example, one hour of ProRes UltraHD requires 148.72 GB of storage, roughly four times the 37.35 GB required for one hour of ProRes HD-1080. Meanwhile, falling camera prices are encouraging production teams to use more cameras per shoot, further increasing the capacity requirements.

Since its founding, the LTO Consortium has prepared for storage growth by setting a goal of doubling tape density with each LTO generation and committing to release a new generation every two to three years. While this lofty goal might seem admirable to the LTO Consortium, it puts customers with earlier generations of LTO systems in a difficult position. New generation LTO drives can, at best, only read tapes from the two previous generations. So once a new generation is announced, the clock begins ticking on data stored on deprecated generations of tapes. Until you migrate the data to a newer generation, you're stuck maintaining older tape drive hardware that may no longer be supported by manufacturers.

How Manufacturer Lawsuits Led to the LTO-8 Shortage

How the industry and the market arrived in this painful place is a tangled tale. The lawsuit and counter-lawsuit that led to the LTO-8 shortage stem from a patent infringement dispute between Fujifilm and Sony, the only two remaining manufacturers of LTO tape media. The timeline is complicated, starting in 2016 with Fujifilm suing Sony, then Sony counter-suing Fujifilm. By March 2019, US import bans on LTO products from both manufacturers were in place.

In the middle of these legal battles, LTO-8 drive manufacturers announced product availability in late 2017. But what about the LTO-8 tapes? Fujifilm says it is not currently manufacturing LTO-8 tapes and has never sold them. And Sony says its US imports of LTO-8 have been stopped and won't comment on when shipments will resume while the dispute is ongoing. So no LTO-8 for you!

LTO-8 Ultrium Tape Cost

Note that having only two LTO tape manufacturers is a root cause of this shortage. If there were still six LTO tape manufacturers like there were when LTO was launched in 2000, a dispute between two vendors might not have left the market in the lurch.

Weighing Your Options — LTO-8 Shortage Survival Strategies

If you’re currently using LTO for backup or archive, you have a few options for weathering the LTO-8 shortage.

The first option is to keep using your current LTO generation and wait until the disputes settle out completely before upgrading to LTO-8. The downside here is you’ll have to buy more and more LTO-7 or LTO-6 tapes that don’t offer the capacity you probably need if you’re storing higher resolution video or other capacity-hogging formats. And while you’ll be spending more on tapes than if you were able to use the higher capacity newer generation tapes, you’ll also know that anything you write to old-gen LTO tapes will have to be migrated sooner than planned. LTO’s short two to three year generation cycle doesn’t leave time for legal battles, and remember, manufacturers guarantee at most two generations of backward compatibility.

A second option is to go ahead and buy an LTO-8 library and use LTO-7 tapes that have been specially formatted for higher capacity, called LTO Type M (M8). When initialized as Type M media, an LTO-7 cartridge can hold nine TB of data instead of the standard six TB of a cartridge initialized as Type A. That puts it halfway to the 12 TB capacity of an LTO-8 tape. However, this extra capacity comes with several caveats:

  • Only new, unused LTO-7 cartridges can be initialized as Type M.
  • Once initialized as Type M, they cannot be changed back to LTO-7 Type A.
  • Only LTO-8 drives in libraries can read and write to Type M, not standalone drives.
  • Future LTO generations — LTO-9, LTO-10, etc. — will not be able to read LTO-7 Type M.

So if you go with LTO-7 Type M for greater capacity, realize it’s still LTO-7, not LTO-8, and when you move to LTO-9, you won’t be able to read those tapes.
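One way to weigh these options is raw tape count. A quick sketch comparing how many cartridges a given archive needs at each native (uncompressed) capacity:

```python
import math

def cartridges_needed(archive_tb: float, cartridge_tb: float) -> int:
    """Number of tapes required to hold an archive at native capacity."""
    return math.ceil(archive_tb / cartridge_tb)

# A 100 TB archive:
# LTO-7 Type A (6 TB)  -> 17 tapes
# LTO-7 Type M (9 TB)  -> 12 tapes
# LTO-8 (12 TB)        -> 9 tapes
```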

LTO Cartridge Capacity (TB) vs. LTO Generation Chart

Managing Tape is Complicated

If your brain hurts reading this as much as mine does writing it, it's because managing tape is complicated. The devil is in the details, and it's hard to keep them all straight. When you have years or even decades of content stored on LTO tape, you have to keep track of which content is on which generation of LTO, ensure your facility has the drive hardware available to read them, and hope that nothing goes wrong with the tape media, the tape drives, or the libraries.

In general, new drives can read two generations back, but there are exceptions. For example, LTO-8 can't read LTO-6 because the standard changed from GMR (Giant Magneto-Resistance) heads to TMR (Tunnel Magnetoresistance) heads. The new TMR heads can write data more densely, which is what drives the huge increase in capacity. But it also means you'll want to keep an LTO-7 drive available to read LTO-5 and LTO-6 tapes.

Beyond these considerations for managing the tape storage long-term, there are the day-to-day hassles. If you’ve ever been personally responsible for managing backup and archive for your facility, you’ll know that it’s a labor-intensive, never-ending chore that takes time from your real job. And if your setup doesn’t allow users to retrieve data themselves, you’re effectively on-call to pull data off the tapes whenever it’s needed.

A Third Option — Migrate from LTO to Cloud Storage

If neither of these options to the LTO-8 crisis sounds appealing, there is an alternative: cloud storage. Cloud storage removes the complexity of tape while reducing costs. How much can you save in media and labor costs? We’ve calculated it for you in LTO Versus Cloud Storage Costs — the Math Revealed. And cloud storage makes it easy to give users access to files, either through direct access to the cloud bucket or through one of the integrated applications offered by our technology partners.

At Backblaze, we have a growing number of customers who shifted from tape to our B2 Cloud Storage and never looked back. Customers such as Austin City Limits, who preserved decades of historic concert footage by moving to B2; Fellowship Church, who eliminated Backup Thursdays and freed up staff for other tasks; and American Public Television, who adopted B2 in order to move away from tape distribution to its subscribers. What they’ve found is that B2 made operations simpler and their data more accessible without breaking their budget.

Another consideration: once you migrate your data to B2 cloud storage, you’ll never have to migrate again when LTO generations change or when the media ages. Backblaze takes care of making sure your data is safe and accessible on object storage, and migrates your data to newer disk technologies over time with no disruption to you or your users.

In the end, the problem with tape isn’t the media, it’s the complexity of managing it. It’s a well-known maxim that the time you spend managing how you do your work takes time away from what you do. Having to deal with multiple generations of both tape and tape drives is a good example of an overly complex system. With B2 Cloud Storage, you can get all the economical advantages of tape as well as the disaster recovery advantages of your data being stored away from your facility, without the complexity and the hassles.

With no end in sight to this LTO-8 shortage, now is a good time to make the move from LTO to B2. If you’re ready to start your move to always-available cloud storage, Backblaze and our partners are ready to help you.

Migrate or Die, a Webinar Series on Migrating Assets and Archives to the Cloud

If you’re facing challenges managing LTO and contemplating a move to the cloud, don’t miss Migrate or Die, our webinar series on migrating assets and archives to the cloud.

Migrate or Die: Evading Extinction -- Migrating Legacy Archives

The post Out of Stock: How to Survive the LTO-8 Tape Shortage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Profound Benefits of Cloud Collaboration for Business Users

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/cloud-collaboration-for-business-users/

The Profound Benefits of Cloud Collaboration for Business Users

Apple’s annual WWDC is highlighting high-end desktop computing, but it’s laptop computers and the cloud that are driving a new wave of business and creative collaboration

WWDC, Apple’s annual megaconference for developers kicks off this week, and Backblaze has team members on the ground to bring home insights and developments. Yet while everyone is drooling over the powerful new Mac Pro, we know that the majority of business users use a portable computer as their primary system for business and creative use.

The Rise of the Mobile, Always On, Portable Workstation

Analysts confirm this trend towards portable computers and the cloud. IDC’s 2019 Worldwide Quarterly Personal Computing Device Tracker report shows that desktop form-factor systems comprise only 22.6% of new systems, while laptops and portables are chosen almost twice as often, at 42.4%.

After all, these systems are extremely popular with users and the DevOps and IT teams that support them. Small and self-contained, with massive compute power, modern laptops have fast SSD drives and always-connected Wi-Fi, helping users be productive anywhere: in the field, on business trips, and at home. Surprisingly, companies today can deploy massive fleets of these notebooks with extremely lean staff. At the inaugural MacDevOps conference a few years ago Google’s team shared that they managed 65,000 Macs with a team of seven admins!

Laptop Backup is More Important Than Ever

With the trend towards leaner IT staffs, and the dangers of computers in the field being lost, dropped, or damaged, having a reliable backup system that just works is critical. Teams may keep shared documents and email in the cloud, but all of the other files on your laptop — the massive presentation due next week or the project that’s not quite ready to share on Google Drive — have no protection without backup, which is of course why Backblaze exists!

Cloud as a Shared Business Content Hub is Changing Everything

When a company is backing up users’ files comfortably to the cloud, the next natural step is to adopt cloud-based storage like Backblaze B2 for your teams. With over 750 petabytes of customer data under management, Backblaze has worked with businesses of every size as they adopt cloud storage. Each customer and business does so for different reasons.

In the past, a business department would typically get a share of a company’s NAS server and be asked to keep all of the department’s shared documents there. But it turns out these systems are hard to access remotely from outside the corporate firewall: they require VPNs and a constant network connection to mount a corporate shared drive via SMB or NFS. And, of course, running out of space and storing large files was an ever-present problem.

Sharing Business Content in the Cloud Can be Transformational for Businesses

When considering a move to cloud-based storage for your team, some benefits seem obvious, but others are more profound and show that cloud storage is emerging as a powerful, organizing platform for team collaboration.

Shifting to cloud storage delivers these well-known benefits:

  • Pay only for storage you actually need
  • Grow as large and as quickly as you might need
  • Service, management, and upgrades are built in to the service
  • Pay for service as you use it out of operating expenses vs. onerous capital expenses

But shifting to shared, cloud storage yields even more profound benefits:

Your Business Content is Easier to Organize and Manage: When your team’s content is in one place, it’s easier to organize and manage, and users can finally let go of stashing content all over your organization or leaving it on their laptops. All of your tools to mine and uncover your business’s content work more efficiently, and your users do as well.

You Get Simple Workflow Management Tools for Free: Storage can fit your business processes much more easily with cloud storage, and do it on the fly. If you ever need to set up separate storage for teams of users, or define read/write rules for specific buckets of content, it’s easy to configure with cloud storage.

You Can Replace External File-Sharing Tools: Since most email services balk at sending large files, it’s common to use a file-sharing service to share big files with other users on your team or outside your organization. Typically this means having to download a massive file, re-upload it to a file-sharing service, and publish that file-sharing link. When your files are already in the cloud, sharing one is as simple as retrieving its URL.

In fact, this is exactly how Backblaze organizes and serves PDF content on our website like customer case studies. When you click on a PDF link on the Backblaze website, it’s served directly from one of these links from a B2 bucket!

You Get Instant, Simple Policy Control over Your Business or Shared Content: B2 offers simple-to-use tools to keep every version of a file as it’s created, keep just the most recent version, or choose how many versions you require. Want to have your shared content links time-out after a day or so? This and more is all easily done from your B2 account page:

B2 Lifecycle Settings
An example of setting up shared link rules for a time-sensitive download: The file is available for 3 days, then deleted after 10 days
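For the curious, a B2 lifecycle rule is just a small JSON object. The field names below (`fileNamePrefix`, `daysFromUploadingToHiding`, `daysFromHidingToDeleting`) come from the B2 lifecycle rules documentation; the `downloads/` prefix is a made-up example. A rule matching the caption above, hiding a file 3 days after upload and deleting it 7 days after hiding (10 days from upload in total), would look like:

```python
import json

# A B2 lifecycle rule as a plain data structure. Field names follow the
# documented B2 lifecycle rules; the prefix is an invented example.
rule = {
    "fileNamePrefix": "downloads/",
    "daysFromUploadingToHiding": 3,   # the file stays visible for 3 days
    "daysFromHidingToDeleting": 7,    # then is deleted 7 days after hiding
}

# Total lifetime from upload to deletion: 10 days, matching the example above.
total_days = rule["daysFromUploadingToHiding"] + rule["daysFromHidingToDeleting"]
print(json.dumps([rule], indent=2))
```

A bucket can carry a list of such rules, each scoped by its file-name prefix, which is how the per-bucket version policies described above are expressed.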

You’re One Step Away from Sharing That Content Globally: As you can see, beyond individual file-sharing, cloud storage like Backblaze B2 can serve as your origin store for your entire website. With the emergence of content delivery networks (CDN), you’re now only a step away from sharing and serving your content globally.

To make this easier, Backblaze joined the Bandwidth Alliance, and offers no-cost egress from your content in Backblaze B2 to Cloudflare’s global content delivery network.

Customers that adopt this strategy can dramatically slash the cost of serving content to their users.

"The combination of Cloudflare and Backblaze B2 Cloud Storage saves Nodecraft almost 85% each month on the data storage and egress costs versus Amazon S3." - James Ross, Nodecraft Co-founder/CTO

Read the Nodecraft/Backblaze case study.

Get Sophisticated Content Discovery and Compliance Tools for Your Business Content: With more and more business content in cloud storage, finding the content you need quickly across millions of files, or surfacing content that needs special storage consideration (for GDPR or HIPAA compliance, for example) is critical.

Ideally, you could have your own private, customized search engine across all of your cloud content, and that’s exactly what a new class of solutions provide.

With Acembly or Aparavi on Backblaze, you can build content indexes and offer deep search across all of your content, and automatically apply policy rules for management and retention.

Where Are You in the Cloud Collaboration Trend?

The trend to mobile, always-on workers building and sharing ever more sophisticated content around cloud storage as a shared hub is only accelerating. Users love the freedom to create, collaborate and share content anywhere. Businesses love the benefits of having all of that content in an easily managed repository that makes their entire business more flexible and less expensive to operate.

So, while device manufacturers like Apple may announce exciting Pro level workstations, the need for companies and teams to collaborate and be effective on the move is an even more important and compelling issue than ever before. The cloud is an essential element of that trend that can’t be underestimated.

•  •  •

Upcoming Free Webinars

Wednesday, June 5, 10am PT
Learn how Nodecraft saved 85% on their cloud storage bill with Backblaze B2 and Cloudflare.
Join the Backblaze/Nodecraft webinar.

Thursday, June 13, 10am PT
Want to learn more about turning content in Backblaze B2 into searchable content with powerful policy rules?
Join the Backblaze/Aparavi webinar.

The post The Profound Benefits of Cloud Collaboration for Business Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

These Aren’t Your Ordinary Data Centers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/these-arent-your-ordinary-data-centers/

Barcelona Supercomputing Center

Many of us would concede that buildings housing data centers are generally pretty ordinary places. They’re often drab and bunker-like with few or no windows, and located in office parks or in rural areas. You usually don’t see signs out front announcing what they are, and, if you’re not in information technology, you might be hard pressed to guess what goes on inside.

If you’re observant, you might notice cooling towers for air conditioning and signs of heavy electrical usage as clues to their purpose. For most people, though, data centers go by unnoticed and out of mind. Data center managers like it that way, because the data stored in and passing through these data centers is the life’s blood of business, research, finance, and our modern, digital-based lives.

That’s why the exceptions to low-key and meh data centers are noteworthy. These unusual centers stand out for their design, their location, what the building was previously used for, or perhaps how they approach energy usage or cooling.

Let’s take a look at a handful of data centers that certainly are outside of the norm.

The Underwater Data Center

Microsoft’s rationale for putting a data center underwater makes sense. Most people live near water, they say, and their submersible data center is quick to deploy, and can take advantage of hydrokinetic energy for power and natural cooling.

Project Natick has produced an experimental, shipping-container-size prototype designed to process data workloads on the seafloor near Scotland’s Orkney Islands. It’s part of a years-long research effort to investigate manufacturing and operating environmentally sustainable, prepackaged datacenter units that can be ordered to size, rapidly deployed, and left to operate independently on the seafloor for years.

Microsoft's Project Natick
Microsoft’s Project Natick at the launch site in the city of Stromness on Orkney Island, Scotland on Sunday May 27, 2018. (Photography by Scott Eklund/Red Box Pictures)
Natick Brest
Microsoft’s Project Natick in Brest, France

The Supercomputing Center in a Former Catholic Church

One might be forgiven for mistaking Torre Girona for any normal church, but this deconsecrated 20th-century church currently houses the Barcelona Supercomputing Center, home of the MareNostrum supercomputer (Latin for “our sea,” the Roman name for the Mediterranean Sea). Part of the Polytechnic University of Catalonia, the supercomputer is used for a range of research projects, from climate change to cancer research, biomedicine, weather forecasting, and fusion energy simulations.

Torre Girona, a former Catholic church in Barcelona
Torre Girona, a former Catholic church in Barcelona
The Barcelona Supercomputing Center, home of the MareNostrum supercomputer
The Barcelona Supercomputing Center, home of the MareNostrum supercomputer

The Under-a-Mountain Bond Supervillain Data Center

Most data centers don’t have the extreme protection or history of the Bahnhof Data Center, which is located inside Pionen, an ultra-secure former nuclear bunker in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountain and secured behind 15.7 in. thick metal doors. It prides itself on its self-described Bond villain ambiance.

We previously wrote about this extraordinary data center in our post, The Challenges of Opening a Data Center — Part 1.

The Bahnhof Data Center under White Mountain in Stockholm, Sweden
The Bahnhof Data Center under White Mountain in Stockholm, Sweden

The Data Center That Can Survive a Class 5 Hurricane

Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand Category 5 hurricane winds.

The MI1 facility provides access for the Caribbean, South and Central America “to more than 148 countries worldwide,” and is the primary network exchange between Latin America and the U.S., according to Equinix. Any outage in this data center could potentially cripple businesses passing information between these locations.

The center was put to the test in 2017 when Hurricane Irma, a Category 5 hurricane in the Caribbean, made landfall in Florida as a Category 4 hurricane. The storm caused extensive damage in Miami-Dade County, but the Equinix center survived.

Equinix NAP of the Americas Data Center in Miami
Equinix NAP of the Americas Data Center in Miami

The Data Center Cooled by Glacier Water

Located on Norway’s west coast, the Lefdal Mine Datacenter is built 150 meters into a mountain in what was formerly an underground mine for excavating olivine, also known as the gemstone peridot, a green, high-density mineral used in steel production. The data center is powered exclusively by renewable energy produced locally, while being cooled by water from the second largest fjord in Norway, which is 565 meters deep and fed by the water from four glaciers. As it’s in a mine, the data center is located below sea level, eliminating the need for expensive high-capacity pumps to lift the fjord’s water to the cooling system’s heat exchangers, contributing to the center’s power efficiency.

The Lefdal Mine Data Center in Norway
The Lefdal Mine Datacenter in Norway

The World’s Largest Data Center

The Tahoe Reno 1 data center in The Citadel Campus in Northern Nevada, with 7.2 million square feet of data center space, is the world’s largest data center. It’s not only big, it’s powered by 100% renewable energy with up to 650 megawatts of power.

The Switch Core Campus in Nevada
The Switch Core Campus in Nevada
Tahoe Reno Switch Data Center
Tahoe Reno Switch Data Center

An Out of This World Data Center

If the cloud isn’t far enough above us to satisfy your data needs, Cloud Constellation Corporation plans to put your data into orbit. A constellation of eight low earth orbit (LEO) satellites, called SpaceBelt, will offer up to five petabytes of space-based secure data storage and services, and will use laser communication links between the satellites to transmit data between different locations on Earth.

CCC isn’t the only player talking about space-based data centers, but it is the only one so far with $100 million in funding to make its plan a reality.

Cloud Constellation's SpaceBelt
Cloud Constellation’s SpaceBelt

A Cloud Storage Company’s Modest Beginnings

OK, so our current data centers are not that unusual (with the possible exception of our now iconic Storage Pod design), but Backblaze wasn’t always the profitable and growing cloud services company that it is today. There was a time when Backblaze was just getting started, before we had almost an exabyte of customer data in storage, when we were figuring out how to make data storage work while keeping costs as low as possible for our customers.

The photo below is not exactly a data center, but it is the first data storage structure used by Backblaze to develop its storage infrastructure before going live with customer data. It was on the patio behind the Palo Alto apartment that Backblaze used for its first office.

Shed used for very early (pre-customer) data storage testing
Shed used for very early (pre-customer) data storage testing

The photos below (front and back) are of the very first data center cabinet that Backblaze filled with customer data. This was in 2009 in San Francisco, and just before we moved to a data center in Oakland where there was room to grow. Note the storage pod at the top of the cabinet. Yes, it’s made out of wood. (You have to start somewhere.)

Backblaze's first data storage cabinet to hold customer data (2009) (front)
Backblaze’s first data storage cabinet to hold customer data (2009) (front)
Backblaze's first data storage cabinet to hold customer data (2009) (back)
Backblaze’s first data storage cabinet to hold customer data (2009) (back)

Do You Know of Other Unusual Data Centers?

Do you know of another data center that should be on this list? Please tell us in the comments.

The post These Aren’t Your Ordinary Data Centers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Who We Are & What We Do

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/who-we-are-what-we-do/

Tina, Backblaze Director of Software Engineering

We recently celebrated our 12-year anniversary as a company (see our company timeline). We thought it’d be a great time to make a video showing who we are and what kind of company we’ve built.

In the video, we gave members of our team the opportunity to use their own words to describe what it’s like to work at Backblaze.

We’re still growing and we have openings in engineering, marketing, product management, devops, and operations. If, after viewing the video and reading over the job listings, you think there might be a fit for you, we’d love to have a conversation about joining the Backblaze family.

We hope you take a look at our video entitled, Who We Are & What We Do.

The post Who We Are & What We Do appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Migrating 23TB from Amazon S3 to Backblaze B2 in Just Seven Hours

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/migrating-23tb-from-amazon-s3-to-backblaze-b2-in-just-seven-hours/

flowchart of data transfer - Cloudflare - Bandwidth Alliance FTW! - Backblaze B2 Cloud Storage - Free Bandwidth - Nodecraft

Like many Backblaze customers, Nodecraft realized they could save a fortune by shifting their cloud storage to Backblaze and invest it elsewhere in growing their business. In this post that originally appeared on Nodecraft’s blog, Gregory R. Sudderth, Nodecraft’s Senior DevOps Engineer, shares the steps they took to first analyze, test, and then move that storage.
— Skip Levens

Overview

TL;DR: Nodecraft moved 23TB of customer backup files from AWS S3 to Backblaze B2 in just 7 hours.

Nodecraft.com is a multiplayer cloud platform where gamers can rent and use our servers to build and share unique online multiplayer servers with their friends and/or the public. In the course of server owners running their game servers, backups are generated, including the servers’ files, game backups, and other files. It goes without saying that backup reliability is important for server owners.

In November 2018, it became clear to us at Nodecraft that we could improve our costs if we re-examined our cloud backup strategy. After looking at the current offerings, we decided to move our backups from Amazon’s S3 to Backblaze’s B2 service. This article describes how our team approached the move, why, and what happened, specifically so we could share our experiences.

Benefits

With S3, B2, and many other providers being at least nearly equally* accessible, reliable, and available, our primary reason for moving our backups came down to pricing. As we got into the effort, other factors such as variety of API, quality of API, real-life workability, and customer service started to surface.

After looking at a wide variety of considerations, we decided on Backblaze’s B2 service. A big part of the costs of this operation is their bandwidth, which is amazingly affordable.

The price gap between the two object storage systems comes from the Bandwidth Alliance between Backblaze and Cloudflare, a group of providers that have agreed not to charge (or to heavily discount) “egress” fees for data moving between networks inside the alliance. We at Nodecraft use Cloudflare extensively, and so this left only the egress charges from Amazon to Cloudflare to worry about.

In normal operations, our customers both constantly make backups as well as access them for various purposes and there has been no change to their abilities to perform these operations compared to the previous provider.

Considerations

As with any change in providers, the change-over must be thought out with great attention to detail. When there were no quality issues previously, and circumstances allow a wide field of new providers to be considered, the final selection must be carefully evaluated. Our list of concerns included these:

  • Safety: we needed to move our files and ensure they remain intact, in a redundant way
  • Availability: the service must both be reliable but also widely available ** (which means we needed to “point” at the right file after its move, during the entire process of moving all the files: different companies have different strategies, one bucket, many buckets, regions, zones, etc)
  • API: we are experienced, so we are not crazy about proprietary file transfer tools
  • Speed: we needed to move the files in bulk and not brake on rate limitations

Each of these factors is good and important individually, but taken together they can add up to a significant service disruption: if things can move easily, quickly, and reliably, improper tuning could turn the operation into our own DDoS. We took thorough steps to make sure this wouldn’t happen, so an additional requirement was added:

Tuning: Don’t down your own services, or harm your neighbors

What this means to the lay person is “We have a lot of devices in our network, we can do this in parallel. If we do it at full-speed, we can make our multiple service providers not like us too much… maybe we should make this go at less than full speed.”

Important Parts

To embrace our own cloud processing capabilities, we knew we would have to take a two tier approach in both the Tactical (move a file) and Strategic (tell many nodes to move all the files) levels.

Strategic

Our goals here are simple: we want to move all the files, move them correctly, and only once, but also make sure operations can continue while the move happens. This is key because if we had used one computer to move the files, it would take months.

The first step to making this work in parallel was to build a small web service that queued a single target file at a time to each worker node. This service provided a locking mechanism so that the same file wouldn’t be moved twice, whether concurrently or later. The timer for the lock to expire (with an error message) was set to a couple of hours. This service was intended to be accessed via simple tools such as curl.

We deployed each worker node as a Docker container, spread across our Docker Swarm. Using the parameters in a docker stack file, we were able to define how many workers per node joined the task. This also ensured more expensive bandwidth regions like Asia Pacific didn’t join the worker pool.

Tactical

Nodecraft has multiple fleets of servers spanning multiple datacenters, and our plan was to use spare capacity on most of them to move the backup files. We have experienced a consistent pattern of access of our servers by our users in the various data centers across the world, and we knew there would be availability for our file moving purposes.

Our goals in this part of the operation are also simple, but have more steps:

  • Get the name/ID/URL of a file to move which…
    • locks the file, and…
    • starts the fail timer
  • Get the file info, including size
  • DOWNLOAD: Copy the file to the local node (without limiting the node’s network availability)
  • Verify the file (size, ZIP integrity, hash)
  • UPLOAD: Copy the file to the new service (again without impacting the node)
  • Report “done” with new ID/URL location information to the Strategic level, which…
    • …releases the lock in the web service, cancels the timer, and marks the file DONE
diagram of Nodecraft data migration from AWS S3 to Backblaze B2 Cloud Storage
Diagram illustrating how the S3 to B2 move was coordinated
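The verification step in the list above (size, ZIP integrity, hash) maps neatly onto Python’s standard library; `zipfile.testzip()` plays the role of the `unzip -T` check the author mentions, while the download and upload steps are left to whatever tooling you use. A sketch, with invented helper names:

```python
import hashlib
import os
import tempfile
import zipfile

def verify_backup(path, expected_size):
    """The verify step: check size and ZIP integrity, then return a hash.

    Returns a SHA-256 fingerprint on success, or None on failure.
    """
    if os.path.getsize(path) != expected_size:
        return None
    with zipfile.ZipFile(path) as z:
        # testzip() is the zipfile equivalent of `unzip -T`: it returns the
        # name of the first corrupt member, or None if the archive is intact.
        if z.testzip() is not None:
            return None
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Tiny self-contained demo: build an archive, then run the verify step on it.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "backup.zip")
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("world.dat", "game server backup data")
    digest = verify_backup(path, os.path.getsize(path))
    assert digest is not None
```

The fingerprint gives the worker something to report alongside the new URL, so the strategic level can spot-check that what landed in B2 matches what left S3.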

The Kill Switch

In the case of a potential runaway, where even the in-band Docker Swarm commands themselves might stop responding, we decided to make sure we had a kill switch handy. In our case, it was our intrepid little web service–we made sure we could pause it. Looking back, it would be better if it used a consumable resource, such as a counter or a value in a database cell. If we didn’t refresh the counter, it would stop all on its own. More on “runaways” later.
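The consumable-resource design the author wishes for, where work stops on its own unless a human keeps topping up a counter, is only a few lines. This is a sketch of the idea, not Nodecraft’s implementation:

```python
class DeadMansSwitch:
    """A consumable-resource kill switch.

    Workers may only proceed while an operator keeps refreshing the credit
    counter. If the operator walks away, or a runaway makes the control
    service unreachable, the budget drains to zero and the operation stops
    on its own; no pause command ever needs to reach the workers.
    """
    def __init__(self, initial_credits):
        self.credits = initial_credits

    def refresh(self, credits):
        self.credits = credits  # the human "feeds" the switch periodically

    def may_proceed(self):
        if self.credits <= 0:
            return False
        self.credits -= 1  # each file moved consumes one credit
        return True

switch = DeadMansSwitch(initial_credits=3)
moved = 0
while switch.may_proceed():
    moved += 1  # one file transfer would happen here
assert moved == 3  # with no refresh, everything stops by itself
```

The fail-safe is the point: a stuck pause button fails open, while an unrefreshed counter fails closed.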

Real Life Tuning

Our business has daily, weekly, and other cycles of activity that are predictable. Most important is our daily cycle, that trails after the Sun. We decided to use our nodes that were in low-activity areas to carry the work, and after testing, we found that if we tune correctly this doesn’t affect the relatively light loads of the servers in that low-activity region. This was backed up by verifying no change in customer service load using our metrics and those of our CRM tools. Back to tuning.

Initially we tuned the DOWN file transfer speed to the equivalent of 3/4ths of what wget(1) could do. We thought “oh, the network traffic to the node will fit in between this, so it’s ok.” This is mostly true, but only mostly. It’s a problem in two ways, and the cause of both is that isolated node tests are just that—isolated. When a large number of nodes in a datacenter are doing the actual production file transfers, there is a proportional impact that builds as the traffic is concentrated towards the egress point(s).

Problem 1: you are being a bad neighbor on the way to the egress points. You might say “well, we pay for network access, let’s use it,” but there’s only so much to go around: all the ports of a switch together have more bandwidth than its uplink ports, so there will be limits to hit.

Problem 2: you are being a bad neighbor to yourself. If your machines end up network-near to each other in a network-coordinates kind of way, your attempts to “use all that bandwidth we paid for” will be throttled at the closest choke point, impacting only or nearly only yourself. If you’re going to use most of the bandwidth you CAN use, you might as well be mindful of it and choose where the entire operation’s chokepoint will be. If you’re not cognizant of this concern, you can take down entire racks of your own equipment by choking the top-of-rack switch or other networking gear.

By reducing our 3/4ths-of-wget(1) tuning to 50% of what wget could do for a single file transfer, we saw our nodes still functioning properly. Your mileage will absolutely vary, and there’s hidden concerns in the details of how your nodes might or might not be near each other, and their impact on hardware in between them and the Internet.
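One way to implement that kind of per-transfer cap, independent of the download tool, is to sleep whenever the observed rate gets ahead of a byte budget. This sketch throttles a stream of chunks to a target bytes-per-second; the numbers are placeholders, not Nodecraft’s tuning:

```python
import time

def throttled_copy(chunks, write, max_bytes_per_sec):
    """Copy an iterable of byte chunks, sleeping to stay under a rate cap."""
    start = time.monotonic()
    sent = 0
    for chunk in chunks:
        write(chunk)
        sent += len(chunk)
        # If we're ahead of the byte budget, sleep until we're back on schedule.
        expected = sent / max_bytes_per_sec
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
    return sent

# Demo: 1 MiB in 64 KiB chunks, capped at 4 MiB/s, takes roughly a quarter second.
out = bytearray()
n = throttled_copy([b"x" * 65536] * 16, out.extend, 4 * 1024 * 1024)
assert n == 16 * 65536
```

Because the cap lives in the copy loop rather than in the network stack, the same knob works whether the node is pulling from S3 or pushing to B2.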

Old Habits

Perhaps this is an annoying detail: based on previous experience in life, I put in some delays. We scripted these tools up in Python, with a Bourne shell wrapper to detect failures (there were some), and also because for our upload step we ended up going against our DNA and used the Backblaze upload utility. By the way, it is multi-threaded and really fast. But in the wrapping shell script, as a matter of course, in the main loop that talked to our API, I put in a sleep 2 statement. This creates a small pause “at the top” between files.

This ended up being key, as we’ll see in a moment.

How It (The Service, Almost) All Went Down

What’s past is sometimes not prologue. Independent testing on a single node, or even a few nodes, was not totally instructive as to what would really happen as we throttled up the test. Now when I say “test,” I really mean “operation.”

Our initial testing was conducted “Tactically,” as above, for which we used test files and were very careful in the verification thereof. In general, we were sure that we could manage copying a file down (Python loop), verifying it (unzip -T), and operating the Backblaze b2 utility without getting into too much trouble…but it’s the Strategic level that taught us a few things.

Remembering back to a foggy past where “6% collisions on a 10BASE-T network and it’s game over”…yeah, that 6%. We throttled up the number of replicas in the Docker Swarm and didn’t have any problems. Good. “Alright.” Then we moved the throttle, so to speak, to the last detent.

We had nearly achieved self-DDoS.

It wasn’t all that bad, but, we were suddenly very, very happy with our 50%-of-wget(1) tuning, and our 2 second delays between transfers, and most of all, our kill switch.

Analysis

TL;DR — Things went great.

There were a couple files that just didn’t want to transfer (weren’t really there on S3, hmm). There were some DDoS alarms that tripped momentarily. There was a LOT of traffic…and, then, the bandwidth bill.

Your mileage may vary, but there are some things to think about with regard to your bandwidth bill. When I say “bill,” it’s actually a few bills.

diagram of bandwidth data flow savings switching away from AWS S3 to Backblaze B2 cloud storage
Cloudflare Bandwidth Alliance cost savings

As the diagram above shows, moving a file can trigger multiple bandwidth charges, especially as our customers began to download the files from B2 for instance deployment, etc. In our case, we now only had the S3 egress bill to worry about. Here’s why that works out:

  • We have group (node) discount bandwidth agreements with our providers
  • B2 is a member of the Bandwidth Alliance…
  • …and so is Cloudflare
  • We were accessing our S3 content through our (not free!) Cloudflare account’s public URLs, not the (private) S3 URLs.

Without revealing anything about our confidential arrangements with our service partners, the following are both generally true: you can talk to providers and sometimes work out reductions, and they especially like it when you call them (in advance) to discuss your plans to run their gear hard. For example, on another data move, one of the providers gave us a way to “mark” our traffic so it would go through a quiet-but-not-often-traveled part of their network; win-win!

Want More?

Thanks for your attention, and good luck with your own byte slinging.

by Gregory R. Sudderth
Nodecraft Senior DevOps Engineer

* Science is hard, blue keys on calculators are tricky, and we don’t have years to study things before doing them

Free Webinar
Nodecraft’s Data Migration From S3 to B2

Wednesday, June 5, 2019 at 10am PT
Cloud-Jitsu: Migrating 23TB from AWS S3 to Backblaze B2 in 7 hours

The post Migrating 23TB from Amazon S3 to Backblaze B2 in Just Seven Hours appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Survey Says: Cloud Storage Makes Strong Gains for Media & Entertainment

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/cloud-storage-makes-strong-gains-in-media-entertainment/

Survey Reveals Growing Adoption of Cloud Storage by Media & Entertainment

Where Does the Media Industry Really Use Cloud Storage?

Our new cloud survey results might surprise you.

Predicting which promising new technologies will be adopted quickly, which ones will take longer, and which ones will fade away is not always easy. When the iPhone was introduced in 2007, only 6% of the US population had smartphones. In less than 10 years, over 80% of Americans owned smartphones. In contrast, video telephone calls demonstrated at the 1964 New York World’s Fair only became commonplace 45 years later with the advent of FaceTime. And those flying cars people have dreamed of since the 1950s? Don’t hold your breath.

What about cloud storage? Who is adopting it today and for what purposes?

“While M&E professionals are not abandoning existing storage alternatives, they increasingly see the public cloud in storage applications as simply another professional tool to achieve their production, distribution, and archiving goals. For the future, that trend looks to continue as the public cloud takes on an even greater share of their overall storage requirements.”

— Phil Kurz, contributing editor, TV Technology

At Backblaze, we have a front-line view of how customers use the cloud for storage. And based on the media-oriented customers we’ve worked with directly to integrate cloud storage, we know they’re using cloud storage throughout the workflow: backing up files during content creation (UCSC Silicon Valley), managing production storage more efficiently (WunderVu), archiving historical content libraries (Austin City Limits), hosting media files for download (American Public Television), and even editing cloud-based video (Everwell).

We wanted to understand more about how the broader industry uses cloud storage and their beliefs and concerns about it, so we could better serve the needs of our current customers and anticipate what their needs will be in the future.

We decided to sponsor an in-depth survey with TV Technology, a media company that has been an authority on news, analysis, and trend reports for the media and entertainment industries for over 30 years. While TV Technology had conducted a similar survey in 2015, we thought it’d be interesting to see how the industry outlook has evolved. Based on our 2019 results, it certainly has. As a quick example, security was a concern for 71% of respondents in 2015. This year, only 38% selected security as an issue at all.

Survey Methodology — 246 Respondents and 15 Detailed Questions

For the survey, TV Technology queried 246 respondents, primarily from production and post-production studios and broadcasters, but also other market segments including corporate video, government, and education. See chart below for the breakdown. Respondents were asked 15 questions about their cloud storage usage today and in the future, and for what purpose. The survey queried what motivated their move to the cloud, their expectations for access times and cost, and any obstacles that are preventing further cloud adoption.

Types of businesses responding to survey

Survey Insights — Half Use Public Cloud Today — Cloud the Top Choice for Archive

Overall, the survey reveals growing cloud adoption for media organizations who want to improve production efficiency and to reduce costs. Key findings from the report include:

  • On the whole, about half of the respondents from all organization types are using public cloud services. Sixty-four percent of production/post studio respondents say they currently use the cloud. Broadcasters report lower adoption, with only 26 percent using the public cloud.
  • Achieving greater efficiency in production was cited by all respondents as the top reason for adopting the cloud. However, while this is also important to broadcasters, their top motivator for cloud use is cost containment or internal savings programs.
  • Cloud storage is clearly the top choice for archiving media assets, with 70 percent choosing the public cloud for active, deep, or very deep archive needs.
  • Concerns over the security of assets stored in a public cloud remain; however, they have eased greatly compared to the 2015 report, so much so that security is no longer the top obstacle to cloud adoption. For 40% of respondents, pricing has replaced security as the top concern.

These insights only scratch the surface of the survey’s findings, so we’re making the full 12-page report available to everyone. To get a deeper look and compare your experiences as a content creator or content owner with those of your peers, download and read Cloud Storage Technologies Establish Their Place Among Alternatives for Media today.

How are you using cloud storage today? How do you think that will change three years from now? Please tell us in the comments.

The post Survey Says: Cloud Storage Makes Strong Gains for Media & Entertainment appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Have Fun This Summer and Keep Your Data Safe, Too

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/protecting-your-data-when-traveling/

Man in hat taking goofy summer photos

If you’re like me, you can hardly wait for summer to be here. Summer is the time to get outdoors, go swimming, hang out with friends, and enjoy the weather. For many, it’s also a time for graduations, weddings, vacations, visiting family, and grilling in the backyard.

We’re likely to take more photos and go places we haven’t been before. And we take along all our portable gadgets, especially our cameras, phones, and digital music devices.

Unfortunately, being on the move means that the data on our digital devices is more susceptible to loss. We’re often not as careful backing up that data or even keeping track of the devices themselves. Perhaps you’ve had the sad experience of getting back home after a family reunion, company picnic, or vacation and discovering that your phone or camera didn’t make it all the way home with you.

With just a little planning and a few simple practices, you can be certain that your digital memories will last far beyond summer.

Keep All Those Summer Memories Safe

We don’t want you to miss out on all the great summer memories you’re going to create this year. Before summer is actually here, it’s good to review some tips to make sure that all those great memories you create will be with you for years to come.

Summer Data Backup Tips

Even if your devices are lost or stolen, you’ll be able to recover what was on them if you back them up during your trip. Don’t wait until you get home — do it regularly no matter where you are. It’s not hard to make sure your devices are backed up; you just need to take a few minutes to make a plan on how and when you’re going to back up your devices.

Have somewhere to put your backup data, either in the cloud or on a backup device that you can keep safe, give to someone else, or ship home.

If You Have Access to Wi-Fi
  • If your devices are internet-ready, you can back them up to the cloud directly whenever you’re connected.
  • If your devices aren’t internet-ready, you can back them up to a laptop computer and then back up that computer to the cloud.

Note: See Safety Tips for Using Wi-Fi on the Go, below.

If You Don’t Have Access to Wi-Fi

If you don’t have access to Wi-Fi, you can back up your devices to a USB thumb drive and carry that with you. If you put it in luggage, put it in a different piece of luggage than the one carrying your devices, or give it to a family member to put in their bag. To be extra safe, it’s easy and inexpensive to mail a thumb drive to yourself when you’re away from home. Some hotels will even do that for you.

Make Sure Your Devices Get Home With You

You want to be careful with your devices when you travel.

  • Use covers for your phone and cameras. They help protect them from physical damage and also discourage robbers who are attracted to shiny things. In any case, don’t flash around your nice mobile phone or expensive digital camera. Keep them out of sight when you’re not using them.
  • Don’t leave any of your digital devices unprotected in an airport security line, at a hotel, on a cafe or restaurant table, beside the pool, or in a handbag on the floor or hanging from a chair.
  • Be aware of your surroundings. Be especially cautious of anyone getting close to you in a crowd.
  • It seems silly to say, but keep your devices away from all forms of liquid.
  • If available, you can use a hotel room or front desk safe to protect your devices when you’re not using them.

Water and Tech Don’t Mix

I love being near or in the water, but did you know that water damage is the most common cause of damage to digital devices? We should be more careful around water, but it’s easy for accidents to happen. And in the summer they tend to happen even more.

Mobile phone in pool

Safety Tips for Using Wi-Fi on the Go

Public Wi-Fi networks are notorious for being places where nefarious individuals snoop on other computers to steal passwords and account information. You can avoid that possibility by following some easy tips.

  • Before you travel, change the passwords on the accounts you plan to use. Change them again when you get home. Don’t use the same password on different accounts or reuse a password you’ve used previously. Password managers, such as 1Password, LastPass, or Bitwarden, make handling your passwords easy.
  • Turn off sharing on your devices to prevent anyone from gaining access to them.
  • Turn off automatic connection to open Wi-Fi networks.
  • Don’t use the web to access your bank, financial institutions, or other important sites if you’re not 100% confident in the security of your internet connection.
  • If you do access a financial, shopping, or other high-risk site, make sure your connection is protected with Secure Sockets Layer (SSL), indicated by the HTTPS prefix in the URL. When you browse over HTTPS, people on the same Wi-Fi network as you can’t snoop on the data that travels between you and the website’s server. Most sites that ask for payment or confidential information use SSL. If they don’t, stay away.
  • If you can, set up a virtual private network (VPN) to protect your connection. A VPN routes your traffic through a secure network even on public Wi-Fi, giving you all the protection of your private network while still having the freedom of public Wi-Fi. This is something you should look into and set up before you go on a trip. Here are some tips for choosing a VPN.

Share the Knowledge About Keeping Data Safe

You might be savvy about all the above, but undoubtedly you have family members or friends who aren’t as knowledgeable. Why not share this post with someone you know who might benefit from these tips? To email this post to a friend, just click on the email social sharing icon to the left or at the bottom of this post. Or, you can just send an email containing this post’s URL, https://www.backblaze.com/blog/protecting-your-data-when-traveling.

And be sure to have a great summer!

The post How to Have Fun This Summer and Keep Your Data Safe, Too appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Welcome to Our New Blog

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/welcome-to-our-new-blog/

screenshot of the new Backblaze blog homepage

As we recently teased, we’ve been working on a new blog design and now it’s here. We invite you to kick the tires, take it for a spin, do a few donuts, and tell us what you think.

Our goals for the new design were pretty simple:

  1. Present a friendlier user interface
  2. Make it easier for the reader to find content related to what they’re reading
  3. Introduce the reader to content they might not know we wrote about
  4. Make everything work faster

Specifically, here’s what’s changed:

  • Overall: Faster, easier to navigate, more content to discover
    • Faster to load
    • Highly responsive for mobile visitors
    • Worldwide Cloudflare caching
    • Three-column grid layout
    • Smooth scroll back to top of page
  • New Home Page layout: A better introduction to the blog
    • New banner for desktop visitors
    • Featured post(s) at the top of the page
  • Category Pages: More information on categories
    • Optional featured post from that category at the top of the page
    • Category description at the top of the page
  • Tag Pages: More information on tags
    • Tag description is at the top of the page
  • Author Pages: Who is the author?
    • Author bio is at top
    • Author’s past posts
  • Post Pages: More information about the content you’re reading
    • Wider post text area
    • New sidebar highlights posts related to the post being read
    • Option to include other content & events related to the current post
    • Option to print post, if desired
  • Archives Page: A new way to discover content
    • Discover posts by:
      • Date
      • Category
      • Tag
      • Search
      • Byline

Our new blog is faster and more flexible, so we can change or add capabilities as we need them. We already have a few more items we’re planning to implement over the coming months.

Please tell us what you think of the new design and if you have any other enhancements you’d like to see.

The post Welcome to Our New Blog appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.