On-prem to Cloud, Faster: Meet Our Newest Fireball

Post Syndicated from Jeremy Milk original https://www.backblaze.com/blog/on-prem-to-cloud-faster-meet-our-newest-fireball/

We’re determined to make moving data into cloud storage as easy as possible for you, so today we are releasing the latest improvement to our data migration pathways: a bigger, faster Backblaze Fireball.

The new Fireball increases capacity for the rapid ingest service from 70TB to 96TB and connectivity speed from 1 Gb/s to 10 Gb/s so that businesses can move larger data sets and media libraries from on-premises to the Backblaze Storage Cloud faster than before.

What Hasn’t Changed

The service is still drop-dead simple. Data is secure and encrypted during the transfer process, and you gain the benefits of the cloud without having to navigate the constraints (and sluggishness) of internet bandwidth. We’re still happy to send you two, or three, or more Fireballs as needed—you can order whatever you need right from your Backblaze B2 Cloud Storage account. Easy.

How It Works

The customer favorite (of folks like Austin City Limits and Yoga International) service works like this: We ship you the Fireball, you copy on-premises data to it directly or through the transfer tool of your choice, you send the Fireball back to us, and we quickly upload your data into your B2 Cloud Storage account.

The Fireball is not right for everyone—organizations already storing data in public clouds now frequently use our cloud to cloud migration solution, while those with small, local data sets often find internet transfer tools more than sufficient. For a refresher, definitely check out this “Pathways to the Cloud” guide.

Don’t Be Afraid to Ask

However you’d like to join us, we’re here to help. So—shameless plug alert—please don’t hesitate to contact our Sales team to talk about how to best start saving with B2 Cloud Storage.


Finding a 1Up When Free Cloud Credits Run Out

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/finding-a-1up-when-free-cloud-credits-run-out/

For people in the early stages of development, a cloud storage provider that offers free credits might seem like a great deal. And diversified cloud providers do offer these kinds of promotions to help people get started with storing data: Google Cloud Free Tier and AWS Free Tier offer credits and services for a limited time, and both providers also run startup incentive funds, unlocked through incubators, that grant additional credits worth up to tens of thousands of dollars.

Before you run off to give them a try though, it’s important to consider the long-term realities that await you on the far side of these promotions.

The reality is that once they’re used up, budget items that were zeros yesterday can become massive problems tomorrow. Twitter is littered with stories of developers blindsided by an unexpected bill and the realization that they need to figure out how to navigate the complexities of their cloud provider—fast.

What to Do When You Run Out of Free Cloud Storage Credits

So, what do you do once you’re out of credits? You could try signing up with different emails to game the system, or look into joining a different incubator for more free credits. But if you expect your app to be around, and succeeding, for a few years, chasing credits isn’t scalable, and applying to another incubator would take too long. You could always switch from Google Cloud Platform to AWS to get free credits elsewhere, but transferring data between providers almost always incurs painful egress charges.

If you’re already sure about taking your data out of your current provider, read ahead to the section titled “Cloud to Cloud Migration” to learn how transferring your data can be easier and faster than you think.

Because chasing free credits won’t work forever, this post offers three paths for navigating your cloud bills after free tiers expire. It covers:

  • Staying with the same provider. Once you run out of free credits, you can optimize your storage instances and continue using (and paying for) the same provider.
  • Exploring multi-cloud options. You can port some of your data to another solution and take advantage of the freedom of a multi-cloud strategy.
  • Choosing another provider. You can transfer all of your data to a different cloud that better suits your needs.

Path 1: Stick With Your Current Cloud Provider

If you’re running out of promotional credits with your current provider, your first path is to simply continue using their storage services. Many people see this as the only option because of the frighteningly high egress fees you’d face if you tried to leave. If you choose to stay with the same provider, be sure to review and account for all of the instances you’ve spun up.

Here’s an example of a bill that one developer faced after their credits expired: This user found themselves locked into an unexpected $2,700 bill because of egress costs. Looking closer at their experience, the spike in charges was due to a data transfer of 30TB of data. The first 1GB of data transferred out is free, followed by egress costing $0.09 per gigabyte for the first 10TB and $0.085 per gigabyte for the next 40TB. Doing the math, that’s:

$0.090/GB x 10,239 GB = $921 (the first 10TB, less the free 1GB), plus $0.085/GB x 20,414 GB = $1,735 (the remainder), for a total of roughly $2,656
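
If you want to sanity check a bill like this, the tiered math is easy to script. Here’s a minimal sketch in Python, assuming the illustrative rates above (free first gigabyte, $0.09/GB for the next 10TB, $0.085/GB for the tier after that); actual price lists vary by provider and region:

```python
def egress_cost(total_gb):
    """Walk a transfer through each pricing tier in order."""
    tiers = [(1, 0.0), (10 * 1024, 0.09), (40 * 1024, 0.085)]  # (tier size GB, $/GB)
    cost, remaining = 0.0, total_gb
    for size_gb, rate in tiers:
        used = min(remaining, size_gb)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"${egress_cost(30 * 1024):,.2f}")  # a 30TB transfer: ~$2,662
```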

Choosing to stay with your current cloud provider is a straightforward path, but it’s not necessarily the easiest or least expensive option, which is why it’s important to conduct a thorough audit of the current cloud services you have in use to optimize your cloud spend.

Optimizing Your Current Cloud Storage Solution

Over time, cloud infrastructure tends to become more complex and varied, and your cloud storage bills follow the same pattern. Cloud pricing transparency in general is an issue with most diversified providers—in short: It’s hard to understand what you’re paying for, and when. If you haven’t seen a comparison yet, a breakdown contrasting storage providers is shared in this post.

Many users find that AWS and Google Cloud are so complex that they turn to services that can help them monitor and optimize their cloud spend. These cost management services typically charge a percentage of your cloud spend. For a startup with limited resources, paying for these professional services can be challenging, but manually predicting cloud costs and optimizing spending is also difficult and time-consuming.

The takeaway for sticking with your current provider: Be a budget hawk for every fee you may be at risk of incurring, and make sure your development practices don’t unwittingly rack up heavy fees.

Path 2: Take a Multi-cloud Approach

For some developers, switching to a different cloud after the free credits expire isn’t simple, because their code can’t be easily separated from their cloud provider. In this case, a multi-cloud approach can achieve the necessary price point while maintaining the required level of service.

Short term, you can mitigate your cloud bill by immediately beginning to port any data you generate going forward to a more affordable solution. Even if the process of migrating your existing data is challenging, this move will stop your current bill from ballooning.

Beyond mitigation, there are multiple benefits to using a multi-cloud solution. A multi-cloud strategy gives companies the freedom to use the best possible cloud service for each workload, among other advantages:

  • Redundancy: Some major providers have faced outages recently. A multi-cloud strategy allows you to have a backup of your data to continue serving your customers even if your primary cloud provider goes down.
  • Functionality: With so many providers introducing new features and services, it’s unlikely that a single cloud provider will meet all of your needs. With a multi-cloud approach, you can pick and choose the best services from each provider. Multinational companies can also optimize for their particular geographical regions.
  • Flexibility: A diverse cloud infrastructure helps you avoid vendor lock-in if you outgrow a single cloud provider.
  • Cost: You may find that one cloud provider offers a lower price for compute and another for storage. A multi-cloud strategy allows you to pick and choose which works best for your budget.

The takeaway for pursuing multi-cloud: It might not solve your existing bill, but it will mitigate your exposure to additional fees going forward. And it offers the side benefit of providing a best-of-breed approach to your development tech stack.

Path 3: Find a New Cloud Provider

Finally, you can choose to move all of your data to a different cloud storage provider. We recommend taking a long-term approach: Look for cloud storage that allows you to scale with the least amount of friction while continuing to support everything you need for a good customer experience in your app. You’ll want to consider cost, usability, and integrations when looking for a new provider.

Cost

Many cloud providers use a multi-tier approach, which can become complex as your business starts to scale its cloud infrastructure. Switching to a provider with single-tier pricing helps businesses planning for growth predict their cloud storage costs and optimize their spend, saving time and money for future opportunities. You can use this pricing calculator to check storage costs of Backblaze B2 Cloud Storage against AWS, Azure, and Google Cloud.

One example of a startup that saved money and grew its business by switching storage providers is CloudSpot, a SaaS photography platform. They had initially gotten their business off the ground with the help of a startup incubator. Then, in 2019, their AWS storage costs skyrocketed, but their team felt locked in to using Amazon.

When they looked at other cloud providers and eventually transferred their data out of AWS, they were able to save on storage costs that allowed them to reintroduce services they had previously been forced to shut down due to their AWS bill. Reviving these services made an immediate impact on customer acquisition and recurring revenue.

Usability

Time spent trying to navigate a complicated platform is a significant cost to business. Aiden Korotkin of AK Productions, a full-service video production company based in Washington, D.C., experienced this firsthand. Korotkin initially stored his client data in Google Cloud because the platform had offered him a promotional credit. When the credits ran out after about a year, he found himself frustrated with the inefficiency, privacy concerns, and overall complexity of Google Cloud.

Korotkin chose to switch to Backblaze B2 Cloud Storage with the help of solution engineers who worked out the best storage setup for his business. After quickly and seamlessly transferring his first 12TB in less than a day, he noticed a significant difference from Google Cloud. “If I had to estimate, I was spending between 30 minutes to an hour trying to figure out simple tasks on Google (e.g. setting up a new application key, or syncing to a third-party source). On Backblaze it literally takes me five minutes,” he emphasized.

Integrations

Workflow integrations can make cloud storage easier to use and provide additional features. By selecting multiple best-of-breed providers, you can achieve better functionality with significantly reduced price and complexity.

Content delivery network (CDN) partnerships with Cloudflare and Fastly allow developers using services like Backblaze B2 to take advantage of free egress between the two services. Game developers can serve their games to users without paying egress between their origin source and their CDN, and media management solutions can integrate directly with cloud storage to make media assets easy to find, sort, and pull into a new project or editing tool. Take a look at other solutions integrated with cloud storage that can support your workflows.

Cloud to Cloud Migration

After choosing a new cloud provider, you can plan your data migration. Your data may be spread out across multiple buckets, service providers, or different storage tiers—so your first task is discovering where your data is and what can and can’t move. Once you’re ready, there is a range of solutions for moving your data, but when it comes to moving between cloud services, a data migration tool like Flexify.IO can help make things a lot easier and faster.

Instead of manually offloading static and production data from your current cloud storage provider and reuploading it into your new provider, Flexify.IO reads the data from the source storage and writes it to the destination storage via inter-cloud bandwidth. Flexify.IO achieves fast and secure data migration at cloud-native speeds because the data transfer happens within the cloud environment.
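
At its core, this kind of migration is a server-side read-then-write loop that never touches your local network. Here’s a minimal sketch of the idea using Python and boto3 against two S3-compatible endpoints; the endpoints, buckets, and credentials are placeholders, and a production tool like Flexify.IO layers parallelism, retries, and verification on top of this:

```python
import boto3

# Placeholder endpoints and credentials; any two S3-compatible services work.
src = boto3.client("s3", endpoint_url="https://s3.source-cloud.example",
                   aws_access_key_id="SRC_KEY", aws_secret_access_key="SRC_SECRET")
dst = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com",
                   aws_access_key_id="DST_KEY", aws_secret_access_key="DST_SECRET")

# List every object in the source bucket and stream it to the destination.
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="source-bucket"):
    for obj in page.get("Contents", []):
        body = src.get_object(Bucket="source-bucket", Key=obj["Key"])["Body"]
        dst.upload_fileobj(body, "destination-bucket", obj["Key"])
```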

Supercharged Data Migration with Flexify.IO

For developers with customer-facing applications, it’s especially important that customers retain access to their data during the migration from one cloud provider to another. When CloudSpot moved about 700TB of data from AWS to Backblaze B2 in just six days with help from Flexify.IO, customers were still uploading images to their Amazon S3 buckets throughout. The migration process supported both environments in parallel and allowed CloudSpot to confirm everything worked properly. That mattered because downtime was out of the question: Customers access their data so frequently that one of CloudSpot’s galleries is accessed every one or two seconds.

What’s Next?

If you’re interested in exploring a different cloud storage service for your solution, you can easily sign up today, or contact us for more information on how to run a free POC or just to begin transferring your data out of your current cloud provider.


NAS 101: A Buyer’s Guide to the Features and Capacity You Need

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/nas-101-a-buyers-guide-to-the-features-and-capacity-you-need/

As your business grows, the amount of data that it needs to store and manage also grows. Storing this data on loose hard drives and individual workstations will no longer cut it: Your team needs ready data access, protection from loss, and capacity for future growth. The easiest way to provide all three is network attached storage (NAS).

You might have already considered buying a NAS device, or you may have purchased one that you’ve since outgrown, or this could be your first time looking at your options. No matter where you’re starting, the number of choices and features NAS systems offer today can be overwhelming, especially when you’re trying to buy something that will work both now and in the future.

This post aims to make your process a little easier. The following content will help you:

  • Review the benefits of a NAS system.
  • Navigate the options you’ll need to choose from.
  • Understand the reason to pair your NAS with cloud storage.

How Can NAS Benefit Your Business?

There are multiple benefits that a NAS system can provide to users on your network, but we’ll recap a few of the key advantages here.

  • More Storage. It’s a tad obvious, but the primary benefit of a NAS system is that it will provide a significant addition to your storage capacity if you’re relying on workstations and hard drives. NAS systems create a single storage volume from several drives (often arranged in a RAID scheme).
  • Protection From Data Loss. Less obvious, but equally important, the RAID configuration in a NAS system ensures that the data you store can survive the failure of one or more of its hard drives. Hard drives fail! NAS helps to make that statement of fact less scary.
  • Security and Speed. Beyond protection from drive failure, NAS also provides security for your data from outside actors as it is only accessible on your local office network and to user accounts which you can control. Not only that, but it generally works as fast as your local office network speeds.
  • Better Data Management Tools. Fully automated backups, deduplication, compression, and encryption are just a handful of the functions you can put to work on a NAS system—all of which make your data storage more efficient and secure. You can also configure sync workflows to ease collaboration for your team, enable services to manage your users and groups with directory services, and even add services like photo or media management.

If this all sounds useful for your business, read on to learn more about bringing these benefits in-house.

The Network Attached Storage (NAS) Buyer’s Guide

How do you evaluate the differences between NAS vendors? Or even within a single company’s product line? We’re here to help. This tour of the major components of a NAS system will help you develop a checklist for the sizing and features of a system that will fit your needs.

Choosing a NAS: The Components

How your NAS performs is dictated by the components that make up the system and by its capacity for future upgrades. Let’s walk through the different options.

NAS Storage Capacity: How Many Bays Do You Need?

One of the first ways to distinguish between different NAS systems is the number of drive bays a given system offers, as this determines how many disks the system can hold. Generally speaking, the larger the number of drive bays, the more storage you can provide to your users and the more flexibility you have around protecting your data from disk failure.

In a NAS system, storage is defined by the number of drives, the shared volume they create, and their striping scheme (e.g. RAID 0, 1, 5, 6, etc.). For example, one drive gives no additional performance or protection. Two drives allow the option of simple mirroring—referred to as RAID 1—in which one volume is built from two drives, allowing for the failure of one of those drives without data loss. Two drives also allow for striping—referred to as RAID 0—in which one volume is “stretched” across two drives, making a single, larger drive that also gives some performance improvement, but increases risk because the loss of one drive means that the entire volume will be unavailable.

Refresher: How Does RAID Work Again?
A redundant array of independent disks, or RAID, combines multiple hard drives into one or more storage volumes. RAID distributes data and parity (drive recovery information) across the drives in different ways, and each layout provides different degrees of data protection.
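
To make “parity” concrete, here’s a toy sketch of the XOR arithmetic behind single-parity layouts like RAID 5: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt from the survivors. (Real RAID works on disk blocks with rotating parity; this shows only the core idea.)

```python
# Toy XOR parity: the recovery arithmetic behind RAID 5-style layouts.
drive1 = bytes([0b10110100, 0b00001111])
drive2 = bytes([0b01101001, 0b11110000])
parity = bytes(a ^ b for a, b in zip(drive1, drive2))  # stored on a third drive

# If drive2 fails, XOR-ing the surviving drive with parity rebuilds it.
rebuilt = bytes(a ^ p for a, p in zip(drive1, parity))
assert rebuilt == drive2
```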

Three drives is the minimum for RAID 5, which can survive the loss of one drive, though four drives is a more common NAS system configuration. Four or more drives allow for RAID 6, which can survive the loss of two drives. Six to eight drives are very common NAS configurations that allow more storage space, performance, and even drive sparing—the ability to designate a stand-by drive to immediately rebuild a failed drive.

Many believe that, if you’re in the market for a NAS system with multiple bays, you should opt for capacity that allows for RAID 6 if possible. RAID 6 can survive the loss of two drives, and delivers performance nearly equal to RAID 5 with better protection.

It’s understandable to think: Why do I need to prepare in case two drives fail? Well, when a drive fails and you replace it with a fresh drive, the rebuilding process to restore that drive’s data and parity information can take a long time. Though it’s rare, it’s possible to have another drive fail during the rebuilding process. In that scenario, if you have RAID 6 you’re likely going to be okay. If you have RAID 5, you may have just lost data.

Buyer’s Note: Some systems are sold without drives. Should you buy NAS with or without drives? That decision usually boils down to the size and type of drives you’d like to have.

When buying a NAS system with drives provided:

  • The drives are usually covered by the manufacturer’s warranty as part of the complete system.
  • The drives are typically bought directly from the manufacturer’s supply chain and shipped directly from the hard drive manufacturer.

If you choose to buy drives separately from your NAS:

  • The drives may be a mix of drive production runs, and have been in the supply chain longer. Match the drive capacities and models for the most predictable performance across the RAID volume.
  • Choose drives rated for NAS systems—NAS vendors publish lists of supported drive types. Here’s a list from QNAP, for example.
  • Check the warranty and return procedures, and if you are moving a collection of older drives into your NAS, you may also consider how much of the warranty has already run out.

Buyer Takeaway: Choose a system that can support RAID 5 or RAID 6 to allow a combination of more storage space, performance, and drive failure protection. But be sure to check whether the NAS system is sold with or without drives.

Selecting Drive Capacity for the NAS: What Size of Drives Should You Buy?

You can quickly estimate how much storage you’ll need by adding up the hard drives and external drives of all the systems you’ll be backing up in your office, adding the amount of shared storage you’ll want to provide to your users, and factoring in any growth you project in demand for shared storage.

If you have any historical data under management from previous years, you can calculate a simple growth rate. But include a buffer, as data growth accelerates every year. Generally speaking, price out systems at two to four times the size of your existing data capacity. Let’s say that the hard drives and external drives you’ll back up, plus any additional shared storage you’d like to provide your users, add up to 20TB. Double that to 40TB to account for growth, then divide by a common hard drive size such as 10TB. With that in mind, you can start shopping for four-bay systems and larger.

Formula 1: ((Number of NAS Users x Hard Drive Size) + Shared Storage) x Growth Factor = NAS Storage Needed

Example: There are six users in an office that will each be backing up their 2TB workstations and laptops. The team will want to use another 6TB of shared storage for documents, images, and videos for everyone to use. Multiplied times a growth factor of two, you’d start shopping for NAS systems that offer at least 36TB of storage.

((Six users x 2TB each) + 6TB shared storage) x growth factor of two = 36TB

Formula 2: ((NAS Storage Needed / Hard Drive Size) + Two Parity Drives) = Drive Bays Needed

Example: Continuing the example above, when looking for a new NAS system using 12TB drives, accounting for two additional drives for RAID 6, you’d look for NAS systems that can support five or more drive bays of 12TB hard drives.

((36TB / 12TB) + two parity drives) = five drive bays and up
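
If you’d like to play with these numbers yourself, here’s a small sketch of both formulas; the inputs are from the example above, and the two extra drives assume a RAID 6 layout:

```python
import math

def nas_storage_needed(users, drive_tb, shared_tb, growth_factor=2):
    """Formula 1: ((users x drive size) + shared storage) x growth factor."""
    return (users * drive_tb + shared_tb) * growth_factor

def drive_bays_needed(storage_tb, nas_drive_tb, parity_drives=2):
    """Formula 2: (storage needed / drive size) + parity drives (for RAID 6)."""
    return math.ceil(storage_tb / nas_drive_tb) + parity_drives

needed = nas_storage_needed(users=6, drive_tb=2, shared_tb=6)  # 36 (TB)
print(needed, drive_bays_needed(needed, nas_drive_tb=12))      # 36 5
```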

If your budget allows, opting for larger drives and more drive bays will give you more storage overhead that you’ll surely grow into over time. Factor in, however, that if you go too big, you’re paying for unused storage space for a longer period of time. And if you use GAAP accounting, you’ll need to capitalize that investment over the same time window as a smaller NAS system, which will hit your bottom line on an annual basis. This is the classic CapEx vs. OpEx dilemma you can learn more about here.

If your cash budget is tight you can always purchase a NAS system with more bays but smaller drives, which will significantly reduce your upfront pricing. You can then replace those drives in the future with larger ones when you need them. Hard drive prices generally fall over time, so they will likely be less expensive in the future. You’ll end up purchasing two sets of drives over time, which will be less cash-intensive at the outset, but likely more expensive in the long run.

Similarly, you can partially fill the drive bays. If you want to get an eight bay system, but only have the budget for six drives, just add the other drives later. One of the best parts of NAS systems is the flexibility they allow you for right-sizing your shared storage approach.

Buyer Takeaway: Estimate how much storage you’ll need, add the amount of shared storage you’ll want to provide to your users, and factor in growing demand for shared storage—then balance long term growth potential against cash flow.

Processor, Controllers, and Memory: What Performance Levels Do You Require?

Is it better to have big onboard processors or controllers? The smaller, embedded chips common in smaller NAS systems provide basic functionality, but they might bog down when serving many users or crunching through the deduplication and encryption tasks that are options with many backup solutions. Larger NAS systems, typically mounted in IT data center racks, usually offer multiple storage controllers that can deliver the fastest performance and even failover capability.

  • Processor: Provides compute power for the system operation, services, and applications.
  • Controller: Manages the storage volume presentation and health.
  • Memory: Improves speed of applications and file serving performance.
  • ARM and Intel Atom chips are good for basic systems, while larger and more capable processors such as the Intel Core i3 and Core i5 are faster at NAS tasks like encryption, deduplication, and serving any on-board apps. Server-class Xeon chips can be found in many rack-mounted systems, too.

    So if you’re just looking for basic storage expansion, the entry-level systems with more modest, basic chips will likely suit you just fine. If deduplication, encryption, sync, and other functions many NAS systems offer as optional tools are part of your future workflow, this is one area where you shouldn’t cut corners.

    Adding memory modules to your NAS can be a simple performance upgrade.

    If you have the option to expand the system memory, this can be an easy performance upgrade. Generally, a higher ratio of memory to drives benefits both read and write performance and the speed of on-board applications.

    Buyer Takeaway: Entry-level NAS systems provide good basic functionality, but you should ensure your components are up to the challenge if you plan to make heavy use of deduplication, encryption, compression, and other functions.

    Network and Connections: What Capacity for Speed Do You Need?

    A basic NAS will have a Gigabit Ethernet connection, which you will often find listed as 1GigE. This throughput of 1 Gb/s on the network is equivalent to 125 MB/s coming from your storage system. The NAS must fit storage service to all of its users within that limit, which is usually not an issue when serving only a few users. Many systems offer expansion ports inside, allowing you to purchase a 10GigE network card later to upgrade your NAS.
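
    As a back-of-the-envelope check on those figures, here’s the conversion scripted out; the 10TB payload is just for illustration, and real links rarely sustain full line rate:

```python
def throughput_mb_s(gbit_per_s):
    """Convert a link speed in Gb/s to MB/s (8 bits per byte, decimal units)."""
    return gbit_per_s * 1000 / 8

for speed in (1, 2.5, 5, 10):
    mb_s = throughput_mb_s(speed)
    hours = 10_000_000 / mb_s / 3600  # 10TB is 10,000,000 MB in decimal units
    print(f"{speed:>4} Gb/s = {mb_s:,.0f} MB/s; 10TB in about {hours:.1f} hours")
```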

    An example of a small 10GigE add-in card that can boost your NAS network performance.

    Some NAS vendors offer 2.5 Gb/s or 5 Gb/s connections on their systems—these will give you more performance than 1GigE connections, but usually require that you get a compatible network switch and, possibly, USB adapters or expansion cards for every system that will connect to that NAS via the switch. If your office is already wired for 10GigE, make sure your NAS is also 10GigE. Otherwise, the more network ports in the back of the system, the better. If you aren’t ready to get a 10GigE-capable system now, but you think you might be in the future, select a system that has expansion capability.

    Some NAS systems offer not only multiple network ports, but faster connections as well, such as Thunderbolt™.

    Some systems also provide Thunderbolt connections in addition to Ethernet. These allow laptops and workstations with Thunderbolt ports to directly connect to the NAS and offer much higher bandwidth—up to 40 Gb/s (5 GB/s)—which is good for systems that need to edit large files directly on the NAS, as is often the case in video editing. If you’ll be directly connecting systems that need the fastest possible speeds, select a system with Thunderbolt ports, one per Thunderbolt-connected user.

    Buyer Takeaway: It’s best to have more network ports in the back of your system. Or, select a system with network expansion card capability.

    Caching and Hybrid Drive Features: How Fast Do You Need to Serve Files?

    Many of the higher-end NAS systems can complement standard 3.5” hard drives with higher performing, smaller form factor SSD or M.2 drives. These smaller, faster drives can dramatically improve the NAS file serving performance by caching the most recently or most frequently requested files. By combining these different types of drives, the NAS can deliver both improved file serving performance and large capacity.

    As the number of users you support in each office grows, these capabilities will become more important as a relatively simple way to boost performance. As we mentioned earlier, you can purchase a system with these slots unpopulated and add drives to them later.

    Buyer Takeaway: Combine different types of drives, like smaller form factor SSD or M.2 storage with 3.5” hard drives, to gain improved file serving performance.

    Operating System: What Kind of Management Features Do You Require?

    The NAS operating systems of the major vendors generally provide the same services in an OS-like interface delivered via an on-board web server. By simply typing in your NAS’s IP address, you can sign in and manage your system’s settings, create and manage the storage volumes, set up groups of users on your network who have access, configure and monitor backup and sync tasks, and more.

    If there are specific user management features in your IT environment that you need, or you want to test how a NAS OS works, you can try them out by spinning up a demonstration virtual machine offered by some NAS vendors. You can test service configuration and get a feel for the interface and tools, but obviously, as a virtual environment, you won’t be able to manage hardware directly.

    Buyer Takeaway: The on-board NAS OS looks similar to a Mac or PC operating system to make it easy to navigate system setup and maintenance and allows you to manage settings, storage, and tasks.

    Solutions: What Added Services Do You Require?

    While the onboard processor and memory on your NAS are primarily for file service, backup, and sync tasks, you can also install other solutions directly onto it. For instance, QNAP and Synology—two popular NAS providers—have app stores accessible from their management software where you can select applications to download and install on your NAS. You might be interested in a backup and sync solution such as Archiware, or CMS solutions like Joomla or WordPress.

    Applications available to install directly within some NAS vendors’ management system.

    However, beyond backup solutions, you’d benefit from installing mission-critical apps onto a dedicated system rather than on your NAS. For a small number of users, running applications directly on the NAS can be a good temporary use or a pathway to testing something out. But if the application becomes very busy, it could impact the other services of the NAS. Big picture, native apps on your NAS can be useful, but don’t overdo it.

    Buyer Takeaway: The main backup and sync apps from the major NAS vendors are excellent—give them a good test drive, but know that there are many excellent backup and sync solutions available as well.

    Why Adding Cloud Storage to Your NAS Offers Additional Benefits

    When you pair cloud storage with your NAS, you gain access to features that complement the security of your data and your ability to share files both locally and remotely.

    To start with, cloud storage provides off-site backup protection. This aligns your NAS setup with the industry standard for data protection: a 3-2-1 backup strategy, which calls for three copies of your data—the source data plus two backups, one of which lives on your NAS while the other is protected off-site. And in the event of data loss, you can restore your systems directly from the cloud even if all the systems in your office are knocked out or destroyed.

    While data sent to the cloud is encrypted in-flight via SSL, you can also encrypt your backups so that they can only be opened with your team’s encryption key. The cloud can also give you advanced storage options for your backup files like Write Once, Read Many (WORM) or immutability—making your data unchangeable for a defined period of time—or let you set custom data lifecycle rules at the bucket level to help match your ideal backup workflow.
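
    As an illustrative sketch of the WORM idea, here’s how setting an immutability window can look against an S3-compatible API using Python and boto3. The endpoint, bucket, key, and retention date are placeholders, and object lock must be enabled on the bucket for this to work:

```python
import boto3
from datetime import datetime, timezone

# Placeholder client; point it at any S3-compatible, object lock-enabled endpoint.
s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")

# Upload a backup that cannot be altered or deleted until the retention date.
with open("nas-backup-2021-01-31.tar", "rb") as f:
    s3.put_object(
        Bucket="nas-backups",
        Key="nas-backup-2021-01-31.tar",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime(2021, 12, 31, tzinfo=timezone.utc),
    )
```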

    Additionally, cloud storage provides valuable access to your data and documents from your NAS through sync capabilities. In case anyone on your team needs to access a file when they are away from the office, or as is more common now, in case your entire team is working from home, they’ll be able to access the files that have been synced to the cloud through your NAS’s secure sync program. You can even sync across multiple locations using the cloud as a two-way sync to quickly replicate data across locations. For employees collaborating across great distances, this helps to ensure they’re not waiting on the internet to deliver critical files: They’re already on-site.

    Refresher: What’s the Difference Between Cloud Sync, Cloud Backup, and Cloud Storage? Sync services allow multiple users across multiple devices to access the same file. Backup stores a copy of those files somewhere remote from your work environment, oftentimes in an off-site server—like cloud storage. It’s important to know that a “sync” is not a backup, but they can work well together when properly coordinated. You can read more about the differences in this blog post.

    Ready to Set Up Your NAS With Cloud Storage?

    To summarize, here are a few things to remember when shopping for a NAS system:

    • Consider how much storage you’ll need for both local backup and for shared user storage.
    • Look for a system with three to five drive bays at minimum.
    • Check that the NAS system is sold with drives—if not, you’ll have to source enough of the same size drives.
    • Opt for a system that lets you upgrade the memory and network options.
    • Choose a system that meets your needs today; you can always upgrade in the future.

    Coupled with cloud storage like Backblaze B2 Cloud Storage, which is already integrated with NAS systems from Synology and QNAP, you gain necessary backup protection and restoration from the cloud, as well as the capability to sync across locations.

    Have more questions about NAS features or how to implement a NAS system in your environment? Ask away in the comments.


    Backblaze Hard Drive Stats for 2020

    Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/

    In 2020, Backblaze added 39,792 hard drives and as of December 31, 2020 we had 165,530 drives under management. Of that number, there were 3,000 boot drives and 162,530 data drives. We will discuss the boot drives later in this report, but first we’ll focus on the hard drive failure rates for the data drive models in operation in our data centers as of the end of December. In addition, we’ll welcome back Western Digital to the farm and get a look at our nascent 16TB and 18TB drives. Along the way, we’ll share observations and insights on the data presented and as always, we look forward to you doing the same in the comments.

    2020 Hard Drive Failure Rates

    At the end of 2020, Backblaze was monitoring 162,530 hard drives used to store data. For our evaluation, we removed from consideration 231 drives that were either used for testing purposes or belonged to drive models for which we did not have at least 60 drives. This leaves us with 162,299 hard drives in 2020, as listed below.

    Observations

    The 231 drives not included in the list above were either used for testing or did not have at least 60 drives of the same model at any time during the year. The data for all drives, data drives, boot drives, etc., is available for download on the Hard Drive Test Data webpage.

    For drives with fewer than 250,000 drive days, any conclusions about drive failure rates are not justified: There is not enough data over the year-long period to reach them. We present the models with fewer than 250,000 drive days for completeness only.

    For drive models with over 250,000 drive days over the course of 2020, the Seagate 6TB drive (model: ST6000DX000) leads the way with a 0.23% annualized failure rate (AFR). This model was also the oldest, in average age, of all the drives listed. The 6TB Seagate model was followed closely by the perennial contenders from HGST: the 4TB drive (model: HMS5C4040ALE640) at 0.27%, the 4TB drive (model: HMS5C4040BLE640) at 0.27%, the 8TB drive (model: HUH728080ALE600) at 0.29%, and the 12TB drive (model: HUH721212ALE600) at 0.31%.

    The AFR for 2020 for all drive models was 0.93%, which was less than half the AFR for 2019. We’ll discuss that later in this report.

    What’s New for 2020

    We had a goal at the beginning of 2020 to diversify the number of drive models we qualified for use in our data centers. To that end, we qualified nine new drive models during the year, as shown below.

    Actually, there were two additional hard drive models which were new to our farm in 2020: the 16TB Seagate drive (model: ST16000NM005G) with 26 drives, and the 16TB Toshiba drive (model: MG08ACA16TA) with 40 drives. Both fell below our 60-drive threshold, so neither is listed.

    Drive Diversity

    The goal of qualifying additional drive models proved prescient in 2020, as the effects of Covid-19 began to creep into the world economy in March 2020. By that time, we were well on our way towards our goal, and while less of a creative solution than drive farming, drive model diversification was one of the tactics we used to manage our supply chain through the manufacturing and shipping delays prevalent in the first several months of the pandemic.

    Western Digital Returns

    The last time a Western Digital (WDC) drive model was listed in our report was Q2 2019. There are still three 6TB WDC drives in service and 261 WDC boot drives, but neither group is listed in our reports, so no WDC drives—until now. In Q4, a total of 6,002 of these 14TB drives (model: WUH721414ALE6L4) were installed and were operational as of December 31st.

    These drives obviously share their lineage with the HGST drives, but they report their manufacturer as WDC versus HGST. The model numbers are similar, with the first three characters changing from HUH to WUH and the last three characters changing from 604, for example, to 6L4. We don’t know the significance of that change; perhaps it denotes the factory location, a firmware version, or some other designation. If you know, let everyone know in the comments. As with all of the major drive manufacturers, the model number carries patterned information relating to each drive model and is not randomly generated, so the 6L4 string would appear to mean something useful.

    WDC is back with a splash, as the AFR for this drive model is just 0.16%—that’s with 6,002 drives installed, but only for 1.7 months on average. Still, with only one failure during that time, they are off to a great start. We are looking forward to seeing how they perform over the coming months.

    New Models From Seagate

    There are six Seagate drive models that were new to our farm in 2020. Five of these models are listed in the table above and one model had only 26 drives, so it was not listed. These drives ranged in size from 12TB to 18TB and were used for both migration replacements as well as new storage. As a group, they totaled 13,596 drives and amassed 1,783,166 drive days with just 46 failures for an AFR of 0.94%.

    Toshiba Delivers More Zeros

    The new Toshiba 14TB drive (model: MG07ACA14TA) and the new Toshiba 16TB drive (model: MG08ACA16TEY) were introduced to our data centers in 2020, and they are putting up zeros, as in zero failures. While each drive model has only been installed for about two months, they are off to a great start.

    Comparing Hard Drive Stats for 2018, 2019, and 2020

    The chart below compares the AFR for each of the last three years. The data for each year is inclusive of that year only and for the drive models present at the end of each year.

    The Annualized Failure Rate for 2020 Is Way Down

    The AFR for 2020 dropped below 1% down to 0.93%. In 2019, it stood at 1.89%. That’s over a 50% drop year over year. So why was the 2020 AFR so low? The answer: It was a group effort. To start, the older drives: 4TB, 6TB, 8TB, and 10TB drives as a group were significantly better in 2020, decreasing from a 1.35% AFR in 2019 to a 0.96% AFR in 2020. At the other end of the size spectrum, we added over 30,000 larger drives: 14TB, 16TB, and 18TB, which as a group recorded an AFR of 0.89% for 2020. Finally, the 12TB drives as a group had a 2020 AFR of 0.98%. In other words, whether a drive was old or new, or big or small, they performed well in our environment in 2020.

    Lifetime Hard Drive Stats

    The chart below shows the lifetime annualized failure rates of all of the drive models in production as of December 31, 2020.

    AFR and Confidence Intervals

    Confidence intervals give you a sense of the usefulness of the corresponding AFR value. A narrow confidence interval range is better than a wider range, with a very wide range meaning the corresponding AFR value is not statistically useful. For example, the confidence interval for the 18TB Seagate drives (model: ST18000NM000J) ranges from 1.5% to 45.8%. This is very wide, and one should conclude that the corresponding 12.54% AFR is not a true measure of the failure rate of this drive model. More data is needed. On the other hand, when we look at the 14TB Toshiba drive (model: MG07ACA14TA), the range is from 0.7% to 1.1%, which is fairly narrow, and our confidence in the 0.9% AFR is much more reasonable.
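
    For readers reproducing these figures from the raw data: AFR is failures per drive-year, expressed as a percentage, and an exact Poisson interval via the chi-squared distribution is one standard way to compute such a confidence range. A sketch under that assumption (we’re not claiming it’s the exact method used for the tables):

```python
from scipy.stats import chi2

def afr(failures, drive_days):
    """Annualized failure rate: failures per drive-year, as a percentage."""
    return failures / (drive_days / 365) * 100

def afr_interval(failures, drive_days, confidence=0.95):
    """Exact Poisson CI on the failure count, converted to AFR percentages."""
    alpha, drive_years = 1 - confidence, drive_days / 365
    low = chi2.ppf(alpha / 2, 2 * failures) / 2 if failures else 0.0
    high = chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / 2
    return low / drive_years * 100, high / drive_years * 100

print(round(afr(46, 1_783_166), 2))  # the new Seagate group above: 0.94
```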

    3,000 Boot Drives

    We always exclude boot drives from our reports, as their function is very different from a data drive’s. While it may not seem obvious, having 3,000 boot drives is a bit of a milestone: It means we have 3,000 Backblaze Storage Pods in operation as of December 31st. All of these Storage Pods are organized into Backblaze Vaults of 20 Storage Pods each, or 150 Backblaze Vaults in total.

    Over the last year or so, we moved from using hard drives to SSDs as boot drives. We have a little over 1,200 SSDs acting as boot drives today. We are validating the SMART and failure data we are collecting on these SSD boot drives. We’ll keep you posted if we have anything worth publishing.

    Are you interested in learning more about the trends in the 2020 drive stats? Join our upcoming webinar: “Backblaze Hard Drive Report: 2020 Year in Review Q&A” with drive stats author, Andy Klein, on February 3.

    The Hard Drive Stats Data

    The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.

    If you just want the summarized data used to create the tables and charts in this blog post you can download the ZIP file containing the CSV files for each chart.

    Good luck and let us know if you find anything interesting.


    Q&A: Developing for the Data Transfer Project at Facebook

    Post Syndicated from Jeremy Milk original https://www.backblaze.com/blog/qa-developing-for-the-data-transfer-project-at-facebook/


    In October of 2020, we announced that Facebook integrated Backblaze B2 Cloud Storage as a data transfer destination for their users’ photos and videos. This secure, encrypted service, based on code that Facebook developed with the open-source Data Transfer Project, allows users choices for how and where they manage or archive their media.

    We spoke with Umar Mustafa, the Facebook Partner Engineer who led the project, about his team’s role in the Data Transfer Project (DTP) and the development process in configuring the data portability feature for Backblaze B2 Cloud Storage using open-source code. Read on to learn about the challenges of developing data portability including security and privacy practices, coding with APIs, and the technical design of the project.

    Q: Can you tell us about the origin of Facebook’s data portability project?

    A: Over a decade ago, Facebook launched a portability tool that allowed people to download their information. Since then, we have been adding functionality for people to have more control over their data.

    In 2018, we joined the Data Transfer Project (DTP), which is an open-source effort by various companies, like Google, Microsoft, Twitter, and Apple, that aims to build products to allow people to easily transfer a copy of their data between services. The DTP tackles common problems like security, bandwidth limitations, and just the sheer inconvenience when it comes to moving large amounts of data.

    And so in connection with this project, we launched a tool in 2019 that lets people port their photos and videos. Google was the first destination and we have partnered with more companies since then, with Backblaze being the most recent one.

    Q: As you worked on this tool, did you have a sense for the type of Facebook customer that chooses to copy or transfer their photos and videos over to cloud storage?

    A: Yes, we thought of various ways that people could use the tool. Someone might want to try out a new app that manages photos or they might want to archive all the photos and videos they’ve posted over the years in a private cloud storage service.

    Q: Would you walk us through the choice to develop it using the open-source DTP code?

    A: In order to transfer data between two services, you’d typically use the API from the first service to read data, then transform it if necessary for the second service, and finally use the API from the second service to upload it. While this approach works, you can imagine that it requires a lot of effort every time you need to add a new source or destination. And an API change by any one service would force all its collaborators to make updates.

    The DTP solves these problems by offering an open-source data portability platform. It consists of standard data models and a set of service adapters. Companies can create their import and export adapters, or for services with a public API, anyone can contribute the adapters to the project. As long as two services have adapters available for a specific data type (e.g. photos), that data can be transferred between them.

    Being open-source also means anyone can try it out. It can be run locally using Docker, and can also be deployed easily in enterprise or cloud-based environments. At Facebook, we have a team that contributes to the project, and we encourage more people from the open-source community to join the effort. More details can be found about the project on GitHub.

    Integrating a new service as a destination or a source for an existing data type normally requires adding two types of extensions, an auth extension and a transfer extension. The open-source code is well organized, so you can find all available auth extensions under the extensions/auth module and all transfer extensions under the extensions/data-transfer module, which you can refer to for guidance.

    The auth extension only needs to be written once for a service and can be reused for each different data type that the service supports. Some common auth extensions, like OAuth, are already available in the project’s libraries folder and can be extended with very minimal code (mostly config). Alternatively, you can add your own auth extension as long as it implements the AuthServiceExtension interface.

    A transfer extension consists of import adapters and export adapters for a service, and each of them is for a single data type. You’ll find them organized by service and data type in the extensions/data-transfer module. In order to add one, you’ll have to add a similar package structure, and write your adapter by implementing the Importer<A extends AuthData, T extends DataModel> interface using the respective AuthData and DataModel classes for the adapter.

    For example, in Backblaze we created two import adapters, one for photos and one for videos. Each of them uses the TokenSecretAuthData containing the application key and secret. The photos importer uses the PhotosContainerResource as the DataModel and the videos importer uses the VideosContainerResource. Once you have the boilerplate code in place for the importer or exporter, you have to implement the required methods from the interface to get it working, using any relevant SDKs as you need. As Backblaze offers the Backblaze S3 Compatible APIs, we were able to use the AWS S3 SDK to implement the Backblaze adapters.
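
    Since Backblaze exposes S3 Compatible APIs, the import side boils down to authenticating with an application key pair and uploading objects. The DTP adapters themselves are Java, but here’s a rough Python analogy of that flow, with placeholder credentials, bucket, and file names:

```python
import boto3

# Placeholder application key pair, the values carried by TokenSecretAuthData.
b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",
    aws_access_key_id="APPLICATION_KEY_ID",
    aws_secret_access_key="APPLICATION_KEY",
)

# Importing one photo is just an object upload into the user's chosen bucket.
with open("photo.jpg", "rb") as f:
    b2.upload_fileobj(f, "transferred-media", "photos/photo.jpg")
```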

    There’s a well written integration guide for the project on GitHub that you can follow for further details about integrating with a new service or data type.

    Q: Why did you choose Backblaze as a storage endpoint?

    A: We want people to be able to choose where they want to take their data. Backblaze B2 is a cloud storage service of choice for many people and offers Backblaze S3 Compatible APIs for easy integration. We’re happy to see people using Backblaze to save a copy of their photos and videos.

    Q: Can you tell us about the comprehensive security and compliance review you conducted before locking in on Backblaze?

    A: Privacy and security are of the utmost importance for us at Facebook. When engaging with any partner, we check that they comply with certain standards. Some of the things that help us evaluate a partner include:

    • Information security policies.
    • Privacy policies.
    • Third-party security certifications, as available.

    We followed a similar approach to review the security and privacy practices that Backblaze follows, which are also demonstrated by various industry standard certifications.

    Q: Describe the process of coding to Backblaze, anything you particularly enjoyed? Anything you found different or challenging? Anything surprising?

    A: The integration for the data itself was easy to build. The Backblaze S3 Compatible APIs make coding the adapters pretty straightforward, and Backblaze has good documentation around that.

    The only difference between Backblaze and our other existing destinations was with authentication. Most adapters in the DTP use OAuth for authentication, where users log in to each service before initiating a transfer. Backblaze is different as it uses API keys-based authentication. This meant that we had to extend the UI in our tool to allow users to enter their application key details and wire that up as TokenSecretAuthData to the import adapters to transfer jobs securely.

    Q: What interested you in data portability?

    A: The concept of data portability sparked my interest once I began working at Facebook. Coincidentally, I had recently wondered if it would be possible to move my photos from one cloud backup service to another, and I was glad to discover a project at Facebook addressing the issue. More importantly, I felt that the problem it solves is important.

    Facebook is always looking for new ways to innovate, so it comes with an opportunity to potentially influence how data portability will be commonly used and perceived in the future.

    Q: What are the biggest challenges for DTP? It seems to be a pretty active project three years after launch. Given all the focus on it, what is it that keeps the challenge alive? What areas are particularly vexing for the project overall?

    A: One major challenge we’ve faced is around technical design—currently, the tool has to be deployed and run independently as a single instance to be able to make transfers. This has its advantages and disadvantages. On one hand, any entity or individual can run the project completely and enable transfers to any of the available services, as long as the respective credentials are available. On the other hand, in order to integrate a new service, you need to redeploy all the instances where you need that service.

    At the moment, Google has their own instance of the project deployed on their infrastructure, and at Facebook we have done the same, as well. This means that a well-working partnership model is required between services to offer the service to their respective users. As one of the maintainers of the project, we try to make this process as swift and hassle-free as possible for new partners.

    With more companies investing time in data portability, we’ve started to see increased improvements over the past few months. I’m sure we’ll see more destinations and data types offered soon.


    New Year, New Goals: Six Backup and Cloud Storage Tips for 2021

    Post Syndicated from Nicole Perry original https://www.backblaze.com/blog/new-year-new-goals-six-backup-and-cloud-storage-tips-for-2021/

    Are New Year’s resolutions still a thing after 2020? Given the way most of ours were blown out of the water in March of this past year, we’re not sure. At the least though, we learned that no matter our good intentions, the unexpected can still have its way with us. Thankfully we also learned new ways to plan and prepare (and we don’t mean buying 20 packs of toilet paper) to ensure that the unexpected isn’t quite as unpleasant as it might have been.

    With this post, we want to help ensure that data loss is one challenge you can take OFF your list of potential unpleasantness in 2021. By preparing for accidental deletions and computer crashes with a computer backup or cloud storage plan, you can shelve at least one uncertainty for the rest of 2021 and beyond.

    Best Practices for Starting Your Backup Plan

    With the holiday season (and the sales that come with it) coming to an end, you may have updated to a new computer or need to set up a computer for one of your family members. If so, you may have heard about the importance of backup and want to know how to set it up yourself. First thing to know: It’s super easy!

    To back up pictures and other files on your computer using a cloud backup system, you simply need to choose a service and install the software on your computer or laptop. Depending on what you choose, you may need to go through all of your files and folders and select what you’d like to protect. We’re partial to our own backup service, which backs up everything on your machine for you, so you don’t need to worry about anything getting missed. You won’t notice the Backblaze backup client is there, but it will store a backup of everything on your computer, and whenever you modify a file or add something, it will back that up, too. Other than ensuring your credit card is up to date and connecting to the internet long enough for it to upload data, you don’t need to do anything else to keep the service rolling.

    For many of us, accomplishing this first step is good enough to keep us feeling safe and sound for a long time. But if you’ve been reading about ransomware attacks, had a friend lose data, or you’ve ever lost data yourself, there are six more easy steps you can take to ensure MAXIMUM peace of mind going forward.

    Top Six Things to Keep in Mind When Monitoring Your Backup and Cloud Storage Strategy in 2021

    1. Lay Out Your Strategy.

    When you’re just starting out, or even later on in your computer backup journey, it’s a good idea to have a basic backup strategy. Here are three questions to help you establish one:

    What data needs to be backed up?

    “Everything” might be your answer, but it’s a little more complex than that. Do you want to preserve every version of every file? Do you have external hard drives with data on them? Do you want to back up your social profiles or other data that doesn’t live on your machine? Make sure you’re truly considering everything.

    How often should it be backed up?

    Important files should be backed up at minimum once a week, preferably once every 24 hours. If your data changes less frequently, then a periodic scheduled backup might be better for you. If you have older hard drives you don't use often, you might want to simply archive a backup of them, rather than plugging them in every time you get close to the edge of your version history.

    How should I continue to monitor my backup?

    It can be devastating to discover that your backup has been failing right at the moment you've lost your data. If your backup job has been running quietly for months, it's a good idea to check that it's actually doing its job. Testing your backup's restore feature lets you confirm that all the data you deem important will still be there when you need it most.


    2. Keep Data Security in Mind.

    At the end of 2019, we shared six New Year's resolutions to help protect your data, but we realize that some of your New Year's resolutions may have been deferred. So here's a little reminder that data security is always important! We'll keep it simple: If you take one security step in 2021, make it enabling two-factor authentication on all of your accounts.

    Two-factor authentication notifies you whenever someone tries to log in to your account and withholds access until the second identification code is entered. You can choose from many delivery options for the code, like an SMS text, a voicemail, or an application like Google Authenticator (we recommend the latter, as it's the most secure).

    Either way, two-factor authentication means that hackers will not only have to steal your credentials, they'll also have to get access to one of your personal devices. Needless to say, this greatly decreases the chances that your data will be compromised.

    3. Know Where Your Data Lives.

    Over the years, our data often becomes "scattered." Bits and pieces of it are strewn from place to place as we create new data on different platforms and services. Between new and old computers, multiple hard drives, sync services like Google Drive, and all of your social profiles, it's easy to lose track of where your most important data is when you need it, especially because many of these locations won't be covered by standard backup services.

    Mapping out where your data lives will help you to track what’s being stored off of your computer (like on a hard drive or USB), what’s being synced to the cloud, and what data is being backed up.

    Once you have an idea of where your data is, your backup strategy comes into play. If there are important files that are being synced or that live on a hard drive, you may want to think about moving those files to a device that is being backed up or to an archive. Once you do, you’ll never have to worry about them again!

    4. Consider Which Retention Span Fits Best for You.

    Backup retention—also known as data retention—is how long you would like your data to be archived. At Backblaze, you have three options for your data retention: 30 Days (the default), One Year, or Forever Version History. Picking between the three can feel tricky, but it really just depends on your needs. If you have a college student away at school for a year and want to make sure their data is retrievable in case of emergency (like a coffee spill on their computer in the library), then One Year may be the best option for you. If you are a writer who constantly needs to look back on past versions of material you have written, then Forever Version History may be the better fit.

    Any retention plan should work just fine as long as you are monitoring your backup and understand what data is still being retained.


    5. Test Your Restores.

    There’s an old saying that “Data is only as good as your last backup, and your backup is only as good as your ability to restore it.” When data loss occurs, the first question that comes to mind is, “Who is responsible for restoring those backups?” and the answer is simple: you are!

    Think of testing your restore as a fire drill. When you go through the steps to restore your data, you want to make sure that you know what the steps are, which files are backed up and available when you want to recover them, and what options you have for restoring your data. Testing your restore may also clue you in to potential holes in your backup that you can fix before it's too late.

    6. Archive Your Data.

    Backups are great for things you are actively using on your computer, but when you're done with a project or your computer starts underperforming due to the amount of data on it, you may want to think about archiving that data. In cloud storage and backup, an "archive" is a place to keep data for long-term storage. This frees up space so your computer can run at its best.

    Archives can be used for space management on your computer and for long-term retention. The original data may (or may not) be deleted after the archive copy is made and stored—it's up to you! You can always store another copy on a hard drive if you want to be extra careful.

    With our Backblaze B2 Cloud Storage product, you can create an archive of your data in various different ways. You can experiment with setting up your own archive by creating a B2 Cloud Storage Bucket within your Backblaze Computer Backup account. It’s easy (we even outlined a step by step process on how to do it), and more importantly, free: Your first 10GB of data stored are on us!

    These are some of the recommendations we have for making the most of your computer backup and cloud storage account. Even if you try just one of them, you're starting 2021 out right!

    The post New Year, New Goals: Six Backup and Cloud Storage Tips for 2021 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    2020 in the Rearview

    Post Syndicated from original https://www.backblaze.com/blog/2020-in-the-rearview/

    Looking Out for Our Team, Customers, and Community

    Writing a “year in review” for 2020 feels more than a little challenging. After all, it’s the first year in memory that became its own descriptor: The phrase “because 2020” has become the lead in or blanket explanation for just about any news story we never could have predicted at the beginning of this year.

    And yet, looking forward to 2021, I can’t help but feel hopeful when I think about what we did with these hard times. Families rediscovered ways to stay connected and celebrate, neighbors and communities strengthened their bonds and their empathy for one another, and all sorts of businesses and organizations reached well beyond any idea of normal operations to provide services and support despite wild headwinds. Healthcare professionals, grocery stores, poll workers, restaurants, teachers—the creativity and resilience shown in all they’ve accomplished in a matter of months is humbling. If we can do all of this and more in a year of unprecedented challenges, imagine what we can do when we’re no longer held back by a global pandemic?

    Looking closer to home, at the Backblaze community—some 190 employees, as well as their families and pets, and our hundreds of thousands of customers and partners around the world—I’m similarly hopeful. In the grand scheme of the pandemic, we were lucky. Most of our work, our services, and our customers’ work, can be accomplished remotely. And yet, I can’t help but be inspired by the stories from this year.

    There were Andrew Davis and Alex Acosta, two-thirds of the IT operations team at Gladstone Institutes—a leader in biomedical research that rapidly shifted many of its labs’ focus this year to studying the virus that causes COVID-19. After realizing their data was vulnerable, these two worked with our team to move petabytes of data off of tape and into the cloud, protecting all of it from ransomware and data loss.

    Research in process at Gladstone Institutes. Photo Credit: Gladstone Institutes.

    And then there were Cédric Pierre-Louis, Director of Programming for the African Fiction Channels at THEMA, and Gareth Howells, Director of Out Point Media, who worked with our friends at iconik to make collaboration and storytelling easier across the African Fiction Channels at THEMA—a Canal+ Group company that has more than 180 television channels in its portfolio. The creative collaboration that goes into TV might not rival the life-saving potential of Gladstone’s work, but I think everyone needed to escape through the power of media at some point this year.

    Members of the Backblaze team, connecting remotely.

    And if you had told me on March 7th—the day after we made the decision to shift Backblaze to almost 100% work from home status until the COVID-19 situation resolved—that the majority of our team would work for 10 more months (and counting) from our kitchens and attics and garages…and that we'd still launch the Backblaze S3 Compatible APIs, clear an exabyte of data under management, enable Cloud to Cloud Migration, and announce so many other solutions and partnerships, I'm not sure which part would have been harder to believe. But during a year when cloud storage and computer backup became increasingly important for businesses and individuals, I'm truly proud of the way our team stepped up to support and serve our customers.

    These are just a sampling of the hopeful stories from our year. There's no question that there are still challenges in our future, but tallying what we've been able to achieve while our Wi-Fi cut in and out, while our pets and children rampaged through the house, and while we swapped hard drives masked and six feet distant from our coworkers, there's little question in my mind that we can meet them. Until then, thanks for your good work, your business, and sticking with us, together, while apart.

    The post 2020 in the Rearview appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    The 2020 Top Ten(s)

    Post Syndicated from original https://www.backblaze.com/blog/the-2020-top-tens/

    Top 10 lists! You know them. You read them! You love them? As 2020 comes to an end and we look longingly at the new year ahead of us, I wanted to take a moment and look back at what you, our blog readers, have found amusing, entertaining, and informative over this past year.

    To do that, we looked at our analytics and picked out the top 10 most-viewed stories that we published in 2020. The results may not shock you, but they may entertain you, especially if you missed any of these the first time around. Without further ado, let’s jump into the results!

    Top 10 Backblaze Blog Posts From 2020

        1. 2019 Hard Drive Stats. It’s not surprising to see a year-end hard drive stats post in the first position. Readers show up for these posts in a big way and this one took a look at the entirety of 2019 as a year-end wrap up.
        2. The Complete Guide to Ransomware. With huge organizations like Foxconn, Kmart, many K-12 school districts, and hospitals being targeted by ransomware in recent years—and those attacks increasing—it’s no wonder that people are seeking to understand how to protect themselves.


        3. & 4. Q1 2020 Hard Drive Stats and Q2 2020 Hard Drive Stats. The quarterly drive stats set the stage for our popular yearly reviews and provide a “heartbeat” of how our spinning disks are doing throughout the year.
        5. A Beginner’s Guide to External Hard Drives. We took a look at some best practices for folks looking to increase their on-site storage capacity and how to make sure all that data is safe, as well. It would appear a lot of readers were onboarding new hard drives in 2020.
        6. Synology Backup Guide. Other readers already have a series of external hard drives connected to their PC, meaning the natural progression is getting a NAS system like Synology in place and making sure that it, too, is backed up.
        7. Q3 2020 Hard Drive Stats. Looking at how the stats are progressing, we find that even when some drive models have over 4,029 failures, their annualized failure rate can be below 3%—that’s scale!


        8. Backing Up Google Drive. Far be it from us to claim that we saw the future, but when we published this post in June it was a touch ahead of its time. A few months later, Google announced the end of their unlimited storage plan and as people looked for alternatives, this resource on downloading and backing up Google Drive information became invaluable.
        9. Backblaze S3 Compatible APIs. One of our biggest Backblaze B2 Cloud Storage releases for 2020 was the Backblaze S3 Compatible APIs suite. This launch cleared the way for a ton of new partner integrations, use cases, and happy cloud storage customers.
        10. Cloud Sync Is Not Backing Up. A common misconception is that someone is backed up if they only use iCloud, Google Drive, or Dropbox. Nothing could be further from the truth. In this post, we dig into the differences between cloud backup and cloud sync, why they’re both useful, and how to leverage both for maximum efficiency.

    The “Up-and-coming” Top Ten

    Looking at the top 10 list for 2020, we see a lot of series and subjects that are popular every year. This got us thinking: What about the stories that broke new ground? Posts that aren't hard drive stats and yet still drew an admirable number of readers? When we removed the big hitters, we found an alternative top ten that will appeal to anyone looking for some more in-depth solutions, some nice news, and answers to a few evergreen questions!

        1. What Is an Exabyte? What the heck is an exabyte anyway? We take a look at how much data that really is, and how it compares, on a cosmic level, to a gigabyte.
        2. Object vs. File vs. Block—A Cloud Storage Guide. The word “cloud” can sometimes feel amorphous. For readers just starting to look cloudwards, this post aims to help put a finer point on clouds! We take a look at the different types of cloud storage and how to most effectively use each.
        3. Duplicati + Backblaze. We love when B2 Cloud Storage gets integrated into popular apps, and Duplicati makes backing up data securely and easily from pretty much any system a piece of cake. No wonder it pairs so well with Backblaze B2!
        4. Metadata: Your File’s Hidden DNA. Metadata surrounds pretty much every digital thing we do on a day to day basis, but a lot of people don’t fully understand what it is or how it works. This post defines metadata and looks at how it helps programs keep track of the information about files for both humans and computers.

        5. Free Cloud Storage? What’s the Catch? There are a lot of “free” offers in the cloud storage marketplace positioned to help entrepreneurs get their application or website off the ground. In this post we go into some of the pitfalls that might come about when you take cloud storage providers up on an offer that might seem too good to be true.
        6. Computer Backup Version 7.0.1. We took some time at the beginning of the year to make some adjustments to our cloud backup software, improving performance and enhancing our Inherit Backup State feature to help folks avoid reuploading data if they switch computers!
        7. Exabyte Unlocked. In March, Backblaze crossed a data storage threshold that few other companies have achieved, storing over an exabyte of data for our customers, and we couldn’t be prouder.

        8. How to Wipe a Mac Hard Drive. As people get new computers and sell off their old hardware, sometimes they want to make sure that all of their data has been deleted from their computer (just make sure you have a backup first).
        9. Upgrading to an SSD. Once readers finish wiping their old drives, they often want something a bit more speedy. SSDs are dropping in price and getting more common, so this post gives you a few things to consider when upgrading.
        10. RAM vs. Storage. This post takes a look at one of the most commonly asked questions when people talk about gigabytes—“Do they mean RAM, or do they mean storage size?”—and what’s the difference between the two anyway?

    We love writing about the ins and outs of our industry, infrastructure, and the business in general, so it’s always fun to look back at what resonated with you over the past year. Was your favorite blog post not listed? Let us know in the comments below what resonated with you this year!

    The post The 2020 Top Ten(s) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Development Roadmap: Power Up Apps With Go Programming Language and Cloud Storage

    Post Syndicated from Skip Levens original https://www.backblaze.com/blog/development-roadmap-power-up-apps-with-go-programming-language-and-cloud-storage/

    If you build apps, you’ve probably considered working in Go. After all, the open-source language has become more popular with developers every year since its introduction. With a reputation for simplicity in meeting modern programming needs, it’s no surprise that GitHub lists it as the 10th most popular coding language out there. Docker, Kubernetes, rclone—all developed in Go.

    If you’re not using Go, this post will suggest a few reasons you might give it a shot in your next application, with a specific focus on another reason for its popularity: its ease of use in connecting to cloud storage—an increasingly important requirement as data storage and delivery becomes central to wide swaths of app development. With this in mind, the following content will also outline some basic and relatively straightforward steps to follow for building an app in Go and connecting it to cloud storage.

    But first, if you’re not at all familiar with this programming language, here’s a little more background to get you started.

    What Is Go?

    Go (sometimes referred to as Golang) is a modern coding language that can perform as well as low-level languages like C, yet is simpler to program and takes full advantage of modern processors. Similar to Python, it can meet many common programming needs and is extensible with a growing number of libraries. But that convenience doesn't come at the cost of speed: Applications written in Go compile to a binary that runs nearly as fast as programs written in C. It's also designed to take advantage of multiple cores and concurrency routines, compiles to machine code, and is generally regarded as being faster than Java.

    Why Use Go With Cloud Storage?

    No matter how fast or efficient your app is, how it interacts with storage is crucial. Every app needs to store content on some level. And even if you keep some of the data your app needs closer to your CPU operations, or on other storage temporarily, it still benefits you to use economical, active storage.

    Here are a few of the primary reasons why:

    • Massive amounts of user data. If your application allows users to upload data or documents, your eventual success will mean that storage requirements for the app will grow exponentially.
    • Application data. If your app generates data as a part of its operation, such as log files, or needs to store both large data sets and the results of compute runs on that data, connecting directly to cloud storage helps you to manage that flow over the long run.
    • Large data sets. Any app that needs to make sense of giant pools of unstructured data, like an app utilizing machine learning, will operate faster if the storage for those data sets is close to the application and readily available for retrieval.

    Generally speaking, active cloud storage is a key part of delivering ideal OpEx as your app scales. You’re able to ensure that as you grow, and your user or app data grows along with you, your need to invest in storage capacity won’t hamper your scale. You pay for exactly what you use as you use it.

    Whether you buy the argument here, or you’re just curious, it’s easy and free to test out adding this power and performance to your next project. Follow along below for a simple approach to get you started, then tell us what you think.

    How to Connect an App Written in Go With Cloud Storage

    Once you have your Go environment set up, you're ready to start building code in your $GOPATH directory. This example builds a Go app that connects to Backblaze B2 Cloud Storage using the AWS S3 SDK.

    Next, create a bucket to store content in. You can create buckets programmatically in your app later, but for now, create a bucket in the Backblaze B2 web interface, and make note of the associated server endpoint.

    Now, generate an application key for the tool, scope its access to the new bucket only, and make sure that "Allow listing all bucket names" is selected.


    Make note of the bucket's server connection and application key details. Then use a Go package, such as the popular godotenv, to make the configuration available to the app; it reads a hidden .env file from the app root.

    Create the .env file in the app root with your credentials:
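
    What goes in the file depends on your bucket. Here's a minimal sketch with placeholder names (ours, not official Backblaze variable names); the same names appear in the code sketches below:

        # .env -- all values are placeholders; substitute your own details
        B2_KEY_ID=your-application-key-id
        B2_APP_KEY=your-application-key
        B2_ENDPOINT=https://s3.us-west-002.backblazeb2.com
        B2_REGION=us-west-002
        B2_BUCKET=your-bucket-name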

    With configuration complete, build a package that connects to Backblaze B2 using the S3 API and S3 Go packages.

    First, import the needed modules; then create a new client and session that uses those credentials; and finally, write functions to upload, download, and delete files:
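
    Here is a minimal sketch of what that package might look like, assuming the AWS SDK for Go v1; the function names and environment variables are our own placeholders, not Backblaze's official sample code:

        // b2ops.go -- a minimal sketch, not production code.
        package main

        import (
            "fmt"
            "os"
            "path/filepath"

            "github.com/aws/aws-sdk-go/aws"
            "github.com/aws/aws-sdk-go/aws/credentials"
            "github.com/aws/aws-sdk-go/aws/session"
            "github.com/aws/aws-sdk-go/service/s3"
            "github.com/aws/aws-sdk-go/service/s3/s3manager"
        )

        // newB2Session builds a session against your bucket's S3 endpoint
        // using the credentials godotenv loaded into the environment.
        func newB2Session() (*session.Session, error) {
            return session.NewSession(&aws.Config{
                Endpoint: aws.String(os.Getenv("B2_ENDPOINT")),
                Region:   aws.String(os.Getenv("B2_REGION")),
                Credentials: credentials.NewStaticCredentials(
                    os.Getenv("B2_KEY_ID"), os.Getenv("B2_APP_KEY"), ""),
            })
        }

        // listResult prints how many objects the bucket currently holds.
        func listResult(sess *session.Session, bucket string) error {
            out, err := s3.New(sess).ListObjectsV2(&s3.ListObjectsV2Input{
                Bucket: aws.String(bucket),
            })
            if err != nil {
                return err
            }
            fmt.Printf("%d object(s) in %s\n", len(out.Contents), bucket)
            return nil
        }

        // uploadFile copies a local file into the bucket, keyed by its base name.
        func uploadFile(sess *session.Session, bucket, path string) error {
            f, err := os.Open(path)
            if err != nil {
                return err
            }
            defer f.Close()
            _, err = s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
                Bucket: aws.String(bucket),
                Key:    aws.String(filepath.Base(path)),
                Body:   f,
            })
            return err
        }

        // downloadFile fetches an object from the bucket into a local file.
        func downloadFile(sess *session.Session, bucket, key, dest string) error {
            f, err := os.Create(dest)
            if err != nil {
                return err
            }
            defer f.Close()
            _, err = s3manager.NewDownloader(sess).Download(f, &s3.GetObjectInput{
                Bucket: aws.String(bucket),
                Key:    aws.String(key),
            })
            return err
        }

        // deleteObject removes an object from the bucket.
        func deleteObject(sess *session.Session, bucket, key string) error {
            _, err := s3.New(sess).DeleteObject(&s3.DeleteObjectInput{
                Bucket: aws.String(bucket),
                Key:    aws.String(key),
            })
            return err
        }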

    Now, put it all to work to make sure everything performs.

    In the main test app, import the modules, including godotenv and the functions you wrote, then read in and reference your configuration. And now, time to exercise those functions and see files upload and download. An extraordinarily compact chunk of code is all you need to list, upload, download, and delete objects to and from local folders:
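
    A sketch of that driver program, building on the package above (file, path, and bucket names are placeholders):

        // backblaze_example_app.go -- a minimal driver for the sketch above.
        package main

        import (
            "log"
            "os"

            "github.com/joho/godotenv"
        )

        func main() {
            // Pull the .env configuration into the process environment.
            if err := godotenv.Load(); err != nil {
                log.Fatal("error loading .env file: ", err)
            }
            bucket := os.Getenv("B2_BUCKET")

            sess, err := newB2Session()
            if err != nil {
                log.Fatal(err)
            }

            // List (expecting zero objects), upload, list, download, delete, list.
            must(listResult(sess, bucket))
            must(uploadFile(sess, bucket, "dir_upload/hello.txt"))
            must(listResult(sess, bucket)) // the new object shows up here
            must(downloadFile(sess, bucket, "hello.txt", "dir_download/hello.txt"))
            must(deleteObject(sess, bucket, "hello.txt"))
            must(listResult(sess, bucket)) // and is gone again here
        }

        // must aborts the demo on the first error.
        func must(err error) {
            if err != nil {
                log.Fatal(err)
            }
        }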

    If you haven’t already, run go mod init to initialize the module dependencies, and run the app itself with go run backblaze_example_app.go.
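
    On a modern toolchain, the full sequence looks something like this (the module path is just an example):

        go mod init example.com/backblaze_example_app
        go mod tidy    # fetches the AWS SDK and godotenv
        go run .       # or: go run backblaze_example_app.go b2ops.go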

    Here, a listResult has been thrown in after each step with comments so that you can follow the progress as the app lists the number of objects in the bucket (in this case, zero), uploads your specified file from the dir_upload folder, then downloads it back down again to dir_download.

    Use another tool like rclone to list the bucket contents independently and verify the file was uploaded:
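
    For instance, assuming an rclone remote named "b2remote" is configured for your account (the remote and bucket names are placeholders), listing is one command:

        rclone ls b2remote:your-bucket-name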

    Or, of course, look in the Backblaze B2 web admin.

    And finally, look in the local system's dir_download folder to see the file you downloaded.

    With that—and code at https://github.com/GiantRavens/backblazeS3—you have enough to explore further, connect to Backblaze B2 buckets with the S3 API, list objects, pass in file names to upload, and more.

    Get Started With Go and Cloud Storage

    With your app written in Go and connected to cloud storage, you’re able to grow at hyperscale. Happy hunting!

    If you’ve already built an app with Go and have some feedback for us, we’d love to hear from you in the comments. And if it’s your first time writing in Go, let us know what you’d like to learn more about!

    The post Development Roadmap: Power Up Apps With Go Programming Language and Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Backblaze Holiday Gift Guide 2020

    Post Syndicated from original https://www.backblaze.com/blog/backblaze-holiday-gift-guide-2020/

    The gift giving-est time of year is almost here and at Backblaze, we’re always on the lookout for cool gizmos and gadgets to get our friends and family. (Or if we’re honest, ones that we’ve been eyeing ourselves but just can’t bring ourselves to buy.)

    Since this year has been so…unique, I polled my team members and asked what tools and toys they’ve been using or would love to receive. This year’s crop of gift ideas runs the gamut from egg cookers to firepits, but since we’re all spending more time at home, it’s no wonder that cooking gadgets, games, and smart home accessories were top of mind. Let’s dive in!

    Food and Food-related Goodies

    Instant Pot

    You have a ton of gadgets already, but what if you could cook things…instantly? Or, near instantly. I personally have been eyeing one of these for months, but need to use my slow cooker a few more times before finally expediting the process!

    Zojirushi Indoor Tabletop Grill

    Sure, you can make things instantly with an Instant Pot, but what about grilling indoors? Zojirushi has you covered.

    Wireless BBQ Thermometer

    Safety first! You’re cooking a lot more at home and maybe you’re not exactly sure what shade of pink your tenderloin is supposed to be. Well…better safe than sorry!

    Baking Steel

    It’s kind of like a pizza stone, but made of steel! Pizza steel!

    Egg Cooker

    If you’re the type of person who needs eggs in a hurry and doesn’t want to fiddle with them, this little gadget is an Amazon best seller.

    Mitsubishi Electric Bread Oven Toaster

    This bread oven is a bit pricey, but it’s also a bread oven that can make grilled cheeses. Think of the grilled cheesiness!

    Whiskey Decanter

    Not all decanters are created equal. This one has a boat in the middle of it. Perfect for the sailing aficionado in your life who also happens to enjoy whiskey.

    Games and Game Accessories

    Reaper Miniatures Learn to Paint Kit Core Skills

    Express yourself by learning to paint miniatures (and then use them to play tabletop games like Dungeons and Dragons)!

    Wacom Bamboo Folio Smartpad

    Another way to get the creative juices flowing is this smartpad! You can take notes or doodle on it, and then convert your work to shareable documents.

    DribbleUp

    Stay active! If you have a backyard or reasonable downstairs neighbors, we recommend the DribbleUp Smart Soccer Ball or Smart Basketball! These Smart Balls link to your smartphone and give you telemetry on how your skills are progressing.

    7 Wonders: Duel

    Stuck at home with your significant other and bored of playing War? Enjoy the prospect of outsmarting and besting your partner in a board game? Duel is designed specifically for two players. Have at 'em!

    Zombicide: Black Plague

    This game comes with miniatures that you can paint and play with—perfect for getting your skills up, and also there are zombies. Who could ask for more?

    Prusa MINI+

    Tired of using other people’s miniatures? Make your own! This powerful 3D printer can help you build the miniatures of your dreams, or build your own home gadgets!

    Ender 3 3D Printer

    If you’re curious about 3D printing but don’t want to go all-in from the beginning, this “entry level” 3D printer is a great way to get your hands dirty and give the 3D printing world a whirl!

    Oculus Quest

    Don’t want to build your own 3D world or want a break from the real one? Explore virtual worlds instead! You don’t need a computer, but you can plug this into your gaming rig for extra power!

    Home (Sometimes Smart)

    Nugget

    Did someone say next-gen pillow fort? Use this modular setup to build the fort of your dreams, or just a nice random lounging area to watch TV in comfort.

    HomePod mini

    Smart homes are definitely top of mind nowadays and the new HomePod mini is a great entry level gadget to help dip your toe into smart homing.

    Google Nest Mini

    If you’re not in the Apple ecosystem and want to give smart homes a try, the Google Nest Mini is a great inexpensive option to listen to music while in the shower (speaking from personal experience).

    Kasa Smart Wi-Fi Plug

    Smart plugs are a great way to turn on the lights in your house when you walk through the door and don’t like looking at pitch darkness. These tie in to your existing smart home devices, or use the app to control your plugs!

    Arlo Video Doorbell

    Smart doorbells are all the rage nowadays and if you aren’t keen on expensive monthly plans, this is a great entry point into the smart home security system space!

    Premier Comfort Electric Mattress Pad

    Now that winter is upon us (at least here in the U.S.), treat yourself to an electric mattress pad and stay toasty and cozy no matter what’s going on outside. This one’s also compatible with your smart home and plugs!

    Gaiam Yoga Mat

    Don’t forget to make time for yourself and get a little stretch in throughout the day. These yoga mats are affordable and durable!

    Massage Gun

    After working out you might feel a little sore. Massage guns are all the rage right now and they do a heck of a job getting all of your knots loosened up. The myriad of attachments will help you get into all your nooks and crannies!

    For the Person Who Has Everything

    Personalized AirPod Case

    There are a lot of Etsy shops out there that can help you personalize things, so browse around! These AirPod cases are pretty stellar!

    Baby Yoda Christmas Sweater

    ‘Tis the Baby Yoda season (The Child has been named!) and what better way to celebrate than buying everyone in your family matching sweaters!

    Oaktown Spices

    Buy local! If you have a favorite local shop, be sure to show them some love this holiday season! If you want to support some from around the country, here’s a great spice shop in Oakland, California—and their gift boxes are the cutest.

    Bookshop.org

    Bookstores are getting hit pretty hard right now and if your local store is closed, this online bookshop donates proceeds to local bookstores—combining the convenience of online shopping with the good feeling of buying from a local store. They can also help you locate a local bookstore that’s open near you! Some books we recommend: “The 7 ½ Deaths of Evelyn Hardcastle,” “Rhythm of War,” “Designing Your Life: How to Build a Well-Lived, Joyful Life,” “The Essential Calvin and Hobbes: A Calvin and Hobbes Treasury,” “Cozy” (for kids), and “The Hike” (also for kids).

    Death Star Fire Pit

    OK, this thing is $2,400 but… Look at it!

    Backblaze

    And, of course… Backblaze.
    You know it. You love it. You can gift it!

    We wish you a healthy, happy, and hopeful new year! Maybe this list helped you on your gifting journey, or maybe it just added more items to your own wish list. What are you looking forward to gifting and receiving this year? Let us know in the comments below!

    The post Backblaze Holiday Gift Guide 2020 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Churn Analysis: Go From Churning to Learning

    Post Syndicated from Nicole Perry original https://www.backblaze.com/blog/churn-analysis-go-from-churning-to-learning/

    Ever wonder if your feedback is heard when you tell a company why you are cancelling your subscription? Well, at Backblaze, customer feedback isn’t just heard—it’s read, considered, and used to improve the product over time.

    Most companies seek to understand why customers leave by setting up a formulaic poll: a multiple-choice list of common reasons for departure. We decided to manage this process a little differently by giving customers who decide they no longer want to use Backblaze Computer Backup an open forum.

    This format allows people to be specific about their reasoning, and in some cases to vent about their frustrations. By sifting through these responses and grouping them under common causes, we gain insights into the customer experience that allow us to create a better product.

    When customers choose to cancel our service, we send an email asking, simply, how we can do better.

    Over time, the responses to these messages have helped us enhance our Computer Backup product and add new features we knew customers would like. Because our approach is somewhat unusual, we wanted to illuminate it for you, both to be transparent and for anyone who might find our model useful.

    What Is Churn Analysis, and Why Is It Important?

    When a customer leaves a service or cancels an account, it’s called “churn.” Churn can be calculated as the percentage of customers that stopped using your company’s product or service during a certain time frame. The churn rate calculation for subscription or service-based products is an excellent metric to gauge their performance.
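
    As a quick sketch of that arithmetic, with made-up numbers for illustration:

        package main

        import "fmt"

        // churnRate returns the percentage of customers lost during a period,
        // given the customer count at the start of the period.
        func churnRate(atStart, lost int) float64 {
            if atStart == 0 {
                return 0
            }
            return float64(lost) / float64(atStart) * 100
        }

        func main() {
            // Example: start the month with 2,000 subscribers and lose 50 of them.
            fmt.Printf("monthly churn: %.1f%%\n", churnRate(2000, 50)) // 2.5%
        }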

    As much as you wish it wouldn’t happen when running a business, customer churn is a real thing and important to keep an eye on. You may already know about some issues your service has that need to be addressed, but by tracking churn over time you can also identify new issues or discover that issues outside of your scope are more important than you thought. When these issues turn out to be easily fixable, they provide a direct path to decreasing churn and often also attract new business. This is churn analysis: identifying the reasons people are leaving and prioritizing their resolution.

    The Nuts and Bolts of Churn Analysis at Backblaze

    Every month, about 10% of the customers that churn offer substantive explanations for their departure. On the 10th day of each month, one hearty staffer sifts through all of the messages we receive and adds them to a large spreadsheet. Unsurprisingly, every month the reasons people cite for leaving are relatively similar, so she's able to group the messages into 10–15 different categories. These range from feature requests we're tracking, like issues with our safety freeze feature, to account-level trends, like the desire for two-factor verification, and various other reasons.

    When different reasons begin to gain or lose ground, it’s a sign that we need to do something. Depending on the reason, it might mean that we need to write a more informative FAQ, or that we need to work with Marketing to highlight a feature better, or that we need to notify engineers that there is something that needs to be fixed or built.

    So Why Do People Churn From Backblaze?

    To illustrate how we go from churn analysis to product development, we gathered the five top reasons customers churned from Backblaze, and what we’ve decided to do about it (or not).

    Reason #1: “I No Longer Need My Data Backed Up”

    Customers use Backblaze Computer Backup for various reasons. Some of them have long term needs, like wanting to protect the files on their home computer. Others may be thought of as temporary, like backing up freelance businesses or college projects. The former tend to stick around, while there’s not much we can do to convince the latter that they might want to rethink their approach.

    As a result, “I don’t need it anymore” is one reason that’s always on our list. But that’s not to say we’re not doing anything about it. If you read this blog, you know that we’ll take any opportunity to remind people that there are more reasons for long term backups than most folks assume.

    Financial documents, legal correspondence, essential application settings, system information, and all of the important data you've forgotten you have on your machine until it crashes are great reasons to second-guess a spotty backup strategy. If you have a computer, you should have a backup in place to protect yourself from accidental or incidental data loss. In fact, we recommend a 3-2-1 backup strategy to ensure that you're always covered.

    Resolution #1: No specific response in product development, but a rigorous marketing campaign to argue against the premise of their departure.

    Reason #2: “30 Day Deletion”

    All Backblaze Computer Backup accounts have 30 Day Version History included with their backup license. That means you can go back in time for 30 days and retrieve older versions of your files or even files that you’ve deleted. For years, we had customers respond that they would continue to use Backblaze if we retained their files a little bit longer than 30 days.

    We took that feedback and created the ability to keep updated, changed, and even deleted files in their backups for a longer period of time by extending Version History for the computers backing up in their accounts. We chose to build this feature because the engineering investment was easily offset by the number of customers we could retain and/or gain by offering some customized approaches to data retention.

    Since 2013, customers who told us that they were cancelling because Version History was limited to 30 days hovered around 5.91% of total responses. Since we made the change in 2019, and started educating people that the feature exists, we've seen a large number of people enabling Extended Version History. Customers citing Version History as their reason for leaving are now down to 3.37% for 2020, and that number is dropping quickly.

    You can now increase your peace of mind by enabling Yearly or Forever Version History on your account—all thanks to the customers who wrote in and told us why it was important to them.

    Resolution #2: Build a new feature set to answer a reasonable request with a reasonable offering.

    Reason #3: “Leaving For a Sync Service”

    There’s unfortunately still some confusion between backup (which Backblaze provides) and sync and share services, like Dropbox and iCloud.

    So what’s the difference? We wrote a blog post to explain it, but to summarize: Sync services will synchronize folders on your computer or mobile device to folders on other machines, allowing users to access the same file, folder, or directory across different devices. This is great for collaboration and reducing the amount of data you’re holding on any number of devices. But it’s completely different from a backup. In a sync service, only the files, folders, or directories you add to the service are synced, leaving the rest of the data on your computer completely unprotected.

    Backblaze’s cloud backup automatically backs up all user data with little or no setup, and no need for the dragging and dropping of files. If your friends tell you they are using a sync service to back up their personal data, let them know they may need a backup service as well—before they learn that lesson the hard way.

    Resolution #3: Similar to resolution #1, the response to confusion about what different services do is: Education. Tens of thousands of folks have already read our post about the difference between sync and backup, so hopefully we see this reason decrease over time.

    Reason #4: Too Expensive

    We’ve all been there. We look in our bank account and realize we accidentally signed up for a few too many monthly services and we need to cut back to pay the essential bills. At Backblaze, we realize that times can get tough and occasionally you will need to cut back on expenses.

    Keeping this in mind, we strive to be the most affordable unlimited online backup service for our customers. Over the course of 10+ years since Backblaze started backing up customer computers, we have only raised our prices once, by $1 (and wrote about how hard it was to do even that).

    When deciding which monthly service to keep, we hope you consider the value of keeping all your files safe and protected and the cost of losing precious memories or important documents.

    Resolution #4: Sometimes your product may be too expensive for people’s budget and they will leave. All you can do is work to be as affordable as possible and stress the value of your service.

    Reason #5: Switched to Backblaze B2 Cloud Storage

    “Hey Backblaze, we love your product but we are leaving to use B2 Cloud Storage!” Some Computer Backup customers occasionally write in with this response and we get a good chuckle from it… because B2 Cloud Storage is also a product of Backblaze. Backblaze B2 Cloud Storage was created to be a simple and flexible cloud storage platform and, with the help of integration partners, it can be a very nifty backup solution for more tech-savvy users!

    We actually love when this reason pops up! It lets us know that people are moving on to the product that’s right for them. Backblaze B2 was created as a result of customers writing in and saying “I love your backup service, but I need a place to just store the data on my server or NAS device. Can you give me direct access to your cloud storage? Is that possible?” So we created a product that could do just that.

    If you have been backing up your computer for a while, you may be curious about cloud storage or have heard about cloud storage and thought it might be too technical for you—don’t worry, we have all been there. We put together a quick starter guide that highlights how simple Backblaze B2 can be.

    Resolution #5: When the customer starts to outgrow your starter product, guide them to the product that fits them best.

    What Churn Responses Look Like Over the Years for Computer Backup

    About 10% of our customers that leave respond to our “how can we do better” email after cancelling their accounts. This number tends to be pretty constant, but when it rises above that range it usually indicates that something unique happened that month.

    An uptick in responses isn't always a bad thing. We saw a rise when we announced our first European data center, because customers were switching their accounts to the EU region. It was a good sign that people were excited about the availability of different regions for storing their data.

    Giving customers the option to share personal responses also alerts us when a new issue arises. This can help us identify and fix bugs that only surface in very specific situations our engineers may not have seen in initial testing.

    Responses can also clue us in on world events. We started to see a spike in customers reporting COVID-19 related reasons for cancelling their accounts back in January 2020. This helped us assess in a timely manner how we could support our customers during a worldwide pandemic.

    Graph: How a few different reasons for leaving have changed over the past few years.

    All Feedback Is Good Feedback

    You may find it a bit crazy, but there really is a person at the other end of your responses, reading your feedback and sharing it with the rest of the gang at Backblaze. That feedback has provided us with useful updates, new features, and the peace of mind of knowing that our customers feel heard.

    So, we want to say thank you to all the previous customers that took the time to write out why they were breaking up with Backblaze. Without that feedback, we wouldn’t be the company we are today.

    To this day we are still updating our products to meet our customers’ needs and we love to hear what our customers hope to see as our next feature. Do you have a feature request? Share it in the comments below!

    The post Churn Analysis: Go From Churning to Learning appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Backing Up Our Thanks

    Post Syndicated from Ramya Ramamoorthy original https://www.backblaze.com/blog/backing-up-our-thanks/

    Needless to say, 2020 has been a challenging, unpredictable year for everyone. All of us faced trials that we had never expected a year ago. In spite of the adversity, or possibly simply to ward it off, our team decided to dig for a couple of silver linings from this past year. We understand that times are still tough, but we wanted to take a moment this Thanksgiving season to show thanks for some of the unexpected positives that have come out of quarantine.

    So we reached out to the team to see what goodness they’ve taken from what has felt like a terrible, horrible, no good, very bad year (hat tip to Judith Viorst), and we got some excellent responses that we’ve digested for you here.

    Healthier Lifestyles

    Many of our team members have exchanged their morning commute for a workout. Jeremy, our Senior Director of Product Marketing, has run far more track and trail miles, more often and more quickly over time. He even set a 5K personal record time!

    Yev, our Director of Marketing, has also been on the workout grind. He said, “I’ve lost weight and gotten more healthy! During the first day of shelter-in-place, I started doing yoga and a small calisthenic workout, and have kept that streak going throughout the pandemic. I’ve also started to walk four to five miles per day to get me to the 10K step mark, and have started to eat more vegetables. Gross! But also, great!”

    Photo by Jenny Hill.

    It’s a Full House

    No matter the family size you have, before the pandemic, many of us were a little bit busy and hard to catch up with. But, shelter-in-place gave us the opportunity (and sometimes the necessity) to slow down and reconnect with family and friends over Zoom happy hours and holiday celebrations.

    Those family members who were long distance and couldn’t make it to all the holidays finally got to join in and see everyone on the screen. College friends who kept pushing out that weekend getaway together found a way to play virtual games and share silly memories with each other.

    One common response that people gave about this shelter-in-place time is that “It’s time we won’t ever get to spend together like this again.” Although our new work from home coworkers took up a lot of the Wi-Fi bandwidth, we wouldn’t trade that time we spent at home with family, friends, and kids for anything else.

    Photo by Pablo Merchán Montes.

    Furry Friends to the Rescue

    Let’s admit it, animals truly love their owners unconditionally (even cats, in their own special way) and that makes leaving them behind in a house all day tough. Some days it’s hard when your animal accidentally falls asleep on your toes and you just don’t have the heart to move them when it’s time to leave for work. The pandemic has made that conundrum a tad easier.

    During shelter-in-place, having an animal nearby made tough times a lot easier to deal with. With the potential for bad news arriving each day, a cuddly dog or an energetic hamster became an important part of the daily survival kit for many of us.

    And for the animals, this time at home was a blessing. They didn't know why they'd been given it, but it was pretty cool that it was happening. They didn't need to know about social distancing, or what the right face mask to wear is, or how to disinfect their groceries—animals just saw this time as extra time with their owners. They couldn't believe their people were around… and around… and still there… and "OMG they are still there."

    Seeing the world through their eyes of “more play time” really made shelter-in-place a little less tough.

    Photo by Adrianna Calvo.

    Cozy Pants Rule

    With work from home, employees have embraced clothing that makes them feel more comfortable. Judith, our Recruiting Coordinator, said, “During shelter-in-place I’ve completely given up wearing jeans, and other restricting (yet, fashionable) items. I don’t think I’ll ever go back. I am so happy with my decision. Goodbye, jeans! See you never.”

    Cheers to a lifetime of no more jeans!

    Photo by Tatiana Syrikova.

    No Traffic!

    Although we miss being able to binge our favorite podcasts in the car, we do not miss sitting in car-to-car traffic. One of the top responses we found from our coworkers about silver linings during shelter-in-place is “Getting back two hours of my day that I used to spend commuting!”

    If you sit in two hours of traffic a day, an average of 10 hours a week, that amounts to nearly 20 days in a year that we get back. And to top it off, driving anywhere at 5 p.m. is less stressful when you don't have to worry about rush hour traffic.

    Photo by Nick Fewings.

    Some “Me” Time

    Even though the pandemic has led to fewer social interactions, some of our employees have been enjoying that alone time. They have a lot more freedom to do what they want; for example, some of our employees love that they can now listen to music while working without wearing headphones.

    Another silver lining for some of our coworkers is that they no longer have to attend social events, which has been a real stress inducer for them in the past. We still have virtual social events for our extroverted employees, but this time in quarantine has given our introverted employees a bit of a breather.

    Photo by Andrea Piacquadio.

    Share Your Silver Lining!

    We've watched the world find new ways to bring each other together in a year where it felt like we could never be close again. Out of adversity we found new ways to connect, be creative, care for others, and also make a lot of new food. (Like sourdough: Who doesn't love a good homemade sourdough loaf?)

    We hope that you can find a silver lining in your 2020, too, whether it's big, small, or something in between. Also, thank you to Nicole Perry for help with writing this post!

    We’d love for you to join us and find your own silver lining at Backblaze. We’re hiring, so apply today! We wish you and your loved ones a happy Thanksgiving!

    The post Backing Up Our Thanks appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Code and Culture: What Happens When They Clash

    Post Syndicated from Lora Maslenitsyna original https://www.backblaze.com/blog/code-and-culture-what-happens-when-they-clash/

    Every industry uses its own terminology. Originally, most jargon emerges out of the culture the industry was founded in, but then evolves over time as culture and technology change and grow. This is certainly true in the software industry. From its inception, tech has adopted terms—like hash, cloud, bug, ether, etc.—regardless of their original meanings and used them to describe processes, hardware issues, and even relationships between data architectures. Oftentimes, the cultural associations these terms carry with them are quickly forgotten, but sometimes they remain problematically attached.

    In the software industry, the terms "master" and "slave" have been commonly used as a pair to identify a primary database (the "master") where changes are written, and a replica (the "slave") that serves as a duplicate to which the changes are propagated. The industry also commonly uses other terms, such as "blacklist" and "whitelist," whose definitions reflect or at least suggest identity-based categorizations, like the social concept of race.

    Recently, the Backblaze Engineering team discussed some examples of language in the Backblaze code that carried negative cultural biases that the team, and the broader company, definitely didn’t endorse. Their conversation centered around the idea of changing the terms used to describe branches in our repositories, and we thought it would be interesting for the developers in our audience to hear about that discussion, and the work that came out of it.

    Getting Started: An Open Conversation About Software Industry Standard Terms

    The Backblaze Engineering team strives to cultivate a collaborative environment, an effort which is reflected in the structure of their weekly team meetings. After announcements, any member of the team is welcome to bring up any topics they want to discuss. As a result, these meetings work as a kind of forum where team members encourage each other to share their thoughts, especially about anything they might want to change related to internal processes or more generally about current events that may be affecting their thinking about their work.

    Earlier this year, the team discussed the events that led to protests in many U.S. cities as well as to new prominence for the Black Lives Matter movement. The conversation brought up a topic that had been discussed briefly before these events, but now had renewed relevance: mindfulness around software industry standard terms that could reflect biases against certain people's identities.

    These conversations among the team did not start with the intention of creating specific procedures; they focused on raising awareness of words used within the greater software industry and what those words might mean to different members of the community. Eventually, however, the team's thinking progressed to specific words and concepts that the Backblaze Engineering team resolved to adopt moving forward.


    Why Change the Branch Names?

    The words “master” and “slave” have long held harmful connotations, and have been used to distance people from each other and to exclude groups of people from access to different areas of society and community. Their accepted use today as synonyms for database dependencies could be seen as an example of systemic racism: racist concepts, words, or practices embedded as “normal” uses within a society or an organization.

    The engineers discussed whether the use of “master” and “slave” terminologies reflected an unconscious practice on the team’s part that could be seen as supporting systemic racism. In this case, the question alone forced them to acknowledge that their usage of these terms could be perceived as an endorsement of their historic meanings. Whether intentionally or not, this is something the engineers did not want to do.

    The team decided that, beyond being the right thing to do, revising the use of these terms would allow them to reinforce Backblaze's reputation as an inclusive place to work. Just as they didn't want to reiterate any historically harmful ideas, they also didn't want to keep using terms that someone on the team might feel uncomfortable using, or accidentally make potential new hires feel unwelcome on the team. Everything seemed to point them back to a core part of Backblaze's values: the idea that we "refuse to take history or habit to mean something is 'right.'" Oftentimes this means challenging stale approaches to engineering issues, but here it meant refusing to accept potentially harmful terminology just because it's "what everyone does."

    Overall, it was one of those choices that made more sense the longer they looked at it. Not only was the use of "master" and "slave" problematic, the terms were also harder and less logical to use. The very effort to replace the words revealed that the dependency they described in the context of data architectures could be more accurately characterized with shorter, more neutral terms.

    The Engineering team discussed a proposal to update the terms at a team meeting. In unanimous agreement, the term "main" was selected to replace "master" because it is a more descriptive title, it requires fewer keystrokes to type, and, since it starts with the same letter as "master," it is easier to remember after the change. "Whitelist" and "blacklist" are also commonly used in tech, but the team opted for "allowlist" and "denylist" because they're more accurate and don't associate color with value.

    Rolling Out the Changes and Challenges in the Process

    The practical procedure of changing the names of branches was fairly straightforward: Engineers wrote scripts that automated the process of replacing the terms. The main challenge the Engineering team faced was coordinating the work alongside team members' other responsibilities. Short of stopping all other projects to focus on renaming the branches, the engineers had to work within the constraints of Gitea and of the technical renaming process itself, while avoiding interruptions or inconveniences for the developers.

    First, the engineers prepared each repository for renaming by verifying that it didn't contain any files referencing "master," or by updating the files that did. For example, one script was to be used in a repository to update multiple branches at the same time. These changes were merged to a special branch called "master-to-main" instead of the "master" branch itself. That way, when that repository's "master" branch was renamed, the "master-to-main" branch was merged into "main" as a final step. Since Backblaze has a lot of repositories, and some took longer than others to complete the change, people divided the jobs to help spread out the work.

    While the actual procedure did not come with many challenges, writing the scripts required thoughtfulness about each repository. For example, in the process of merging changes to the updated “main” branch in Git, it was important to be sure that any open pull requests, where the engineers review and approve changes to the code, were saved. Otherwise, developers would have to recreate them, and could lose the history of their work, changes, and other important comments from projects unrelated to the renaming effort. While writing the script to automate the name change, engineers were careful to preserve any existing or new pull requests that might have been created at the same time.

    Once they finished prepping the repositories, the team agreed on a period of downtime—evenings after work—to go through each repository and rename its “master” branch using the script they had previously written. Afterwards, each person had to run another short script to pick up the change and remove dangling references to the “master” branch.
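    For anyone planning the same change, here’s a minimal sketch of the two halves of that procedure, assuming a standard Git setup with a remote named “origin.” (The actual Backblaze scripts aren’t published, so the commands below are illustrative.)

    # Run once per repository, after switching the default branch in your Git host's settings
    git branch -m master main
    git push -u origin main
    git push origin --delete master

    # Each developer then runs a short cleanup in their local clone
    git fetch origin
    git branch -m master main
    git branch -u origin/main main
    git remote set-head origin -a   # repoint origin/HEAD at the new default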

    Managers also encouraged members of the Engineering team to set aside some time throughout the week to prep the repositories and finish the naming changes. Team members also divided and shared the work, and helped each other by pointing out any areas of additional consideration.

    Moving Forward: Open Communication and Collaboration

    In September, the Engineering team completed renaming the source control branch from “master” to “main.” It was truly a team effort that required unanimous support and time outside of regular work responsibilities to complete the change. Members of the Engineering team reflected that the project highlighted the value of having a diverse team where each person brings a different perspective to solving problems and new ideas.

    Earlier this year, some of the people on the Engineering team also became members of the employee-led Diversity, Equity, and Inclusion Committee. Along with Engineering, other teams are having open discussions about diversity and how to keep cultivating inclusionary practices throughout the organization. The full team at Backblaze understands that these changes might be small in the grand scheme of things, but we’re hopeful our intentional approach to those issues we can address will encourage other businesses and individuals to look into what’s possible for them.

    The post Code and Culture: What Happens When They Clash appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Development Simplified: CORS Support for Backblaze S3 Compatible APIs

    Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/development-simplified-cors-support-for-backblaze-s3-compatible-apis/

    Since its inception in 2009, Cross-Origin Resource Sharing (CORS) has offered developers a convenient way of bypassing an inherently secure default setting—namely the same-origin policy (SOP). Allowing selective cross-origin requests via CORS has saved developers countless hours and money by reducing maintenance costs and code complexity. And now with CORS support for Backblaze’s recently launched S3 Compatible APIs, developers can continue to scale their experience without needing a complete code overhaul.

    If you haven’t been able to adopt Backblaze B2 Cloud Storage in your development environment because of issues related to CORS, we hope this latest release gives you an excuse to try it out. Whether you are using our B2 Native APIs or S3 Compatible APIs, CORS support allows you to build rich client-side web applications with Backblaze B2. With the simplicity and affordability this service offers, you can put your time and money back to work on what’s really important: serving end users.

    Top Three Reasons to Enable CORS

    B2 Cloud Storage is popular among agile teams and developers who want to take advantage of easy to use and affordable cloud storage while continuing to seamlessly support their applications and workflows with minimal to no code changes. With Backblaze S3 Compatible APIs, pointing to Backblaze B2 for storage is dead simple. But if CORS is key to your workflow, there are three additional compelling reasons for you to test it out today:

    • Compatible storage with no re-coding. By enabling CORS rules for your custom web application or SaaS service that uses our S3 Compatible APIs, your development team can serve and upload data via B2 Cloud Storage without any additional coding or reconfiguring required. This will save you valuable development time as you continue to deliver a robust experience for your end users.
    • Seamless integration with your plugins. Even if you don’t choose B2 Cloud Storage as the primary backend for your business but you do use it for discrete plugins or content serving sites, enabling CORS rules for those applications will come in handy. Developers who configure PHP, NodeJS, and WordPress plugins via the S3 Compatible APIs to upload or download files from web applications can do so easily by enabling CORS rules in their Backblaze B2 Buckets. With CORS support enabled, these plugins work seamlessly.
    • Serving your web assets with ease. Consider an even simpler scenario in which you want to serve a custom web font from your B2 Cloud Storage Bucket. Most modern browsers will require a preflight check for loading the font. By configuring the CORS rules in that bucket to allow the font to be served in the origin(s) of your choice, you will be able to use your custom font seamlessly across your domains from a single source.

    Whether you are relying on B2 Cloud Storage as your primary cloud infrastructure for your web application or simply using it to serve cross-origin assets such as fonts or images, enabling CORS rules in your buckets will allow for proper and secure resource sharing.

    Enabling CORS Made Simple and Fast

    If your web page or application is hosted in a different origin from images, fonts, videos, or stylesheets stored in B2 Cloud Storage, you need to add CORS rules to your bucket to achieve proper functionality. Thankfully, enabling CORS rules is easy; you’ll find the option in your B2 Cloud Storage bucket settings:

    You will have the option of sharing everything in your bucket with every origin, select origins, or defining custom rules with the Backblaze B2 CLI.
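    If you go the custom rule route, a rule is just a small JSON document attached to your bucket. Here’s a sketch using the b2 command-line tool; the bucket name, origin, and rule name are hypothetical, and the B2 documentation lists the full set of allowed operations:

    b2 update-bucket --corsRules '[
      {
        "corsRuleName": "serveWebFonts",
        "allowedOrigins": ["https://www.example.com"],
        "allowedOperations": ["s3_get", "s3_head"],
        "allowedHeaders": ["*"],
        "maxAgeSeconds": 3600
      }
    ]' my-font-bucket allPublic

    This mirrors the web font scenario above: pages served from www.example.com can fetch files from the bucket, and browsers may cache the preflight response for an hour.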

    Learning More and Getting Started

    If you’re dying to learn more about the fundamentals of CORS as well as additional specifics about how it works with B2 Cloud Storage, you can dig into this informative Knowledge Base article. If you’re just pumped that CORS is now easily available in our S3 Compatible APIs suite, well then, you’re probably already on your way to a smoother, more reasonably priced development experience. If you’ve got a question or a response, we always love to hear from you in the comments or you can contact us for assistance.

    The post Development Simplified: CORS Support for Backblaze S3 Compatible APIs appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Solution Roadmap: Cross-site Collaboration With the Synology NAS Toolset

    Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/solution-roadmap-cross-site-collaboration-with-the-synology-nas-toolset/

    Most teams use their Synology NAS device primarily as a common space to store active data. It’s helpful for collaboration and cuts down on the amount of storage you need to buy for each employee in a media workflow. But if your teams are geographically dispersed, a NAS device at each location will also allow you to sync specific folders across offices and protect the data in them with more reliable and non-duplicative workflows. By setting up an integrated cloud storage tier and using Synology Drive ShareSync, Cloud Sync, and Hyper Backup—all free tools that come with the purchase of your NAS device—you can improve your collaboration capabilities further, and simplify and strengthen data protection for your NAS.

    • Drive ShareSync: Synchronizes folders and files across linked NAS devices.
    • Cloud Sync: Copies files to cloud storage automatically as they’re created or changed.
    • Hyper Backup: Backs up file and systems data to local or cloud storage.

    Taken together, these tools, paired with reasonable and reliable cloud storage, will grow your remote collaboration capacity while better protecting your data. Properly architected, they can make sharing and protecting large files easy, efficient, and secure for internal production, while also making it all look effortless for external clients’ approval and final delivery.

    We’ll break out how it all works in the sections below. If you have questions, please reach out in the comments, or contact us.

    If you’re more of a visual learner, our Cloud University series also offers an on-demand webinar featuring a demo laboratory showing how to set up cross-office collaboration on a Synology NAS. Otherwise, read on.
    In a multi-site file exchange configuration, Synology NAS devices are synced between offices, while cloud storage provides an archive and backup storage target for Synology Cloud Sync and Hyper Backup.

    Synchronizing Two or More NAS Devices With Synology Drive ShareSync

    Moving media files to a NAS is a great first step towards easier sharing and ensuring that everyone on the team is working on the correct version of any given project. But taking an additional step to also sync folders across multiple NAS devices guarantees that each file is only transferred between sites once, instead of every time a team member accesses the file. This is also a way to reduce network traffic and share large media files that would otherwise require more time and resources.

    With Synology Drive ShareSync, you can also choose which specific folders to sync, like folders with corporate brand images or folders for projects which team members across different offices are working on. You also have the option between a one-way and two-way sync, and Synology Drive ShareSync automatically filters out temporary files so that they’re not replicated from primary to secondary.

    With Synology Drive ShareSync, specific folders on NAS devices can be synced in a two-way or one-way fashion.

    Backing Up and Archiving Media Files With Synology Cloud Sync and Cloud Storage

    With Cloud Sync, another tool included with your Synology NAS, you can make a copy of your media files to a cloud storage bucket as soon as they are ingested into the NAS. For creative agencies and corporate video groups that work with high volumes of video and images, syncing data to the cloud on ingest protects the data while it’s active and sets up an easy way to archive it once the project is complete. Here’s how it works:

        1. After a multi-day video or photo shoot, upload the source media files to your Synology NAS. When new media files are found on the NAS, Synology Cloud Sync makes a copy of them to cloud storage.
        2. While the team works on the project, the copies of the media files in the cloud storage bucket serve as a backup in case a file is accidentally deleted or corrupted on the NAS.
        3. Once the team completes the project, you can switch off Synology Cloud Sync for just that folder, then delete the raw footage files from the NAS. This allows you to free up storage space for a new project.
        4. The video and photo files remain in the bucket for the long term, serving as archive copies for future use or when a client returns for another project.
    You can configure Synology Cloud Sync to watch folders for new files in specific time periods and control the upload speed to prevent saturating your internet connection.

    Using Cloud Sync for Content Review With External Clients

    Cloud Sync can also be used to simplify and speed up the editorial review process with clients. Emailing media files like videos and high-res images to external approvers is generally not feasible due to size, and setting up and maintaining FTP servers can be time consuming for you and complicated or confusing for your clients. It’s not an elegant way to put your best creative work in front of them. To simplify the process, create content review folders for each client, generate a link to a ZIP file in a bucket, and share the link with them via email.

    Protecting Your NAS Data With Synology Hyper Backup and Backblaze B2

    Last, but not least, Synology Hyper Backup can also be configured to do weekly full backups and daily incremental backups of all your NAS data to your cloud storage bucket. Disks can crash and valuable files can be deleted or corrupted, so ensuring you have complete data protection is an essential step in your storage infrastructure.

    Hyper Backup will allow you to back up files, folders, and other settings to another destination (like cloud storage) according to a schedule. It also offers flexible retention settings, which allow you to restore an entire shared folder from different points in time. You can learn about how to set it up using this Knowledge Base article.

    With Hyper Backup, you gain more control over setting up and managing weekly and daily backups to cloud storage. You can:

    • Encrypt files before transferring them, so that your data will be stored as encrypted files.
    • Choose to only encrypt files during the transfer process.
    • Enable an integrity check to confirm that files were backed up correctly and can be successfully restored.
    • Set integrity checks to run at specific frequencies and times.

    Human error is often the inspiration to reach for a backup, but ransomware attacks are on the rise, and a strategy of recycle and rotation practices alongside file encryption helps backups remain unreachable by a ransomware infection. Hyper Backup allows for targeted backup approaches, like saving hourly versions from the previous 24 hours of work, daily versions from the previous month of work, and weekly versions older than one month. You choose what makes the most sense for your work. You can also set a maximum number of versions if there’s a certain cap you don’t want to exceed. Not only do these smart recycle and rotation practices manage your backups to help protect your organization against ransomware, but they can also reduce storage costs.

    Hyper Backup allows you to precisely configure which folders to back up. In this example, raw video footage is excluded because a copy was made by Cloud Sync on upload with the archive-on-ingest strategy.

    Set Up Multi-site File Exchange With Synology NAS and Cloud Storage

    To learn more about setting up your Synology NAS with cloud storage to implement a collaboration and data protection solution like this, check out the guide one of our solutions engineers recently crafted.

    At the end of the day, collaboration is the soul of much creative work, and orienting your system to make the nuts and bolts of collaboration invisible to the creatives themselves, while ensuring all their content is fully protected, will set your team up for the greatest success. Synology NAS, its impressive built-in software suite, and cloud storage can help you get there.

    The post Solution Roadmap: Cross-site Collaboration With the Synology NAS Toolset appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Vanguard Perspectives: Microsoft 365 to Veeam Backup to Backblaze B2 Cloud Storage

    Post Syndicated from Natasha Rabinov original https://www.backblaze.com/blog/vanguard-perspectives-microsoft-365-to-veeam-backup-to-backblaze-b2-cloud-storage/

    Ben Young works for vBridge, a cloud service provider in New Zealand. He specializes in the automation and integration of a broad range of cloud & virtualization technologies. Ben is also a member of the Veeam® Vanguard program, Veeam’s top-level influencer community. (He is not an employee of Veeam). Because Backblaze’s new S3 Compatible APIs enable Backblaze B2 Cloud Storage as an endpoint in the Veeam ecosystem, we reached out to Ben, in his role as a Veeam Vanguard, to break down some common use cases for us. If you’re working with Veeam and Microsoft 365, this post from Ben could help save you some time and headaches.

    —Natasha Rabinov, Backblaze

    Backing Up Microsoft Office 365 via Veeam in Backblaze B2 Cloud Storage

    Veeam Backup for Microsoft Office 365 v4 included a number of enhancements, one of which was support for object-based repositories. This is a common trend for new Veeam product releases. The flagship Veeam Backup & Replication™ product now supports a growing number of object-enabled capabilities.

    So, why object storage over block-based repositories? There are a number of reasons, but scalability is, I believe, the biggest. These platforms are designed to handle petabytes of data with very good durability, and object storage is better suited to that task.

    With the data scalability sorted, you only need to worry about monitoring and scaling out the compute workload of the proxy servers (worker nodes). Did I mention you no longer need to juggle data moves between repositories?! These enhancements create a number of opportunities to simplify your workflows.

    So naturally, with the recent announcement from Backblaze saying they now have S3 Compatible API support, I wanted to try it out with Veeam Backup for Microsoft Office 365.
    Let’s get started. You will need:

    • A Backblaze B2 account: You can create one here for free. The first 10GB are complimentary so you can give this a go without even entering a credit card.
    • A Veeam Backup for Microsoft Office 365 environment setup: You can also get this for free (up to 10 users) with their Community Edition.
    • An organization connected to the Veeam Backup for Microsoft Office 365 environment: View the options and how-to guide here.

    Configuring Your B2 Cloud Storage Bucket

    In the Backblaze B2 console, you need to create a bucket. If you already have one, you may notice that there is a blank entry next to “endpoint.” This is because buckets created before May 4, 2020 cannot be used with the Backblaze S3 Compatible APIs.

    So, let’s create a new bucket. I used “VeeamBackupO365.”

    This bucket will now appear with an S3 endpoint, which we will need for use in Veeam Backup for Microsoft Office 365.

    Before you can use the new bucket, you’ll need to create some application keys/credentials. Head into the App Keys settings in Backblaze and select “create new.” Fill out your desired settings and, as good practice, make sure you only give access to this bucket, or the buckets you want to be accessible.

    Your application key(s) will now appear. Make sure to save these keys somewhere secure, such as a password manager, as they will only appear once. You should also keep them accessible for now, as you are going to need them shortly.

    The Backblaze setup is now done.

    Configuring Your Veeam Backup

    Now you’ll need to head over to your Veeam Backup for Microsoft Office 365 Console.

    Note: You could also achieve all of this via PowerShell or the RESTful API included with this product if you wanted to automate.

    It is time to create a new backup repository in Veeam. Click into your Backup Infrastructure panel and add a new backup repository and give it a name…

    …Then select the “S3 Compatible” option:

    Enter the S3 endpoint you generated earlier in the Backblaze console into the Service endpoint on the Veeam wizard. This will be something along the lines of: s3.*.backblazeb2.com.
    Now select “Add Credential,” and enter the App Key ID and Secret that you generated as part of the Backblaze setup.

    With your new credentials selected, hit “Next.” Your bucket(s) will now show up. Select your desired backup bucket—in this case I’m selecting the one I created earlier: “VeeamBackupO365.” Now you need to browse for a folder which Veeam will use as its root folder to base the backups from. If this is a new bucket, you will need to create one via the Veeam console like I did below, called “Data.”

    If you are curious, you can take a quick look back in your Backblaze account, after hitting “Next,” to confirm that Veeam has created the folder you entered, plus some additional parent folders, as you can see in the example below:

    Now you can select your desired retention. Remember, all jobs targeting this repository will use this retention setting, so if you need a different retention for, say, Exchange and OneDrive, you will need two different repositories and you will need to target each job appropriately.

    Once you’ve selected your retention, the repository is ready for use and can be used for backup jobs.

    Now you can create a new backup job. For this demo, I am going to only back up my user account. The target will be our new repository backed by Backblaze S3 Compatible storage. The wizard walks users through this process.

    Giving the backup job a name.


    Select your entire organization or desired users/groups and what to process (Exchange, OneDrive, and/or SharePoint).


    Select the object-backed backblazeb2-s3 backup repository you created.

    That is it! Right click and run the job—you can see it starting to process your organization.
    As this is the first job you’ve run, it may take some time and you might notice it slowing down. This slowdown is a result of the Microsoft data being pulled out of O365. But Veeam is smart enough to have added in some clever user-hopping, so as it detects throttling it will jump across and start a new user, and then loop back to the others to ensure your jobs finish as quickly as possible.

    While this is running, if you open up Backblaze again you will see the usage starting to show.

    Done and Done

    And there it is—a fully functional backup of your Microsoft Office 365 tenancy using Veeam Backup for Microsoft Office 365 and Backblaze B2 Cloud Storage.

    We really appreciate Ben’s guide and hope it helps you try out Backblaze as a repository for your Veeam data. If you do—or if you’ve already set us as a storage target—we’d love to hear how it goes in the comments.
    You can reach out to Ben at @benyoungnz on Twitter, or his blog, https://benyoung.blog.

    The post Vanguard Perspectives: Microsoft 365 to Veeam Backup to Backblaze B2 Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Gladstone Institutes Builds a Backblaze Fireball, XXXL Edition

    Post Syndicated from original https://www.backblaze.com/blog/gladstone-institutes-builds-a-backblaze-fireball-xxxl-edition/

    Here at Backblaze, we’ve been known to do things a bit differently. From Storage Pods and Backblaze Vaults to drive farming and hard drive stats, we often take a different path. So, it’s no surprise we love stories about people who think outside of the box when presented with a challenge. This is especially true when that story involves building a mongo storage server, a venerable Toyota 4Runner, and a couple of IT engineers hell-bent on getting 1.2 petabytes of their organization’s data off-site. Let’s meet Alex Acosta and Andrew Davis of Gladstone Institutes.

    Data on the Run

    The security guard at the front desk nodded knowingly as Alex and Andrew rolled the three large Turtle cases through the lobby and out the front door of Gladstone Institutes. Well known and widely respected, the two IT engineers comprised two-thirds of the IT Operations staff at the time and had 25 years of Gladstone experience between them. So as odd as it might seem to have IT personnel leaving a secure facility after-hours with three large cases, everything was on the up-and-up.

    It was dusk in mid-February. Alex and Andrew braced for the cold as they stepped out into the nearly empty parking lot toting the precious cargo within those three cases. Andrew’s 4Runner was close, having arrived early that day—the big day, moving day. They gingerly lugged the heavy cases into the 4Runner. Most of the weight was the cases themselves; the rest was a 4U storage server in one case and 36 hard drives in the other two. An insignificant part of the weight, if any at all, was the reason they were doing all of this—200 terabytes of Gladstone Institutes research data.

    They secured the cases, slammed the tailgate shut, climbed into the 4Runner, and put the wheels in motion for the next part of their plan. They eased onto Highway 101 and headed south. Traffic was terrible, even the carpool lane; dinner would be late, like so many dinners before.

    Photo Credit: Gladstone Institutes.

    Back to the Beginning

    There had been many other late nights since they started on this project six months before. The Fireball XXXL project, as Alex and Andrew eventually named it, was driven by their mission to safeguard Gladstone’s biomedical research data from imminent disaster. On an unknown day in mid-summer, Alex and Andrew were in the server room at Gladstone surrounded by over 900 tapes that were posing as a backup system.

    Andrew mused, “It could be ransomware, the building catches on fire, somebody accidentally deletes the datasets because of a command-line entry, any number of things could happen that would destroy all this.” Alex, as he waved his hand across the ever expanding tape library, added, “We can’t rely on this anymore. Tapes are cumbersome, messy and they go bad even when you do everything right. We waste so much time just troubleshooting things that in 2020 we shouldn’t be troubleshooting anymore.” They resolved to find a better way to get their data off-site.

    Reality Check

    Alex and Andrew listed the goals for their project: get the 1.2 petabytes of data currently stored on-site and in their tape library safely off-site, be able to add 10–20 terabytes of new data each day, and be able to delete files as they needed along the way. The fact that practically every byte of data in question represented biomedical disease research—including data with direct applicability to fighting a global pandemic—meant that they needed to accomplish all of the above with minimal downtime and maximum reliability. Oh, and they had to do all of this without increasing their budget. Optimists.

    With cloud storage as the most promising option, they first considered building their own private cloud in the distant data center in the desert. They quickly dismissed the idea as the upfront costs were staggering, never mind the ongoing personnel and maintenance costs of managing their distant systems.

    They decided the best option was using a cloud storage service and they compared the leading vendors. Alex was familiar with Backblaze, having followed the blog for years, especially the posts on drive stats and Storage Pods. Even better, the Backblaze B2 Cloud Storage service was straight-forward and affordable. Something he couldn’t say about the other leading cloud storage vendors.

    The next challenge was bandwidth. You might think having a 5 Gb/s connection would be enough, but they had a research-heavy, data-hungry organization using that connection. They sharpened their bandwidth pencils and, taking into account institutional usage, they calculated they could easily support the 10–20 terabytes per day uploads. Trouble was, getting the existing 1.2 petabytes of data uploaded would be another matter entirely. They contacted their bandwidth provider and were told they could double their current bandwidth to 10 Gb/s for a multi-year agreement at nearly twice the cost and, by the way, it would be several months to a year before they could start work. Ouch.

    They turned to Backblaze, who offered their Backblaze Fireball data transfer service which could upload about 70 terabytes per trip. “Even with the Fireball, it will take us 15, maybe 20, round trips,” lamented Andrew during another late night session of watching backup tapes. “I wish they had a bigger box,” said Alex, to which Andrew replied, “Maybe we could build one.”

    The plan was born: build a mongo storage server, load it with data, take it to Backblaze.

    Photo Credit: Gladstone Institutes.

    Andrew Davis in Gladstone’s server room.

    The Ask

    Before they showed up at a Backblaze data center with their creation, they figured they should ask Backblaze first. Alex noted, “With most companies if you say, ‘Hey, I want to build a massive file server, shuttle it into your data center, and plug it in. Don’t you trust me?’ They would say, ‘No,’ and hang up, but Backblaze didn’t, they listened.”

    After much consideration, Backblaze agreed to enable Gladstone personnel to enter a nearby data center that was a peering point for the Backblaze network. Thrilled to find kindred spirits, Alex and Andrew now had a partner in the Fireball XXXL project. While this collaboration was a unique opportunity for both parties, for Andrew and Alex it would also mean more late nights and microwaved burritos. That didn’t matter now, they felt like they had a great chance to make their project work.

    The Build

    Alex and Andrew had squirreled away some budget for a seemingly unrelated project: to build an in-house storage server to serve as a warm backup system for currently active lab projects. That way if anything went wrong in a lab, they could retrieve the last saved version of the data as needed. Using those funds, they realized they could build something to be used as their supersized Fireball XXXL, and then once the data transfer cycles were finished, they could repurpose the system to be the backup server they had budgeted.

    Inspired by Backblaze’s open-source Storage Pod, they worked with Backblaze on the specifications for their Fireball XXXL. They went the custom build route starting with a 4U chassis and big drives, and then they added some beefy components of their own.

    Fireball XXXL

    • Chassis: 4U Supermicro 36-bay, 3.5 in. disk chassis, built by iXsystems.
    • Processor: Dual CPU Intel Xeon Gold 5217.
    • RAM: 4 x 32GB (128GB).
    • Data Drives: 36 x 14TB Western Digital HE14.
    • ZIL: 120GB NVMe SSD.
    • L2ARC: 512GB SSD.

    They basically built a 36-bay, 200 terabyte RAID 1+0 system to do the data replication using rclone. Andrew noted, “Rclone is resource-heavy, both on RAM and CPU cycles. When we spec’d the system we needed to make sure we had enough muscle so rclone could push data at 10 Gb/s. It’s not just reading off the drives; it’s the processing required to do that.”
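    For a feel of what that kind of bulk push looks like, here’s a sketch of an rclone invocation tuned for a fat pipe (the paths and bucket name are hypothetical): --transfers raises the number of parallel uploads to help fill a 10 Gb/s link, and --fast-list reduces listing calls on very large directory trees.

    rclone copy /mnt/tank/datasets b2:gladstone-archive \
    --transfers 32 --fast-list --progress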

    Loading Up

    Gladstone runs TrueNAS on their on-premises production systems, so it made sense to use it on their newly built data transfer server. “We were able to do a ZFS send from our in-house servers to what looked like a gigantic external hard drive, for lack of a better description,” Andrew said. “It allowed us to replicate at the block level, compressed, so it was much higher performance in copying data over to that system.”
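    In rough terms, that block-level replication to the transfer server looks like the following sketch (the pool, dataset, and host names are hypothetical):

    # Snapshot the dataset, then stream it to the Fireball XXXL
    zfs snapshot tank/labdata@fireball
    zfs send tank/labdata@fireball | ssh fireball-xxxl zfs receive backup/labdata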

    Andrew and Alex had previously determined that they would start with the four datasets that were larger than 40 terabytes each. Each dataset represented years of research from their respective labs, placing them at the top of the off-site backup queue. Over the course of 10 days, they loaded the Fireball XXXL with the data. Once finished, they shut the system down and removed the drives. Opening the foam-lined Turtle cases they had previously purchased, they gingerly placed the chassis into one case and the 36 drives in the other two. They secured the covers and headed towards the Gladstone lobby.

    At the Data Center

    Alex and Andrew eventually arrived at the data center where they’d find the needed Backblaze network peering point. Upon entry, inspections ensued and even though Backblaze had vouched for the Gladstone chaps, the process to enter was arduous. As it should be. Once in their assigned room, they connected a few cables, typed in a few terminal commands and data started uploading to their Backblaze B2 account. The Fireball XXXL performed as expected, with a sustained transfer rate of between eight and 10 Gb/s. It took a little over three days to upload all the data.

    They would make another trip a few weeks later and have planned two more. With each trip, more Gladstone data is safely stored off-site.

    Gladstone Institutes, with over 40 years of history behind them and more than 450 staff, is a world leader in the biomedical research fields of cardiovascular and neurological diseases, genomic immunology, and virology, with some labs recently shifting their focus to SARS-CoV-2, the virus that causes COVID-19. The researchers at Gladstone rely on their IT team to protect and secure their life-saving research.

    Photo Credit: Gladstone Institutes.

    When data is literally life-saving, backing up is that much more important.

    Epilogue

    Before you load up your 200 terabyte media server into the back of your SUV or pickup and head for a Backblaze data center—stop. While we admire the resourcefulness of Andrew and Alex, on our side the process was tough. The security procedures, associated paperwork, and time needed to get our Gladstone heroes access to the data center and our network with their Fireball XXXL were “substantial.” Still, we are glad we did it. We learned a tremendous amount during the process, and maybe we’ll offer our own Fireball XXXL at some point. If we do, we know where to find a couple of guys who know how to design one kick-butt system. Thanks for the ride, gents.

    The post Gladstone Institutes Builds a Backblaze Fireball, XXXL Edition appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Supporting Efficient Cloud Workflows at THEMA

    Post Syndicated from Steve Ferris original https://www.backblaze.com/blog/supporting-efficient-cloud-workflows-at-thema/

    Editor’s Note: As demand for localized entertainment grows rapidly around the globe, the amount of media that production companies handle has skyrocketed at the same time as the production process has become endlessly diverse. In a recent blog post, iconik highlighted one business that uses Backblaze B2 Cloud Storage and the iconik asset management platform together to develop a cloud-based, resource-efficient workflow perfectly suited to their unique needs. Read on for some key learnings from their case study, which we’ve adapted for our blog.

    Celebrating Culture With Content

    THEMA is a Canal+ Group company that has more than 180 television channels in its portfolio. It helps with the development of these channels and has built strong partnerships with major pay-TV platforms worldwide.

    THEMA started with a focus on ethnic, localized entertainment, and has grown that niche into the foundation of a large, expansive service. Today, THEMA has a presence in nearly every region of the world, where viewers can enjoy programming that celebrates their heritage and offers a taste of home wherever they are.

    Cédric Pierre-Louis, Director of Programming for the African Fiction Channels at THEMA, and Gareth Howells, Director of Out Point Media—which was created to assist THEMA quality control and content operations, mainly for its African channels—faced a problem shared by many media organizations: As demand for their content rose, so did the amount of media they were handling, to the point that their systems were no longer able to scale with their growth.

    A Familiar Challenge

    Early on, most media asset management solutions that the African Fiction Channels at THEMA considered for dealing with their expanding content needs had a high barrier to entry, requiring large upfront investments. To stay cost-efficient, THEMA used more manual solutions, but this would eventually prove to be an unsustainable path.

    As THEMA moved into creating and managing channels, the increase of content and the added complexity of their workflows brought the need for media management front and center.

    Charting a Course for Better Workflows

    When Cédric took on leadership of his department at THEMA, he and Gareth both shared a strong desire to make their workflows more agile and efficient. They began by evaluating solutions using a few key points.

    Cloud-Based
    To start, THEMA needed a solution that could improve how they work across all their global teams. The operation needed to work from anywhere, supporting team members working in Paris and London, as well as content production teams in Nigeria, Ghana, and Ivory Coast.

    Minimal Cloud Resources
    There was also a unique challenge to overcome with connectivity and bandwidth restrictions facing the distributed teams. They needed a light solution requiring minimal cloud resources. Teams with limited internet access would also need immediate access to the content when they were online.

    Proxy Workflows
    They also couldn’t afford to continue working with large files. Previously, teams had to upload hi-res, full master versions of media, which then had to be downloaded by every editor who worked on the project. They needed proxy workflows to allow creation to happen faster with much smaller files.

    Adobe Integration
    The team needed to be able to find content fast and have the ability to simply drag it into their timelines from a panel within their Adobe programs. This ability to self-serve and find media without any bottlenecks would have a great impact on production speed.

    Affordable Startup Costs
    They also needed to stay within a budget. There could not be any costly installation of new infrastructure.

    Landing at iconik

    While Cédric was searching for the right solution, he took a trip to Stockholm, where he met with iconik’s CEO, Parham Azimi. After a short talk and demo, it was clear that iconik satisfied all of the evaluation points they were looking for in one solution. Soon after that meeting, Cédric and Gareth began to implement iconik with the help of IVORY, who represents iconik in France.

    A note on storage: As a storage option within iconik, Backblaze B2 offers teams storage that is both oriented to their use case and economically priced. THEMA needed simple, cloud-based storage with a price tag that was both right-sized and predictable, and in selecting Backblaze B2, they got it.

    Today, THEMA uses iconik as a full content management system that offers nearly end-to-end control for their media workflows.

    This is how they utilize iconik for their broadcast work:

        1. Film and audio is created at the studios in Nigeria and Ghana.
        2. The media is uploaded to Backblaze B2.
        3. Backblaze B2 assets are then represented in iconik as proxies.
        4. Quality control and compliance teams use the iconik Adobe panel with proxy versions for quality control, checking compliance, and editing.
        5. Master files are downloaded to create the master copy.
        6. The master copy is distributed for playout.

    While all this is happening, the creative teams at THEMA can also access content in iconik to edit promotional media.

    Visions to Expand iconik’s Use

    With the experience THEMA has had so far, the team is excited to implement iconik for even more of their workflows. In the future, they plan to integrate iconik with their broadcast management system to share metadata and files with their playout system. This would save a lot of time and work, as much of the data in iconik is also relevant for the media playout system.

    Further into the future, THEMA hopes to achieve a total end-to-end workflow with iconik. The vision is to use iconik as soon as a movie comes in, so their team can put it through all the steps in a workflow such as quality control, compliance, transcoding, and sending media to third parties for playout or VOD platforms.

    For this global team that needed their media managed in a way that would be light and resource efficient, iconik—with the storage provided by Backblaze B2—delivered in a big way.

    Looking for a similar solution? Get started with Backblaze B2 and learn more about our integration with iconik today.

    The post Supporting Efficient Cloud Workflows at THEMA appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Rclone Power Moves for Backblaze B2 Cloud Storage

    Post Syndicated from Skip Levens original https://www.backblaze.com/blog/rclone-power-moves-for-backblaze-b2-cloud-storage/

    Rclone is described as the “Swiss Army chainsaw” of storage movement tools. While it may seem, at first, to be a simple tool with two main commands to copy and sync data between two storage locations, deeper study reveals a hell of a lot more. True to the image of a “Swiss Army chainsaw,” rclone contains an extremely deep and powerful feature set that empowers smart storage admins and workflow scripters everywhere to meet almost any storage task with ease and efficiency.


    Rclone—rsync for cloud storage—is a powerful command line tool to copy and sync files to and from local disk, SFTP servers, and many cloud storage providers. Rclone’s Backblaze B2 Cloud Storage page has many examples of configuration and options with Backblaze B2.

    Continued Steps on the Path to rclone Mastery

    In our in-depth webinar with Nick Craig-Wood, developer and principal maintainer of rclone, we discussed a number of power moves you can use with rclone and Backblaze B2. This post takes it a number of steps further with five more advanced techniques to add to your rclone mastery toolkit.
    Have you tried these and have a different take? Just trying them out for the first time? We hope to hear more and learn more from you in the comments.

    Use --track-renames to Save Bandwidth and Increase Data Movement Speed

    If you’re constantly moving files from disk to the cloud, you know that your users frequently reorganize and rename folders and files on local storage. That means that when it’s time to back up those renamed folders and files again, your object storage will see them as new objects and expect you to upload them all over again.

    Rclone is smart enough to take advantage of Backblaze B2 Native APIs for remote copy functionality, which saves you from re-uploading files that are simply renamed and not otherwise changed.

    By specifying the --track-renames flag, rclone will keep track of file size and hashes during operations. When source and destination files match, but the names are different, rclone will simply copy them over on the server side with the new name, saving you having to upload the object again. Use the --progress or --verbose flags to see these remote copy messages in the log.

    rclone sync /Volumes/LocalAssets b2:cloud-backup-bucket \
    --track-renames --progress --verbose

    2020-10-22 17:03:26 INFO : customer artwork/145.jpg: Copied (server side copy)
    2020-10-22 17:03:26 INFO : customer artwork/159.jpg: Copied (server side copy)
    2020-10-22 17:03:26 INFO : customer artwork/163.jpg: Copied (server side copy)
    2020-10-22 17:03:26 INFO : customer artwork/172.jpg: Copied (server side copy)
    2020-10-22 17:03:26 INFO : customer artwork/151.jpg: Copied (server side copy)

    With the --track-renames flag, you’ll see messages like these when the renamed files are simply copied over directly to the server instead of having to re-upload them.


    Easily Generate Formatted Storage Migration Reports

    When migrating data to Backblaze B2, it’s good practice to inventory the data about to be moved, then get reporting that confirms every byte made it over properly, afterwards.
    For example, you could use the rclone lsf -R command to recursively list the contents of your source and destination storage buckets, compare the results, then save the reports in a simple comma-separated-values (CSV) list. This list is then easily parsable and processed by your reporting tool of choice.

    rclone lsf --csv --format ps amzns3:/customer-archive-source
    159.jpg,41034
    163.jpg,29291
    172.jpg,54658
    173.jpg,47175
    176.jpg,70937
    177.jpg,42570
    179.jpg,64588
    180.jpg,71729
    181.jpg,63601
    184.jpg,56060
    185.jpg,49899
    186.jpg,60051
    187.jpg,51743
    189.jpg,60050

    rclone lsf --csv --format ps b2:/customer-archive-destination
    159.jpg,41034
    163.jpg,29291
    172.jpg,54658
    173.jpg,47175
    176.jpg,70937
    177.jpg,42570
    179.jpg,64588
    180.jpg,71729
    181.jpg,63601
    184.jpg,56060
    185.jpg,49899
    186.jpg,60051
    187.jpg,51743
    189.jpg,60050

    Example CSV output of file names and file sizes (the p and s in --format ps) in source and target folders.
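    Once you have both listings, confirming the migration can be as simple as a diff. For example, in a shell with process substitution (like bash), no output means every file name and size matched:

    diff <(rclone lsf --csv --format ps amzns3:/customer-archive-source | sort) \
    <(rclone lsf --csv --format ps b2:/customer-archive-destination | sort)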

    You can even feed the results of regular storage operations into a system dashboard or reporting tool by specifying JSON output with the --use-json-log flag.

    In the following example, we want to build a report listing missing files in either the source or the destination location:
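    The check command compares the two locations and logs anything missing on either side. A sketch of the invocation, using the same bucket names as the log output below:

    rclone check amzns3:/customer-archive-source b2:/customer-archive-destination \
    --use-json-log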

    The resulting log messages make it clear that the comparison failed. The JSON format lets me easily select log warning levels, timestamps, and file names for further action.

    {"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"216.jpg","objectType":"*b2.Object","source":"operations/check.go:100","time":"2020-10-23T16:07:35.005055-05:00"}
    {"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"219.jpg","objectType":"*b2.Object","source":"operations/check.go:100","time":"2020-10-23T16:07:35.005151-05:00"}
    {"level":"error","msg":"File not in parent bucket path travel_posters_source","object":".DS_Store","objectType":"*b2.Object","source":"operations/check.go:78","time":"2020-10-23T16:07:35.005192-05:00"}
    {"level":"warning","msg":"12 files missing","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:225","time":"2020-10-23T16:07:35.005643-05:00"}
    {"level":"warning","msg":"1 files missing","object":"parent bucket path travel_posters_source","objectType":"*b2.Fs","source":"operations/check.go:228","time":"2020-10-23T16:07:35.005714-05:00"}
    {"level":"warning","msg":"13 differences found","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:231","time":"2020-10-23T16:07:35.005746-05:00"}
    {"level":"warning","msg":"13 errors while checking","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:233","time":"2020-10-23T16:07:35.005779-05:00"}
    {"level":"warning","msg":"28 matching files","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:239","time":"2020-10-23T16:07:35.005805-05:00"}
    2020/10/23 16:07:35 Failed to check with 14 errors: last error was: 13 differences found

    Example: JSON output from rclone check command comparing two data locations.


    Use a Static Exclude File to Ban File System Lint

    While rclone has a host of flags you can specify on the fly to match or exclude files for a data copy or sync task, it’s hard to remember all the operating system or transient files that can clutter up your cloud storage. Who hasn’t had to laboriously delete macOS’s hidden folder view settings (.DS_Store), or Windows’ ubiquitous thumbnails database, from your pristine cloud storage?

    By building your own customized exclude file of all the files you never want to copy, you can effortlessly exclude all such files in a single flag to consistently keep your storage buckets lint free.
    In the following example, I saved a text file under my user directory’s rclone folder and call it with --exclude-from rather than using --exclude (as I would if filtering on the fly):

    rclone sync /Volumes/LocalAssets b2:cloud-backup-bucket \
    --exclude-from ~/.rclone/exclude.conf

    .DS_Store
    .thumbnails/**
    .vagrant/**
    .gitignore
    .git/**
    .Trashes/**
    .apdisk
    .com.apple.timemachine.*
    .fseventsd/**
    .DocumentRevisions-V100/**
    .TemporaryItems/**
    .Spotlight-V100/**
    .localization/**
    TheVolumeSettingsFolder/**
    $RECYCLE.BIN/**
    System Volume Information/**

    Example of exclude.conf that lists all of the files you explicitly don’t want to ever sync or copy, including Apple storage system tags, Trash files, git files, and more.


    Mount a Cloud Storage Bucket or Folder as a Local Disk

    Rclone takes your cloud-fu to a truly new level with these last two moves.

    Since Backblaze B2 is active storage (all contents are immediately available) and extremely cost-effective compared to other media archive solutions, it’s become a very popular archive destination for media.

    If you mount extremely large archives as if they were massive, external disks on your server or workstation, you can make visual searching through object storage, as well as a whole host of other possibilities, a reality.

    For example, suppose you are tasked with keeping a large network of digital signage kiosks up-to-date. Rather than trying to push from your source location to each and every kiosk, let the kiosks pull from your single, always up-to-date archive in Backblaze!

    With FUSE installed on your system, rclone can mount your cloud storage to a mount point on your system or server’s OS. It will appear instantly, and your OS will start building thumbnails and let you preview the files normally.

    rclone mount b2:art-assets/video ~/Documents/rclone_mnt/

    Almost immediately after mounting this cloud storage bucket of HD and 4K video, macOS has built thumbnails, and even lets me preview these high-resolution video files.

    Behind the scenes, rclone’s clever use of VFS and caching makes this magic happen. You can tweak settings to more aggressively cache the object structure for your use case.
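    For example, a read-heavy media archive might be mounted with more aggressive caching. The flags below are a sketch, and the best values depend on your workload: --dir-cache-time keeps directory listings in memory longer, --vfs-cache-mode full caches file data on local disk as it’s read, and --vfs-read-chunk-size controls how much is fetched per request.

    rclone mount b2:art-assets/video ~/Documents/rclone_mnt/ \
    --vfs-cache-mode full --dir-cache-time 72h --vfs-read-chunk-size 128M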

    Serve Content Directly From Cloud Storage With a Pop-up Web or SFTP Server

    Many times, you’re called on to give users temporary access to certain cloud files quickly. Whether it’s for an approval, a file handoff, or something else, this requires getting the file to a place where the user can access it with tools they already know how to use. Trying to email a 100GB file is no fun, and spending the time to download and move it to another system that the user can access can eat up a lot of time.

    Or perhaps you’d like to set up a simple, uncomplicated way to let users browse a large PDF library of product documents. Instead of moving files to a dedicated SFTP or web server, simply serve them directly from your cloud storage archive with rclone using a single command.

    Rclone’s serve command can present your content stored with Backblaze over a range of protocols, including FTP, SFTP, WebDAV, HTTP, HTTPS, and more, making it as easy for users to access as browsing the web.

    In the following example, I export the contents of the same folder of high-resolution video used above and present it using the WebDAV protocol. With zero HTML or complicated server setups, my users instantly get web access to this content, and even a searchable interface:

    rclone serve webdav b2:art_assets/video
    2020/10/23 17:13:59 NOTICE: B2 bucket art_assets/video: WebDav Server started on http://127.0.0.1:8080/

    Immediately after exporting my cloud storage folder via WebDAV, users can browse to my system and search for all “ProRes” files and download exactly what they need.

    For more advanced needs, you can choose the HTTP or HTTPS option and specify custom data flags that populate web page templates automatically.
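    For instance, serving the same content over HTTPS is one command. This is a sketch; the port and certificate paths are placeholders for your own:

    rclone serve http b2:art_assets/video --addr :8443 --cert server.crt --key server.key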

    Continuing Your Study

    Combined with our rclone webinar, these five moves will place you well on your path to rclone storage admin mastery, letting you confidently take on complicated data migration tasks with an ease and efficiency that will amaze your peers.

    We look forward to hearing of the moves and new use cases you develop with these tools.

    The post Rclone Power Moves for Backblaze B2 Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    You’ve Cat to Be Kitten Me…

    Post Syndicated from original https://www.backblaze.com/blog/youve-cat-to-be-kitten-me/

    Catblaze. It started as an April Fools’ joke four years ago, but it stuck around as part of our website ever since. A few intrepid website perusers even found their way to the page and signed up for our backup service there. To be clear: There’s no actual difference between the two products except the landing page. If you bought Backblaze on Catblaze, it’s Backblaze. You received the same great service as everyone else, just with a nice cat-themed wrapper. Got it? Great!

    It’s been a while since we’ve done anything with Catblaze though, and so I got to thinking… If the page is still functional, how can we make use of it again? Well, why not redirect some traffic there and see how it affects conversions?! A lot of people love cats; maybe that love could be translated to loving backing up, too?

    So, that’s exactly what we did! A few weeks ago, for one day, we diverted some traffic from backblaze.com/cloud-backup.html to backblaze.com/catblaze.html to see how they performed against each other. Did anyone even notice? And if they did, did they sign up anyway? Read on to find out! The results may shock you! And other clickbait hyperbole!

    Why are we doing this? Well, along with everyone else who has had to shift to remote office-ing during the pandemic, we’ve been working hard to maintain high spirits and morale here at Backblaze. While we made a lot of changes to help our team be as productive as possible while working remotely, we thought, why not get a little silly, engage in a little charitable fundraising, and also buoy the spirits of our community at large: You!

    With a lot of people spending more time at home, animal adoption in urban areas increasing, and “Tiger King” being so popular on Netflix, I spent some time chatting with a friend of mine who works for the Humane Society of the United States, and asked if there were any shelters that were looking for aid. He told me that the Peninsula Humane Society—the same branch that the models for the original “Catblaze Cats” came from—could use some donations. So, as part of this experiment, we’ll be contributing to them in honor of the kittens that helped make this experiment possible!

    It also happens to be National Cat Day today, so what better way to celebrate?

    And Now, on to the Results!

    Wow, who would have known that diverting 50% of our hard-earned traffic to an April Fools’ landing page was an interesting idea? The results may or may not surprise you, but here’s the bottom line: Sending traffic to catblaze.com resulted in a decrease in trial conversions (folks coming to our site and creating a trial account) by 15%. Which, admittedly, is better than some of us had guessed!

    Let’s dive into more of those numbers, shall we? (Assuming we’re comparing Catblaze to our regular Backblaze Computer Backup landing page.)

    • Days of experiment: One.
    • Traffic diverted: 50%.
    • Percent change in conversion rate from visit to trial: 15% reduction.
    • Percent change in conversion rate from visit to purchase (skip trial): 41% reduction.
      • 69.96% (a palindrome!) fewer folks went to the “buy” page directly from Catblaze.
    • Percent change in bounce rate: 15% improvement.
      • Percent change in visits going back to the home page from Catblaze: 118% increase.
    • Tweets asking us what is going on: Zero.
    • Support tickets asking us, “Why the cats?”: Zero.
    • Donation to the Peninsula Humane Society: $2,000.

    Lessons Learned

    While we probably shouldn’t update our onboarding messaging to include a picture of our Catblaze friends, it may be worth going a bit more kitten-friendly in future illustrations and designs for our website. The 15% improvement in bounce rate (and 20% reduction in exit rate) meant that people were sticking around and looking at that awesome cat content, or they were very confused. The cat content was at best amusing and at worst confusing (which is usually not what you want your customers to be feeling), and that confusion showed in how people navigated: the number of visitors heading back to our homepage increased by 118%. So, while we kept people on our website, their confusion was visible in where they went next.

    Perhaps the most entertaining thing is that no one asked about the Catblaze website. We received no Tweets or support tickets asking us why everything was cat-themed on our website. Based on our daily traffic, and the seemingly minor reasons that people write in with support tickets, I would have sworn up and down that I’d be on social media answering questions all day—though, if I responded to folks asking about it, that may have affected the experiment—so, it’s great that it went unnoticed.

    Will we be doing this again? I doubt it. The Finance department is already sending me eye roll emojis, but it was definitely an interesting experiment and taught me one important lesson: While people definitely noticed the cats, they certainly didn’t seem to mind them.

    The post You’ve Cat to Be Kitten Me… appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.