12 Power Tips for Backing Up Business Data

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/12-power-tips-for-backing-up-business-data/

In this, the fourth post in our Power Tips series, we provide some blazingly useful tips that we feel would benefit business users. Some of the tips apply to our Backblaze Business Backup product and some to B2 Cloud Storage.

Don’t miss our earlier posts on Power Tips for Backblaze Computer Backup, 12 B2 Power Tips for New Users, and 12 B2 Power Tips for Experts and Developers.

12 Power Tips for Business Users of Backblaze Business Backup and B2

1 Manage All Users of Backblaze Business Backup or B2

Backblaze Groups can be used for both Backblaze Business Backup and B2 to manage accounts and users. See the status of all accounts and produce reports using the admin console.

2 Restore For Free via Web or USB

Admins can restore data from endpoints using the web-based admin console. USB drives can be shipped worldwide to facilitate the management of a remote workforce.

3 Back Up Your VMs

Backblaze Business Backup can handle virtual machines, such as those created by Parallels, VMware Fusion, and VirtualBox; and B2 integrates with StarWind, OpenDedupe, and CloudBerry to back up enterprise-level VMs.

4 Mass Deploy Backblaze Remotely to Many Computers

Companies, organizations, schools, non-profits, and others can use the Backblaze Business Backup MSI installer, Jamf, Munki, and other tools to deploy Backblaze computer backup remotely across all their computers without any end-user interaction.

5 Save Money with Free Data Exchange Between B2 and Our Compute Partners

Spin up compute applications with high speed and no egress charges using our partners Packet and Server Central.

6 Speed up Access to Your Content With Free Egress to Cloudflare

Backblaze offers free egress from B2 to Cloudflare’s content delivery network, speeding up access to your data worldwide.

7 Get Your Data Into the Cloud Fast

You can use Backblaze’s Fireball hard disk array to load large volumes of data without saturating your network. We ship a Fireball to you and once you load your data onto it, you ship it back to us and we load it directly into your B2 account.

8 Use Single Sign-On (SSO) and Two-Factor Verification for Enhanced Security

Single sign-on (Google and Microsoft) improves security and speeds sign-in to your Backblaze account for authorized users. With Backblaze Business Backup, all data is automatically encrypted client-side prior to upload, protected during transfer, and stored encrypted in our secure data centers. Adding Two-Factor Verification augments account safety with another layer of security.

9 Get Quick Answers to Your Backup Questions

Refer to an extensive library of FAQs, how-tos, and help articles for Business Backup and B2 in our online help library.

10 Application Keys Enable Controlled Sharing of Data for Users and Apps

Take control of your cloud data and share files or permit API access using configurable Backblaze application keys.
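
As a sketch of how this might look in practice (using the B2 native API’s b2_authorize_account and b2_create_key calls; the key name, capabilities, and bucket ID below are placeholders), here’s one way to create a key that can only list and read files in a single bucket:

    import requests

    # Authorize with a key that has permission to create other keys.
    auth = requests.get(
        "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
        auth=("KEY_ID", "APPLICATION_KEY"),
    ).json()

    # Create a key limited to listing and reading files in one bucket.
    new_key = requests.post(
        auth["apiUrl"] + "/b2api/v2/b2_create_key",
        headers={"Authorization": auth["authorizationToken"]},
        json={
            "accountId": auth["accountId"],
            "keyName": "read-only-reports",          # placeholder name
            "capabilities": ["listFiles", "readFiles"],
            "bucketId": "BUCKET_ID",                 # placeholder bucket ID
        },
    ).json()
    print(new_key["applicationKeyId"], new_key["applicationKey"])

The applicationKey secret is returned only at creation time, so store it securely before handing it to the user or app that needs it.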

11 Manage Your Server Backups with CloudBerry MBS and B2

Automate and centrally manage server backups using CloudBerry Managed Backup Service (MBS) and B2. It’s easy to set up and once configured, you have a true set-it-and-forget-it backup solution in place.

12 Protect Your NAS Data Using Built-in Sync Applications and B2

B2 is integrated with the leading tools and devices in the market for NAS backup. Native integrations from Synology, QNAP, FreeNAS, TrueNAS, and more ensure that setups are simple and backups are automated.

Want to Learn More About Backblaze Business Backup and B2?

You can find more information on Backblaze Business Backup (including a free trial) on our website, and more tips about backing up in our help pages and in our Backup Guide.

What’s the Diff: Durability vs Availability

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cloud-storage-durability-vs-availability/

When shopping for cloud storage, customers should ask potential providers a few key questions. In addition to inquiring about storage cost, data center location, and the features and capabilities of the service, they’re going to want to know the numbers for two key metrics for measuring cloud storage performance: durability and availability.

We’ve discussed cloud storage costs and data center features in other posts. In this post we’re going to cover the basics about durability and availability.

What is Cloud Durability?

Think of durability as a measurement of how healthy and resilient your data is. You want your data to be as intact and pristine on the day you retrieve it as it was on the day you stored it.

There are a number of ways that data can lose its integrity.

1. Data loss

Data loss can happen through human accident, natural or manmade disaster, or even malicious action out of your control. Whether you store data in your home, office, or with a cloud provider, that data needs to be protected as much as possible from any event that could damage or destroy it. If your data is on a computer, external drive, or NAS in a home or office, you obviously want to keep the computing equipment away from water sources and other environmental hazards. You also have to consider the likelihood of fire, theft, and accidental deletion.

Data center managers go to great lengths to protect data under their care. That care starts with siting the facility in as safe a geographical location as possible, controlling physical access to secure facilities, and monitoring and maintaining the storage infrastructure (chassis, drives, cables, power, cooling, etc.).

2. Data corruption

Data on traditional spinning hard drive systems can degrade with time, have errors introduced during copying, or become corrupted in any number of ways. File systems, operating systems, and utilities have ways to double-check that data is handled correctly during common file and data handling operations, but corruption can sneak into a system if it isn’t monitored closely, or if the storage system doesn’t specifically check for such errors the way systems with ECC (Error Correcting Code) RAM do. Object storage systems commonly monitor for any changes in the data, and will often repair the data automatically or provide warnings when it has been changed.

How is Durability Measured?

Object storage providers express data durability as an annual percentage in nines, as in two nines before the decimal point and as many nines as warranted after the decimal point. For example, eleven nines of durability is expressed as 99.999999999%.

Of the major vendors, Azure claims 12 nines and even 16 nines of durability for some services, while Amazon S3, Google Cloud Platform, and Backblaze offer 11 nines, or 99.999999999% annual durability.

This means those services are promising that your data will remain intact while it is under their care, and that no more than 0.000000001 percent of your data will be lost in a year (in the case of eleven nines of annual durability).
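
To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch of what eleven nines implies for a sizable object library:

    durability = 0.99999999999        # eleven nines, annual
    objects_stored = 1_000_000
    expected_losses = (1 - durability) * objects_stored
    print(expected_losses)            # ~1e-05: on average, one object per 100,000 years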

How is Durability Maintained?

Generally, there are two ways to maintain data durability. The first approach is to use software algorithms and metadata such as checksums to detect corruption of the data. If corruption is found, the data can be healed using the stored information. Examples of this approach are erasure coding schemes such as Reed-Solomon coding.

Another tried and true method to ensure data integrity is to simply store multiple copies of the data in multiple locations. This is known as redundancy. This approach allows data to survive the loss or corruption of data in one or even multiple locations through accident, war, theft, or any manner of natural disaster or alien invasion. All that’s required is that at least one copy of the data remains intact. The odds for data survival increase with the number of copies stored, with multiple locations an important multiplying factor. If multiple copies (and locations) are lost, well, that means we’re all in a lot of trouble and perhaps there might be other things to think about than the data you have stored.
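
A toy model shows why both approaches work. If each copy (or shard) of a file fails independently with some probability over a period, the data survives as long as enough pieces remain. The sketch below uses a simple binomial sum; the 17-of-20 layout is the kind of scheme Backblaze has described for its Vaults, and the failure probability is invented purely for illustration:

    from math import comb

    def p_data_loss(n, k, p):
        # Content is kept as n pieces, any k of which can reconstruct it;
        # each piece independently fails with probability p over the period.
        # Data is lost only when more than n - k pieces fail.
        return sum(comb(n, f) * p**f * (1 - p)**(n - f)
                   for f in range(n - k + 1, n + 1))

    # Three full copies (any 1 of 3 suffices): 3x storage overhead.
    print(p_data_loss(3, 1, 0.01))    # 1e-06

    # Reed-Solomon style 17-of-20 shards: only ~1.18x overhead.
    print(p_data_loss(20, 17, 0.01))  # ~4e-05 if nothing is ever repaired

Real systems do far better than this toy model because they continually detect failed pieces and rebuild them, so losses never get the chance to accumulate.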

The best approach is a combination of the above two approaches. Home data storage appliances such as NAS can provide the algorithmic protection through RAID and other technologies. If you store at least one copy of your data in a different location than your office or home, then you’ve got redundancy covered, as well. The redundant location can be as simple as a USB drive you regularly drop off in your old bedroom’s closet at mom’s house, or a data center in another state that gets a daily backup from your office computer or network.

What is Availability?

If durability can be compared to how well your picnic basket contents survived the automobile trip to the beach, then you might get a good understanding of availability if you subsequently stand and watch that basket being carried out to sea by a wave. The chicken salad sandwich in the basket might be in great shape but you won’t be enjoying it.

Availability is how much time the storage provider guarantees that your data and services are available to you. This is usually documented as a percentage of time per year, e.g., 99.9% (or three nines) means that you will be unable to access your data for no more than about ten minutes per week, or 8.77 hours per year. Data centers often plan downtime for maintenance, which is acceptable as long as you have no immediate need of the data during those maintenance windows.
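
The downtime arithmetic is easy to sanity-check yourself:

    availability = 0.999                       # three nines
    downtime_hours = (1 - availability) * 365.25 * 24
    print(round(downtime_hours, 2))            # 8.77 hours per year
    print(round(downtime_hours * 60 / 52, 1))  # ~10.1 minutes per week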

What availability is suitable for your data depends, of course, on how you’re using it. If you’re running an e-commerce site, reservation service, or a site that requires real-time transactions, then availability can be expressed in real dollars for any unexpected downtime. If you are simply storing backups, or serving media for a website that doesn’t get a lot of traffic, you probably can live with the service being unavailable on occasion.

There are, of course, no guarantees covering connectivity issues that are out of the storage provider’s control, such as internet outages, bad connections, or power losses affecting your connection to the provider.

Guarantees of Availability

Your cloud service provider should both publish and guarantee availability. Much like an insurance policy, the guarantee should be in terms that compensate you if the provider falls short of the guaranteed availability metrics. Naturally, the better the guarantee and the greater the availability, the more reliable and expensive the service will be.

Be sure to read the service level agreement (SLA) closely to see how your vendor defines availability. One provider might count the service as available if a single internet client can access even one service, while another might require that all services be reachable from multiple internet service providers and countries.

Backblaze Durability and Availability

Backblaze offers 99.999999999% (eleven nines) annual durability and 99.9% availability for its cloud storage services.

The Bottom Line on Data Durability and Availability

The bottom line is that no number of nines can absolutely protect your data. Human error or acts of nature can always intervene and send the best-laid data protection plans awry. What you should decide is how important the data is to you, and whether you can afford to lose access to it temporarily, or to lose it completely. That will guide what strategy or vendor you should use to protect that data.

Generally, having multiple copies of your data in different places, using reliable vendors as storage providers, and making sure that the infrastructure storing your data, and your access to it, will be supported (power, service payments, etc.) will go a long way in ensuring that your data will continue to be stable and there when you need it.

Out of Stock: How to Survive the LTO-8 Tape Shortage

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/how-to-survive-the-lto-8-tape-shortage/

Eighteen months ago, the few remaining LTO tape drive manufacturers announced the availability of LTO-8, the latest generation of the Linear Tape-Open storage technology. Yet today, almost no one is actually writing data to LTO-8 tapes. It’s not that people aren’t interested in upgrading to the denser LTO-8 format, which offers 12 TB per cartridge, twice LTO-7’s 6 TB capacity. It’s simply that the two remaining LTO tape media manufacturers are locked in a patent infringement battle. And that means LTO-8 tapes are off the market indefinitely.

The pain of this delay is most acute for media professionals, who are always quick to adopt higher capacity storage media for video and audio files that are notorious storage hogs. As cameras get more sophisticated, capturing higher resolutions and higher frame rates, the storage capacity required per hour of content shoots through the roof. For example, one hour of ProRes UltraHD requires 148.72 GB of storage capacity, four times the 37.35 GB required for one hour of ProRes HD-1080. Meanwhile, falling camera prices are encouraging production teams to use more cameras per shoot, further increasing capacity requirements.

Since its founding, the LTO Consortium has prepared for storage growth by setting a goal of doubling tape density with each LTO generation and committing to releasing a new generation every two to three years. While this lofty goal is admirable, it puts customers with earlier generations of LTO systems in a difficult position. New-generation LTO drives can, at best, only read tapes from the two previous generations. So once a new generation is announced, the clock begins ticking on data stored on deprecated generations of tapes. Until you migrate the data to a newer generation, you’re stuck maintaining older tape drive hardware that may no longer be supported by manufacturers.

How Manufacturer Lawsuits Led to the LTO-8 Shortage

How the industry and the market arrived in this painful place is a tangled tale. The lawsuit and counter-lawsuit that led to the LTO-8 shortage is a patent infringement dispute between Fujifilm and Sony, the only two remaining manufacturers of LTO tape media. The timeline is complicated, starting in 2016 with Fujifilm suing Sony, then Sony counter-suing Fujifilm. By March 2019, US import bans on LTO products from both manufacturers were in place.

In the middle of these legal battles, LTO-8 drive manufacturers announced product availability in late 2017. But what about the LTO-8 tapes? Fujifilm says it is not currently manufacturing LTO-8 tapes and has never sold them. And Sony says its US imports of LTO-8 have been stopped and, citing the dispute, won’t comment on when they will begin shipping again. So no LTO-8 for you!

Note that having only two LTO tape manufacturers is a root cause of this shortage. If there were still six LTO tape manufacturers like there were when LTO was launched in 2000, a dispute between two vendors might not have left the market in the lurch.

Weighing Your Options — LTO-8 Shortage Survival Strategies

If you’re currently using LTO for backup or archive, you have a few options for weathering the LTO-8 shortage.

The first option is to keep using your current LTO generation and wait until the disputes settle out completely before upgrading to LTO-8. The downside here is you’ll have to buy more and more LTO-7 or LTO-6 tapes that don’t offer the capacity you probably need if you’re storing higher resolution video or other capacity-hogging formats. Besides spending more on tapes than you would with the higher capacity newer generation, you’ll also know that anything you write to old-gen LTO tapes will have to be migrated sooner than planned. LTO’s short two-to-three year generation cycle doesn’t leave time for legal battles, and remember, manufacturers guarantee at most two generations of backward compatibility.

A second option is to go ahead and buy an LTO-8 library and use LTO-7 tapes that have been specially formatted for higher capacity, called LTO Type M (M8). When initialized as Type M media, an LTO-7 cartridge can hold 9 TB of data instead of the standard 6 TB it holds when initialized as Type A. That puts it halfway to the 12 TB capacity of an LTO-8 tape. However, this extra capacity comes with several caveats:

  • Only new, unused LTO-7 cartridges can be initialized as Type M.
  • Once initialized as Type M, they cannot be changed back to LTO-7 Type A.
  • Only LTO-8 drives in libraries can read and write to Type M, not standalone drives.
  • Future LTO generations — LTO-9, LTO-10, etc. — will not be able to read LTO-7 Type M.

So if you go with LTO-7 Type M for greater capacity, realize it’s still LTO-7, not LTO-8, and when you move to LTO-9, you won’t be able to read those tapes.

LTO Cartridge Capacity (TB) vs. LTO Generation Chart

Managing Tape is Complicated

If your brain hurts reading this as much as mine does writing this, it’s because managing tape is complicated. The devil is in the details, and it’s hard to keep them all straight. When you have years or even decades of content stored on LTO tape, you have to keep track of which content is on which generation of LTO, and ensure your facility has the drive hardware available to read them, and hope that nothing goes wrong with the tape media or the tape drives or libraries.

In general, new drives can read two generations back, but there are exceptions. For example, LTO-8 can’t read LTO-6 because the standard changed from GMR (Giant Magneto-Resistance) heads to TMR (Tunneling Magnetoresistance) heads. The new TMR heads can write data more densely, which is what drives the huge increase in capacity. But that means you’ll want to keep an LTO-7 drive available to read LTO-5 and LTO-6 tapes.

Beyond these considerations for managing the tape storage long-term, there are the day-to-day hassles. If you’ve ever been personally responsible for managing backup and archive for your facility, you’ll know that it’s a labor-intensive, never-ending chore that takes time from your real job. And if your setup doesn’t allow users to retrieve data themselves, you’re effectively on-call to pull data off the tapes whenever it’s needed.

A Third Option — Migrate from LTO to Cloud Storage

If neither of these options for weathering the LTO-8 crisis sounds appealing, there is an alternative: cloud storage. Cloud storage removes the complexity of tape while reducing costs. How much can you save in media and labor costs? We’ve calculated it for you in LTO Versus Cloud Storage Costs — the Math Revealed. And cloud storage makes it easy to give users access to files, either through direct access to the cloud bucket or through one of the integrated applications offered by our technology partners.

At Backblaze, we have a growing number of customers who shifted from tape to our B2 Cloud Storage and never looked back. Customers such as Austin City Limits, who preserved decades of historic concert footage by moving to B2; Fellowship Church, who eliminated Backup Thursdays and freed up staff for other tasks; and American Public Television, who adopted B2 in order to move away from tape distribution to its subscribers. What they’ve found is that B2 made operations simpler and their data more accessible without breaking their budget.

Another consideration: once you migrate your data to B2 cloud storage, you’ll never have to migrate again when LTO generations change or when the media ages. Backblaze takes care of making sure your data is safe and accessible on object storage, and migrates your data to newer disk technologies over time with no disruption to you or your users.

In the end, the problem with tape isn’t the media, it’s the complexity of managing it. It’s a well-known maxim that the time you spend managing how you do your work takes time away from what you do. Having to deal with multiple generations of both tape and tape drives is a good example of an overly complex system. With B2 Cloud Storage, you can get all the economical advantages of tape as well as the disaster recovery advantages of your data being stored away from your facility, without the complexity and the hassles.

With no end in sight to this LTO-8 shortage, now is a good time to make the move from LTO to B2. If you’re ready to start your move to always-available cloud storage, Backblaze and our partners are ready to help you.

Migrate or Die, a Webinar Series on Migrating Assets and Archives to the Cloud

If you’re facing challenges managing LTO and contemplating a move to the cloud, don’t miss Migrate or Die, our webinar series on migrating assets and archives to the cloud.

The Profound Benefits of Cloud Collaboration for Business Users

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/cloud-collaboration-for-business-users/

Apple’s annual WWDC is highlighting high-end desktop computing, but it’s laptop computers and the cloud that are driving a new wave of business and creative collaboration

WWDC, Apple’s annual megaconference for developers, kicks off this week, and Backblaze has team members on the ground to bring home insights and developments. Yet while everyone is drooling over the powerful new Mac Pro, we know that the majority of business users rely on a portable computer as their primary system for business and creative use.

The Rise of the Mobile, Always On, Portable Workstation

Analysts confirm this trend toward portable computers and the cloud. IDC’s 2019 Worldwide Quarterly Personal Computing Device Tracker report shows that desktop form-factor systems comprise only 22.6% of new systems, while laptops and portables are chosen almost twice as often, at 42.4%.

After all, these systems are extremely popular with users and with the DevOps and IT teams that support them. Small and self-contained, with massive compute power, modern laptops have fast SSD drives and always-connected Wi-Fi, helping users be productive anywhere: in the field, on business trips, and at home. Surprisingly, companies today can deploy massive fleets of these notebooks with extremely lean staff. At the inaugural MacDevOps conference a few years ago, Google’s team shared that they managed 65,000 Macs with a team of seven admins!

Laptop Backup is More Important Than Ever

With the trend toward leaner IT staffs, and the dangers of computers in the field being lost, dropped, or damaged, having a reliable backup system that just works is critical. Despite the proliferation of teams using shared cloud documents and email, all of the other files on your laptop — the massive presentation due next week or the project that’s not quite ready to share on Google Drive — have no protection without backup, which is of course why Backblaze exists!

Cloud as a Shared Business Content Hub is Changing Everything

When a company is comfortably backing up users’ files to the cloud, the next natural step is to adopt cloud-based storage like Backblaze B2 for your teams. With over 750 petabytes of customer data under management, Backblaze has worked with businesses of every size as they adopt cloud storage. Each business does so for its own reasons.

In the past, a business department typically would get a share of a company’s NAS server and was asked to keep all of the department’s shared documents there. But it turns out these systems are hard to access remotely from outside the corporate firewall. They require VPNs and a constant network connection to mount a corporate shared drive via SMB or NFS. And, of course, running out of space and storing large files were ever-present problems.

Sharing Business Content in the Cloud Can be Transformational for Businesses

When considering a move to cloud-based storage for your team, some benefits seem obvious, but others are more profound and show that cloud storage is emerging as a powerful, organizing platform for team collaboration.

Shifting to cloud storage delivers these well-known benefits:

  • Pay only for storage you actually need
  • Grow as large and as quickly as you might need
  • Service, management, and upgrades are built into the service
  • Pay for service as you use it out of operating expenses vs. onerous capital expenses

But shifting to shared, cloud storage yields even more profound benefits:

Your Business Content is Easier to Organize and Manage: When your team’s content is in one place, it’s easier to organize and manage, and users can finally let go of stashing content all over your organization or leaving it on their laptops. All of your tools to mine and uncover your business’s content work more efficiently, and your users do as well.

You Get Simple Workflow Management Tools for Free: With cloud storage, your storage can fit your business processes much more easily, and it can do so on the fly. If you ever need to set up separate storage for teams of users, or define read/write rules for specific buckets of content, it’s easy to configure.

You Can Replace External File-Sharing Tools: Since most email services balk at sending large files, it’s common to use a file-sharing service to share big files with other users on your team or outside your organization. Typically this means having to download a massive file, re-upload it to a file-sharing service, and publish that file-sharing link. When your files are already in the cloud, sharing one is as simple as retrieving its URL.

In fact, this is exactly how Backblaze organizes and serves PDF content on our website, like customer case studies. When you click on a PDF link on the Backblaze website, it’s served directly from a B2 bucket!

You Get Instant, Simple Policy Control over Your Business or Shared Content: B2 offers simple-to-use tools to keep every version of a file as it’s created, keep just the most recent version, or choose how many versions you require. Want to have your shared content links time-out after a day or so? This and more is all easily done from your B2 account page:

B2 Lifecycle Settings
An example of setting up shared link rules for a time-sensitive download: The file is available for 3 days, then deleted after 10 days
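
For those who prefer to script these policies, the same controls are exposed through the B2 native API. Here’s a hedged sketch (the bucket ID, bucket name, and file names are placeholders) that applies a lifecycle rule like the one in the screenshot above, then issues a share link that expires after a day:

    import requests

    auth = requests.get(
        "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
        auth=("KEY_ID", "APPLICATION_KEY"),
    ).json()
    api = auth["apiUrl"]
    headers = {"Authorization": auth["authorizationToken"]}

    # Hide files under "shared/" 3 days after upload, then
    # delete the hidden versions 10 days later.
    requests.post(api + "/b2api/v2/b2_update_bucket", headers=headers, json={
        "accountId": auth["accountId"],
        "bucketId": "BUCKET_ID",
        "lifecycleRules": [{
            "fileNamePrefix": "shared/",
            "daysFromUploadingToHiding": 3,
            "daysFromHidingToDeleting": 10,
        }],
    })

    # A download link for a private bucket that stops working after 24 hours.
    token = requests.post(api + "/b2api/v2/b2_get_download_authorization",
        headers=headers, json={
            "bucketId": "BUCKET_ID",
            "fileNamePrefix": "shared/big-file.mp4",
            "validDurationInSeconds": 86400,
        }).json()["authorizationToken"]
    link = auth["downloadUrl"] + "/file/my-bucket/shared/big-file.mp4?Authorization=" + token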

You’re One Step Away from Sharing That Content Globally: As you can see, beyond individual file-sharing, cloud storage like Backblaze B2 can serve as your origin store for your entire website. With the emergence of content delivery networks (CDN), you’re now only a step away from sharing and serving your content globally.

To make this easier, Backblaze joined the Bandwidth Alliance, and offers no-cost egress from your content in Backblaze B2 to Cloudflare’s global content delivery network.

Customers that adopt this strategy can dramatically slash the cost of serving content to their users.

"The combination of Cloudflare and Backblaze B2 Cloud Storage saves Nodecraft almost 85% each month on the data storage and egress costs versus Amazon S3." - James Ross, Nodecraft Co-founder/CTO

Read the Nodecraft/Backblaze case study.

Get Sophisticated Content Discovery and Compliance Tools for Your Business Content: With more and more business content in cloud storage, finding the content you need quickly across millions of files, or surfacing content that needs special storage consideration (for GDPR or HIPAA compliance, for example) is critical.

Ideally, you could have your own private, customized search engine across all of your cloud content, and that’s exactly what a new class of solutions provide.

With Acembly or Aparavi on Backblaze, you can build content indexes and offer deep search across all of your content, and automatically apply policy rules for management and retention.

Where Are You in the Cloud Collaboration Trend?

The trend to mobile, always-on workers building and sharing ever more sophisticated content around cloud storage as a shared hub is only accelerating. Users love the freedom to create, collaborate, and share content anywhere. Businesses love the benefits of having all of that content in an easily managed repository that makes their entire business more flexible and less expensive to operate.

So, while device manufacturers like Apple may announce exciting pro-level workstations, the need for companies and teams to collaborate and be effective on the move is a more important and compelling issue than ever before. The cloud is an essential element of that trend, and one that shouldn’t be underestimated.

•  •  •

Upcoming Free Webinars

Wednesday, June 5, 10am PT
Learn how Nodecraft saved 85% on their cloud storage bill with Backblaze B2 and Cloudflare.
Join the Backblaze/Nodecraft webinar.

Thursday, June 13, 10am PT
Want to learn more about turning content in Backblaze B2 into searchable content with powerful policy rules?
Join the Backblaze/Aparavi webinar.

Backblaze B2 Copy File Beta is Now Public

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/backblaze-b2-copy-file-beta-is-now-public/

Since introducing B2 Cloud Storage nearly four years ago, we’ve been busy adding enhancements and new functionality to the service. We continually look for ways to make B2 more useful for our customers, be it through service level enhancements, partnerships with leading Compute providers, or lowering the industry’s lowest download price to 1¢/GB. Today, we’re pleased to announce the beta release of our newest functionality: Copy File.

What You Can Do With B2 Copy File

This new capability enables you to create a new file (or new part of a large file) that is a copy of an existing file (or range of an existing file). You can either copy over the source file’s metadata or specify new metadata for the new file that is created. This all occurs without having to download or reupload any data.

This has been one of our most requested features, as it unlocks:

  • Rename/Re-organize. The new capabilities give customers the ability to reorganize their files without having to download and reupload. This is especially helpful when trying to mirror the contents of a file system to B2.
  • Synthetic Backup. With the ability to copy ranges of a file, users can now leverage B2 for synthetic backup: uploading a full backup once, then uploading only incremental changes (as opposed to reuploading the whole file with every change). This is particularly helpful for uses such as backing up VMs, where reuploading the entire file every time it changes is inefficient.
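
Here’s a minimal sketch of a server-side rename with the new endpoint (the file ID and names are placeholders; see the documentation links below for the full parameter list):

    import requests

    auth = requests.get(
        "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
        auth=("KEY_ID", "APPLICATION_KEY"),
    ).json()

    # Create "renamed/video.mp4" as a copy of an existing file:
    # no download, no re-upload, and the source metadata is kept.
    copied = requests.post(
        auth["apiUrl"] + "/b2api/v2/b2_copy_file",
        headers={"Authorization": auth["authorizationToken"]},
        json={
            "sourceFileId": "SOURCE_FILE_ID",    # placeholder
            "fileName": "renamed/video.mp4",
            "metadataDirective": "COPY",
        },
    ).json()
    print(copied["fileId"], copied["fileName"])

For the synthetic backup pattern above, the companion b2_copy_part call accepts a byte range, so a new large-file version can reuse the unchanged regions of the previous upload and only the changed bytes travel over the network.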

Where to Learn More About B2 Copy File

The endpoint documentation can be found here:

b2_copy_file:  https://www.backblaze.com/b2/docs/b2_copy_file.html
b2_copy_part:  https://www.backblaze.com/b2/docs/b2_copy_part.html

More About the Beta Program

We’re introducing these endpoints as a beta so that developers can provide us feedback before the endpoints go into production. Specifically, this means that the APIs may evolve as a result of the feedback we get. We encourage you to give Copy File a try and, if you have any comments, you can email our B2 beta team at b2beta@backblaze.com. Thanks!

Connect Veeam to the B2 Cloud: Episode 4 — Using Morro Data CloudNAS

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/connect-veeam-to-the-b2-cloud-episode-4-using-morro-data-cloudnas/

In the fourth post in our series on connecting Veeam with B2, we provide a guide on how to back up your VMs to Backblaze B2 using Veeam and Morro Data’s CloudNAS. In our previous posts, we covered how to connect Veeam to the B2 cloud using OpenDedupe, connect Veeam to the B2 cloud using Synology, and connect Veeam with B2 using StarWind VTL.

VM Backup to B2 Using Veeam Backup & Replication and Morro Data CloudNAS

We are glad to show how Veeam Backup & Replication can work with Morro Data CloudNAS to keep the more recent backups on premises for fast recovery while archiving all backups in B2 Cloud Storage. CloudNAS not only caches the more recent backup files, but also simplifies the management of B2 Cloud Storage with a network share or drive letter interface.

–Paul Tien, Founder & CEO, Morro Data

VM backup and recovery is a critical part of IT operations that supports business continuity. Traditionally, IT has deployed an array of purpose-built backup appliances and applications to protect against server, infrastructure, and security failures. As VMs continue to spread in production, development, and verification environments, the expanding VM backup repository has become a major challenge for system administrators.

Because the VM backup footprint is usually quite large, cloud storage is increasingly being deployed for VM backup. However, cloud storage does not achieve the same performance level as on-premises storage for recovery operations. For this reason, cloud storage has been used as a tiered repository behind on-premises storage.

diagram of Veeam backing up to B2 using Cloudflare and Morro Data CloudNAS

In this best practice guide, VM Backup to B2 Using Veeam Backup & Replication and Morro Data CloudNAS, we show how Veeam Backup & Replication can work with Morro Data CloudNAS to keep the most recent backups on premises for fast recovery while archiving all backups in the retention window in Backblaze B2 Cloud Storage. CloudNAS caching not only provides a buffer for the most recent backup files, but also simplifies the management of on-premises storage and cloud storage as an integral backup repository.

Tell Us How You’re Backing Up Your VMs

If you’re backing up VMs to B2 using one of the solutions we’ve written about in this series, we’d like to hear from you in the comments about how it’s going.

View all posts in the Veeam series.

Migrating Your Legacy Archive to Future-Ready Architecture

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/ortana-cubix-core-media-archive/

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Guest blog post by James Gibson, Founder & CEO of Ortana Media Group

There’s a wide range of reasons why businesses want to migrate away from their current archive solution, from managing risk and concerns over legacy hardware to media degradation and format support. Many businesses also find themselves stuck with closed-format solutions that are based on legacy middleware with escalating support costs. It is a common problem that we at Ortana have helped many clients overcome through smart and effective use of the many storage solutions available on the market today. As founder and CEO of Ortana, I want to share some of our collective experience around this topic and how we have found success for our clients.

First, we often forget how quickly the storage landscape changes. Let’s take a typical case.

It’s Christmas 2008 and a CTO has just finalised the order on their new enterprise-grade hierarchical storage management (HSM) system with an LTO-4 tape robot. Beyonce’s Single Ladies is playing on the radio, GPS on phones has just started to be rolled out, and there is this new means of deploying mobile apps called the Apple™ App Store! The system purchased is from a well established, reputable company and provides peace of mind and scalability — what more can you ask for? The CTO goes home for the festive season — job well done — and hopes Santa brings him one of the new Android phones that have just launched.

Ten years on, the world is very different, and Moore’s law tells us that the pace of technological change is only set to increase. That growing archive has remained on the same hardware, controlled by the same HSM, and has gone through one or two expensive LTO format changes. “These migrations had to happen,” the CTO concedes, as support for the older LTO formats was being dropped by the hardware supplier. Their whole content library had to be restored and archived back to the new tapes. New LTO formats also required new versions of the HSM, and whilst these often included new features — covering codec support, intelligent repacking, and reporting — the fundamentals of the system remained: closed format, restricted accessibility, and expensive. Worse still, the annual support costs are increasing whilst new feature development has ground to a halt. Sure the archive still works, but for how much longer?

Decisions, Decisions, So Many Migration Decisions

As businesses make the painful decision to migrate their legacy archive, the choices of what, where, and how become overwhelming. The storage landscape today is a completely different picture from when closed-format solutions went live. This change alone offers significant opportunities to businesses. By combining the right storage solutions with seamless architecture, and with lights-out orchestration driving the entire process, businesses can flourish by allowing their storage to react to the needs of the business, not constrain them. Ortana has purposefully ensured Cubix (our asset management, automation, and orchestration platform) is as storage agnostic as possible by integrating a range of on-premises and cloud-based solutions, and has built an orchestration engine that is fully abstracted from this integration layer. The end result is that workflow changes can be done in seconds without affecting the storage.

screenshot of Cubix workflow
Cubix’s orchestration platform includes a Taskflow engine for creating customized workflow paths

As our example CTO would say (shaking their head no doubt whilst saying it), a company’s main priority is to not-be-here-again, and the key is to store media in an open format, not bound to any one vendor, but also accessible to the needs of the business both today and tomorrow. The cost of online cloud storage such as Backblaze has now made storing content in the cloud more cost-effective than LTO, and this cost is only set to fall further. This, combined with the ample internet bandwidth that has become ubiquitous, makes cloud storage an obvious primary storage target. Entirely agnostic to the format and codec of the content you are storing, aligned with MPAA best practices, and easily integrated into any on-premises or cloud-based workflow, cloud storage removes many of the issues faced by the closed-format HSMs deployed in so many facilities today. It also begins to change the dialogue over main vs. DR storage, since it’s no longer based at a facility within the business.

Cloud Storage Opens Up New Capabilities

Sometimes people worry that cloud storage will be too slow. Where this is true, it is almost always due to a poor cloud implementation. B2 is online, meaning that the time-to-first-byte is almost zero, whereas other cloud solutions such as Amazon Glacier are cold storage, where the time-to-first-byte is one to two hours at best and six to twelve hours in general. Anything that is to replace an LTO solution needs to match or beat the capacity and speed of the incumbent solution, and good workflow design can ensure that restores are done as promptly as possible and direct to where the media is needed.

But what about those nasty egress costs? People can get caught off guard when this is not budgeted for correctly, or when their workflow does not make good use of simple solutions such as proxies. Regardless of whether your archive is located on LTO or in the cloud, proxies are critical to keeping accessibility up and costs and restore times down. By default, when we deploy Cubix for clients we always generate a frame accurate proxy for video content, often devalued through the use of burnt-in timecode (BITC), logos, and overlays. Generated using open source transcoders, they are incredibly cost effective to generate and are often only a fraction of the size of the source files. These proxies, which can also be stored and served directly from B2 storage, are then used throughout all our portals to allow users to search, find, and view content. This avoids the time and cost required to restore the high resolution master files. Only when the exact content required is found is a restore submitted for the full-resolution masters.
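
As an illustration of how cheap proxies are to produce, here’s one way to generate a small H.264 proxy with the open source ffmpeg transcoder (a sketch assuming ffmpeg is installed; the paths are placeholders, and extras like burnt-in timecode or logo overlays would be added as ffmpeg filters):

    import subprocess

    def make_proxy(src, dst):
        # 480p H.264 at a high CRF: a small fraction of the source size,
        # frame accurate, and cheap to store and serve from a B2 bucket.
        subprocess.run([
            "ffmpeg", "-i", src,
            "-vf", "scale=-2:480",
            "-c:v", "libx264", "-crf", "28",
            "-c:a", "aac", "-b:a", "96k",
            dst,
        ], check=True)

    make_proxy("master_4k.mov", "proxy_480p.mp4")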

Multiple Copies Stored at Multiple Locations by Multiple Providers

Moving content to the cloud doesn’t remove the risk of working with a single provider, however. No matter how good or big they are, it’s always a wise idea to ensure an active disaster recovery solution is present within your workflows. This last resort copy does not need all the capabilities of the primary storage, and can even be more punitive when it comes to restore costs and times. But it should be possible to enable in moments, and be part of the orchestration engine rather than being a manual process.

To de-risk that single provider, or for workflows where 30-40% of the original content has to be regularly restored (because proxies do not meet the needs of the workflow), on-premises archive solutions can still be deployed without being caught in the issues discussed earlier. Firstly, LTO now offers portability benefits through LTFS, an easy-to-use open format which, critically, has its specification and implementation in the public domain. This ensures it is easily supported by many vendors and guarantees support longevity for on-premises storage. Ortana, with its Cubix platform, supports many HSMs that can write content in native LTFS format that can be read by any standalone drive from any vendor supporting LTFS.

Also, with 12 TB hard drives now standard in the marketplace, nearline based storage has also become a strong contender for content when combined with intelligent storage tiering to the cloud or LTO. Cubix can fully automate this process, especially when complemented by such vendors as GB Labs’ wide range of hardware solutions. This mix of cloud, nearline and LTO — being driven by an intelligent MAM and orchestration platform like Cubix to manage content in the most efficient means possible on a per workflow basis — blurs the lines between primary storage, DR, and last resort copies.

Streamlining the Migration Process

Once you have your storage mix agreed upon and in place, the next fraught task is getting your existing library onto the new solution whilst not impacting access to the business. Some HSM vendors suggest swapping your LTO tapes by physically removing them from one library and inserting them into another. Ortana knows that libraries are often the linchpin of the organisation, and any downtime has a significant negative impact that can fill media managers with dread, especially since these one-shot, one-direction migrations can easily go wrong. Moreover, when following this route, simply moving tapes does not persist any editorial metadata or meet many of the objectives around making content more available. Cubix not only manages the media and the entire transformation process, but also retains the editorial metadata from the existing archive.

screenshot of Cubix search results
During the migration process, content can be indexed via AI-powered speech to text and image recognition

Given the high speeds that LTO delivers, combined with the scalability of Cubix, the largest libraries can be migrated in short timescales, whilst having zero downtime on the archive. Whilst the content is being migrated to the defined mix of storage targets, Cubix can perform several tasks on the content to further augment the metadata, from basics such as proxy and waveform generation through to AI-based image detection and speech-to-text. Such processes further reduce the time staff spend looking for content, and refine the search capability to ensure that only the content required is restored — translating directly to reduced restore times and egress costs.

A Real-World Customer Example

Many of the above concerns and considerations led a large broadcaster to Ortana for a large-scale migration project. The broadcaster produces in-house news and post production with multi-channel linear playout and video-on-demand (VoD). Their existing archive was 3 PB of media across two generations of LTO tape managed by Oracle™ DIVArchive & DIVADirector. They were concerned about on-going support for DIVA and wanted to fully migrate all tape and disk-based content to a new HSM in an expedited manner, making full use of the dedicated drive resources available.

Their primary goal was to fully migrate all editorial metadata into Cubix, including all ancillary files (subtitles, scripts, etc.), and index all media using AI-powered content discovery to reduce search times for the news, promos, and sports departments at the same time. They also wanted to replace the legacy Windows Media Video (WMV) proxy with a new full HD H.264 frame-accurate proxy, and provide the business secure, group-based access to the content. Finally, they wanted all the benefits of cloud storage whilst keeping costs to a minimum.

With Ortana’s Cubix Core, the broadcaster was able to safely migrate their DIVArchive to two storage platforms: LTFS with a Quantum HSM system and Backblaze B2 cloud storage. Their content was indexed via AI-powered image recognition (Google Vision) and speech-to-text (Speechmatics) during the migration process, and the Cubix UI replaced the existing archive as the media portal for both internal and external stakeholders.

The new solution has vastly reduced the timescales for content processing across all departments, and has led to a direct reduction in staff costs. Researchers report a 50-70% reduction in time spent searching for content, and the archive shows a 40% reduction in restore requests. By having the content located in two distinct geographical locations, they’ve entirely removed the business risk of having their archive with a single vendor and in a single location. Most importantly, their archived content is more active than ever, and they can be sure it will stay alive for the future.

How exactly did Ortana help them do it? Join our webinar Evading Extinction: Migrating Legacy Archives on Thursday, March 28, 2019. We’ll detail all the steps we took in the process and include a live demo of Cubix. We’ll show you how straightforward and painless the archive migration can be with the right strategy, the right tools, and the right storage.

— James Gibson, Founder & CEO, Ortana Media Group

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

A Workflow Playbook for Migrating Your Media Assets to a MAM

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/workflow-playbook-migrating-your-media-assets-to-a-mam/

Asset > Metadata > Database > Media Asset Manager > Backblaze Fireball > Backblaze B2 Cloud Storage

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Whatever your creative venture, the byproduct of all your creative effort is assets. Whether you produce music, images, or video, as you produce more and more of these valuable assets, they tend to pile up and become difficult to manage, organize, and protect. As your creative practice evolves to meet new demands, and the scale of your business grows, you’ll often find that your current way of organizing and retrieving assets can’t keep up with the pace of your production.

For example, if you’ve been managing files by placing them in carefully named folders, getting those assets into a media asset management system will make them far easier to navigate and much easier to pull out exactly the media you need for a new project. Your team will be more efficient and you can deliver your finished content faster.

As we’ve covered before, putting your assets in storage like B2 Cloud Storage ensures that they will be protected in a highly durable and highly available way that lets your entire team be productive.

You can learn about some of the new capabilities of the latest cloud-based collaboration tools in the other posts in this series.

With some smart planning, and a little bit of knowledge, you can be prepared to get the most of your assets as you move them into an asset management system, or when migrating from an older or less capable system into a new one.

Assets and Metadata

Before we can build some playbooks to get the most from your creative assets, let’s review a few key concepts.

Asset — a rich media file with intrinsic metadata.

An asset is simply a file that is the result of your creative operation, most often a rich media file like an image or a video. Typically, these files are captured or created in a raw state; your creative team then adds value to that raw asset by editing it together with other assets to create a finished story that, in turn, becomes another asset to manage.

Metadata — Information about a file, either embedded within the file itself or associated with the file by another system, typically a media asset management (MAM) application.

The file carries information about itself that can be understood by your laptop or workstation’s operating system. Some of these seem obvious, like the name of the file, how much storage space it occupies, when it was first created, and when it was last modified. These would all be helpful ways to try to find one particular file you are looking for among thousands just using the tools available in your OS’s file manager.

File Metadata

There’s usually another level of metadata embedded in media files that is not so obvious but potentially enormously useful: metadata embedded in the file when it’s created by a camera or film scanner, or output by a program.

Results of a file inspected by an operating system's file manager
An example of metadata embedded in a rich media file

For example, this image taken in Backblaze’s data center a few years ago carries all kinds of interesting information. When I inspect the file with Get Info in macOS’s Finder, a wealth of information is revealed. I can not only tell the image’s dimensions and when the image was taken, but also exactly what kind of camera took this picture and the lens settings that were used.

As you can see, this metadata could be very useful if you want to find all images taken on that day, or even images taken with that same camera, focal length, F-stop, or exposure.
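
This embedded metadata is also easy to read programmatically, which is essentially what an asset manager does at ingest. A small sketch using the Pillow imaging library (the file name is a placeholder):

    from PIL import Image
    from PIL.ExifTags import TAGS

    def read_exif(path):
        # Map numeric EXIF tag IDs to readable names like Make, Model, DateTime.
        exif = Image.open(path).getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    for name, value in read_exif("datacenter.jpg").items():
        print(name, value)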

When a File and Folder System Can’t Keep Up

Inspecting files one at a time is useful, but a very slow way to determine if a file is the one you need for a new project. Yet many creative environments that don’t have a formal asset management system get by with an ad hoc system of file and folder structures, often kept on the same storage used for production or even on an external hard drive.

Teams quickly outgrow that system when they find that their work spills over to multiple hard drives, or takes up too much space on their production storage. Worst of all, assets kept on a single hard drive are vulnerable to disk damage, or to being accidentally copied or overwritten.

Why Your Assets Need to be Managed

To meet this challenge, creative teams have often turned to a class of application called a Media Asset Manager (MAM). A MAM automatically extracts all their assets’ inherent metadata, helps move files to protected storage, and makes them instantly available to their entire team. In a way, these media asset managers become a private media search engine where any file attribute can be a search query to instantly uncover the file they need in even the largest media asset libraries.

Beyond that, asset management systems are rapidly becoming highly effective collaboration and workflow tools. For example, tagging a series of files as Field Interviews — April 2019, or flagging an edited piece of content as HOLD — do not show customer can be very useful indeed.

The Inner Workings of a Media Asset Manager

When you add files into an asset management system, the application inspects each file, extracting every available bit of information about the file, noting the file’s location on storage, and often creating a smaller stand-in or proxy version of the file that is easier to present to users.

To keep track of this information, asset manager applications employ a database and keep information about your files in it. This way, when you’re searching for a particular set of files among your entire asset library, you can simply make a query of your asset manager’s database in an instant rather than rifling through your entire asset library storage system. The application takes the results of that database query and retrieves the files you need.
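
Conceptually, the catalog at the heart of an asset manager can be as simple as a table of file attributes plus a pointer to where each file lives. A stripped-down sketch of the idea (SQLite, with invented column names):

    import sqlite3

    db = sqlite3.connect("assets.db")
    db.execute("""CREATE TABLE IF NOT EXISTS assets (
        id INTEGER PRIMARY KEY,
        file_name TEXT, storage_uri TEXT, proxy_uri TEXT,
        camera TEXT, shot_at TEXT, tags TEXT)""")

    db.execute(
        "INSERT INTO assets (file_name, storage_uri, camera, shot_at, tags) VALUES (?, ?, ?, ?, ?)",
        ("interview_cam2.mov", "b2://my-bucket/raw/interview_cam2.mov",
         "Canon EOS C300", "2019-04-02", "field-interviews,april-2019"))
    db.commit()

    # The "private search engine" is then just a query over the metadata:
    for row in db.execute(
            "SELECT file_name, storage_uri FROM assets WHERE tags LIKE ?",
            ("%field-interviews%",)):
        print(row)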

The Asset Migration Playbook

Whether you need to move from a file and folder based system to a new asset manager, or have been using an older system and want to move to a new one without losing all of the metadata that you have painstakingly developed, a sound playbook for migrating your assets can help guide you.

Play 1 — Getting Assets in Files and Folders Protected Without an Asset Management System

In this scenario, your assets are in a set of files and folders, and you aren’t ready to implement your asset management system yet.

The first consideration is for the safety of the assets. Files on a single hard drive are vulnerable, so if you are not ready to choose an asset manager your first priority should be to get those files into a secure cloud storage service like Backblaze B2.

We invite you to read our post: How Backup and Archive are Different for Professional Media Workflows

Then, when you have chosen an asset management system, you can simply point the system at your cloud-based asset storage to extract the metadata of the files and populate the asset information in your asset manager.

  1. Get assets archived or moved to cloud storage
  2. Choose your asset management system
  3. Ingest assets directly from your cloud storage
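
For step 1, the archiving itself can be scripted. A sketch using the b2sdk Python library (the key, bucket, and file names are placeholders):

    from b2sdk.v2 import B2Api, InMemoryAccountInfo

    b2 = B2Api(InMemoryAccountInfo())
    b2.authorize_account("production", "KEY_ID", "APPLICATION_KEY")
    bucket = b2.get_bucket_by_name("asset-archive")

    # Copy one raw asset to the cloud; in practice you would walk a folder tree.
    bucket.upload_local_file(
        local_file="/Volumes/Media/raw/scene01_take03.mov",
        file_name="raw/scene01_take03.mov",
    )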

Play 2 — Getting Assets in Files and Folders into Your Asset Management System Backed by Cloud Storage

In this scenario, you’ve chosen your asset management system, and need to get your local assets in files and folders ingested and protected in the most efficient way possible.

You’ll ingest all of your files into your asset manager from local storage, then archive them to cloud storage. Once your asset manager has been configured with your cloud storage credentials, it can automatically move a copy of local files to the cloud for you. Later, when you have confirmed that the file has been copied to the cloud, you can safely delete the local copy.

  1. Ingest assets from local storage directly into your asset manager system
  2. From within your asset manager system archive a copy of files to your cloud storage
  3. Once safely archived, the local copy can be deleted

Play 3 — Getting a Lot of Assets on Local Storage into Your Asset Management System Backed by Cloud Storage

If you have a lot of content, more than, say, 20 terabytes, you will want to use a rapid ingest service like Backblaze’s Fireball system. You copy the files to Fireball, Backblaze loads them directly into your asset management bucket, and the asset manager is then updated with each file’s new location in your Backblaze B2 account.

This can be a manual process, or it can be scripted to make it faster; a sketch of such a relinking script follows the steps below.

You can read about one such migration using this play here:
iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted

  1. Ingest assets from local storage directly into your asset manager system
  2. Archive your local assets to Fireball (up to 70 TB at a time)
  3. Once the files have been uploaded by Backblaze, relink the new location of the cloud copy in your asset management system
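
As promised above, here is a minimal sketch of the relinking script. Every asset manager exposes relinking differently, so the API endpoint, authentication, fields, and CSV manifest layout here are all hypothetical placeholders.

```python
import csv
import requests

# Hypothetical MAM REST API and token; substitute your system's real interface.
MAM_API = "https://mam.example.com/api/assets"
HEADERS = {"Authorization": "Bearer your-mam-token"}

# One manifest row per file: the asset ID in the MAM and its new B2 file name.
with open("fireball_manifest.csv") as f:
    for row in csv.DictReader(f):  # columns: asset_id, b2_file_name
        resp = requests.patch(
            f"{MAM_API}/{row['asset_id']}",
            headers=HEADERS,
            json={"storage_location": f"b2://media-archive/{row['b2_file_name']}"},
        )
        resp.raise_for_status()  # stop on the first failed relink
        print("relinked", row["asset_id"])
```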

You can read more about Backblaze Fireball on our website.

Play 4 — Moving from One Asset Manager System to a New One Without Losing Metadata

In this scenario you have an existing asset management system and need to move to a new one as efficiently as possible, not only to take advantage of the new system's features and get your files protected in cloud storage, but also to do it in a way that does not impact your existing production.

Some asset management systems will allow you to export the database contents in a format that can be imported by a new system. Some older systems may not offer that luxury and will require a database specialist to extract the metadata manually. Either way, you can expect to need to map the fields from the old system to the fields in the new system; a sketch of that mapping follows the steps below.

Making a copy of the old database is a must. Don't work on the primary copy, and be sure to conduct tests on small groups of files as you migrate from the older system to the new one. You need to ensure that the metadata is correct in the new system, paying special attention to whether each file's actual location is mapped properly. It's wise to keep the old system up and running for a while before completely phasing it out.

  1. Export the database from the old system
  2. Import the records into the new system
  3. Ensure that the metadata is correct in the new system and file locations are working properly
  4. Make archive copies of your files to cloud storage
  5. Once the new system has been running through a few production cycles, it’s safe to power down the old system
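
Here is the field-mapping sketch mentioned above. Every field name on both sides is hypothetical; build the mapping table from your two systems' actual schemas.

```python
# Hypothetical mapping from the old system's field names to the new system's.
FIELD_MAP = {
    "clip_title": "title",
    "shoot_date": "capture_date",
    "keywords": "tags",
    "file_path": "storage_location",  # verify locations map properly!
}

def convert_record(old_record):
    """Translate one exported record into the new system's vocabulary."""
    new_record = {}
    for old_field, new_field in FIELD_MAP.items():
        if old_field in old_record:
            new_record[new_field] = old_record[old_field]
    return new_record

# Test on a small group of files first, as recommended above.
sample = {
    "clip_title": "Field Interview 04",
    "shoot_date": "2019-04-02",
    "keywords": "interview;field",
    "file_path": "/vol1/raw/fi_04.mov",
}
print(convert_record(sample))
```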

Play 5 — Moving Quickly from an Asset Manager System on Local Storage to a Cloud-based System

In this variation of Play 4, you can move content to object storage with a rapid ingest service like Backblaze Fireball at the same time that you migrate to a cloud-based system. This step benefits from scripting: create records in your new system with all of your metadata, then relink each record to the actual file location in your cloud storage, all in one pass.

You should test that your asset management system can recognize a file already in the system without creating a duplicate copy of the file. This is done differently by each asset management system.

  1. Export the database from the old system
  2. Import the records into the new system while creating placeholder records with the metadata only
  3. Archive your local assets to Fireball (up to 70 TB at a time)
  4. Once the files have been uploaded by Backblaze, relink the cloud based location to the asset record

Wrapping Up

Every production environment is different, but we all need the same thing: to be able to find and organize our content so that we can be more productive and rest easy knowing that our content is protected.

These plays will help you take that step and be ready for any future production challenges and opportunities.

If you’d like more information about media asset manager migration, join us for our webinar on March 15, 2019:

Backblaze Webinar:  Evolving for Intelligence: MAM to MAM Migration

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post A Workflow Playbook for Migrating Your Media Assets to a MAM appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Backup and Archive are Different for Professional Media Workflows

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/backup-vs-archive-professional-media-production/

a man working on a video editor

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

If you make copies of your images or video files for safekeeping, are you backing them up or archiving them? It's been discussed many times before, but the short answer is that it depends on the function of the copy. For media workflows, a crisp understanding is required in order to implement the right tools. In today's post, we'll explore the nuances between backup and archiving in media workflows and provide a real-world application from UCSC Silicon Valley.

We explored the broader topic of backing up versus archiving in our What’s the Diff: Backup vs Archive post. It’s a backup if you copy data to keep it available in case of loss, while it’s an archive if you make a copy for regulatory compliance, or to move older, less-used data off to cheaper storage. Simple, right? Not if you’re talking about image, video and other media files.

Backup vs. Archive for Professional Media Productions

Traditional definitions don’t fully capture how backup and archive typically operate in professional media workflows compared to business operations. Video and images aren’t typical business data in a number of ways, and that profoundly impacts how they’re protected and preserved throughout their lifecycle. With media backup there are key differences in which files get backed up and how they get backed up. With media archive there are key differences in when files get archived and why they’re archived.

Large Media File Sizes Slow Down Backup

The most obvious nuance is that media files are BIG. While most business documents are under 30 MB in size, a single second of video could be larger than 30 MB at higher resolutions and frame rates. Backing up files that large can take longer than the traditional backup windows: overnight for incremental backups and a weekend for a full backup. And you can't expect deduplication to shorten backup times or reduce backup sizes, either. Video and images don't dedupe well.

Meanwhile, the editing process generates a flurry of intermediate or temporary files in the active content creation workspace that don’t need to be backed up because they can be easily regenerated from source files.

The best backup solutions for media allow you to specify exactly which directories and file types you want backed up, so that you’re taking time for and paying for only what you need.
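
As an illustration, selecting which files to back up often comes down to simple exclusion rules. The sketch below uses hypothetical directory and extension lists; the editing tools in your workflow will dictate the real ones.

```python
import os

# Hypothetical exclusion rules: render caches and other intermediate files
# that editing tools can regenerate from source media.
EXCLUDE_EXTENSIONS = {".cfa", ".pek", ".prv", ".tmp", ".bak"}
EXCLUDE_DIRS = {"Adobe Premiere Pro Auto-Save", "Media Cache Files"}

def files_to_back_up(root):
    """Yield only the files worth paying to back up."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk skips them entirely.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in EXCLUDE_EXTENSIONS:
                yield os.path.join(dirpath, name)

for path in files_to_back_up("/media/active_projects"):
    print(path)
```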

Archiving to Save Space on Production Storage

Another difference is that archiving to reduce production storage costs is much more common in professional media workflows than with business documents, which are more likely to be archived for compliance. High-resolution video editing in particular requires expensive, high-performance storage to deliver multiple streams of content to multiple users simultaneously without dropping frames. With the large file sizes that come with high-resolution content, this expensive resource fills up quickly with content not needed for current productions. Archiving completed projects and infrequently-used assets can keep production storage capacities under control.

Media asset managers (MAMs) can simplify the archive and retrieval process. Assets can be archived directly through the MAM's visual interface, and after archiving, their thumbnails or proxies remain visible to users. Archived content remains fully searchable by its metadata and can also be retrieved directly through the MAM interface. For more information on MAMs, read What's the Diff: DAM vs MAM.

Strategically archiving select media files to less expensive storage allows facilities to stay within budget, and when done properly, keeps all of your content readily accessible for new projects and repurposing.

Permanently Secure Source Files and Raw Footage on Ingest

A less obvious way that media is different is that video files are fixed content that don’t actually change during the editing process. Instead, editing suites compile changes to be made to the original and apply the changes only when making the final cut and format for delivery. Since these source files are not going to change, and are often irreplaceable, many facilities save a copy to secondary storage as soon as they’re ingested to the workflow. This copy serves as a backup to the file on local storage during the editing process. Later, when the local copy is no longer actively being used, it can be safely deleted knowing it’s secured in the archive. I mean backup. Wait, which is it?

Whether you call it archive or backup, make a copy of source files in a storage location that lives forever and is accessible for repurposing throughout your workflow.

To see how all this works in the real world, here’s how UCSC Silicon Valley designed a new solution that integrates backup, archive, and asset management with B2 cloud storage so that their media is protected, preserved and organized at every step of their workflow.

Still from a UC Scout AP Psychology course video

How UCSC Silicon Valley Secured Their Workflow’s Data

UCSC Silicon Valley built a greenfield video production workflow to support UC Scout, the University of California’s online learning program that gives high school students access to the advanced courses they need to be eligible and competitive for college. Three teams of editors, producers, graphic designers and animation artists — a total of 22 creative professionals — needed to share files and collaborate effectively, and digital asset manager Sara Brylowski was tasked with building and managing their workflow.

Sara and her team had specific requirements. For backup, they needed to protect active files on their media server with an automated backup solution that allowed accidentally deleted files to be easily restored. Then, to manage storage capacity more effectively on their media server, they wanted to archive completed videos and other assets that they didn’t expect to need immediately. To organize content, they needed an asset manager with seamless archive capabilities, including fast self-service archive retrieval.

They wanted the reliability and simplicity of the cloud to store both their backup and archive data. “We had no interest in using LTO tape for backup or archive. Tape would ultimately require more work and the media would degrade. We wanted something more hands off and reliable,” Sara explained. The cloud choice was narrowed to Backblaze B2 or Amazon S3. Both were proven cloud solutions that were fully integrated with the hardware and software tools in their workflow. Backblaze was chosen because its $5 per terabyte per month pricing was a fraction of the cost of Amazon S3.

Removing Workflow Inefficiencies with Smarter Backup and Archive

The team had previously used the university's standard cloud backup service to protect active files on the media server as they worked on new videos. But because that cloud backup was designed for traditional file servers, it backed up everything, even the iterative files generated by video production tools like Adobe Premiere, After Effects, Maya, and Cinema 4D that didn't need to be backed up. For this reason, Sara pushed to move off the university's backup provider: it was expensive in large part because it saved all of this noise in perpetuity.

“With our new workflow we can manage our content within its life cycle and at the same time have reliable backup storage for the items we know we’re going to want in the future. That’s allowed us to concentrate on creating videos, not managing storage.”—Sara Brylowski, UCSC Silicon Valley

After creating thousands of videos for 65 online courses, their media server was quickly filling to its 128 TB capacity. They needed to archive data from completed projects to make room for new ones, sooner rather than later. Deploying a MAM solution would simplify archiving, while also helping them organize their diverse and growing library of assets — video shot in studio, B-roll, licensed images, and audio from multiple sources.

To find out exactly how Sara and her team addressed these challenges and more, read the full case study on UC Scout at UCSC Silicon Valley and learn how their new workflow enables them to concentrate on creating videos, not managing storage.

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post How Backup and Archive are Different for Professional Media Workflows appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloud-based Tools Combined with AI Can Make Workflows More Powerful and Increase Content Value

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/increase-content-archive-value-via-cloud-tools/

CPU + Metadata Mining + Virtual Machines & Apps + AI in the cloud

This is part two of a series. You can read part one at Modern Storage Workflows in the Age of Cloud.

Modern Storage Workflows in the Age of Cloud, Part 2

In Modern Storage Workflows in the Age of Cloud, Part One, we introduced a powerful maxim to guide content creators (anyone involved in video or rich media production) in choosing storage for the different parts of their content creation workflows:

Choose the storage that best fits each workflow step.

It’s true that every video production environment is different, with different needs, and the ideal solution for an independent studio of a few people is different than the solution for a 50-seat post-production house. But the goal of everyone in the business of creative storytelling is to tell stories and let your vision and craft shine through. Anything that makes that job more complicated and more frustrating keeps you from doing your best work.

Given how prevalent, useful, and inexpensive cloud technologies are, almost every team today is rapidly finding they can jettison whole classes of storage that are complicating their workflow and instead focus on two main types of storage:

  1. Fast, shared production storage to support editing for content creation teams (with no need to oversize or overspend)
  2. Active, durable, and inexpensive cloud storage that lets you move all of your content in one protected, accessible place — your cloud-enabled content backplane

It turns out there’s another benefit unlocked when your content backplane is cloud enabled, and it’s closely tied to another production maxim:

Organizing content in a single, well managed repository makes that content more valuable as you use it.

When all content is in a single place, well-managed and accessible, content gets discovered faster and used more. Over time it will pick up more metadata, with sharper and more refined tags. A richer context is built around the tags, making it more likely that the content you already have will get repurposed for new projects.

Later, when you come across a large content repository to acquire, or contemplate a digitization or preservation project, you know you can bring it into the same content management system you’ve already refined, concentrating and increasing value further still.

Having more content that grows increasingly valuable over time becomes a monetization engine for licensing, content personalization, and OTT delivery.

You might think that these benefits already present a myriad of new possibilities, but cloud technologies are ready to accelerate the benefits even further.

Cloud Benefits — Pay as You Need It, Scalability, and Burstability

It’s worth recapping the familiar cost-based benefits of the cloud: 1) pay only for the resources you actually use, and only as long as you need them, and, 2) let the provider shoulder the expense of infrastructure support, maintenance, and continuous improvement of the service.

The cost savings from the cloud are obvious, but the scalability and flexibility of the cloud should also weigh heavily when comparing the cloud with handling infrastructure yourself. If you were responsible for a large server and storage system, how would you cope with a business doubling every quarter, or with merging with another team for a big project?

Too many production houses end up disrupting their production workflow (and their revenue) when they are forced to beef up servers and storage capability to meet new production demands. Cloud computing and cloud storage offer a better solution. It’s possible to instantly bring on new capacity and capability, even when the need is unexpected.

Cloud Delivered Compute Horsepower on Demand

Let’s consider the example of a common task like transcoding content and embedding a watermark. You need to process 3,600 frames of a two hour movie to resize the frame and add a watermark, and that compute workload takes 100 minutes and ties up a single server.

You could adapt that workflow to the cloud by pulling the high resolution frames from cloud storage, feeding them to 10 cloud servers in parallel, and completing the same job in 10 minutes. Another option is to spin up 100 servers and get the job done in one minute.

The cloud provides the flexibility to cut workflow steps that used to take hours down to minutes by adding the compute horsepower that’s needed for the job, then turn it off when it’s no longer needed. You don’t need to worry about planning ahead or paying for ongoing maintenance. In short, compute adapts to your workflow rather than the other way around, which empowers you to make workflow choices that instead prioritize the creative need.
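
The division of labor is straightforward to sketch. The code below splits the frames of our example movie across a pool of workers; process_chunk is a placeholder for whatever fetch-transform-store work each cloud server would actually do.

```python
from concurrent.futures import ThreadPoolExecutor

TOTAL_FRAMES = 172_800  # a two-hour movie at 24 fps
WORKERS = 10            # 10 servers -> roughly a 10x speedup

def process_chunk(frame_range):
    """Placeholder: pull frames from cloud storage, resize and watermark
    them on one worker, and write the results back."""
    start, end = frame_range
    # ... fetch frames [start, end), transform, upload results ...
    return end - start

step = TOTAL_FRAMES // WORKERS
chunks = [(i, min(i + step, TOTAL_FRAMES)) for i in range(0, TOTAL_FRAMES, step)]

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    done = sum(pool.map(process_chunk, chunks))
print(f"processed {done} frames across {WORKERS} workers")
```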

Your Workflow Applications Are Moving to the Cloud, Too

More and more of the applications used for content creation and management are moving to the cloud, as well. Modern web browsers are gaining astonishing new capabilities and there is less need for dedicated application servers accompanying storage.

What’s important is that the application helps you in the creative process, not the mechanics of how the application is served. Increasingly, this functionality is delivered by virtual machines that can be spun up by the thousands as needed or by cloud applications that are customized for each customer’s specific needs.

iconik media workflow management screenshot

An example of a cloud-delivered workflow application — iconik asset discovery and project collaboration

iconik is one example of such a service. iconik delivers cloud-based asset management and project collaboration as a service. Instead of dedicated servers and storage in your data center, each customer has their own unique installation of iconik's service that's ready in minutes from first signup. The installation is exclusive to your organization and tailored to your needs. The result is a set of virtual machines, compute, and storage matched to your workflow, with just the resources you need. The resources are instantly available whenever and wherever your team is using the system, and consume no compute or storage when they are not.

Here’s an example. A video file can be pumped from Backblaze B2 to the iconik application running on a cloud compute instance. The proxies and asset metadata are stored in one place and available to every user. This approach is scalable to as many assets and productions you can throw at it, or as many people as are collaborating on the project.

The service is continuously upgraded and updated with new features and improvements as they become available, without the delay of rolling out enhancements and patches to different customers and locations.

Given the advantages of the cloud, we can expect that more steps in the creative production workflow that currently rely on dedicated on-site servers will move to the highly agile and adaptable environment offered by the cloud.

The Next Evolution — AI Becomes Content-Aware

Having your content library in a single content backplane in the cloud provides another benefit: ready access to a host of artificial intelligence (AI) tools.

Examples of AI Tools That Can Improve Creative Production Workflows:

  • Text to speech transcription
  • Language translation
  • Object recognition and tagging
  • Celebrity recognition
  • Brand use recognition
  • Colorization
  • High resolution conversion
  • Image stabilization
  • Sound correction

AI tools can be viewed as compute workers that develop processing rules by training for a desired result on a data set. An AI tool can be trained by having it process millions of images until it can tell the difference between sky and grass, or pick out a car in a frame of video. Once such a tool has been trained, it provides an inexpensive way to add valuable metadata to content, letting you find, for example, every video clip across your entire library that has sky, or grass, or a car in it. Text keywords with an associated timecode can be automatically added to aid in quickly zeroing in on a specific section of a long video clip. That’s something that’s not practical for a human content technician over thousands of files, but is easy, repeatable, and scalable for an AI tool.
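
To make the idea concrete, here is a sketch of what time-coded AI tags might look like attached to an asset record; the structure is illustrative, not any particular vendor's format.

```python
# Illustrative time-coded tags an AI tool might attach to one clip.
clip_tags = {
    "file_name": "aerial_roadtrip_04.mov",
    "tags": [
        {"keyword": "sky",   "timecode": "00:00:02:12", "confidence": 0.97},
        {"keyword": "grass", "timecode": "00:00:09:03", "confidence": 0.91},
        {"keyword": "car",   "timecode": "00:01:24:18", "confidence": 0.88},
    ],
}

# Zero in on a specific section of a long clip by keyword.
matches = [t["timecode"] for t in clip_tags["tags"] if t["keyword"] == "car"]
print(matches)  # -> ['00:01:24:18']
```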

Let AI Breathe New Life into Existing Content

AI tools can breathe new life in older content and intelligently clean up older format source video by removing film scratches or upresing content to today’s higher resolution formats. They can be valuable for digital restoration and preservation projects, too. With AI tools and source content in the cloud, it’s now possible to give new life to analog source footage. Digitize it, let AI clean it up, and you’ll get fresh, monetizable assets in your library.

An example of the time-synched tags that an AI tool such as axle ai can generate automatically

Many workflow tools, such as asset and collaboration tools, can use AI tools for speech transcription or smart object recognition, which brings additional capabilities. axle.ai, for example, can connect with a visual search tool to highlight an object in the frame like a wine bottle, letting you subsequently find every shot of a wine bottle across your entire library.

Visual search for brands and products is also possible. Just highlight a brand logo and find every clip where the camera panned over that logo. It's smart enough to get results even when only part of the logo is shown.

We’ve barely touched on the many tools that can be applied to content on ingest or content already in place. Whichever way they’re applied, they can deliver on the promise of making your workflows more efficient and powerful, and your content more valuable.

All Together Now

Taken together, these trends are great news for creatives. They serve your creative vision by making your workflow more agile and more efficient. Cloud-based technologies let you focus on adding value and repurposing content in fresh new ways, resulting in new audiences and better monetization.

By placing your content in a cloud content backplane, and taking advantage of applications as a service, including the latest AI tools, it becomes possible to continually grow your content collection while increasing its value — a desirable outcome for any creative production enterprise.

If you could focus only on delivering great creative content, and had a host of AI tools to automatically make your content more valuable, what would you do?

The post Cloud-based Tools Combined with AI Can Make Workflows More Powerful and Increase Content Value appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Cloud-Based MAMs Can Make End-to-End Cloud Workflows a Reality

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/how-to-migrate-mam-to-cloud/

Create, Capture, Distribute, Archive

Ever since commercial cloud services were launched over 12 years ago, media and entertainment professionals have debated how and where cloud services best fit in their workflows. Archive and delivery were seen as the most natural fits, but complete, end-to-end cloud workflows were seen as improbable due to the network bandwidth required to edit full-resolution content. Now, with new cloud-oriented creative tools on the market, the cloud is playing a role at every step of creative workflows.

Of course, it’s one thing to talk about complete cloud workflows and it’s another thing to show how the cloud has transformed an actual customer’s workflow from end-to-end. But that’s exactly what healthcare content provider Everwell did by building a streamlined work from anywhere workflow with cloud storage and cloud-delivered asset management. The best part was that rolling out the new cloud workflow was just as painless as it was transformative for their business.

Where On-Site Asset Management Fails: Scaling Up and Remote Access

Everwell was founded on the idea that millions of TVs in medical office lobbies and waiting rooms could deliver compelling, well-produced healthcare educational content. Hospitals, medical groups, and medical practitioners that sign up with Everwell receive media players pre-loaded with an extensive library of Everwell’s educational videos along with software that allows each practice to customize the service with their own information.

As the number of subscribers and demand for their content grew, Everwell COO Loren Goldfarb realized that their production workflow needed to adapt quickly or they wouldn’t be able to scale their business to meet growth. The production workflow was centered around an on-site media asset management (MAM) server with on-site storage that had served them well for several years. But as the volume of raw footage grew and the file sizes increased from HD to 4K, their MAM struggled to keep up with production deadlines.

At the same time, Everwell’s content producers and editors needed to work more efficiently from remote locations. Having to travel to the main production office to check content into the media asset manager became a critical bottleneck. Their existing MAM was designed for teams working in a single location, and remote team members struggled to maintain access to it. And the off-site team members and Everwell’s IT support staff were spending far too much time managing VPNs and firewall access.

Workarounds Were Putting Their Content Library at Risk

Given the pain of a distributed team trying to use systems designed for a single office, it was no surprise that off-site producers resorted to shipping hard drives directly to editors, bypassing the asset management system altogether. Content was extremely vulnerable to loss while being shipped around on hard drives. And making editorial changes to content afterward without direct access to the original source files wasn’t practical. Content was becoming increasingly disorganized and hard for users to find or repurpose. Loren knew that installing servers and storage at every remote production site was not an option.

What Loren needed was an asset management solution that could keep productions moving smoothly and content organized and protected, even with remote producers and editors, so that his team could stay focused on creating content. He soon realized that most available MAMs weren’t built for that.

Everwell's distributed workflow

A Cloud-Based MAM Designed for the Complete Workflow

After reviewing and rejecting several vendors on his own, Loren met with Jason Perr of Workflow Intelligence Nexus. Jason proposed a complete cloud workflow solution with iconik for asset management and B2 for cloud storage. Built by established MAM provider Cantemo, iconik takes an entirely new approach by delivering asset management with integrated workflow tools as an on-demand service. With iconik, everything is available through a web browser.

Jason helped Everwell migrate existing content, then deploy a complete, cloud-based production system. Remote producers can easily ingest content into iconik, making it immediately available to other team members anywhere on the planet. As soon as content is added, iconik’s cloud-based compute resources capture the files’ asset metadata, generate proxies, then seamlessly store both the proxies and full-resolution content to the cloud. What’s more, iconik provides in-the-cloud processing for advanced metadata extraction and other artificial intelligence (AI) analysis to enrich assets and allow intelligent searching across the entire content library.

Another critical iconik feature for Everwell is the support for cloud-based proxy editing. Proxies stored in the cloud can be pulled directly into Adobe Premiere, allowing editors to work on their local machine with lower resolution proxies, rather than having every editor download the full-resolution content and generate their own proxy. After the proxy editing is complete, full-resolution sequences are rendered using the full-resolution originals stored in B2 cloud storage and then returned to the cloud. Iconik also offers cloud-based compute resources that can perform quality checks, transcoding, and other processing its customers need to prepare the content for delivery.

Cloud Storage That Goes Beyond Archive

Working behind the scenes, cloud storage seamlessly supports the iconik asset management system, hosting and delivering proxy and full-resolution content while keeping it instantly available for editing, metadata extraction, and AI or other processing. And because cloud storage is built with object storage instead of RAID, it offers the extreme durability needed to keep valuable content highly protected with the infinite scalability needed to grow capacity on demand.

Backblaze B2’s combination of data integrity, dramatically lower pricing than other leading cloud storage options, and full integration with iconik made it an obvious choice for Everwell. With B2, they no longer have to pay for or manage on-site production storage servers, tape, or disk-based archives — all their assets are securely stored in the cloud.

This was the seamless, real-time solution that Loren had envisioned, with all of the benefits of a truly cloud-delivered and cloud-enabled solution. Both iconik and Backblaze services can be scaled up in minutes and the pricing is transparent and affordable. He doesn’t pay for services or storage he doesn’t use and he was able to phase out his on-site servers.

Migrating Existing Content Archive to the Cloud

Everwell’s next challenge was migrating their enormous content library of raw material and existing asset metadata without impacting production. With Jason of Workflow Intelligence Nexus guiding them, they signed up for Backblaze’s B2 Fireball, the rapid ingest service that avoids time-consuming internet transfers by delivering content directly to their cloud-based iconik library.

As part of the service, Backblaze sent Everwell the 70TB Fireball. Everwell connected it to their local network and copied archived content onto it. Meanwhile, Jason and Loren’s team exported the metadata records from their existing asset manager and with a migration tool from Workflow Intelligence Nexus, they automatically created new placeholder records in iconik with all of that metadata.

Everwell then shipped the Fireball back to the Backblaze data center where all of the content was securely uploaded to their B2 account. iconik then scanned and identified the content and linked it to the existing iconik records. The result was an extremely fast migration of an existing content archive to a new cloud-based MAM that was immediately ready for production work.

Everwell's media ingest workflow: archiving media to B2 cloud storage

Cloud Simplicity and Efficiency, with Growth for the Future

With a cloud-based asset management and storage solution in place, production teams like Loren's gain creative freedom and significant new capabilities. They can add new editors and producers on the fly and at a moment's notice, let them ingest new content from any location, and use a single interface to keep track of every project in their expanding asset library.

Production teams can use new AI-powered discovery tools to find content quickly and can always access the original raw source files to create new videos at any time. And they’ll have more time to add new features to their service and take on new productions and customers when they wish.

Best of all for Loren, he’s now free to grow Everwell’s production operations as fast as possible without having to worry about running out of storage, managing servers, negotiating expensive maintenance contracts, or paying for staff to run it all. Their workflow is more nimble, their workforce is more productive, and Loren finally has the modern cloud-delivered production he’s always wanted.

•  •  •

We invite you to view our demo on integrating iconik with B2, 3 Steps to Making Your Cloud Media Archive Active with iconik and Backblaze B2.

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post How Cloud-Based MAMs Can Make End-to-End Cloud Workflows a Reality appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

B2 on Your Desktop — Cloud Storage Made Easy

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cloud-storage-made-easy/

B2 on your Desktop

People have lots of different ways that they work with files in B2 Cloud Storage, and there’s a wide range of integrations for different platforms and different uses.

Sometimes, though, being able to use B2 as if it were just another drive on your desktop is the easiest way to go. The applications we’ll be covering in this post make working with B2 as easy as dragging and dropping files from a file manager on your computer directly to B2, or from B2 to your computer. In other cases, you can drag files from a file manager to the application, or between panes inside the application. There’s something for every platform, too, whether you’re on Windows, Macintosh, or Linux. Some of these tools are even free.

Let’s take a look at the applications that make working with B2 a piece of cake! (Or, as easy as pie.)

Use B2 As a Drive on the Desktop

Our first group of applications let you use B2 as if it were a local drive on your computer. Depending on your platform, the files on B2 are available from File Explorer on Windows, the Finder on Mac, or the file manager on Linux (as well as from the command line). Some of the applications are free and some require purchase (marked with $).

Most of these apps are simple for anyone to set up. If you are a more advanced user, and comfortable working with the command-line in your OS’s terminal, there are a number of free command-line tools for mounting B2 as a drive, including restic, Rclone, and HashBackup. See their docs for how to mount restic, Rclone, or HashBackup as a drive. We previously wrote about using restic with B2 in our Knowledge Base.

When would dragging and dropping files on the desktop be useful? If you just need to move one or a few files, this could be the fastest way to do that. You can load the application when you need to transfer files, or have it start with your computer so your B2 files and buckets are always just a click away. If you keep archived documents or media in B2 and often need to browse to find a file, this makes that much faster. You can even use shortcuts, search, and other tools you have available for your desktop to find and manage files on B2.

We’ve grouped the applications by platform that let you use B2 as a drive.

Some Screenshots Showing Applications That Let You Use B2 as a Drive

screenshot of Mountain Duck interface for saving to B2 Cloud Storage

Mountain Duck

screenshot of B2 mounted on the desktop with Mountain Duck

B2 mounted on the desktop with Mountain Duck

screenshot of ExpanDrive saving to B2 cloud storage

ExpanDrive

Cloudmounter

screenshot of Cloudmounter with B2 open in Mac Finder

Cloudmounter with B2 open in Mac Finder

Use B2 From a Desktop Application

These applications allow you to use B2 from within the application, and also often work with the local OS’s file manager for drag and drop. They support not just B2, but other cloud and sync services, plus FTP, SFTP, Webdav, SSH, SMB, and other protocols for networking and transferring files.

All of the applications below require purchase, but they have demo periods when you can try them out before you decide you’re ready to purchase.

Screenshots of Using B2 From Desktop Applications

Filezilla Pro

Filezilla Pro browsing photos on B2

screenshot of Transmit with B2 files

Transmit with B2 files

screenshot of Cyberduck transmitting files to B2

Cyberduck

screenshot of odrive cloud storage integration

odrive

SmartFTP

The Cloud on Your Desktop

We hope these applications make you think of B2 as easy and always available on your desktop whenever you need to move files to or from cloud storage. Easy Peasy Lemon Squeezy, right?

If you’ve used any of these applications, or others we didn’t mention in this post, please tell us in the comments how they worked for you.

The post B2 on Your Desktop — Cloud Storage Made Easy appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Save Data Directly to B2 With Backblaze Cloud Backup 6.0

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/save-data-directly-to-cloud-storage/

Save Restores to B2 screenshot

Customers have often told us that they’d love a way to save data directly from their Backblaze Computer Backup account to B2 Cloud Storage. Some want to freeze a set of records in time, others want to preserve the state of a directory or system as it existed at a specific moment. Still others simply want to remove data from their local drive but have the assurance that it is safely stored in the cloud.

We listened to these requests and are happy to say that we’ve added this capability in our just released 6.0 update of Backblaze Computer Backup. Users can now select B2 Cloud Storage as a destination to save Snapshots from their backup account during the restore process.

This capability lets customers do a number of new things, like keep a copy of their old computer’s data even when migrating to a new one, save a collection of files (e.g. last year’s emails, a completed work project, your novel draft, tax returns) in the cloud as an archive, or free up space on a hard drive by moving data to a Snapshot in B2 and then deleting the original copy. Just like files in Computer Backup, the B2 Snapshot can be downloaded over the internet or delivered anywhere on a USB flash or hard drive.

No More Connecting Your External Drives Every 30 Days

This new feature can particularly benefit users who have been using Computer Backup to back up data from multiple external drives. Often, these external drives are not always connected to their computers, and to maintain the backups they have been required to connect these drives at least once every 30 days so that they’re active and therefore maintained in their backup — a task they tell us they’d rather avoid.

Now, with the ability to save a restore to B2, these customers can take a Snapshot of the data already backed up from these drives and save it to a B2 account. They can save as many Snapshots as they wish, thereby saving the state of the drive as it existed in one moment for as long as they wish to retain it.

Snapshots are stored at economical B2 rates: $0.005/GB per month for storage and $0.01/GB for downloads. Customers get an instant cost estimate when a Snapshot is prepared from Backblaze Backup to B2.
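
For example, a 500 GB Snapshot would cost $2.50 per month to store (500 × $0.005/GB) and $5.00 to download in full (500 × $0.01/GB).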

What is B2 Cloud Storage?

B2 is Backblaze’s low cost and high performance cloud storage. It can be used to store data for as short or as long a period as you require. The data in B2 is retrievable without delay from anywhere at any time.

B2 is different from Backblaze Computer Backup in that B2 can be used to store whatever data you want, and you have complete control over how long it is retained. Our Computer Backup service offers unlimited backup of the data on your Mac or Windows computer using the Backblaze client software. B2, in contrast, can be accessed through the account dashboard, through any of a number of applications chosen by the user, through various programming interfaces, or from a computer's command line. For more on pricing, see our pricing page and calculator for B2.

How Does Saving a Restore to B2 Work?

Files in your Computer Backup can be zipped and archived to a Snapshot that is stored in B2 Cloud Storage. These selected files will be safe in B2 until the Snapshot is removed by the user, even if the files have been deleted from the computer and the backup.

screenshot of the View/Restore Files options

Creating a Restore Snapshot in Backup account

The user gets an instant estimate of the cost to store the Snapshot in B2.

Name this Snapshot screenshot

Preparing Snapshot from Computer Backup account

The user receives a notice when the Snapshot is created and stored.

Your B2 Snapshot is Ready!

Notice that Snapshot has been created

An unlimited number of restores can be saved and retained as B2 Snapshots for any length of time desired. The user's account dashboard shows all the Snapshots that have been created, and gives options to download or remove them. A Snapshot can be downloaded directly from B2 to a user's computer or shipped to customers on a USB flash or hard drive. And, when the drive is returned within 30 days, its cost is completely refundable, just like with regular restores.

screenshot of user B2 Snapshots

User account page showing status of Snapshots in B2

Let Us Know How You’re Using Snapshots

We hope you’ll try out this new capability and let us know how you’re using it.

For more tips on saving data to B2 Snapshots, read our help article, Saving Files to B2 from Computer Backup, or sign up for our free webinar on Backblaze Backup v6.0 on January 30, 2019, at 11am PST.

The post Save Data Directly to B2 With Backblaze Cloud Backup 6.0 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Migrating from CrashPlan: Arq and B2

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/migrating-crashplan-arq-backup-b2/

Arq and Backblaze B2 logos on a computer screen

Many ex-CrashPlan for Home users have moved to Backblaze over the last year. We gave them a reliable, set-and-forget backup experience for the amazing price of $5/month per computer. Yet some people wanted features such as network share backup and CrashPlan’s rollback policy, and Arq Backup can provide those capabilities. So we asked Stefan Reitshamer of Arq to tell us about his solution.

— Andy

Migrating from CrashPlan
by Stefan Reitshamer, Founder, Arq Backup

CrashPlan for Home is gone — no more backups to CrashPlan and no more ability to restore from your old backups. Time to find an alternative!

Arq + Backblaze B2 = CrashPlan Home

If you’re looking for many of the same features as CrashPlan plus affordable storage, Arq + B2 cloud storage is a great option. MacWorld’s review of Arq called it “more reliable and easier to use than CrashPlan.”

Just like CrashPlan for Home, Arq lets you choose your own encryption password. Everything is encrypted before it leaves your computer, with a password that only you know.

Also just like CrashPlan for Home, Arq keeps all backups forever by default. Optionally you can tell it to “thin” your backup records from hourly to daily to weekly as they age, similar to the way Time Machine does it. And/or you can set a budget and Arq will periodically delete the oldest backup records to keep your costs under control.

With Arq you can back up whatever you want — no limits. Back up your external hard drives, network shares, etc. Arq won’t delete backups of an external drive no matter how long it’s been since you’ve connected it to your computer.

The license for Arq is a one-time cost and, if you use multiple Macs and/or PCs, one license covers all of them. The pricing for B2 storage is a fraction of the cost of other at-scale cloud storage providers — just $0.005/GB per month, and the first 10 GB is free. To put that in context, that's 1/4 the price of Amazon S3. The savings become more pronounced if/when you need to restore your files. B2 only charges a flat rate of $0.01/GB for data downloads, and you get 1 GB of downloads free every day. By contrast, Amazon S3 has tiered download pricing that starts at 9 times that of B2.

Arq’s Advanced Features

Arq is a mature product with plenty of advanced features:

  • You can tell Arq to pause backups whenever you’re on battery.
  • You can tell Arq to pause backups during a certain time window every day.
  • You can tell Arq to keep your computer awake until it finishes the backup.
  • You can restrict which Wi-Fi networks and which network interfaces Arq uses for backup.
  • You can restrict how much bandwidth Arq uses when backing up.
  • You can configure Arq to send you email every time it finishes backing up, or only if there were errors during backup.
  • You can configure Arq to run a script before and/or after backup.
  • You can configure Arq to back up to multiple B2 accounts if you wish. Back up different folders to different B2 accounts, configure different schedules for each B2 account, etc.

Arq is fully compatible with B2. You can configure it with your B2 account ID and master application key, or you can use B2’s new application keys feature to restrict which bucket(s) Arq can write to.

Privacy and Control

With Arq and B2 storage, you keep control of your data because it’s your B2 account and your encryption password — even if an attacker got access to the B2 data they wouldn’t be able to read your encrypted files. Your backups are stored in an open, documented format. There’s even an open-source restore tool.

The post Migrating from CrashPlan: Arq and B2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze B2 API Version 2 Beta is Now Open

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-api-version-2-beta-is-now-open/

cloud storage workflow image

Since B2 cloud storage was introduced nearly 3 years ago, we’ve been adding enhancements and new functionality to the B2 API, including capabilities like CORS support and lifecycle rules. Today, we’d like to introduce the beta of version 2 of the B2 API, which formalizes rules on application keys, provides a consistent structure for all API calls returning information about files, and cleans up outdated request parameters and returned data. All version 1 B2 API calls will continue to work as is, so no changes are required to existing integrations and applications.

The API Versions section of the B2 documentation on the Backblaze website provides the details on how the V1 and V2 APIs differ, but in the meantime here’s an overview into the what, why, and how of the V2 API.

What Has Changed Between the B2 Cloud Storage Version 1 and Version 2 APIs?

The most obvious difference between a V1 and V2 API call is the version number in the URL. For example:

https://apiNNN.backblazeb2.com/b2api/v1/b2_create_bucket

https://apiNNN.backblazeb2.com/b2api/v2/b2_create_bucket

In addition, a V2 API call may have different required request parameters and/or required response data. For example, the V2 version of b2_hide_file always returns accountId and bucketId, while V1 returns only accountId.

The documentation for each API call will show whether there are any differences between API versions for a given API call.

No Change is Required For V1 Applications

With the introduction of V2 of the B2 API there will be V1 and V2 versions for every B2 API call. All applications using V1 API calls will continue to work with no change in behavior. In some cases, a given V2 API call will be different from its companion V1 API call as noted in the B2 API documentation. For the remaining API calls a given V1 API call and its companion V2 call will be the same, have identical parameters, return the same data, and have the same errors. This provides a B2 developer the flexibility to choose how to upgrade to the V2 API.

Obviously, if you want to use the functionality associated with a V2 API version, then you must use the V2 API call and update your code accordingly.

One last thing: beginning today, if we create a new B2 API call it will be created in the current API version (V2) and most likely will not be created in V1.

Standardizing B2 File Related API Calls

As requested by many B2 developers, the V2 API now uses a consistent structure for all API calls returning information about files. To enable this, some V2 API calls return additional fields. For example, as noted above, the V2 version of b2_hide_file returns both accountId and bucketId, where the V1 version returned only accountId.

Restricted Application Keys

In August we introduced the ability to create restricted application keys using the B2 API. This capability gives an account owner the ability to restrict who, how, and when the data in a given bucket can be accessed. It changed the functionality of multiple B2 API calls such that a user could create a restricted application key that could break a 3rd party integration with Backblaze B2. We subsequently updated the affected V1 API calls so they could continue to work with existing 3rd party integrations.

The V2 API fully implements the expected behavior when it comes to working with restricted application keys. The V1 API calls continue to operate as before.

Here is an example of how the V1 API and the V2 API will act differently as it relates to restricted application keys.

Set-up

  • The B2 account owner has created 2 public buckets, “Backblaze_123” and “Backblaze_456”
  • The account owner creates a restricted application key that allows the user to read the files in the bucket named “Backblaze_456”
  • The account owner uses the restricted application key in an application that uses the b2_list_buckets API call

In Version 1 of the B2 API

  • Action: The account owner uses the restricted application key (for bucket Backblaze_456) to access/list all the buckets they own (2 public buckets).
  • Result: The results returned are just for Backblaze_456 as the restricted application key is just for that bucket. Data about other buckets is not returned.

While this result may seem appropriate, the data returned did not match the question asked, i.e. list all buckets. V2 of the API ensures the data returned is responsive to the question asked.

In Version 2 of the B2 API

  • Action: The account owner uses the restricted application key (for bucket Backblaze_456) to access/list all the buckets they own (2 public buckets).
  • Result: A “401 unauthorized” error is returned as the request for access to “all” buckets does not match the restricted application key, e.g. bucket Backblaze_456. To achieve the desired result, the account owner can specify the name of the bucket being requested in the API call that matches the restricted application key.
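
Here is a minimal sketch of that V2 behavior in Python against the documented endpoints; the key values are placeholders for a restricted application key scoped to Backblaze_456.

```python
import requests

# Authenticate with HTTP basic auth using the restricted key (placeholders).
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
    auth=("your-restricted-key-id", "your-restricted-key"),
).json()

headers = {"Authorization": auth["authorizationToken"]}
url = auth["apiUrl"] + "/b2api/v2/b2_list_buckets"

# Asking for all buckets with a single-bucket key fails in V2...
r = requests.post(url, headers=headers, json={"accountId": auth["accountId"]})
print(r.status_code)  # 401 unauthorized

# ...so name the bucket the key is scoped to, and the call succeeds.
r = requests.post(
    url,
    headers=headers,
    json={"accountId": auth["accountId"], "bucketName": "Backblaze_456"},
)
print(r.status_code, [b["bucketName"] for b in r.json()["buckets"]])
```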

Cleaning up the API

There are a handful of API calls in V2 where we dropped fields that were deprecated in V1 of the B2 API but still being returned. So in V2:

  • b2_authorize_account: The response no longer contains minimumPartSize. Use partSize and absoluteMinimumPartSize instead.
  • b2_list_file_names: The response no longer contains size. Use contentLength instead.
  • b2_list_file_versions: The response no longer contains size. Use contentLength instead.
  • b2_hide_file: The response no longer contains size. Use contentLength instead.

Support for Version 1 of the B2 API

As noted previously, V1 of the B2 API continues to function. There are no plans to stop supporting V1. If at some point in the future we do deprecate the V1 API, we will provide advance notice of at least one year before doing so.

The B2 Java SDK and the B2 Command Tool

Neither the B2 Java SDK nor the B2 Command Line Tool currently supports Version 2 of the B2 API. They are being updated and will support the V2 API when it exits beta and goes GA. Both of these tools, and more, can be found in the Backblaze GitHub repository.

More About the Version 2 Beta Program

We introduced Version 2 of the B2 API as beta so that developers can provide us feedback before V2 goes into production. With every B2 integration being coded differently, we want to hear from as many developers as possible. Give the V2 API a try and if you have any comments you can email our B2 beta team at b2beta@backblaze.com or contact Backblaze B2 support. Thanks.

The post Backblaze B2 API Version 2 Beta is Now Open appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Leverage Your Amazon S3 Experience to Code the Backblaze B2 API

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/how-to-code-backblaze-b2-api-interface/

Going from S3 to learning Backblaze B2

We wrote recently about how the Backblaze B2 and Amazon S3 APIs are different. What we neglected to mention was how to bridge those differences so a developer can create a B2 interface if they've already coded one for S3. John Matze, Founder of BridgeSTOR, put together this list of things to consider when leveraging your S3 API experience to create a B2 interface. Thanks John.   — Andy

Amazon S3 to Backblaze B2 Conversion
by John Matze, Founder of BridgeSTOR

The Backblaze B2 Cloud Storage platform has developed into a real alternative to the Amazon S3 online storage platform, with the same redundancy capabilities but at a fraction of the cost.

Sounds great — sign up today!

Wait. If you’re an application developer, it doesn’t come free. The Backblaze REST API is not compatible with Amazon S3 REST API. That is the bad news. The good news — it includes almost the entire set of functionality so converting from S3 to B2 can be done with minimal work once you understand the differences between the two platforms.

This article will help you shortcut the process by describing the differences between B2 and S3.

  1. Endpoints: AWS has a standard endpoint of s3.amazonaws.com which redirects to the region where the bucket is located or you may send requests directly to the bucket by a region endpoint. B2 does not have regions, but does have an initial endpoint called api.blackblazeb2.com. Every application must start by talking to this endpoint. B2 also requires two other endpoints. One for uploading an object and another one for downloading an object. The upload endpoint is generated on demand when uploading an object while the download API is returned during the authentication process and may be saved for download requests.
  1. Host: Unlike Amazon S3, the HTML header requires the host token. If it is not present, B2 will not respond with an error.
  1. JSON: Unlike S3, which uses XML, all B2 calls use JSON. Some API calls require data to be sent on the request. This data must be in JSON and all APIs return JSON as a result. Fortunately, the amount of JSON required is minimal or none at all. We just built a JSON request when required and made a simple JSON parser for returned data.
  1. Authentication: Amazon currently has two major authentication mechanisms with complicated hashing formulas. B2 simply uses the industry standard “HTTP basic auth” algorithm. It takes only a few minutes to get to speed on this algorithm.
  1. Keys: Amazon has the concept of an access key and a secret key. B2 has the equivalent with the access key being your key id (your account id) and the secret key being the application id (returned from the website) that maps to the secret key.
  1. Bucket ID: Unlike S3, almost every B2 API requires a bucket ID. There is a special list bucket call that will display bucket IDs by bucket name. Once you find your bucket name, capture the bucket ID and save it for future API calls.
  1. Head Call: The bottom line — there is none. There is, however, a list_file_names call that can be used to build your own HEAD call. Parse the JSON returned values and create your own HEAD call.
  1. Directory Listings: B2 Directories again have the same functionality as S3, but with a different API format. Again the mapping is easy: marker is startFileName, prefix is prefix, max-keys is maxFileCount and delimiter is delimiter. The big difference is how B2 handles markers. The Amazon S3 nextmarker is literally the next marker to be searched, the B2 nextmarker is the last file name that was searched. This means the next listing will also include the last marker name again. This means your routines must parse out the name or your listing will show the next marker twice. That’s a difference, but not a difficult one.
  9. Uploading an Object: Uploading an object in B2 is quite different from S3. S3 just requires you to send the object to an endpoint, and it will automatically place the object somewhere in its environment. In the B2 world, you must request a location for the object with an API call and then send the object to the returned location. The first API call sends you a temporary upload URL and key, which you can continue to use for one hour without requesting another, with the caveat that you must watch for failures from B2: the B2 endpoint may become full, or some other issue may require you to request another upload location (see the upload steps in the sketch after this list).
  10. Downloading an Object: Downloading an object in B2 is really easy. A download endpoint is returned during the authentication process, and you pass your request to that endpoint; the object is downloaded just as with Amazon S3.
  11. Multipart Upload: Finally, multipart upload. The beast in S3 is just as much of a beast in B2. Again, the good news is that there is a one-to-one mapping (a sketch of the full large-file flow follows the simple-upload example below).
    a. Multipart Init: The equivalent initialization call returns a fileId. This ID is used for all subsequent calls.
    b. Multipart Upload: Similar to uploading an object, you will need to get the API location for each part. Use the fileId from step (a) and call B2 for the endpoint where the part should be placed. Another difference is that the upload also requires the payload to be hashed with the SHA1 algorithm. Once done, simply pass the SHA1 and the part number with the part upload. The SHA1 value is the equivalent of an ETag in the S3 world, so save it for later.
    c. Multipart Complete: Like S3, you will have to build a return structure for each part. B2, of course, requires this structure to be in JSON, but like S3, it must include the part number and the SHA1 (ETag) for each part.
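To make the mapping concrete, here is a minimal curl sketch of the basic flow described above: authorize with basic auth, look up a bucket ID, emulate a HEAD call with list_file_names, upload one object, and download it again. The account ID, application key, bucket, and file names are placeholders, the v1 API paths are those current at the time of writing, and the inline python calls are just a convenient JSON parser. Treat this as an illustration of the calls above, not production code.

#!/bin/bash
# --- 1. Authorize (HTTP basic auth); returns authorizationToken,
#        apiUrl, and downloadUrl in one JSON response. ---------------
ACCOUNT_ID="YOUR_ACCOUNT_ID"     # placeholder
APP_KEY="YOUR_APPLICATION_KEY"   # placeholder
AUTH_JSON=$(curl -s -u "${ACCOUNT_ID}:${APP_KEY}" \
  https://api.backblazeb2.com/b2api/v1/b2_authorize_account)
TOKEN=$(echo "$AUTH_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['authorizationToken'])")
API_URL=$(echo "$AUTH_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['apiUrl'])")
DOWNLOAD_URL=$(echo "$AUTH_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['downloadUrl'])")

# --- 2. Find the bucket ID by listing buckets by name. --------------
curl -s -H "Authorization: ${TOKEN}" \
  -d "{\"accountId\": \"${ACCOUNT_ID}\"}" \
  "${API_URL}/b2api/v1/b2_list_buckets"
BUCKET_ID="YOUR_BUCKET_ID"       # captured from the output above

# --- 3. Emulate a HEAD call: ask for exactly one file, starting
#        at the name we care about, and parse the JSON yourself. -----
curl -s -H "Authorization: ${TOKEN}" \
  -d "{\"bucketId\": \"${BUCKET_ID}\", \"startFileName\": \"photos/cat.jpg\", \"maxFileCount\": 1}" \
  "${API_URL}/b2api/v1/b2_list_file_names"

# --- 4. Upload: first request a location, then send the object. -----
UPLOAD_JSON=$(curl -s -H "Authorization: ${TOKEN}" \
  -d "{\"bucketId\": \"${BUCKET_ID}\"}" \
  "${API_URL}/b2api/v1/b2_get_upload_url")
UPLOAD_URL=$(echo "$UPLOAD_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['uploadUrl'])")
UPLOAD_TOKEN=$(echo "$UPLOAD_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['authorizationToken'])")
SHA1=$(sha1sum photos/cat.jpg | cut -d' ' -f1)   # use shasum on macOS
curl -s -H "Authorization: ${UPLOAD_TOKEN}" \
  -H "X-Bz-File-Name: photos/cat.jpg" \
  -H "Content-Type: b2/x-auto" \
  -H "X-Bz-Content-Sha1: ${SHA1}" \
  --data-binary @photos/cat.jpg \
  "${UPLOAD_URL}"

# --- 5. Download by name from the saved download endpoint. ----------
curl -s -H "Authorization: ${TOKEN}" -o cat.jpg \
  "${DOWNLOAD_URL}/file/YOUR_BUCKET_NAME/photos/cat.jpg"

Note that the marker behavior from item 8 applies if you page through b2_list_file_names: the returned nextFileName will appear again at the top of the following page and must be filtered out.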

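And a matching sketch of the large-file (multipart) flow, reusing TOKEN, API_URL, and BUCKET_ID from the sketch above. A real large file must be split into at least two parts of 5 MB or more (the last part may be smaller); a single part is shown here only for brevity.

#!/bin/bash
# a. Multipart init: returns the fileId used by all subsequent calls.
FILE_ID=$(curl -s -H "Authorization: ${TOKEN}" \
  -d "{\"bucketId\": \"${BUCKET_ID}\", \"fileName\": \"big.mov\", \"contentType\": \"b2/x-auto\"}" \
  "${API_URL}/b2api/v1/b2_start_large_file" \
  | python -c "import sys,json; print(json.load(sys.stdin)['fileId'])")

# b. For each part: ask B2 where to put it, hash it, then upload it.
PART_JSON=$(curl -s -H "Authorization: ${TOKEN}" \
  -d "{\"fileId\": \"${FILE_ID}\"}" \
  "${API_URL}/b2api/v1/b2_get_upload_part_url")
PART_URL=$(echo "$PART_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['uploadUrl'])")
PART_TOKEN=$(echo "$PART_JSON" | python -c "import sys,json; print(json.load(sys.stdin)['authorizationToken'])")
PART_SHA1=$(sha1sum part1.bin | cut -d' ' -f1)   # save this; it is your "etag"
curl -s -H "Authorization: ${PART_TOKEN}" \
  -H "X-Bz-Part-Number: 1" \
  -H "X-Bz-Content-Sha1: ${PART_SHA1}" \
  --data-binary @part1.bin \
  "${PART_URL}"

# c. Multipart complete: a JSON array of the part SHA1s, in part order.
curl -s -H "Authorization: ${TOKEN}" \
  -d "{\"fileId\": \"${FILE_ID}\", \"partSha1Array\": [\"${PART_SHA1}\"]}" \
  "${API_URL}/b2api/v1/b2_finish_large_file"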
What Doesn’t Port

We found that almost everything we required mapped easily from S3 to B2, with a few exceptions. To be fair, Backblaze is working on the following for future versions.

  1. Copy Object doesn’t exist: This could cause issues for applications that copy or rename objects. BridgeSTOR has a workaround for this situation, so it wasn’t a big deal for our application.
  2. Directory Objects don’t exist: Unlike Amazon, where an object whose name ends with a “/” is considered a directory, this convention does not port to B2. There is, however, an undocumented object name that B2 applications use, called .bzEmpty. Numerous third-party applications, including BridgeSTOR, treat an object ending with .bzEmpty as a directory name. This is also important for the directory listings described above. If you choose to use this method, you will be required to replace the “.bzEmpty” suffix with a “/”.

In conclusion, you can see that the B2 API differs from the Amazon S3 API, but functionally the two are basically the same. At first it looked like porting was going to be a large task, but once we took the time to understand the differences, it was not a major job for our application. We created an S3-to-B2 shim in a week, followed by a few extra weeks of testing and bug fixes. I hope this document helps in your S3-to-B2 conversion.

— John Matze, BridgeSTOR

The post How to Leverage Your Amazon S3 Experience to Code the Backblaze B2 API appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Creating a Media Archive Solution with Backblaze B2 and Archiware P5

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/creating-a-media-archive-solution/


B2 + P5 = 7 Ways to Save Time, Money and Gain Peace of Mind with an Archive Solution of Backblaze B2 and Archiware P5

by Dr. Marc M. Batschkus, Archiware

This week’s guest post comes to us from Marc M. Batschkus of Archiware, who is well-known to media and entertainment customers, and is a trusted authority and frequent speaker and writer on data backup and archiving.

— Editor

Archiving has been around almost forever.

Roman “Archivum,” where scrolls were stored for later reference.

The Romans used the word “archivum” for the building that stored scrolls no longer needed for daily work. Since then, files have replaced scrolls, but the process has stayed the same: today, files that are no longer needed for daily production can be moved to an archive.

Backup and Archive

Backblaze and Archiware complement each other in accomplishing this and we’ll show you how to get the most from this solution. But before we look at the benefits of archiving, let’s take a step back and review the difference between backup and archive.

A backup of your production storage protects your media files by replicating them to secondary storage. This is a cyclical process that continually checks for changed and new files, and overwrites files once the specified retention time is reached.

Archiving, on the other hand, is a data migration: files that are no longer needed for daily production are moved to (long-term) storage, yet kept easily retrievable. This way, all completed productions are collected in one place and kept for later reference, compliance, and re-use.

Think of BACKUP as a spare tire, in case you need it, and ARCHIVE as a stored set of tires for different needs.

To use an analogy:

  • Think of backup as the spare tire in the trunk.
  • Think of archive as the winter tires in the garage.

Both are needed!

Editor’s note: For more insight on “backup vs archive” have a look at What’s the Diff: Backup vs Archive.

Building a Media Archive Solution with Archiware P5 and Backblaze B2

Now that the difference between backup and archive is clear, let’s have a look at what an archive can do to make your life easier.

Archiware P5 can be your interface to locate and manage your files, with Backblaze B2 as ready storage for all of those files.

P5 Archive connects to Backblaze B2 and offers the interface for locating files.

B2 + P5 = 7 Ways to Save Time and Money and Gain Peace-of-Mind

  1. Free up expensive production storage
  2. Archive from macOS, Windows, and Linux
  3. Browse and search the archive catalog with thumbnails and proxies
  4. Re-use, re-purpose, reference and monetize files
  5. Customize the metadata schema to fit your needs and speed up search
  6. Reduce backup size and runtime by moving files from production storage
  7. Protect precious assets from local disaster and for the long-term (no further migration/upgrade needed)

Archive as Mini-MAM

The “Mini-MAM” features of Archiware P5 help you browse and find files more easily than ever. Browse the archive visually using the thumbnails and proxy clips in the archive catalog, or search for specific criteria or a combination of criteria, such as location or description.

Since P5 Archive lets you easily expand and customize metadata fields and menus, you can build the individual metadata schema that works best for you.

Technical metadata (e.g., camera type, resolution, lens) can be automatically imported from the file header into the metadata fields of P5 Archive using a script.
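For illustration only (this is not an Archiware interface; the import into P5’s metadata fields is handled by P5’s own scripting hooks, which Archiware documents), a tool such as exiftool can pull that kind of technical metadata out of a file header. The tag names below are examples and vary by camera and container format:

# Illustrative only: dump selected technical tags as JSON with exiftool.
exiftool -json -Model -ImageWidth -ImageHeight -Duration clip.mov > clip-metadata.json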

The archive becomes the file memory of the company, saving time and energy because there is now only one place to browse and search for files.

Archiware as “Mini-MAM”: thumbnails, proxies, and even metadata, all within Archiware P5.

P5 offers maximum flexibility and supports all storage strategies, be it cloud, disk or tape and any combination of the above.

For more information on archiving with Archiware, see Archiving with Archiware P5. On macOS, P5 Archive offers integration with the Finder and Final Cut Pro X via the P5 Archive App; for more information on integrated archiving with Final Cut Pro X, see macOS Finder and Final Cut Pro X Integrated Archiving.

You can start building an archive immediately with Backblaze B2 cloud storage, because it requires no additional storage hardware and no upfront investment.

Backblaze B2 is the Best of Cloud

  • ✓  Saves investment in storage hardware
  • ✓  Access from anywhere
  • ✓  Storage on demand
  • ✓  Perpetual storage – no migration or upgrade of hardware
  • ✓  Financially advantageous (OPEX vs CAPEX)
  • ✓  Best price in its category

Backblaze B2 offers flexible access so that the archive can be accessed from several physical locations with no storage hardware needing to be moved.

P5 Archive supports consumable files as an archive format. This makes individual files accessible even if P5 Archive is not present at the other location, opening up a whole new world of possibilities for collaborative workflows.

Save Money with OPEX vs CAPEX

CAPital EXpenditures are the money companies spend to purchase major physical goods that will be used for more than one year. Examples in our field are investments in hardware such as storage and servers.

OPerating EXpenses are the costs for a company to run its business operations on a daily basis. Examples are rent and monthly cost for cloud storage like B2.

By using Backblaze B2, companies avoid CAPEX and instead make monthly payments only for the cloud storage they actually use, while also saving on maintenance and migration costs. Furthermore, migrating files to B2 makes expanding high-performance, costly production storage unnecessary. Over time, this alone can make the archive pay for itself.

Now that you know how to profit from archiving with Archiware P5 and Backblaze B2, let’s look at the steps to build the best archive for you.

Connecting B2 cloud storage screenshot

Backblaze B2 is already a built-in option in P5 and works with P5 Archive and P5 Backup.

For detailed setup and best practices, see:

Cloud Storage Setup and Best Practice for Archiware

Steps in Planning a Media Archive

Depending on the size of the archive, the number of people working with and using it, and the number of files to be archived, planning can be extremely important. Thinking ahead and asking the right questions ensures that the archive later delivers the value it was built for.

Including the people who will configure, operate, and use the system guarantees a high level of acceptance and avoids blind spots in your planning.

  1. Define users: who administers, who uses, and who archives?
  2. Decide and select: what goes into the archive, and when?
  3. What metadata is needed to describe the data (i.e., what will be searched for)?
  4. Actual security: on what operating system, hardware, software, infrastructure, interfaces, network, and medium will the data be archived?
  5. What security requirements should be fulfilled: off-site storage, duplication, storage duration, test cycles of media, generation migration, etc.?
  6. Retrieval:
    • Who searches?
    • With what criteria?
    • Who is allowed to restore?
    • On what storage?
    • For what use?

Metadata is the key to the archive, enabling complex searches across technical and descriptive criteria.

Naming Conventions or “What’s in a File Name?”

The most robust metadata you can have is the file name. It travels across operating systems and file systems, and it is the only metadata that is available all the time, independent of any database, catalog, MAM system, application, or other mechanism that keeps or reads metadata. With it, someone can instantly make sense of a file that gets isolated, left over, misplaced, or transferred to another location. Building a solid and intelligent naming convention for media files is therefore crucial, and consistency is key: metadata is a solid foundation for the workflow, for searching, and for sharing files with other parties, and the file name is the starting point (an illustrative pattern follows below).
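As a purely illustrative example (the elements and their order are whatever serves your production), a convention that encodes project, date, content, camera, and take directly in the name might look like this:

<project>_<date>_<description>_<camera>_<take>.<extension>
ProjectX_2018-09-12_Interview-Smith_CamA_Take03.mov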

Wrapping Up

There is much more that can make a media archive extremely worthwhile and efficient. For further reading, I’ve made a free eBook available with more tips on planning and implementation.

eBook:  Data Management, Backup and Archive for Media Professionals — How to Protect Valuable Video Data in All Stages of the Workflow by Marc M. Batschkus

Start looking into the benefits an archive can bring you today. There is a 30-day fully featured trial license for Archiware P5 that can be combined with the Backblaze B2 free trial storage.

Trial License:  About Archiware P5 and 30-Day Trial

And of course, if you’re not already a Backblaze B2 customer, sign up instantly at the link below.

B2 Cloud Storage:  Instant Signup

— Dr. Marc M. Batschkus, Archiware

The post Creating a Media Archive Solution with Backblaze B2 and Archiware P5 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

cPanel Backup to B2 Cloud Storage

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cpanel-backup-to-b2-cloud-storage/


Anyone who’s managed a business or personal website is likely familiar with cPanel, the control panel that provides a graphical interface and tools that simplify the process of managing a website. IT professionals who’ve managed hosting servers might know cPanel’s big brother, WHM (Web Host Manager), which server administrators use to manage large web hosting servers and the cPanel accounts of their customers.

cPanel Dashboard and WHM Dashboard

Just as with any other online service, backup is critically important to safeguard user and business data from hardware failure, accidental loss, or unforeseen events. Both cPanel and WHM support a number of applications for backing up websites and servers.

JetApps’s JetBackup cPanel App

One of those cPanel applications is JetApps’s JetBackup, which supports backing up data to a number of destinations, including local, remote SSH, remote FTP, and public cloud services. Backblaze B2 Cloud Storage was added as a backup destination in version 3.2. Web hosts that support JetBackup for their cPanel and WHM users include Clook, FastComet, TMDHosting, Kualo, Media Street, ServerCake, WebHost.UK.net, MegaHost, MonkeyTree Hosting, and CloudBunny.

cPanel with JetBackup app

JetBackup configuration for B2

Directions for configuring JetBackup with B2 are available on their website.

Note:  JetBackup version 3.2+ supports B2 cloud storage, but that support does not currently include incremental backups. JetApps has told us that incremental backup support will be available in an upcoming release.

Interested in more B2 Support for cPanel and WHM?

JetBackup support for B2 was added to JetBackup because their users asked for it. Users have been vocal in asking vendors to add cPanel/WHM support for backing up to B2 in forums and online discussions, as evidenced on cPanel.net and elsewhere — here, here, and here. The old axiom that the squeaky wheel gets the grease is true when lobbying vendors to add B2 support — the best way to have B2 directly supported by an app is to express your interest directly to the backup app provider.

Other Ways to Back Up Website Data to B2

When a dedicated backup app for B2 is not available, some cPanel users are creating their own solutions using the B2 Command Line Interface (CLI), while others are using Rclone to back up to B2.

B2 CLI example:

#!/bin/bash
# Authorize against the B2 account, then sync the local /backup/
# directory to the named B2 bucket.
b2 authorize_account ACCOUNTID APIKEY
b2 sync --noProgress /backup/ b2://STORAGECONTAINER/

Rclone example:

rclone copy /backup backblaze:my-server-backups --transfers 16
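Whichever tool you choose, the command can be scheduled so backups run without manual attention. As a sketch, assuming the CLI script above is saved as /usr/local/bin/b2-backup.sh (a hypothetical path), a crontab entry to run it nightly at 2:30 a.m. might look like:

# m  h  dom mon dow  command
30 2 * * * /usr/local/bin/b2-backup.sh >> /var/log/b2-backup.log 2>&1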

Those with WordPress websites have other options for backing up their sites, which we highlighted in a post, Backing Up WordPress.

Having a Solid Backup Plan is What’s Important

If you’re using B2 for cPanel backup, or are using your own backup solution, please let us know what you’re doing in the comments.

The post cPanel Backup to B2 Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The B2 Developers’ Community

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/object-storage-developer-community/


When we launched B2 Cloud Storage in September of 2015, we were hoping that the low cost, reliability, and openness of B2 would result in developers integrating B2 object storage into their own applications and platforms.

We’ve continually strengthened and encouraged the development of more tools and resources for the B2 developer community. These resources include APIs, a command-line tool, a Java SDK, and code examples for Swift and C++. Backblaze recently added application keys for B2, which enable developers to restrict access to B2 data and control how an application interacts with that data.
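As a hedged curl sketch of what that looks like in practice, here is a b2_create_key call that creates a key restricted to reading files in one bucket. It assumes TOKEN and API_URL were obtained from b2_authorize_account and that the capability names match the B2 documentation; the account ID, key name, and bucket ID are placeholders:

curl -s -H "Authorization: ${TOKEN}" \
  -d '{"accountId": "YOUR_ACCOUNT_ID",
       "keyName": "read-only-key",
       "capabilities": ["listFiles", "readFiles"],
       "bucketId": "YOUR_BUCKET_ID"}' \
  "${API_URL}/b2api/v2/b2_create_key"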

An Active B2 Developer Community

It’s three years later, and we’re happy to see that an active developer community has sprung up around B2. A quick look at GitHub shows over 250 repositories of B2 code, with projects in ten different languages ranging from C# to Go to Ruby to Elixir. A recent discussion on Hacker News about a B2 Python library resulted in 225 comments.

B2 coding languages - Java, Ruby, C#, Shell, PHP, R, JavaScript, C++, Elixir, Go, Python, Swift

What’s Happening in the B2 Developer Community?

We believe there are two major reasons for the developer activity supporting B2: 1) user demand for inexpensive and reliable storage, and 2) the ease of implementation of the B2 API. We discussed the B2 API design decisions in a recent blog post.

Sharing and transparency have been cornerstone values for Backblaze since our founding, and we believe openness and transparency breed trust and further innovation in the community. Since we ask customers to trust us with their data, we want our actions to show why we are worthy of that trust.

Here are Just Some of the Many B2 Projects Currently Underway

We’re excited about all the developer activity and all of the fresh and creative ways you are using Backblaze B2 storage. We want everyone to know about these developer projects so we’re spotlighting some of the exciting work that is being done to integrate and extend B2.

Rclone (Go) — In addition to being an open source command-line program for syncing files and directories to and from cloud storage systems, Rclone is used in conjunction with other applications such as restic. See Rclone on GitHub as well.

CORS (General web development) — Backblaze supports CORS for efficient cross-site media serving. CORS allows developers to store large or infrequently accessed files on B2 storage, and then refer to and serve them securely from another website without having to re-download the asset.
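As a rough illustration, a B2 CORS rule is a small JSON document attached to a bucket; a minimal sketch of a rule allowing downloads from any origin, applied with the b2_update_bucket call (the rule name is arbitrary, and the account and bucket IDs are placeholders), might look like this:

curl -s -H "Authorization: ${TOKEN}" \
  -d '{"accountId": "YOUR_ACCOUNT_ID",
       "bucketId": "YOUR_BUCKET_ID",
       "corsRules": [{
         "corsRuleName": "downloadFromAnyOrigin",
         "allowedOrigins": ["*"],
         "allowedOperations": ["b2_download_file_by_name", "b2_download_file_by_id"],
         "maxAgeSeconds": 3600
       }]}' \
  "${API_URL}/b2api/v1/b2_update_bucket"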

b2blaze (Python) — The b2blaze Python library for B2.

Laravel Backblaze Adapter (PHP) — Connect your Laravel project to Backblaze with this storage adapter, which includes token caching.

Wal-E (Postgres) — Continuous archiving to Backblaze for your Postgres databases.

Phoenix (Elixir) — File upload utility for the Phoenix web dev framework.

ZFS Backup (Go) — Backup tool to move your ZFS snapshots to B2.

Django Storage (Python) — B2 storage for the Python Django web development framework.

Arq Backup (Mac and Windows application) — Arq Backup is an example of a single developer, Stefan Reitshamer, creating and supporting a successful and well-regarded application for cloud backup. Stefan is also known for being responsive to his users.

Go Client & Libraries (Go) — Go is a popular language that is being used for a number of projects that support B2, including restic, Minio, and Rclone.

How to Get Involved as a B2 Developer

If you’re considering developing for B2, we encourage you to give it a try. It’s easy to implement and your application and users will benefit from dependable and economical cloud storage.

Start by checking out the B2 documentation and resources on our website. GitHub and other code repositories are also great places to look. If you follow discussions on Reddit, you could learn of projects in the works and maybe find users looking for solutions.

We’ve written a number of blog posts highlighting the integrations for B2. You can find those by searching for a specific integration on our blog or under the tag B2. Posts for developers are tagged developer.


If you have a B2 integration that you believe will appeal to a significant audience, you should consider submitting it to us. Those that pass our review are listed on the B2 Integrations page on our website. We’re adding more each week. When you’re ready, just review the B2 Integration Checklist and submit your application. We’re looking forward to showcasing your work!

Now’s a good time to join the B2 developers’ community. Jump on in — the water’s great!

P.S. We want to highlight and promote more developers working with B2. If you have a B2 integration or project that we haven’t mentioned in this post, please tell us what you’re working on in the comments.

The post The B2 Developers’ Community appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.