Tag Archives: Cloud Storage

Backblaze’s Must See List for NAB 2019

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/what-not-to-miss-nab2019/

Collage of logos from Backblaze B2 cloud storage partners

With NAB 2019 only days away, the Backblaze team is excited to launch into the world’s largest event for creatives, and our biggest booth yet!

Must See — Backblaze Booth

This year we’ll be celebrating some of the phenomenal creative work by our customers, including American Public Television, Crisp Video, Falcons’ Digital Creative, WunderVu, and many more.

We’ll have workflow experts standing by to chat with you about your workflow frustrations, and how Backblaze B2 Cloud Storage can be the key to unlocking efficiency and solving storage challenges throughout your entire workflow: From Action! To Archive. With B2, you can focus on creating and managing content, not managing storage.

Create: Bring Your Story to Life

Stop by our booth and we can show you how you can protect your content from ingest through work-in-process by syncing seamlessly to the cloud. We can also detail how you can improve team collaboration and increase content reuse by organizing your content with one of our MAM integrations.

Distribute: Share Your Story With the World

Our experts can show you how B2 can help you scale your content library instantly and indefinitely, and avoid the hassle and expense of on-premises storage. We can demonstrate how everything in your content library can be served directly from your B2 account or through our content delivery partners like Cloudflare.

Preserve: Make Sure Your Story Lives Forever

Want to see the math behind the first cloud storage that’s more affordable than LTO? We can step through the numbers. We can also show you how B2 will keep your archived content accessible anytime and anywhere through a web browser, API calls, or one of our integrated applications listed below.

Must See — Workflow Integrations You Can Count On

Our fantastic workflow partners are a critical part of your creative workflow backed by Backblaze — and there’s a lot of partner news to catch up on!

Drop by our booth to pick up a handy map to help you find Backblaze partners on the show floor, including:

Backup and Archive Workflow Integrations

Archiware P5, booth SL15416
SyncBackPro, Wynn Salon — J

File Transfer Acceleration, Data Wrangling, Data Movement

FileCatalyst, booth SL12116
Hedge, booth SL14805

Asset and Collaboration Managers

axle ai, booth SL15116
Cantemo iconik, booth SL6021
Cantemo (Portal), booth SL6021
CatDV, booth SL5421
Cubix (Ortana Media Group), booth SL5922
eMAM, booth SL10224

Workflow Storage

Facilis, booth SL6321
GB Labs, booth SL5324
ProMAX, booth SL6313
Scale Logic, booth SL11109
Tiger Technology, booth SL8505
QNAP, booth SL15716
Seagate, booth SL8511
StorageDNA, booth SL11810

Must See — Backblaze Events during NAB

Monday morning we’re delivering a presentation in the Scale Logic Knowledge Zone, and on Tuesday night we’re honored to help sponsor the all-new Faster Together event, which replaces the long-standing Las Vegas Creative User Supermeet.

We’ll be raffling off a Hover2 4K drone powered by AI to help you get that perfect drone shot for your next creative film! So after the NAB show wraps up on Tuesday, head over to the Rio main ballroom for a night of mingling with creatives and amazing talks by some of the top editors, colorists, and VFX artists in the industry.

ProVideoTech and Backblaze at Scale Logic Knowledge Zone
Monday April 8 at 11 AM
Scale Logic Knowledge Zone, NAB Booth SL11109
Monday of NAB, Backblaze and PVT will deliver a live presentation for NAB attendees on how to build hybrid-cloud workflows with Cantemo and Backblaze.

Backblaze at The Faster Together Stage
Tuesday, April 9
Rio Las Vegas Hotel and Casino
Doors open at 4:30 PM, stage presentations begin at 7:00 PM
Reserve Tickets for the Faster Together event

If you haven’t yet, be sure to sign up and reserve your meeting time with the Backblaze team, and add us to your Map My Show NAB plan and we’ll see you there!

NAB 2019 is just a few days away. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post Backblaze’s Must See List for NAB 2019 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Migrating Your Legacy Archive to Future-Ready Architecture

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/ortana-cubix-core-media-archive/

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Guest blog post by James Gibson, Founder & CEO of Ortana Media Group

There’s a wide range of reasons why businesses want to migrate away from their current archive solution: managing risk, concerns over legacy hardware, media degradation, and format support. Many businesses also find themselves stuck with closed format solutions that are based on legacy middleware with escalating support costs. It is a common problem that we at Ortana have helped many clients overcome through smart and effective use of the many storage solutions available on the market today. As founder and CEO of Ortana, I want to share some of our collective experience around this topic and how we have found success for our clients.

First, we often forget how quickly the storage landscape changes. Let’s take a typical case.

It’s Christmas 2008 and a CTO has just finalised the order on their new enterprise-grade hierarchical storage management (HSM) system with an LTO-4 tape robot. Beyoncé’s Single Ladies is playing on the radio, GPS on phones has just started to be rolled out, and there is this new means of deploying mobile apps called the Apple™ App Store! The system purchased is from a well established, reputable company and provides peace of mind and scalability — what more can you ask for? The CTO goes home for the festive season — job well done — and hopes Santa brings him one of the new Android phones that have just launched.

Ten years on, the world is very different and Moore’s law tells us that the pace of technological change is only set to increase. That growing archive has remained on the same hardware, controlled by the same HSM, and has gone through one or two expensive LTO format changes. “These migrations had to happen,” the CTO concedes, as support for the older LTO formats was being dropped by the hardware supplier. Their whole content library had to be restored and archived back to the new tapes. New LTO formats also required new versions of the HSM, and whilst these often included new features — broader codec support, intelligent repacking and reporting — the fundamentals of the system remained: closed format, restricted accessibility, and expensive. Worse still, the annual support costs are increasing whilst new feature development has ground to a halt. Sure the archive still works, but for how much longer?

Decisions, Decisions, So Many Migration Decisions

As businesses make the painful decision to migrate their legacy archive, the choices of what, where, and how become overwhelming. The storage landscape today is a completely different picture from when closed format solutions went live. This change alone offers significant opportunities to businesses. By combining the right storage solutions with seamless architecture, and with lights-out orchestration driving the entire process, businesses can flourish by allowing their storage to react to the needs of the business, not constrain them. Ortana has purposefully ensured Cubix (our asset management, automation, and orchestration platform) is as storage-agnostic as possible by integrating a range of on-premises and cloud-based solutions, and built an orchestration engine that is fully abstracted from this integration layer. The end result is that workflow changes can be done in seconds without affecting the storage.

screenshot of Cubix workflow
Cubix’s orchestration platform includes a Taskflow engine for creating customized workflow paths

As our example CTO would say (shaking their head no doubt whilst saying it), a company’s main priority is to not be here again: the key is to store media in an open format, not bound to any one vendor, and accessible to the needs of the business both today and tomorrow. The cost of online cloud storage such as Backblaze has now made storing content in the cloud more cost effective than LTO, and this cost is only set to fall further. This, combined with now-ubiquitous internet bandwidth, makes cloud storage an obvious primary storage target. Entirely agnostic to the format and codec of content you are storing, aligned with MPAA best practices and easily integrated with any on-premises or cloud-based workflow, cloud storage removes many of the issues faced by closed-format HSMs deployed in so many facilities today. It also begins to change the dialogue over main vs DR storage, since it’s no longer based at a facility within the business.

Cloud Storage Opens Up New Capabilities

Sometimes people worry that cloud storage will be too slow. Where this is true, it is almost always due to poor cloud implementation. B2 is online, meaning that the time-to-first-byte is almost zero, whereas other cloud solutions such as Amazon Glacier are cold storage, where the time-to-first-byte is one to two hours at best and more typically six to twelve hours. Anything that is to replace an LTO solution needs to match or beat the capacity and speed of the incumbent solution, and good workflow design can ensure that restores are done as promptly as possible and direct to where the media is needed.

But what about those nasty egress costs? People can get caught off guard when this is not budgeted for correctly, or when their workflow does not make good use of simple solutions such as proxies. Regardless of whether your archive is located on LTO or in the cloud, proxies are critical to keeping accessibility up and costs and restore times down. By default, when we deploy Cubix for clients we always generate a frame-accurate proxy for video content, deliberately devalued through the use of burnt-in timecode (BITC), logos, and overlays. Generated using open source transcoders, they are incredibly cost effective to produce and are often only a fraction of the size of the source files. These proxies, which can also be stored and served directly from B2 storage, are then used throughout all our portals to allow users to search, find, and view content. This avoids the time and cost required to restore the high resolution master files. Only when the exact content required is found is a restore submitted for the full-resolution masters.
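
To make the proxy idea concrete, here is a minimal sketch of a proxy pass using the open source ffmpeg transcoder, called from Python. The frame size, CRF value, and timecode settings are illustrative assumptions, not Ortana’s production recipe:

    import subprocess
    from pathlib import Path

    def make_proxy(source: Path, proxy: Path, fps: int = 25) -> None:
        # Burn a running timecode into the picture so the proxy is
        # deliberately devalued but still frame accurate for review.
        # Requires an ffmpeg build with libx264 and fontconfig.
        burn_in = ("drawtext=timecode='00\\:00\\:00\\:00'"
                   f":rate={fps}:fontsize=24:fontcolor=white:x=20:y=20")
        subprocess.run([
            "ffmpeg", "-i", str(source),
            "-vf", f"scale=-2:540,{burn_in}",  # shrink to a 540p proxy
            "-c:v", "libx264", "-crf", "28",   # heavy compression is fine here
            "-c:a", "aac", "-b:a", "96k",
            str(proxy),
        ], check=True)

    make_proxy(Path("masters/clip_0001.mov"), Path("proxies/clip_0001.mp4"))

A proxy produced this way is typically a small fraction of the size of the camera master, which is exactly what keeps egress costs and restore times down.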

Multiple Copies Stored at Multiple Locations by Multiple Providers

Moving content to the cloud doesn’t remove the risk of working with a single provider, however. No matter how good or big they are, it’s always a wise idea to ensure an active disaster recovery solution is present within your workflows. This last resort copy does not need all the capabilities of the primary storage, and can even be more punitive when it comes to restore costs and times. But it should be possible to enable in moments, and be part of the orchestration engine rather than being a manual process.

To de-risk that single provider, or for workflows where 30-40% of the original content has to be regularly restored (because proxies do not meet the needs of the workflow), on-premises archive solutions can still be deployed without running into the issues discussed earlier. Firstly, LTO now offers portability benefits through LTFS, an easy to use open format, which critically has its specification and implementation within the public domain. This ensures it is easily supported by many vendors and guarantees support longevity for on-premises storage. Ortana’s Cubix platform supports many HSMs that can write content in native LTFS format, which can then be read by any standalone drive from any vendor supporting LTFS.

With 12 TB hard drives now standard in the marketplace, nearline storage has also become a strong contender for content when combined with intelligent storage tiering to the cloud or LTO. Cubix can fully automate this process, especially when complemented by such vendors as GB Labs’ wide range of hardware solutions. This mix of cloud, nearline, and LTO — driven by an intelligent MAM and orchestration platform like Cubix to manage content in the most efficient means possible on a per workflow basis — blurs the lines between primary storage, DR, and last resort copies.

Streamlining the Migration Process

Once you have your storage mix agreed upon and in place, the fraught task becomes getting your existing library onto the new solution whilst not impacting access to the business. Some HSM vendors suggest swapping your LTO tapes by physically removing them from one library and inserting them into another. Ortana knows that libraries are often the linchpin of the organisation and any downtime has significant negative impact that can fill media managers with dread, especially since these one-shot, one-direction migrations can easily go wrong. Moreover, when following this route, simply moving tapes does not persist any editorial metadata or resolve many of the objectives around making content more available. Cubix not only manages the media and the entire transformation process, but also retains the editorial metadata from the existing archive.

screenshot of Cubix search results
During the migration process, content can be indexed via AI-powered speech to text and image recognition

Given the high speeds that LTO delivers, combined with the scalability of Cubix, the largest libraries can be migrated in short timescales, with zero downtime on the archive. Whilst the content is being migrated to the defined mix of storage targets, Cubix can perform several tasks on the content to further augment the metadata, from basics such as proxy and waveform generation through to AI-based image detection and speech to text. Such processes only further reduce the time spent by staff looking for content, and further refine the search capability to ensure only the content required is restored — translating directly to reduced restore times and egress costs.
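
As a rough illustration of the image-detection half of that augmentation, here is a sketch using Google’s google-cloud-vision client library (Google Vision is one of the services named in the case study below). It assumes frames have already been extracted from a clip; the file names and score threshold are placeholders:

    from google.cloud import vision

    def tag_frame(jpeg_path: str, min_score: float = 0.7) -> list:
        # Ask the Vision API for descriptive labels for one video frame.
        client = vision.ImageAnnotatorClient()
        with open(jpeg_path, "rb") as f:
            image = vision.Image(content=f.read())
        response = client.label_detection(image=image)
        return [label.description for label in response.label_annotations
                if label.score >= min_score]

    # Labels such as "sky" or "car" are written back to the asset record,
    # where they become searchable metadata with an associated timecode.
    print(tag_frame("frames/clip_0001_frame_000123.jpg"))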

A Real-World Customer Example

Many of the above concerns and considerations led a large broadcaster to Ortana for a large-scale migration project. The broadcaster produces in-house news and post production with multi-channel linear playout and video-on-demand (VoD). Their existing archive was 3 PB of media across two generations of LTO tape managed by Oracle™ DIVArchive & DIVADirector. They were concerned about on-going support for DIVA and wanted to fully migrate all tape and disk-based content to a new HSM in an expedited manner, making full use of the dedicated drive resources available.

Their primary goal was to fully migrate all editorial metadata into Cubix, including all ancillary files (subtitles, scripts, etc.), and index all media using AI-powered content discovery to reduce searching times for the news, promos, and sports departments. They also wanted to replace the legacy Windows Media Video (WMV) proxies with new full HD H.264 frame-accurate proxies, and provide the business with secure, group-based access to the content. Finally, they wanted all the benefits of cloud storage, whilst keeping costs to a minimum.

With Ortana’s Cubix Core, the broadcaster was able to safely migrate their DIVArchive content to two storage platforms: LTFS with a Quantum HSM system and Backblaze B2 cloud storage. Their content was indexed via AI-powered image recognition (Google Vision) and speech to text (Speechmatics) during the migration process, and the Cubix UI replaced the existing archive as the media portal for both internal and external stakeholders.

The new solution has vastly reduced the timescales for content processing across all departments, and has led to a direct reduction in staff costs. Researchers report a 50-70% reduction in time spent searching for content, and the archive shows a 40% reduction in restore requests. By having the content located in two distinct geographical locations they’ve entirely removed their business risk of having their archive with a single vendor and in a single location. Most importantly, their archived content is more active than ever and they can be sure it will stay alive for the future.

How exactly did Ortana help them do it? Join our webinar Evading Extinction: Migrating Legacy Archives on Thursday, March 28, 2019. We’ll detail all the steps we took in the process and include a live demo of Cubix. We’ll show you how straightforward and painless the archive migration can be with the right strategy, the right tools, and the right storage.

— James Gibson, Founder & CEO, Ortana Media Group

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post Migrating Your Legacy Archive to Future-Ready Architecture appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Workflow Playbook for Migrating Your Media Assets to a MAM

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/workflow-playbook-migrating-your-media-assets-to-a-mam/

Asset > Metadata > Database > Media Asset Manager > Backblaze Fireball > Backblaze B2 Cloud Storage

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Whatever your creative venture, the byproduct of all your creative effort is assets. Whether you produce music, images, or video, as you produce more and more of these valuable assets, they tend to pile up and become difficult to manage, organize, and protect. As your creative practice evolves to meet new demands, and the scale of your business grows, you’ll often find that your current way of organizing and retrieving assets can’t keep up with the pace of your production.

For example, if you’ve been managing files by placing them in carefully named folders, getting those assets into a media asset management system will make them far easier to navigate and much easier to pull out exactly the media you need for a new project. Your team will be more efficient and you can deliver your finished content faster.

As we’ve covered before, putting your assets in a type of storage like B2 Cloud Storage ensures that they will be protected in a highly durable and highly available way that lets your entire team be productive.

With some smart planning, and a little bit of knowledge, you can be prepared to get the most out of your assets as you move them into an asset management system, or when migrating from an older or less capable system into a new one.

Assets and Metadata

Before we can build some playbooks to get the most from your creative assets, let’s review a few key concepts.

Asset — a rich media file with intrinsic metadata.

An asset is simply a file that is the result of your creative operation, most often a rich media file like an image or a video. Typically, these files are captured or created in a raw state, then your creative team adds value to that raw asset by editing it together with other assets to create a finished story that, in turn, becomes another asset to manage.

Metadata — Information about a file, either embedded within the file itself or associated with the file by another system, typically a media asset management (MAM) application.

The file carries information about itself that can be understood by your laptop or workstation’s operating system. Some of these seem obvious, like the name of the file, how much storage space it occupies, when it was first created, and when it was last modified. These would all be helpful ways to try to find one particular file you are looking for among thousands just using the tools available in your OS’s file manager.

File Metadata

There’s usually another level of metadata embedded in media files that is not so obvious but potentially enormously useful: metadata embedded in the file when it’s created by a camera, film scanner, or output by a program.

Results of a file inspected by an operating system's file manager
An example of metadata embedded in a rich media file

For example, this image taken in Backblaze’s data center a few years ago carries all kinds of interesting information. When I inspect the file in the macOS Finder with Get Info, a wealth of information is revealed. I can not only tell the image’s dimensions and when the image was taken, but also exactly what kind of camera took this picture and the lens settings that were used, as well.

As you can see, this metadata could be very useful if you want to find all images taken on that day, or even images taken with that same camera, focal length, F-stop, or exposure.
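
What Get Info displays is largely EXIF metadata, and it is just as easy to read programmatically. Here is a minimal sketch using the Pillow imaging library; the file name is a stand-in for the data center photo above:

    from PIL import Image, ExifTags

    def read_exif(path: str) -> dict:
        # Map numeric EXIF tag IDs to readable names like "Model".
        with Image.open(path) as img:
            return {ExifTags.TAGS.get(tag_id, tag_id): value
                    for tag_id, value in img.getexif().items()}

    meta = read_exif("datacenter.jpg")
    print(meta.get("Model"), meta.get("DateTime"))

Scripted over a whole folder, this is exactly the kind of metadata harvesting an asset manager automates on ingest.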

When a File and Folder System Can’t Keep Up

Inspecting files one at a time is useful, but a very slow way to determine if a file is the one you need for a new project. Yet many creative environments that don’t have a formal asset management system get by with an ad hoc system of file and folder structures, often kept on the same storage used for production or even on an external hard drive.

Teams quickly outgrow that system when they find that their work spills over to multiple hard drives, or takes up too much space on their production storage. Worst of all, assets kept on a single hard drive are vulnerable to disk damage, or to being accidentally copied or overwritten.

Why Your Assets Need to be Managed

To meet this challenge, creative teams have often turned to a class of application called a Media Asset Manager (MAM). A MAM automatically extracts all their assets’ inherent metadata, helps move files to protected storage, and makes them instantly available to their entire team. In a way, these media asset managers become a private media search engine where any file attribute can be a search query to instantly uncover the file they need in even the largest media asset libraries.

Beyond that, asset management systems are rapidly becoming highly effective collaboration and workflow tools. For example, tagging a series of files as Field Interviews — April 2019, or flagging an edited piece of content as HOLD — do not show customer can be very useful indeed.

The Inner Workings of a Media Asset Manager

When you add files into an asset management system, the application inspects each file, extracting every available bit of information about the file, noting the file’s location on storage, and often creating a smaller stand-in or proxy version of the file that is easier to present to users.

To keep track of this information, asset manager applications employ a database and keep information about your files in it. This way, when you’re searching for a particular set of files among your entire asset library, you can simply make a query of your asset manager’s database in an instant rather than rifling through your entire asset library storage system. The application takes the results of that database query and retrieves the files you need.
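
Under the hood, that lookup is an ordinary database query against the metadata catalog. The toy schema below is an assumption for illustration, not any particular MAM’s design:

    import sqlite3

    conn = sqlite3.connect("catalog.db")  # hypothetical metadata catalog
    conn.execute("""CREATE TABLE IF NOT EXISTS assets (
        file_path TEXT, camera_model TEXT, shot_date TEXT, storage_url TEXT)""")

    # Find every asset shot on a given day with a given camera. The query
    # touches only the catalog, never the asset library storage itself.
    rows = conn.execute(
        "SELECT file_path, storage_url FROM assets "
        "WHERE shot_date = ? AND camera_model = ?",
        ("2016-03-01", "Canon EOS 5D Mark III")).fetchall()
    for file_path, storage_url in rows:
        print(file_path, "->", storage_url)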

The Asset Migration Playbook

Whether you need to move from a file and folder based system to a new asset manager, or have been using an older system and want to move to a new one without losing all of the metadata that you have painstakingly developed, a sound playbook for migrating your assets can help guide you.

Play 1 — Getting Assets in Files and Folders Protected Without an Asset Management System

In this scenario, your assets are in a set of files and folders, and you aren’t ready to implement your asset management system yet.

The first consideration is for the safety of the assets. Files on a single hard drive are vulnerable, so if you are not ready to choose an asset manager your first priority should be to get those files into a secure cloud storage service like Backblaze B2.

We invite you to read our post: How Backup and Archive are Different for Professional Media Workflows

Then, when you have chosen an asset management system, you can simply point the system at your cloud-based asset storage to extract the metadata of the files and populate the asset information in your asset manager.

  1. Get assets archived or moved to cloud storage
  2. Choose your asset management system
  3. Ingest assets directly from your cloud storage
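
As a sketch of step 1, the official b2sdk Python library can walk a folder tree and mirror it to a bucket. The bucket name, credentials, and source path are placeholders; keeping the folder hierarchy in the B2 file names means a future asset manager can ingest straight from the bucket:

    import os
    from b2sdk.v2 import B2Api, InMemoryAccountInfo

    b2_api = B2Api(InMemoryAccountInfo())
    b2_api.authorize_account("production", "<key_id>", "<application_key>")
    bucket = b2_api.get_bucket_by_name("media-archive")  # hypothetical bucket

    root = "/Volumes/ProjectDrive"  # hypothetical local asset folder
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            # Preserve the folder structure in the object name so it
            # survives the move and stays navigable in the cloud.
            b2_name = os.path.relpath(local_path, root).replace(os.sep, "/")
            bucket.upload_local_file(local_file=local_path, file_name=b2_name)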

Play 2 — Getting Assets in Files and Folders into Your Asset Management System Backed by Cloud Storage

In this scenario, you’ve chosen your asset management system, and need to get your local assets in files and folders ingested and protected in the most efficient way possible.

You’ll ingest all of your files into your asset manager from local storage, then archive them to cloud storage. Once your asset manager has been configured with your cloud storage credentials, it can automatically move a copy of local files to the cloud for you. Later, when you have confirmed that the file has been copied to the cloud, you can safely delete the local copy.

  1. Ingest assets from local storage directly into your asset manager system
  2. From within your asset manager system archive a copy of files to your cloud storage
  3. Once safely archived, the local copy can be deleted
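
Step 3 is worth automating rather than trusting to memory. A hedged sketch, again with b2sdk, assuming files were uploaded as regular (non-large-file) uploads so B2 has recorded their SHA1 checksums: compare checksums before deleting anything locally.

    import hashlib
    import os

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def delete_if_archived(bucket, local_path: str, b2_name: str) -> bool:
        # Delete the local copy only when the cloud copy's checksum matches.
        remote = bucket.get_file_info_by_name(b2_name)
        if remote.content_sha1 == sha1_of(local_path):
            os.remove(local_path)
            return True
        return False  # keep the local copy; the archive copy didn't verify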

Play 3 — Getting a Lot of Assets on Local Storage into Your Asset Management System Backed by Cloud Storage

If you have a lot of content, more than, say, 20 terabytes, you will want to use a rapid ingest service similar to Backblaze’s Fireball system. You copy the files to Fireball, Backblaze puts them directly into your asset management bucket, and the asset manager is then updated with the files’ new locations in your Backblaze B2 account.

This can be a manual process, or can be done with scripting to make the process faster.

You can read about one such migration using this play here:
iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted

  1. Ingest assets from local storage directly into your asset manager system
  2. Archive your local assets to Fireball (up to 70 TB at a time)
  3. Once the files have been uploaded by Backblaze, relink the new location of the cloud copy in your asset management system

You can read more about Backblaze Fireball on our website.
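
Every asset manager exposes relinking differently, so the scripting mentioned above is necessarily system-specific. As a generic sketch, reusing the illustrative catalog table from earlier and assuming the Fireball drive was loaded with the same relative paths the files had locally (both assumptions; adjust to your setup), the relink step is a simple pass over the records:

    import sqlite3

    def relink_to_cloud(catalog_db: str, bucket_name: str, local_root: str):
        # After the Fireball ingest, point each record at its cloud copy.
        conn = sqlite3.connect(catalog_db)
        rows = conn.execute(
            "SELECT file_path FROM assets WHERE storage_url IS NULL").fetchall()
        for (path,) in rows:
            relative = path[len(local_root):].lstrip("/")
            cloud_url = "b2://%s/%s" % (bucket_name, relative)
            conn.execute(
                "UPDATE assets SET storage_url = ? WHERE file_path = ?",
                (cloud_url, path))
        conn.commit()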

Play 4 — Moving from One Asset Manager System to a New One Without Losing Metadata

In this scenario you have an existing asset management system and need to move to a new one as efficiently as possible to not only take advantage of your new system’s features and get files protected in cloud storage, but also to do it in a way that does not impact your existing production.

Some asset management systems will allow you to export the database contents in a format that can be imported by a new system. Some older systems may not have that luxury and will require a database expert to manually extract the metadata. Either way, you can expect to need to map the fields from the old system to the fields in the new system.

Making a copy of the old database is a must. Don’t work on the primary copy, and be sure to conduct tests on small groups of files as you migrate from the older system to the new one. You need to ensure that the metadata is correct in the new system, with special attention to whether the actual file location is mapped properly. It’s wise to keep the old system up and running for a while before completely phasing it out.

  1. Export the database from the old system
  2. Import the records into the new system
  3. Ensure that the metadata is correct in the new system and file locations are working properly
  4. Make archive copies of your files to cloud storage
  5. Once the new system has been running through a few production cycles, it’s safe to power down the old system
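
The field mapping is usually the fiddliest part of this play. A minimal sketch of the idea, assuming the old system exports to CSV; the column names are invented for illustration, and building the mapping table is the real work:

    import csv

    # Hypothetical mapping from old export columns to new system fields.
    FIELD_MAP = {
        "Title": "title",
        "Clip_Description": "description",
        "TapeID": "source_tape",
        "FilePath": "storage_location",
    }

    def migrate_records(export_csv: str, import_csv: str) -> None:
        with open(export_csv, newline="") as src, \
             open(import_csv, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAP.values()))
            writer.writeheader()
            for row in reader:
                writer.writerow({new: row.get(old, "")
                                 for old, new in FIELD_MAP.items()})

    # Run against a small test batch first and spot-check the output,
    # paying special attention to the file location field.
    migrate_records("old_system_export.csv", "new_system_import.csv")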

Play 5 — Moving Quickly from an Asset Manager System on Local Storage to a Cloud-based System

In this variation of Play 4, you can move content to object storage with a rapid ingest service like Backblaze Fireball at the same time that you migrate to a cloud-based system. This step will benefit from scripting to create records in your new system with all of your metadata, then relink with the actual file location in your cloud storage all in one pass.

You should test that your asset management system can recognize a file already in the system without creating a duplicate copy of the file. This is done differently by each asset management system.

  1. Export the database from the old system
  2. Import the records into the new system while creating placeholder records with the metadata only
  3. Archive your local assets to Fireball (up to 70 TB at a time)
  4. Once the files have been uploaded by Backblaze, relink the cloud based location to the asset record
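
The duplicate-recognition test above can also be scripted during import. A sketch, reusing the illustrative catalog schema from earlier with added checksum and title columns (again assumptions, since every system keys duplicates differently):

    def create_placeholder(conn, record: dict, known_sha1s: set) -> bool:
        # Create a metadata-only placeholder, skipping files the new
        # system already knows about so no duplicate records appear.
        if record["sha1"] in known_sha1s:
            return False
        conn.execute(
            "INSERT INTO assets (file_path, sha1, title, storage_url) "
            "VALUES (?, ?, ?, NULL)",  # storage_url is relinked post-Fireball
            (record["file_path"], record["sha1"], record["title"]))
        known_sha1s.add(record["sha1"])
        return True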

Wrapping Up

Every production environment is different, but we all need the same thing: to be able to find and organize our content so that we can be more productive and rest easy knowing that our content is protected.

These plays will help you take that step and be ready for any future production challenges and opportunities.

If you’d like more information about media asset manager migration, join us for our webinar on March 15, 2019:

Backblaze Webinar: Evolving for Intelligence: MAM to MAM Migration

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post A Workflow Playbook for Migrating Your Media Assets to a MAM appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Backup and Archive are Different for Professional Media Workflows

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/backup-vs-archive-professional-media-production/

a man working on a video editor

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

If you make copies of your images or video files for safekeeping, are you backing them up or archiving them? It’s been discussed many times before, but the short answer is that it depends on the function of the copy. For media workflows, a crisp understanding is required in order to implement the right tools. In today’s post, we’ll explore the nuances between backup and archiving in media workflows and provide a real-world application from UCSC Silicon Valley.

We explored the broader topic of backing up versus archiving in our What’s the Diff: Backup vs Archive post. It’s a backup if you copy data to keep it available in case of loss, while it’s an archive if you make a copy for regulatory compliance, or to move older, less-used data off to cheaper storage. Simple, right? Not if you’re talking about image, video and other media files.

Backup vs. Archive for Professional Media Productions

Traditional definitions don’t fully capture how backup and archive typically operate in professional media workflows compared to business operations. Video and images aren’t typical business data in a number of ways, and that profoundly impacts how they’re protected and preserved throughout their lifecycle. With media backup there are key differences in which files get backed up and how they get backed up. With media archive there are key differences in when files get archived and why they’re archived.

Large Media File Sizes Slow Down Backup

The most obvious nuance is that media files are BIG. While most business documents are under 30 MB in size, a single second of video can be larger than 30 MB at higher resolutions and frame rates. Backing up such large files can take longer than the traditional backup windows of overnight for incremental backups and a weekend for a full backup. And you can’t expect deduplication to shorten backup times or reduce backup sizes, either. Video and images don’t dedupe well.

Meanwhile, the editing process generates a flurry of intermediate or temporary files in the active content creation workspace that don’t need to be backed up because they can be easily regenerated from source files.

The best backup solutions for media allow you to specify exactly which directories and file types you want backed up, so that you’re taking time for and paying for only what you need.
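
In practice that selectivity is just an include/exclude filter. A sketch of the idea; the suffixes and cache folder names below are illustrative assumptions, not a recommendation for any particular editing suite:

    from pathlib import Path

    # Back up camera originals and project files; skip render caches and
    # preview files that an editing suite can regenerate from source.
    INCLUDE_SUFFIXES = {".mov", ".mxf", ".r3d", ".wav", ".prproj", ".aep"}
    EXCLUDE_DIR_NAMES = {"Media Cache", "Video Previews", "tmp"}

    def files_to_back_up(root: str):
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            if any(part in EXCLUDE_DIR_NAMES for part in path.parts):
                continue  # regenerable intermediates: skip them
            if path.suffix.lower() in INCLUDE_SUFFIXES:
                yield path

    for f in files_to_back_up("/Volumes/Production"):
        print(f)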

Archiving to Save Space on Production Storage

Another difference is that archiving to reduce production storage costs is much more common in professional media workflows than with business documents, which are more likely to be archived for compliance. High-resolution video editing in particular requires expensive, high-performance storage to deliver multiple streams of content to multiple users simultaneously without dropping frames. With the large file sizes that come with high-resolution content, this expensive resource fills up quickly with content not needed for current productions. Archiving completed projects and infrequently-used assets can keep production storage capacities under control.

Media asset managers (MAMs) can simplify the archive and retrieval process. Assets can be archived directly through the MAM’s visual interface, and after archiving, their thumbnails or proxies remain visible to users. Archived content remains fully searchable by its metadata and can also be retrieved directly through the MAM interface. For more information on MAMs, read What’s the Diff: DAM vs MAM.

Strategically archiving select media files to less expensive storage allows facilities to stay within budget, and when done properly, keeps all of your content readily accessible for new projects and repurposing.

Permanently Secure Source Files and Raw Footage on Ingest

A less obvious way that media is different is that video files are fixed content that don’t actually change during the editing process. Instead, editing suites compile changes to be made to the original and apply the changes only when making the final cut and format for delivery. Since these source files are not going to change, and are often irreplaceable, many facilities save a copy to secondary storage as soon as they’re ingested to the workflow. This copy serves as a backup to the file on local storage during the editing process. Later, when the local copy is no longer actively being used, it can be safely deleted knowing it’s secured in the archive. I mean backup. Wait, which is it?

Whether you call it archive or backup, make a copy of source files in a storage location that lives forever and is accessible for repurposing throughout your workflow.

To see how all this works in the real world, here’s how UCSC Silicon Valley designed a new solution that integrates backup, archive, and asset management with B2 cloud storage so that their media is protected, preserved and organized at every step of their workflow.

Still from UC Scout AP Psychology course video

How UCSC Silicon Valley Secured Their Workflow’s Data

UCSC Silicon Valley built a greenfield video production workflow to support UC Scout, the University of California’s online learning program that gives high school students access to the advanced courses they need to be eligible and competitive for college. Three teams of editors, producers, graphic designers and animation artists — a total of 22 creative professionals — needed to share files and collaborate effectively, and digital asset manager Sara Brylowski was tasked with building and managing their workflow.

Sara and her team had specific requirements. For backup, they needed to protect active files on their media server with an automated backup solution that allowed accidentally deleted files to be easily restored. Then, to manage storage capacity more effectively on their media server, they wanted to archive completed videos and other assets that they didn’t expect to need immediately. To organize content, they needed an asset manager with seamless archive capabilities, including fast self-service archive retrieval.

They wanted the reliability and simplicity of the cloud to store both their backup and archive data. “We had no interest in using LTO tape for backup or archive. Tape would ultimately require more work and the media would degrade. We wanted something more hands off and reliable,” Sara explained. The cloud choice was narrowed to Backblaze B2 or Amazon S3. Both were proven cloud solutions that were fully integrated with the hardware and software tools in their workflow. Backblaze was chosen because its $5 per terabyte per month pricing was a fraction of the cost of Amazon S3.

Removing Workflow Inefficiencies with Smarter Backup and Archive

The team had previously used the university’s standard cloud backup service to protect active files on the media server as they worked on new videos. But because that cloud backup was designed for traditional file servers, it backed up everything, even the iterative files generated by video production tools like Adobe Premiere, After Effects, Maya, and Cinema 4D that didn’t need to be backed up. For this reason, Sara pushed not to use the university’s backup provider: it was expensive in large part because it was saving all of this noise in perpetuity.

“With our new workflow we can manage our content within its life cycle and at same time have reliable backup storage for the items we know we’re going to want in the future. That’s allowed us to concentrate on creating videos, not managing storage.”—Sara Brylowski, UCSC Silicon Valley

After the team created thousands of videos for 65 online courses, their media server was quickly filling to its 128 TB capacity. They needed to archive data from completed projects to make room for new ones, sooner rather than later. Deploying a MAM solution would simplify archiving, while also helping them organize their diverse and growing library of assets — video shot in studio, B-roll, licensed images, and audio from multiple sources.

To find out exactly how Sara and her team addressed these challenges and more, read the full case study on UC Scout at UCSC Silicon Valley and learn how their new workflow enables them to concentrate on creating videos, not managing storage.

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post How Backup and Archive are Different for Professional Media Workflows appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloud-based Tools Combined with AI Can Make Workflows More Powerful and Increase Content Value

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/increase-content-archive-value-via-cloud-tools/

CPU + Metadata Mining + Virtual Machines & Apps + AI in the cloud

This is part two of a series. You can read part one at Modern Storage Workflows in the Age of Cloud.

Modern Storage Workflows in the Age of Cloud, Part 2

In Modern Storage Workflows in the Age of Cloud, Part One, we introduced a powerful maxim to guide content creators (anyone involved in video or rich media production) in choosing storage for the different parts of their content creation workflows:

Choose the storage that best fits each workflow step.

It’s true that every video production environment is different, with different needs, and the ideal solution for an independent studio of a few people is different from the solution for a 50-seat post-production house. But the goal of everyone in the business of creative storytelling is to tell stories and let their vision and craft shine through. Anything that makes that job more complicated and more frustrating keeps you from doing your best work.

Given how prevalent, useful, and inexpensive cloud technologies are, almost every team today is rapidly finding they can jettison whole classes of storage that are complicating their workflow and instead focus on two main types of storage:

  1. Fast, shared production storage to support editing for content creation teams (with no need to oversize or overspend)
  2. Active, durable, and inexpensive cloud storage that lets you move all of your content into one protected, accessible place — your cloud-enabled content backplane

It turns out there’s another benefit unlocked when your content backplane is cloud enabled, and it’s closely tied to another production maxim:

Organizing content in a single, well managed repository makes that content more valuable as you use it.

When all content is in a single place, well-managed and accessible, content gets discovered faster and used more. Over time it will pick up more metadata, with sharper and more refined tags. A richer context is built around the tags, making it more likely that the content you already have will get repurposed for new projects.

Later, when you come across a large content repository to acquire, or contemplate a digitization or preservation project, you know you can bring it into the same content management system you’ve already refined, concentrating and increasing value further still.

Having more content that grows increasingly valuable over time becomes a monetization engine for licensing, content personalization, and OTT delivery.

You might think that these benefits already present a myriad of new possibilities, but cloud technologies are ready to accelerate the benefits even further.

Cloud Benefits — Pay as You Need It, Scalability, and Burstability

It’s worth recapping the familiar cost-based benefits of the cloud: 1) pay only for the resources you actually use, and only as long as you need them, and, 2) let the provider shoulder the expense of infrastructure support, maintenance, and continuous improvement of the service.

The cost savings from the cloud are obvious, but the scalability and flexibility of the cloud should also be weighed heavily when comparing the cloud with handling infrastructure yourself. If you were responsible for a large server and storage system, how would you cope with a business doubling every quarter, or merging with another team for a big project?

Too many production houses end up disrupting their production workflow (and their revenue) when they are forced to beef up servers and storage capability to meet new production demands. Cloud computing and cloud storage offer a better solution. It’s possible to instantly bring on new capacity and capability, even when the need is unexpected.

Cloud Delivered Compute Horsepower on Demand

Let’s consider the example of a common task like transcoding content and embedding a watermark. You need to process the 172,800 frames of a two hour movie (at 24 frames per second) to resize each frame and add a watermark, and that compute workload takes 100 minutes and ties up a single server.

You could adapt that workflow to the cloud by pulling high resolution frames from cloud storage, feeding them to 10 cloud servers in parallel, and completing the same job in 10 minutes. Another option is to spin up 100 servers and get the job done in one minute.
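
The arithmetic works because watermarking is embarrassingly parallel: each worker can own a slice of the frame range. A toy sketch of the fan-out, with a local thread pool standing in for cloud instances and the actual transcode left as a stub:

    from concurrent.futures import ThreadPoolExecutor

    TOTAL_FRAMES = 172_800  # two hours at 24 frames per second
    WORKERS = 10            # 10 workers -> ~10 minutes; 100 -> ~1 minute

    def process_chunk(first: int, last: int) -> None:
        # A real worker would be a cloud instance pulling frames
        # [first, last) from B2, resizing and watermarking them, and
        # writing the results back to the bucket.
        pass

    chunk = TOTAL_FRAMES // WORKERS
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for i in range(WORKERS):
            first = i * chunk
            last = TOTAL_FRAMES if i == WORKERS - 1 else first + chunk
            pool.submit(process_chunk, first, last)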

The cloud provides the flexibility to cut workflow steps that used to take hours down to minutes by adding the compute horsepower that’s needed for the job, then turning it off when it’s no longer needed. You don’t need to worry about planning ahead or paying for ongoing maintenance. In short, compute adapts to your workflow rather than the other way around, which empowers you to make workflow choices that prioritize the creative need.

Your Workflow Applications Are Moving to the Cloud, Too

More and more of the applications used for content creation and management are moving to the cloud, as well. Modern web browsers are gaining astonishing new capabilities and there is less need for dedicated application servers accompanying storage.

What’s important is that the application helps you in the creative process, not the mechanics of how the application is served. Increasingly, this functionality is delivered by virtual machines that can be spun up by the thousands as needed or by cloud applications that are customized for each customer’s specific needs.

iconik media workflow management screenshot

An example of a cloud-delivered workflow application — iconik asset discovery and project collaboration

iconik is one example of such a service: it delivers cloud-based asset management and project collaboration as a service. Instead of dedicated servers and storage in your data center, each customer has their own unique installation of iconik’s service that’s ready in minutes from first signup. The installation is exclusive to your organization and tailored to your needs. The result is a system utilizing virtual machines, compute, and storage that matches your workflow with just the resources you need. The resources are instantly available whenever and wherever your team is using the system, and consume no compute or storage resources when they are not.

Here’s an example. A video file can be pumped from Backblaze B2 to the iconik application running on a cloud compute instance. The proxies and asset metadata are stored in one place and available to every user. This approach is scalable to as many assets and productions as you can throw at it, or as many people as are collaborating on the project.

The service is continuously upgraded and updated with new features and improvements as they become available, without the delay of rolling out enhancements and patches to different customers and locations.

Given the advantages of the cloud, we can expect that more steps in the creative production workflow that currently rely on dedicated on-site servers will move to the highly agile and adaptable environment offered by the cloud.

The Next Evolution — AI Becomes Content-Aware

Having your content library in a single content backplane in the cloud provides another benefit: ready access to a host of artificial intelligence (AI) tools.

Examples of AI Tools That Can Improve Creative Production Workflows:

  • Text to speech transcription
  • Language translation
  • Object recognition and tagging
  • Celebrity recognition
  • Brand use recognition
  • Colorization
  • High resolution conversion
  • Image stabilization
  • Sound correction

AI tools can be viewed as compute workers that develop processing rules by training for a desired result on a data set. An AI tool can be trained by having it process millions of images until it can tell the difference between sky and grass, or pick out a car in a frame of video. Once such a tool has been trained, it provides an inexpensive way to add valuable metadata to content, letting you find, for example, every video clip across your entire library that has sky, or grass, or a car in it. Text keywords with an associated timecode can be automatically added to aid in quickly zeroing in on a specific section of a long video clip. That’s something that’s not practical for a human content technician over thousands of files, but is easy, repeatable, and scalable for an AI tool.
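
The shape of such time-synched tags is simple. Here is a sketch of what an AI pass might attach to a couple of clips; the records and timecodes are invented for illustration:

    # Keyword tags with in/out timecodes, as an AI pass might emit them.
    tags = [
        {"clip": "b-roll_0042.mov", "keyword": "sky",
         "in": "00:00:12:00", "out": "00:00:19:12"},
        {"clip": "b-roll_0042.mov", "keyword": "car",
         "in": "00:01:03:08", "out": "00:01:10:00"},
        {"clip": "interview_007.mov", "keyword": "grass",
         "in": "00:00:00:00", "out": "00:00:05:15"},
    ]

    def clips_with(keyword: str):
        # Jump straight to the section of any clip containing the keyword.
        return [(t["clip"], t["in"], t["out"])
                for t in tags if t["keyword"] == keyword]

    print(clips_with("car"))  # [('b-roll_0042.mov', '00:01:03:08', '00:01:10:00')]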

Let AI Breathe New Life into Existing Content

AI tools can breathe new life in older content and intelligently clean up older format source video by removing film scratches or upresing content to today’s higher resolution formats. They can be valuable for digital restoration and preservation projects, too. With AI tools and source content in the cloud, it’s now possible to give new life to analog source footage. Digitize it, let AI clean it up, and you’ll get fresh, monetizable assets in your library.

An example of the time-synched tags that axle ai can automatically generate with an AI tool

Many workflow tools, such as asset and collaboration tools, can use AI tools for speech transcription or smart object recognition, which brings additional capabilities. axle.ai, for example, can connect with a visual search tool to highlight an object in the frame like a wine bottle, letting you subsequently find every shot of a wine bottle across your entire library.

Visual search for brands and products is also possible. Just highlight a brand logo and find every clip where the camera panned over that logo. It’s smart enough to get results even when only part of the logo is shown.

We’ve barely touched on the many tools that can be applied to content on ingest or content already in place. Whichever way they’re applied, they can deliver on the promise of making your workflows more efficient and powerful, and your content more valuable.

All Together Now

Taken together, these trends are great news for creatives. They can serve your creative vision by making your workflow more agile and more efficient. Cloud-based technologies let you focus on adding value and repurposing content in fresh new ways, resulting in new audiences and better monetization.

By placing your content in a cloud content backplane, and taking advantage of applications as a service, including the latest AI tools, it becomes possible to continually grow your content collection while increasing its value — a desirable outcome for any creative production enterprise.

If you could focus only on delivering great creative content, and had a host of AI tools to automatically make your content more valuable, what would you do?

The post Cloud-based Tools Combined with AI Can Make Workflows More Powerful and Increase Content Value appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

An Inside Look at the Backblaze Storage Pod Museum

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-storage-pod-museum/

image of the back of a Backblaze Storage Pod

Merriam-Webster defines a museum as “an institution devoted to the procurement, care, study, and display of objects of lasting interest or value.” With that definition in mind, we’d like to introduce the Backblaze Storage Pod Museum. While some folks think of a museum as a place of static, outdated artifacts, others realize that those artifacts can tell a story over time of experimentation, evolution, and innovation. That is certainly the case with our Storage Pods. Modesty prevents us from saying that we changed the storage industry with our Storage Pod design, so let’s say we added a lot of red to the picture.

Over the years, Larry, our data center manager, has stashed away the various versions of our Storage Pods as they were removed from service. He also kept drives, SATA cards, power supplies, cables, and more. Thank goodness. With the equipment that Larry’s pack-rat tendencies saved, and a couple of current Storage Pods we borrowed (shhhh, don’t tell Larry), we were able to start the Backblaze Storage Pod Museum. Let’s take a quick photo trip through the years.

Before Storage Pod 1.0

Before we announced Storage Pod 1.0 to the world nearly 10 years ago, we had already built about twenty Storage Pods. These early pods used Western Digital 1.0 TB Green drives. There were multiple prototypes, but once we went into production, we had settled on the 45-drive design with 3 rows of 15 vertically mounted drives. We ordered the first batch of ten chassis to be built and then discovered we did not spec a hole for the on/off switch. We improvised.

Storage Pod 1.0 — Petabytes on a Budget

We introduced the storage world to inexpensive cloud storage with Storage Pod 1.0. Funny thing, we didn’t refer to this innovation as version 1.0 — just a Backblaze Storage Pod. We not only introduced the Storage Pod, we also open-sourced the design, publishing the design specs, parts list, and more. People took notice. We introduced the design with Seagate 1.5 TB drives for a total of 67 TB of storage. This version also had an Intel Desktop motherboard (DG43NB) and 4 GB of memory.

Storage Pod 2.0 — More Petabytes on a Budget

Storage Pod 2.0 was basically twice the system that 1.0 was. It had twice the memory, twice the speed, and twice the storage, but it was in the same chassis with the same number of drives. All of this combined to reduce the cost per GB of the Storage Pod system over 50%: from $0.117/GB in version 1 to $0.055/GB in version 2.

Among the changes: the desktop motherboard in V1 was upgraded to a server class motherboard, we simplified things by using three four-port SATA cards, and reduced the cost of the chassis itself. In addition, we used Hitachi (HGST) 3 TB drives in Storage Pod 2.0 to double the total amount of storage to 135 TB. Over their lifetime, these HGST drives had an annualized failure rate of 0.82%, with the last of them being replaced in Q2 2017.

Storage Pod 3.0 — Good Vibrations

Storage Pod 3.0 brought the first significant chassis redesign in our efforts to make the design easier to service and provide the opportunity to use a wider variety of components. The most noticeable change was the introduction of drive lids — one for each row of 15 drives. Each lid was held in place by a pair of steel rods. The drive lids held the drives below in place and replaced the drive bands used previously. The motherboard and CPU were upgraded and we went with memory that was Supermicro certified. In addition, we added standoffs to the chassis to allow for Micro ATX motherboards to be used if desired, and we added holes where needed to allow for someone to use one or two 2.5” drives as boot drives — we use one 3.5” drive.

Storage Pod 4.0 — Direct Wire

Up through Storage Pod 3.0, Protocase helped design and then build our Storage Pods. During that time, they also designed and produced a direct wire version, which replaced the nine backplanes with direct wiring to the SATA cards. Storage Pod 4.0 was based on the direct wire technology. We deployed a small number of these systems but we fought driver problems between our software and the new SATA cards. In the end, we went back to our backplanes and Protocase continued forward with direct wire systems that they continued to deploy successfully. Conclusion: there are multiple ways you can be successful with the Storage Pod design.

Storage Pod 4.5 — Backplanes are Back

This version started with the Storage Pod 3.0 design, introduced new 5-port backplanes, and upgraded to SATA III cards. Both of these parts were built on Marvell chipsets. The backplanes we previously used were being phased out, which prompted us to examine other alternatives like the direct wire pods. Now we had a ready supply of 5-port backplanes and Storage Pod 4.5 was ready to go.

We also began using Evolve Manufacturing to build these systems. They were located near Backblaze and were able to scale to meet our ever increasing production needs. In addition, they were full of great ideas on how to improve the Storage Pod design.

Storage Pod 5.0 — Evolution from the Chassis on Up

While Storage Pod 3.0 was the first chassis redesign, Storage Pod 5.0 was, to date, the most substantial. Working with Evolve Manufacturing, we examined everything down to the rivets and stand-offs, looking for a better, more cost efficient design. Driving many of the design decisions was the introduction of Backblaze B2 Cloud Storage, which was designed to run on our Backblaze Vault architecture. From a performance point of view we upgraded the motherboard and CPU, increased memory fourfold, upgraded the networking to 10 GB on the motherboard, and moved from SATA II to SATA III. We also completely redid the drive enclosures, replacing the 15-drive clampdown lids with nine five-drive compartments fitted with drive guides.

Storage Pod 6.0 — 60 Drives

Storage Pod 6.0 increased the number of drives from 45 to 60. We had a lot of questions when this idea was first proposed, like whether we would need bigger power supplies (answer: no), more memory (no), a bigger CPU (no), or more fans (no). We did need to redesign our SATA cable routes from the SATA cards to the backplanes, as we needed to stay under the one meter spec length for the SATA cables. We also needed to update our power cable harness and, of course, add length to the chassis to accommodate the 15 additional drives, but nothing unexpected cropped up — it just worked.

What’s Next?

We’ll continue to increase the density of our storage systems. For example, we unveiled a Backblaze Vault full of 14 TB drives in our 2018 Drive Stats report. Each Storage Pod in that vault contains 840 terabytes worth of hard drives, meaning the 20 Storage Pods that make up the Backblaze Vault bring 16.8 petabytes of storage online when the vault is activated. As higher density drives and new technologies like HAMR and MAMR are brought to market, you can be sure we’ll be testing them for inclusion in our environment.

Nearly 10 years after the first Storage Pod altered the storage landscape, the innovation continues to deliver great returns to the market. Many other companies, from 45Drives to Dell and HP, have leveraged the Storage Pod’s concepts to make affordable, high-density storage systems. We think that’s awesome.


How Cloud-Based MAMs Can Make End-to-End Cloud Workflows a Reality

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/how-to-migrate-mam-to-cloud/

Create, Capture, Distribute, Archive

Ever since commercial cloud services launched over 12 years ago, media and entertainment professionals have debated how and where cloud services best fit in their workflows. Archive and delivery were seen as the most natural fits, but complete, end-to-end cloud workflows were considered improbable due to the network bandwidth required to edit full-resolution content. Now, with new cloud-oriented creative tools on the market, the cloud plays a role at every step of the creative workflow.

Of course, it’s one thing to talk about complete cloud workflows and another to show how the cloud has transformed an actual customer’s workflow from end to end. But that’s exactly what healthcare content provider Everwell did by building a streamlined, work-from-anywhere workflow with cloud storage and cloud-delivered asset management. The best part: rolling out the new cloud workflow was just as painless as it was transformative for their business.

Where On-Site Asset Management Fails: Scaling Up and Remote Access

Everwell was founded on the idea that millions of TVs in medical office lobbies and waiting rooms could deliver compelling, well-produced healthcare educational content. Hospitals, medical groups, and medical practitioners that sign up with Everwell receive media players pre-loaded with an extensive library of Everwell’s educational videos along with software that allows each practice to customize the service with their own information.

As the number of subscribers and demand for their content grew, Everwell COO Loren Goldfarb realized that their production workflow needed to adapt quickly or they wouldn’t be able to scale their business to meet growth. The production workflow was centered around an on-site media asset management (MAM) server with on-site storage that had served them well for several years. But as the volume of raw footage grew and the file sizes increased from HD to 4K, their MAM struggled to keep up with production deadlines.

At the same time, Everwell’s content producers and editors needed to work more efficiently from remote locations. Having to travel to the main production office to check content into the media asset manager became a critical bottleneck. Their existing MAM was designed for teams working in a single location, and remote team members struggled to maintain access to it. And the off-site team members and Everwell’s IT support staff were spending far too much time managing VPNs and firewall access.

Workarounds Were Putting Their Content Library at Risk

Given the pain of a distributed team trying to use systems designed for a single office, it was no surprise that off-site producers resorted to shipping hard drives directly to editors, bypassing the asset management system altogether. Content was extremely vulnerable to loss while being shipped around on hard drives. And making editorial changes to content afterward without direct access to the original source files wasn’t practical. Content was becoming increasingly disorganized and hard for users to find or repurpose. Loren knew that installing servers and storage at every remote production site was not an option.

What Loren needed was an asset management solution that could keep productions moving smoothly and content organized and protected, even with remote producers and editors, so that his team could stay focused on creating content. He soon realized that most available MAMs weren’t built for that.

Everwell remote workflow

Everwell’s distributed workflow

A Cloud-Based MAM Designed for the Complete Workflow

After reviewing and rejecting several vendors on his own, Loren met with Jason Perr of Workflow Intelligence Nexus. Jason proposed a complete cloud workflow solution with iconik for asset management and B2 for cloud storage. Built by established MAM provider Cantemo, iconik takes an entirely new approach by delivering asset management with integrated workflow tools as an on-demand service. With iconik, everything is available through a web browser.

Jason helped Everwell migrate existing content, then deploy a complete, cloud-based production system. Remote producers can easily ingest content into iconik, making it immediately available to other team members anywhere on the planet. As soon as content is added, iconik’s cloud-based compute resources capture the files’ asset metadata, generate proxies, then seamlessly store both the proxies and full-resolution content to the cloud. What’s more, iconik provides in-the-cloud processing for advanced metadata extraction and other artificial intelligence (AI) analysis to enrich assets and allow intelligent searching across the entire content library.

Another critical iconik feature for Everwell is support for cloud-based proxy editing. Proxies stored in the cloud can be pulled directly into Adobe Premiere, allowing editors to work on their local machines with lower-resolution proxies rather than having every editor download the full-resolution content and generate their own proxy. After the proxy edit is complete, the final sequences are rendered from the full-resolution originals stored in B2 and returned to the cloud. Iconik also offers cloud-based compute resources that can perform quality checks, transcoding, and other processing its customers need to prepare content for delivery.
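To make the proxy idea concrete, here's a minimal sketch of the kind of transcode step an ingest pipeline performs, driving ffmpeg from Python. The filenames and encoding settings are illustrative assumptions on our part; iconik's own transcode service is internal to the product.

```python
import subprocess

def make_proxy(master: str, proxy: str) -> None:
    """Transcode a full-resolution master into a small 540p H.264 proxy."""
    subprocess.run(
        ["ffmpeg", "-i", master,
         "-vf", "scale=-2:540",            # keep aspect ratio, 540 px tall
         "-c:v", "libx264", "-crf", "28",  # heavily compressed video
         "-c:a", "aac", "-b:a", "128k",    # lightweight audio
         proxy],
        check=True,
    )

make_proxy("interview_master.mov", "interview_proxy.mp4")
```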

Cloud Storage That Goes Beyond Archive

Working behind the scenes, cloud storage seamlessly supports the iconik asset management system, hosting and delivering proxy and full-resolution content while keeping it instantly available for editing, metadata extraction, and AI or other processing. And because B2 is object storage protected by erasure coding rather than RAID, it offers the extreme durability needed to keep valuable content highly protected, along with the scalability to grow capacity on demand.

Backblaze B2’s combination of data integrity, dramatically lower pricing than other leading cloud storage options, and full integration with iconik made it an obvious choice for Everwell. With B2, they no longer have to pay for or manage on-site production storage servers, tape, or disk-based archives — all their assets are securely stored in the cloud.

This was the seamless, real-time solution that Loren had envisioned, with all of the benefits of a truly cloud-delivered and cloud-enabled solution. Both iconik and Backblaze services can be scaled up in minutes and the pricing is transparent and affordable. He doesn’t pay for services or storage he doesn’t use and he was able to phase out his on-site servers.

Migrating Existing Content Archive to the Cloud

Everwell’s next challenge was migrating their enormous content library of raw material and existing asset metadata without impacting production. With Jason of Workflow Intelligence Nexus guiding them, they signed up for Backblaze’s B2 Fireball, the rapid ingest service that avoids time-consuming internet transfers by delivering content directly to their cloud-based iconik library.

As part of the service, Backblaze sent Everwell the 70TB Fireball. Everwell connected it to their local network and copied archived content onto it. Meanwhile, Jason and Loren’s team exported the metadata records from their existing asset manager and with a migration tool from Workflow Intelligence Nexus, they automatically created new placeholder records in iconik with all of that metadata.

Everwell then shipped the Fireball back to the Backblaze data center where all of the content was securely uploaded to their B2 account. iconik then scanned and identified the content and linked it to the existing iconik records. The result was an extremely fast migration of an existing content archive to a new cloud-based MAM that was immediately ready for production work.

Workflow diagram of Everwell media archive to B2 cloud storage

Everwell’s media ingest workflow

Cloud Simplicity and Efficiency, with Growth for the Future

With a cloud-based asset management and storage solution in place, production teams like Loren’s gain creative freedom and significant new capabilities. They’re free to add new editors and producers on the fly and at a moment’s notice, let them ingest new content from any location, and use a single interface to keep track of every project in their expanding asset library.

Production teams can use new AI-powered discovery tools to find content quickly and can always access the original raw source files to create new videos at any time. And they’ll have more time to add new features to their service and take on new productions and customers when they wish.

Best of all for Loren, he’s now free to grow Everwell’s production operations as fast as possible without having to worry about running out of storage, managing servers, negotiating expensive maintenance contracts, or paying for staff to run it all. Their workflow is more nimble, their workforce is more productive, and Loren finally has the modern cloud-delivered production he’s always wanted.

•  •  •

We invite you to view our demo on integrating iconik with B2, 3 Steps to Making Your Cloud Media Archive Active with iconik and Backblaze B2.

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!


B2 on Your Desktop — Cloud Storage Made Easy

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cloud-storage-made-easy/

B2 on your Desktop

People have lots of different ways that they work with files in B2 Cloud Storage, and there’s a wide range of integrations for different platforms and different uses.

Sometimes, though, being able to use B2 as if it were just another drive on your desktop is the easiest way to go. The applications we’ll be covering in this post make working with B2 as easy as dragging and dropping files from a file manager on your computer directly to B2, or from B2 to your computer. In other cases, you can drag files from a file manager to the application, or between panes inside the application. There’s something for every platform, too, whether you’re on Windows, Macintosh, or Linux. Some of these tools are even free.

Let’s take a look at the applications that make working with B2 a piece of cake! (Or, as easy as pie.)

Use B2 As a Drive on the Desktop

Our first group of applications lets you use B2 as if it were a local drive on your computer. Depending on your platform, the files on B2 are available from File Explorer on Windows, the Finder on Mac, or the file manager on Linux (as well as the command line). Some of the applications are free and some require purchase (marked with $).

Most of these apps are simple for anyone to set up. If you are a more advanced user and comfortable working with the command line in your OS’s terminal, there are a number of free command-line tools for mounting B2 as a drive, including restic, Rclone, and HashBackup; see each tool’s docs for how to mount B2 as a drive. We previously wrote about using restic with B2 in our Knowledge Base.
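If you're comfortable with a bit of scripting, the same operations these tools wrap can also be driven directly against the B2 native API. Here's a minimal sketch that authorizes and lists the files in a bucket using the v2 endpoints; the key ID, application key, and bucket ID are placeholders for values from your own B2 account.

```python
import requests

AUTH_URL = "https://api.backblazeb2.com/b2api/v2/b2_authorize_account"

# Authorize with a key ID and application key (HTTP basic auth).
auth = requests.get(AUTH_URL, auth=("KEY_ID", "APPLICATION_KEY")).json()
api_url, token = auth["apiUrl"], auth["authorizationToken"]

# List files in one bucket, much as a desktop client does to draw its view.
resp = requests.post(
    f"{api_url}/b2api/v2/b2_list_file_names",
    headers={"Authorization": token},
    json={"bucketId": "YOUR_BUCKET_ID", "maxFileCount": 100},
).json()

for f in resp["files"]:
    print(f["fileName"], f["contentLength"])
```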

When would dragging and dropping files on the desktop be useful? If you just need to move one or a few files, this could be the fastest way to do that. You can load the application when you need to transfer files, or have it start with your computer so your B2 files and buckets are always just a click away. If you keep archived documents or media in B2 and often need to browse to find a file, this makes that much faster. You can even use shortcuts, search, and other tools you have available for your desktop to find and manage files on B2.

We’ve grouped the applications by platform that let you use B2 as a drive.

Some Screenshots Showing Applications That Let You Use B2 as a Drive

Mountain Duck

B2 mounted on the desktop with Mountain Duck

ExpanDrive

Cloudmounter

Cloudmounter with B2 open in Mac Finder

Use B2 From a Desktop Application

These applications allow you to use B2 from within the application, and often also work with the local OS’s file manager for drag and drop. They support not just B2, but other cloud and sync services, plus FTP, SFTP, WebDAV, SSH, SMB, and other protocols for networking and transferring files.

All of the applications below require purchase, but they have demo periods when you can try them out before you decide you’re ready to purchase.

Screenshots of Using B2 From Desktop Applications

Filezilla Pro browsing photos on B2

Transmit with B2 files

Cyberduck

odrive

SmartFTP

The Cloud on Your Desktop

We hope these applications make you think of B2 as easy and always available on your desktop whenever you need to move files to or from cloud storage. Easy Peasy Lemon Squeezy, right?

If you’ve used any of these applications, or others we didn’t mention in this post, please tell us in the comments how they worked for you.


What’s the Diff: DAM vs MAM

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/whats-the-diff-dam-vs-mam/

What's the Diff: DAM vs MAM

There’s a reason digital asset management (DAM) and media asset management (MAM) seem to be used interchangeably. Both help organizations centrally organize and manage assets —  images, graphics, documents, video, audio — so that teams can create content efficiently and securely. Both simplify managing those assets through the content life cycle, from raw source files through editing, to distribution, to archive. And, as a central repository, they enable teams to collaborate by giving team members direct access to shared assets.

A quick answer to the difference is that MAM is considered a subset of the broader DAM, with MAMs providing more video capabilities. But since most DAMs can manage videos, and MAMs vary widely in what kind of video-oriented features they offer, it’s worth diving deeper to understand these different asset management solutions.

What to Expect From Any Asset Manager

Before we focus on the differences, let’s outline the basic structure and capabilities of any asset manager. The best place to start is with the understanding that any given asset a team might want to work with — a video clip, a document, an image — is usually presented by the asset manager as a single item to the user, but is actually composed of three elements: the master source file, a thumbnail or proxy that’s displayed, and metadata about the object itself (sketched in code after the list below). Note that in the context of asset management, metadata is more than simple file attributes (i.e. owner, date created, last modified date, size). It’s a broader set of attributes, including details about the actual content of the file. More on that later. As far as capabilities, any DAM or MAM worth being called an asset manager should offer:

  • Collaboration — Members of content creation teams all should have direct access to assets in the asset management system from their own workstations.
  • Access control — Access to specific assets or groups of assets should be allowed or restricted based on the user’s rights and permission settings. This is particularly important if teams work in different departments or for different external clients.
  • Browse — Assets should be easily identifiable by more than their file name, such as thumbnails or proxies for videos, and browsable in the asset manager’s graphical interface.
  • Metadata search —  Assets should be searchable by attributes assigned to them, known as metadata. Metadata assignment capabilities should be flexible and extensible over time.
  • Preview — For larger or archived assets, a preview or quick review capability should be provided, such as playing video proxies or mouse-over zoom for thumbnails.
  • Versions — Based on permissions, team members should be able to add new versions of existing assets or add new assets so that material can be easily repurposed for future projects.
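As promised above, here's a tiny sketch of that three-element structure in Python. The field names are our own illustration, not any particular product's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One catalog entry: a single item to the user, three elements underneath."""
    master_url: str                # full-resolution source file
    proxy_url: str                 # thumbnail or low-res proxy for display
    metadata: dict = field(default_factory=dict)  # content attributes

clip = Asset(
    master_url="b2://masters/20190118-gbudman-broll-01-lv-0001.mp4",
    proxy_url="b2://proxies/20190118-gbudman-broll-01-lv-0001_540p.mp4",
    metadata={"subject": "b-roll", "camera": "01", "shot_date": "2019-01-18"},
)
```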

Why Metadata Matters So Much

Metadata is the critical element that distinguishes asset managers from file browsers. Without metadata, file names end up doing the heavy lifting, with long names like 20190118-gbudman-broll-01-lv-0001.mp4 stringing together a shoot date, subject, camera number, clip number, and more. Structured file naming is not a bad practice, but it doesn’t scale easily to larger teams of contributors and creators. And metadata isn’t used only to search for assets; it can also be fed into other workflow applications integrated with the asset manager.

Metadata is particularly important for images and video because, unlike text-based documents, they can’t be searched for keywords. Metadata can describe in detail what’s in the image or video. For example, metadata for an image could be: male, beard, portrait, blue shirt, dark hair, fair skin, middle-aged, outdoors. And since videos are streams of images, their metadata goes one step further to describe elements at precise moments or ranges of time in the video, known as timecodes. For example, video of a football game could include metadata tags such as 00:10:30 kickoff, 00:15:37 interception, and 00:21:04 touchdown.
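To make that concrete, here's one illustrative shape for timecode-based tags, with a trivial search over them. The schema is hypothetical, not any particular MAM's format.

```python
# Each tag is pinned to a moment in the video, so a search can jump to it.
game_tags = [
    {"timecode": "00:10:30", "tag": "kickoff"},
    {"timecode": "00:15:37", "tag": "interception"},
    {"timecode": "00:21:04", "tag": "touchdown"},
]

def find_moments(tags: list, keyword: str) -> list:
    """Return the timecodes where a keyword was logged."""
    return [t["timecode"] for t in tags if keyword in t["tag"]]

print(find_moments(game_tags, "touchdown"))  # ['00:21:04']
```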

iconik MAM displaying metadata for a BMW M635CSi

Workflow Integration and Archive Support

More robust DAMs and MAMs go beyond the basic capabilities and offer a range of advanced features that simplify or otherwise support the creation process, also known as the workflow. These can include features for editorial review, automated metadata extraction (e.g., transcription or facial recognition), multilingual support, automated transcode, and much, much more. This is where different asset management solutions diverge the most and show their customization for a particular type of workflow or industry.

Regardless of whether you need all the bells and whistles in your asset manager, as your content library grows it will need storage management features, starting with archive. Archiving completed projects and assets that are infrequently used can conserve disk space on your server by moving them off to less expensive storage, such as cloud storage or digital tape. In particular, images and video are huge storage hogs, and the higher the resolution, the more storage capacity they consume. Regular archiving can keep costs down and keep you from having to upgrade your expensive storage server every year.

Asset managers with built-in archiving make moving content into and out of an archive seamless and straightforward. For most asset managers, assets can be archived directly from the graphical interface. After archive, the thumbnails or proxies of the archived assets continue to appear as before, with a visual indication that they’re archived on secondary storage. Users can retrieve the asset as before, albeit with some time delay that depends on the archive storage and network connection chosen.

A good asset manager will offer multiple choices for archive storage, from cloud storage to LTO tape to inexpensive disk, and from different vendors.  An excellent one will let you automatically make multiple copies to different archive storage for added data protection.

What is a MAM?

With all these common characteristics, what makes a media asset manager different from other asset managers is that it’s created for video production. While DAMs can generally manage video assets, and MAMs can manage images and documents, MAMs are designed from the ground up for creating and managing video content in a video production workflow. That means metadata creation and management, application integrations, and workflow orchestration are all video-oriented.

Metadata for video starts when it’s shot, with camera data, shoot notes or basic logging captured on set.  More detailed metadata cataloging happens when the content is ingested from the camera into the MAM for post-production. Nearly all MAMs offer some type of manual logging to create timecode-based metadata. MAMs built for live broadcast events like sports provide shortcut buttons for key events, such as a face off or slap shot in a hockey game.

More advanced systems offer additional tools for automated metadata extraction. For example, some will use facial recognition to automatically identify actors or public figures.

There is also metadata related to how, where, and how many times the asset has been used and what kinds of edits have been made from the original. There’s no end to what you can describe and categorize with metadata. Defining it for a content library of any reasonable size can be a major undertaking.

MAMs Integrate Video Production Applications

Unlike the more general-purpose DAMs, MAMs will integrate tools built specifically for video production. These widely ranging integrated applications include ingest tools, video editing suites, visual effects, graphics tools, transcode, quality assurance, file transport, specific distribution systems, and much more.

Modern MAM solutions integrate cloud storage throughout the workflow, not just for archive but also for creating content through proxy editing. In proxy editing, video editors work with a lower-resolution proxy of the video stored locally; those edits are then applied to the full-resolution version stored in the cloud when the final cut is rendered.

MAMs May be Tailored for Specific Industry Niches and Workflows

To sum up, the longer explanation for DAM vs MAM is that MAMs focus on video production, with better MAMs offering all the integrations needed for complex video workflows. And because video workflows are as varied as they are complex, MAMs often fall into specific niches within the industry: news, sports, post-production, film production, etc. The size of the organization or team matters too. To stay within their budget, a small post house may select a MAM with fewer of the advanced features that may be basic requirements for a larger multinational post-production facility.

That’s why there are so many MAMs on the market, and why choosing one can be a daunting task with a long evaluation process. And it’s why migrating from one asset manager to another is more common than you’d think. Pro tip: working with a trusted system integrator that serves your industry niche can save you a lot of heartache and money in the long run.

Finally, keep in mind that for legacy reasons, sometimes what’s marketed as a DAM will have all the video capabilities you’d expect from a MAM. So don’t let the name throw you off. Instead, look for an asset manager that fits your workflow with the features and integrated tools you need today, while also providing the flexibility you need as your business changes in the future.

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!


Backblaze Hard Drive Stats for 2018

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-for-2018/

Backblaze Hard Drive Stats for 2018

We published our first “Hard Drive Stats” report just over 5 years ago on January 21, 2014. We titled that report “What Hard Drive Should I Buy.” In hindsight, that might have been a bit of an overreach, but we were publishing data that was basically non-existent otherwise.

Many people like our reports, some don’t, and some really don’t — and that’s fine. From the beginning, the idea was to share our experience and use our data to shine a light on the otherwise opaque world of hard disk drives. We hope you have enjoyed reading our reports and we look forward to publishing them for as long as people find them useful.

Thank you.

As of December 31, 2018, we had 106,919 spinning hard drives. Of that number, there were 1,965 boot drives and 104,954 data drives. This review looks at the hard drive failure rates for the data drive models in operation in our data centers. In addition, we’ll take a look at the new hard drive models we’ve added in 2018 including our 12 TB HGST and 14 TB Toshiba drives. Along the way we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

2018 Hard Drive Failure Rates: What 100,000+ Hard Drives Tell Us

At the end of 2018 Backblaze was monitoring 104,954 hard drives used to store data. For our evaluation we remove from consideration those drives that were used for testing purposes and those drive models for which we did not have at least 45 drives (see why below). This leaves us with 104,778 hard drives. The table below covers what happened just in 2018.

2018 annualized hard drive failure rates

Notes and Observations

If a drive model has a failure rate of 0%, it means there were no drive failures of that model during 2018.

For 2018, the Annualized Failure Rate (AFR) stated is usually pretty solid. The exception is when a given drive model has a small number of drives (fewer than 500) and/or a small number of drive days (fewer than 50,000). In these cases, the AFR can be too wobbly to be used reliably for buying or retirement decisions.
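For reference, the AFR figures in these reports are annualized from drive days. A short sketch of the calculation; the failure count below is an assumed round number for illustration:

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR as a percentage: failures per drive-year of operation."""
    return 100.0 * failures / (drive_days / 365.0)

# Example: 500,000 drive days is roughly 1,370 drive-years, so five
# failures in a year works out to an AFR of about 0.37%.
print(annualized_failure_rate(failures=5, drive_days=500_000))
```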

There were 176 drives (104,954 minus 104,778) that were not included in the list above. These drives were either used for testing or we did not have at least 45 drives of a given model. We use 45 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics. This is a historical number based on the number of drives needed to fill one Backblaze Storage Pod (version 5 or earlier).

The Annualized Failure Rate (AFR) for 2018 for all drive models was just 1.25%, well below the rates from previous years as we’ll discuss later on in this review.

What’s New in 2018

In 2018 the big trend was hard drive migration: replacing lower density 2, 3, and 4 TB drives, with 8, 10, 12, and in Q4, 14 TB drives. In 2018 we migrated 13,720 hard drives and we added another 13,389 hard drives as we increased our total storage from about 500 petabytes to over 750 petabytes. So in 2018, our data center techs migrated or added 75 drives a day on average, every day of the year.

Here’s a quick review of what’s new in 2018.

  • There are no more 4 TB Western Digital drives; the last of them was replaced in Q4. This leaves us with only 383 Western Digital drives remaining — all 6 TB drives. That’s 0.37% of our drive farm. We do have plenty of drives from HGST (owned by WDC), but over the years we’ve never been able to get the quantity of Western Digital drives we need at a reasonable price.
  • Speaking of HGST drives, in Q4 we added 1,200 HGST 12 TB drives (model: HUH721212ALN604). We had previously tested these drives in Q3 with no failures, so we have filled a Backblaze Vault with 1,200 drives. After about one month we’ve only had one failure, so they are off to a good start.
  • The HGST drives have a ways to go to catch up, though: in Q4 we also added 6,045 Seagate 12 TB drives (model: ST12000NM0007), bringing us to 31,146 of this drive model, or 29.7% of our drive farm.
  • Finally in Q4, we added 1,200 Toshiba 14 TB drives (model: MG07ACA14TA). These are helium-filled PMR (perpendicular magnetic recording) drives. The initial annualized failure rate (AFR) is just over 3%, which is similar to the other new models and we would expect the AFR to drop over time as the drives settle in.

Comparing Hard Drive Failure Rates Over Time

When we compare hard drive stats for 2018 to previous years, two things jump out. First, the migration to larger drives, and second, the improvement in the overall annual failure rate each year. The chart below compares each of the last three years. The data for each year is inclusive of that year only.

Annualized Hard Drive Failure Rates by Year

Notes and Observations

  • In 2016 the average size of hard drives in use was 4.5 TB. By 2018 the average size had grown to 7.7 TB.
  • The 2018 annualized failure rate of 1.25% was the lowest by far of any year we’ve recorded.
  • None of the 45 Toshiba 5 TB drives (model: MD04ABA500V) has failed since Q2 2016. While the drive count is small, that’s still a pretty good run.
  • The Seagate 10 TB drives (model: ST10000NM0086) continue to impress as their AFR for 2018 was just 0.33%. That’s based on 1,220 drives and nearly 500,000 drive days, making the AFR pretty solid.

Lifetime Hard Drive Stats

While comparing the annual failure rates of hard drives over multiple years is a great way to spot trends, we also look at the lifetime annualized failure rates of our hard drives. The chart below shows the lifetime annualized failure rates of all of the drive models currently in production.

Annualized Hard Drive Failure Rates for Active Drives

Hard Drive Stats Webinar

We’ll be presenting the webinar “Backblaze Hard Drive Stats for 2018” on Thursday, January 24, 2019 at 10:00 Pacific time. The webinar will dig deeper into the quarterly, yearly, and lifetime hard drive stats and include the annual and lifetime stats by drive size and manufacturer. You will need to subscribe to the Backblaze BrightTALK channel to view the webinar. Sign up today.

The Hard Drive Stats Data

The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.
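As a starting point for your own analysis, here's a sketch that computes a per-model AFR from the raw data. It assumes you've unpacked a year of the daily snapshot CSVs (one row per drive per day, with model and failure columns, where failure is 1 on the day a drive dies) into a local folder:

```python
import glob
import pandas as pd

# Each daily CSV contributes one row (one drive day) per drive.
year = pd.concat(
    pd.read_csv(path, usecols=["model", "failure"])
    for path in glob.glob("data_2018/*.csv")
)

stats = year.groupby("model").agg(
    drive_days=("failure", "size"),  # rows = drive days
    failures=("failure", "sum"),
)
stats["afr_pct"] = 100 * stats["failures"] / (stats["drive_days"] / 365)
print(stats.sort_values("afr_pct"))
```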

If you just want the summarized data used to create the tables and charts in this blog post you can download the ZIP file containing the CSV file.

Good luck and let us know if you find anything interesting.


Breaking the Cycle of Archive Migrations With B2 Cloud Storage

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/cloud-data-archiving/

Assorted tapes

Back in the 1980s, my family and I took a trip to visit our friends, the Bremers. We all used to live next door, but the Bremers had moved away a decade prior. As our parents were reminiscing on old times, one of the Bremer teens pulled out an 8mm movie projector and we watched home movies his dad had shot of us playing together in the backyard: on the swings, the see-saw, and running about. What I wouldn’t give to see that footage today! It would be the only video of my sisters and me as kids.

Perhaps Mr. Bremer digitized his home movie collection before he passed away. But it’s more likely his children inherited the box of reels, and it’s now buried and decaying in a closet (or gone entirely). And even if they still had the film, would they have a projector or anything to play it? What a pity. Those precious moments captured once upon a time on film are probably lost forever.

Obsolescence isn’t just a concern for home video enthusiasts. Professional content creators likely have content stored on obsolete technology, whether it’s videotape, LTO digital tape, or external drives. And unlike the simplicity of Mr. Bremer’s film reels and projector, there are many more factors that can make digital content inaccessible.

Common Causes of Data Obsolescence

Media Failure

The most obvious issue is storage media degradation. If film is carefully stored in a cold, dry environment, it can last an extremely long time. Yet for both videotape and digital tape, there are a myriad of pitfalls: magnetic particles can lose their charge; the tape substrate can deteriorate; and heavily used tapes can stretch. Tapes over 15 years old are at greatest risk, even if stored in the ideal conditions of low-heat and low-humidity.

Hard disk drives have shortfalls too: mechanical failure, overheating, and power spikes. External drives, in particular, are at risk of shock damage from being dropped. Even a drive standing on its side, then tipping over, can generate enough shock to damage the drive internals. At our Backblaze data centers, we replace disk drives after four years, and earlier for drive models that show higher-than-usual failure rates. We have ~100,000 drives in our data centers, and we document which ones are more likely to fail in our quarterly drive stats posts.

Obsolete Technology

Even if the storage media remains intact and the data uncorrupted, the data format can become obsolete, often more quickly than you’d expect. For example, manufacturers of the commonly used LTO digital tape are now shipping LTO-8 and only guarantee two generations of backward compatibility. That means if you upgrade your tape system for higher-capacity 12TB LTO-8 tapes, you won’t be able to read the LTO-6 tapes that were introduced just six years ago.

Also, if the file data itself was encoded in a proprietary format, you’ll likely need proprietary software installed on a computer running a potentially outdated operating system version to be able to read its data. This is a bigger topic than we’ll cover today, because there can be layers of encoding involved: backup formats, graphics formats, codecs, etc. But suffice it to say that you might find yourself having to hunt down a Mac that’s still running Mac OS X Leopard to migrate some content.

Museum of Obsolete Media

Not sure how much your content is at risk? The Museum of Obsolete Media rates all imaginable media types on both media stability and obsolescence, from Endangered to In Use.

Spoiler alert:  VHS tapes are rated Endangered for media stability and rated Vulnerable for obsolescence.

Migrate…Then Migrate Again

The only way to combat this sort of media decay and obsolescence and maintain access to your content is to migrate it to newer media and/or a newer technology. This unglamorous task sounds simple — read the data off the old media and copy it to new media — but the devil is in the details. Here is a checklist for trying to maintain your physical media:

The Eight Steps of Data Migration

  1. Determine which content is obsolete or at risk. Choose a media and format for the new archive, and calculate whether you can afford to migrate everything. If not, decide what you can afford to lose forever.
  2. Gather all the tapes or drives to be migrated. Are you sure you have the complete set? Your content spreadsheet might not be up to date. You might need to interview team members to gather any unwritten tribal knowledge about the backup sets.
  3. Identify a migration workstation or server that can run the application that wrote the archived media files. Attach the tape drive or disk device and test it. Can it still properly read, write, and then restore test files?
  4. Using a checklist system, feed tapes into the drive or attach the external drives in order. You might need to track down obscure adapters for older technologies, like a SATA-to-EIDE adapter for parallel ATA disk drives, or a SCSI card and cables.
  5. Initiate the copy of all files to local storage. Hope you have enough space.
  6. Carefully monitor the entire process and make sure that all files are copied completely before checking each tape or disk off your migration list (a verification sketch follows this list). Then repeat with the next tape or disk.
  7. When you’re done extracting all the old files (or earlier if you’re pinched for disk space), reverse the process. Attach any needed devices and write the files to the new media. Cross your fingers that you bought enough tapes or disk drives (but not too many).
  8. Repeat in 4-7 years, before the new media ages or technologies change.
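Step 6 is where migrations quietly go wrong, so it's worth automating the verification. Here's a minimal sketch that compares SHA-1 checksums between a source tree and its copy; the paths are hypothetical.

```python
import hashlib
import pathlib

def sha1(path: pathlib.Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so large media files don't exhaust RAM."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copy(src_dir: str, dst_dir: str):
    """Yield every file under src_dir whose copy is missing or differs."""
    for src in pathlib.Path(src_dir).rglob("*"):
        if src.is_file():
            dst = pathlib.Path(dst_dir) / src.relative_to(src_dir)
            if not dst.exists() or sha1(src) != sha1(dst):
                yield src

print(list(verify_copy("/mnt/restored_tapes", "/mnt/new_archive")))
```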

If all of that sounds too painful, you can pay a transfer service to migrate your whole archive for you, but that’s not cheap, and remember you’ll have to pay to do it again sooner than you think. Alternatively, you can migrate content on-demand and cross your fingers that it’s still readable and that you can retrieve it fast enough. The longer you wait, the greater the risk of media failure. You might only get one shot at reading an old tape or film. Few find that an acceptable risk.

Why Data Archiving to the Cloud Is a Better Solution

Migrate Once with Backblaze B2 Cloud Storage

You can break this migration cycle by migrating once to Backblaze B2 Cloud Storage. We’ll take over from there, moving your data to newer storage technologies as needed over time. The erasure coding technology that protects your data from loss also happens to make upgrading technologies easier for us. Not that you need to worry about it; it’s included in our service.

No New Media or Hardware

Moving to B2 Cloud Storage for your archive means you won’t have any hardware or media to purchase, manage, or house. No tapes or disks to buy, no clearing off shelf space as your archive grows. You won’t have to feed tapes into an autoloader every time you want to write or retrieve content from the archive. And moving to B2 Cloud Storage gives you the benefit of only paying for what you’re actually using. Pay-as-you-go means your storage costs move from a capital expense to an operating expense.

B2 is Less Expensive than LTO

Did you know that Backblaze B2 is the first cloud storage that’s more affordable than LTO storage solutions? If you want to see the math, check out our LTO vs B2 calculator. Enter the size of your existing archive and how much you expect to add each year and it will show you cost differences after 1-10 years. To understand its cost and operational assumptions, read our recent blog post, LTO Versus Cloud Storage Costs — the Math Revealed. It details the many factors for storage costs that many media professionals don’t always consider.

Data That’s Always Accessible

The only thing worse than having a tape or disk you can’t read is having one that you can read go missing in action. Your content database or spreadsheet is only as accurate as what’s on the shelf. You may believe that an external drive is still in your archive closet when it went home over the weekend with a staff member and never came back. With B2 Cloud Storage, your archived content is stored in a central location that’s not only always accessible, it’s accessible from anywhere through a web browser.

B2 is Proven Technology

With Backblaze, you get a partner with over a decade of cloud storage experience. The erasure coding we use to encode data gives B2 customers 99.999999999% (11 nines) durability for data stored in our cloud. As NASA says, there’s a higher probability of an asteroid destroying the planet than of you losing a file with B2.

Make Your Final Migration Painless and Smart

Of course, you’ll still have to migrate once, but we can help make that final migration as painless and smart as possible. B2 Cloud Storage has several options for moving data (APIs, Web UI, CLI), plus our Fireball rapid ingest service for large data sets. We’ve also partnered with vendors and system integrators who have deep experience in managing media archives.

Streamlined LTO Migration

If your current archive is on LTO tapes, we have a newly announced partnership with StorageDNA that can speed migration of LTFS archives. The StorageDNA Smart Migration bundle combines the latest version of their DNAfabric storage with Backblaze B2 cloud storage, plus an autoloading LTO library so you won’t waste time manually loading tapes. To learn more about how it works, register for our upcoming webinar, From LTO to the Cloud: Your Last Data Migration with Backblaze and StorageDNA, on Friday, December 14.

Organize Content with a MAM

Archive migrations are a great time to evaluate your asset management strategy. If you haven’t rolled out a media asset manager (MAM) yet, or you’re dissatisfied with your current one, know that more and more MAMs are integrated with cloud storage and can simplify collaboration across remote teams. With a cloud-integrated MAM solution, your content can be easily searched, filtered, sorted and previewed all from a web browser, from anywhere. To see B2 in action with a cloud MAM solution, watch our recent webinar, Three Steps to Making Your Cloud Media Archive Active with iconik and Backblaze B2.

Automated Backup and Archive

Finally, B2 isn’t just an archive solution, it’s great for backup, too. Most of our customers who archive content to B2 also back up active production data to the same B2 account. We have a growing list of backup, sync and other tools integrated with B2 to make the data movement to the cloud seamless and to make retrieval intuitive and straightforward.

Pro Tip: syncing newly ingested footage or assets to B2 will spare you a big headache when someone accidentally deletes a critical file.

If you have content on media or in a format that’s aging fast, now’s the time to plan its migration. By moving it to B2 Cloud Storage, not only can you make this migration your last, but the pricing means you can afford to migrate ALL of your content. You never know what you’ll need, or when you’ll need it. And some content, like Mr. Bremer’s home movies, simply can’t be re-created.


LTO Versus Cloud Storage Costs — the Math Revealed

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/lto-versus-cloud-storage/

B2 Cloud Storage $68,813 vs. LTO 8 Tape $119,873

A few months back we did a blog post titled, LTO versus Cloud Storage: Choosing the Model That Fits Your Business. In that post we presented our version of an LTO vs. B2 Cloud Storage calculator, a useful tool to determine whether or not it makes economic sense to consider using cloud storage over your LTO storage.

Rather than just saying, “trust us, it’s cheaper,” we thought it would be a good idea to show you what’s inside the model: the assumptions we used, the variables we defined, and the actual math we used to compute our answers. In fact, we’re making the underlying model available for download.

Our Model: LTO vs Cloud Storage

The LTO vs. B2 calculator that is on our website was based on a Microsoft Excel spreadsheet we built. The Excel file we’ve provided for download below is completely self-contained; there are no macros and no external data sources.

Download Excel file: Backblaze-LTO-Calculator-Public-Nov2018.xlsx

The spreadsheet is divided into multiple sections. In the first section, you enter the four values the model needs to calculate the LTO and B2 cloud storage costs. The website implementation is obviously much prettier, but the variables and math are the same as the spreadsheet. Let’s look at the remaining sections.

Entered Values Section

The second section is for organization and documentation of the data that is entered. You also can see the limits we imposed on the data elements.

One question you may have is why we limited the Daily Incremental Backup value to 10 TB. As the comment notes, that’s about as much traffic as you can cram through a 1 Gbps upload connection in a 24-hour period. If you have bigger (or smaller) pipes, adjust accordingly.
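That ceiling is easy to sanity check with back-of-the-envelope math:

```python
# A saturated 1 Gbps link moves about 10.8 TB in a 24-hour day.
gbps = 1
gb_per_sec = gbps / 8                    # 0.125 GB/s
tb_per_day = gb_per_sec * 86_400 / 1000  # seconds per day, 1000 GB/TB
print(tb_per_day)                        # 10.8
```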

Don’t use the model for one-time archives. You may be tempted to enter zeros in both the Yearly Added Data and Daily Incremental Backup fields to price a one-time archive, but the model is not designed for that. It will give you an answer, but the LTO costs will be overstated by anywhere from 10%-50%. The model was designed for the typical LTO use case where data is written to tape, typically daily, based on the data backup plan.

Variables Section

The third section stores all the variable values you can play with in the model. There is a short description for each variable, but let’s review some general concepts:

Tapes — We use LTO-8 tapes and assume they will decrease in cost about 20% per year, down to $60. Non-compressed, these tapes store 12 TB each and take about 9.5 hours to fully load. We use 24 TB for each tape, assuming 2:1 compression. If some or all of your data consists of video or photos, compression can’t help, which makes the actual tape capacity much lower and increases the cost of the LTO solution.

Tapes Used — Based on the grandfather-father-son (GFS) model and assumes you replace tapes once a year.

Maintenance — Assumes you have no spare units, so you cannot miss more than one business day for backups. You could add a spare unit and remove the maintenance or just decide it is OK to miss a day or two while the unit is being repaired.

Off-site Storage — The cost of getting your tapes off-site (and back) assuming a once a week pick-up/drop-off.

Personnel — The cost of the person doing the LTO work, and how much time per week they spend doing the LTO related work, including data restoration. The cost of a person doing the cloud storage work is calculated from this value as described in the Time Savings paragraph below.

Data Restoration — How much of your data on average you will restore each month. The model is a bit limited here in that we use an average for all time periods when downloads are typically uneven across time. You are, of course, welcome to adjust the model. One thing to remember is that you’ll want to test your restore process from time to time, so make sure you allocate resources for that task.

Time Savings — We make the assumption that you will only spend 25% of the time working with cloud storage versus managing and maintaining an LTO system, i.e. no more buying, mounting, unmounting, labeling, cataloging, packaging, reading, or writing tapes.

Model Section

The last section is where the math gets done. Don’t change specific values in this section as they all originate in previous sections. If you decide to change a formula, remember to do so across all 10 years. It is quite possible that many of these steps can be combined into more complex formulas. We break them out to try to make an already complicated calculation somewhat easier to follow. Let’s look at the major subsections.

Data Storage — This section is principally used to organize the different data types and amounts. The model does not apply any corporate data retention policies such as deleting financial records after seven years. Data that is deleted is done so solely based on the GFS backup model, for example, deleting incremental data sets after 30 days.

LTO Costs — This starts with defining the amount of data to store, then calculates the quantity of tapes needed and their costs, along with the number of drive units and their annual unit cost and annual maintenance cost. The purchase price of a tape drive unit is divided evenly over a 10-year period.

Why 10 years? The LTO consortium states it will support LTO tapes two versions back and expects to release a new version every two years. If you buy an LTO-8 system in 2018, in 2024 LTO-11 will not be able to read your LTO-8 tapes. You are now using obsolete hardware. We assume your LTO-8 hardware will continue to be supported through third party vendors for at least four years (to 2028) after it goes obsolete.

We finish up with calculating the cost of the off-site storage service and finally the personnel cost of managing the system and maintaining the tape library. Other models seem to forget this cost or just assume it is the same as your cloud storage personnel costs.

Cloud Storage Costs — We start with calculating the cost to store the data. This uses the amount of data at the end of the year, versus trying to compute monthly numbers throughout the year. This overstates the total amount a bit, but simplifies the math without materially changing the results. We then calculate the cost to download the data, again using the number at the end of the period. We calculate the incremental cost of enhancing the network to send and restore cloud data. This is an incremental cost, not the total cost. Finally, we add in the personnel cost to access and check on the cloud storage system as needed.
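To show the shape of the math without the full spreadsheet, here's a deliberately simplified year-one sketch. The rates are assumptions mirroring the post: $60 LTO-8 tapes holding 24 TB at 2:1 compression, and B2's published pricing at the time ($0.005/GB per month for storage, $0.01/GB for downloads). The real model also adds drive units, maintenance, off-site storage, networking, and personnel.

```python
import math

GB_PER_TB = 1000  # B2 bills by the GB; we use decimal TB here

def b2_year_one(stored_tb: float, restored_tb_per_month: float) -> float:
    """First-year B2 cost: storage plus downloads, in dollars."""
    storage = stored_tb * GB_PER_TB * 0.005 * 12
    download = restored_tb_per_month * GB_PER_TB * 0.01 * 12
    return storage + download

def lto_year_one_tapes(stored_tb: float, tape_price: float = 60.0,
                       tape_capacity_tb: float = 24.0) -> float:
    """First-year LTO tape cost only; hardware and labor are extra."""
    return math.ceil(stored_tb / tape_capacity_tb) * tape_price

print(b2_year_one(stored_tb=100, restored_tb_per_month=1))  # 6120.0
print(lto_year_one_tapes(stored_tb=100))                    # 300
```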

Result Tables — These are the totals from the LTO and cloud storage section in one place.

B2 Fireball Section

There is a small section and some variables associated with the B2 Fireball data transfer service. This service is useful to transfer large amounts of data from your organization to Backblaze. There is a cost for this service of $550 per month to rent the Fireball, plus $75 for shipping. Organizations with existing LTO libraries often don’t want to use their network bandwidth to transfer their entire library, so they end up keeping some LTO systems just to read their archived tapes. The B2 Fireball can move the data in the library quickly and let you move completely away from LTO if desired.

Summary

While we think the model is pretty good, there is always room for improvement. If you have any thoughts you’d like to share, let us know in the comments. One more thing: the model is free to update and use within your organization, but if you publicize it anywhere, please cite Backblaze as the original source.


What’s the Diff: NAS vs SAN

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/whats-the-diff-nas-vs-san/

What's the Diff? Network Attached Storage (NAS) vs Storage Area Network (SAN)

Both network-attached storage (NAS) and storage area network (SAN) were developed to solve the problem of making stored data available to a lot of users at once. Each of them provides dedicated storage for a group of users, but they couldn’t be more different in their approach to achieving their mission.

A NAS is a single storage device that serves files over Ethernet and is relatively inexpensive and easy to set up, while a SAN is a tightly coupled network of multiple devices that work with block-based data and is more expensive and complex to set up and manage. From a user perspective, the biggest difference between NAS and SAN is that NAS devices look like volumes on a file server and use protocols like NFS and SMB/CIFS, while SAN-connected disks appear to the user as local drives.

We provide an overview of the differences between NAS and SAN below. We’ll also briefly cover solutions that combine NAS and SAN and offer many of the advanced benefits of SAN without its high cost.

Basic Definitions — What is NAS?

A NAS is a computer connected to a network that provides file-based data storage services to other devices on the network. The primary strength of NAS is how simple it is to set up and deploy. NAS volumes appear to the user as network-mounted volumes. The files to be served are typically contained on one or more storage drives, often arranged into logical, redundant storage containers or RAID. The device itself is a network node, much like computers and other TCP/IP devices, all of which maintain their own IP address and can effectively communicate with other networked devices. Although a NAS is usually not designed to be a general-purpose server, NAS vendors and third parties are increasingly offering other software to provide server-like functionality on a NAS.

NAS devices offer an easy way for multiple users in diverse locations to access data, which is valuable when users are collaborating on projects or sharing information. NAS provides good access controls and security to support collaboration, while also enabling someone who is not an IT professional to administer and manage access to the data. It also offers good fundamental data security through the use of redundant data structures — often RAID — and automatic backup services to local devices and to the cloud.

Benefits of NAS

A NAS is frequently the next step up for a home office or small business that is using DAS (direct attached storage). The move up to NAS results from the desire to share files locally and remotely, have files available 24/7, gain data redundancy, be able to replace and upgrade hard drives in the system, and add other services such as automatic backup.

Summary of NAS Benefits

  • Relatively inexpensive
  • 24/7 and remote data availability
  • Good expandability
  • Redundant storage architecture
  • Automatic backups to other devices and cloud
  • Flexibility

Network Attached Storage (NAS)

Synology NAS with eight drive bays for 3.5″ disk drives

Limitations of NAS

The weaknesses of a NAS are related to scale and performance. As more users need access, the server might not be able to keep up and could require the addition of more server horsepower. The other weakness is related to the nature of Ethernet itself. By design, Ethernet transfers data from one place to another via packets, dividing the source into a number of segments and sending them along to their destination. Any of those packets could be delayed, or sent out of order, and might not be available to the user until all of the packets arrive and are put back in order.

Any latency (slow or retried connections) is usually not noticed by users for small files, but can be a major problem in demanding environments such as video production, where files are extremely large and latency of more than a few milliseconds can disrupt production steps such as rendering.

Basic Definitions — What is SAN?

A SAN is a way to provide users shared access to consolidated, block level data storage, even allowing multiple clients to access files at the same time with very high performance. A SAN enhances the accessibility of storage devices such as disk arrays and tape libraries by making them appear to users as if they were external hard drives on their local system. By providing a separate storage-based network for block data access over high-speed Fibre Channel, and avoiding the limitations of TCP/IP protocols and local area network congestion, a SAN provides the highest access speed available for media and mission critical stored data.

Storage area network (SAN)

SAN connecting yellow storage devices with orange servers via purple Fibre Channel switches

Benefits of SAN

Because it’s considerably more complex and expensive than NAS, SAN is typically used by large corporations and requires administration by an IT staff. For some applications, such as video editing, it’s especially desirable due to its high speed and low latency. Video editing requires fair and prioritized bandwidth usage across the network, which is an advantage of SAN.

A primary strength of a SAN is that all of the file access negotiation happens over Ethernet while the files are served via extremely high speed Fibre Channel, which translates to very snappy performance on the client workstations, even for very large files. For this reason SAN is widely used today in collaborative video editing environments.

Summary of SAN Benefits

  • Extremely fast data access
  • Dedicated network for storage relieves stress on LAN
  • Highly expandable
  • OS level (block level) access to files
  • High quality-of-service for demanding applications such as video editing

Limitations of SAN

The challenge of SAN can be summed up in its cost and administration requirements — having to dedicate and maintain both a separate Ethernet network for metadata file requests and implement a Fibre Channel network is a considerable investment. That being said, SANs are really the only way to provide very fast data access for a large number of users, and to scale that access to hundreds of simultaneous users.

What’s the Diff: NAS vs SAN

| NAS | SAN |
| --- | --- |
| Typically used in homes and small to medium sized businesses. | Typically used in professional and enterprise environments. |
| Less expensive | More expensive |
| Easier to manage | Requires more administration |
| Data accessed as if it were a network-attached drive (files) | Servers access data as if it were a local hard drive (blocks) |
| Speed depends on the local TCP/IP network, usually Ethernet, typically 100 megabits to one gigabit per second. Generally slower throughput and higher latency due to the slower file system layer. | High speed using Fibre Channel, 2 gigabits to 128 gigabits per second. Some SANs use iSCSI as a less expensive but slower alternative to Fibre Channel. |
| I/O protocols: NFS, SMB/CIFS, HTTP | I/O protocols: SCSI, iSCSI, FCoE |
| Lower-end NAS is not highly scalable; high-end NAS scales to petabytes using clusters or scale-out nodes | Network architecture enables admins to scale both performance and capacity as needed |
| Does not work with virtualization | Works with virtualization |
| Requires no architectural changes | Requires architectural changes |
| Entry level systems often have a single point of failure, e.g. power supply | Fault tolerant network with redundant functionality |
| Susceptible to network bottlenecks | Not affected by network traffic bottlenecks; simultaneous access to cache benefits applications such as video editing |
| File backups and snapshots are economical and schedulable | Block backups and mirrors require more storage |

NAS/SAN Convergence

The benefits of SAN are motivating some vendors to offer SAN-like products at lower cost chiefly by avoiding the high expense of Fibre Channel networking. This has resulted in a partial convergence of NAS and SAN approaches to network storage at a lower cost than purely SAN.

One example is Fibre Channel over Ethernet (FCoE), which supports block level transfers over a standard LAN at speeds of 10 Gb/s and up. For smaller deployments, iSCSI is even less expensive, allowing SCSI commands to be sent inside of IP packets on a LAN. Both of these approaches avoid expensive Fibre Channel completely, resulting in slower, but less expensive, ways to get the block level access and other benefits of a SAN.

Are You Using NAS, SAN, or Both?

If you are using NAS or SAN, we’d love to hear from you about what you’re using and how you’re using them. Also, please feel free to suggest other topics for this series.

The post What’s the Diff: NAS vs SAN appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Buying a Hard Drive this Holiday Season? These Tips Will Help

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-buying-guide/

Hard drives with bows
Over the last few years we’ve shared many observations in our quarterly Hard Drive Stats reports that go beyond the hard drive failure rates. We decided to consolidate some of these additional observations into one post just in time for the holiday buying season. If you have “buy a hard drive” on your shopping list this holiday season, here is just about everything we know about hard disk drives.

First, let’s establish that we are talking about hard disk drives (HDDs) here and not solid state drives (SSDs). Here’s a Backblaze “What’s the Diff” blog post where we discuss the differences between HDD and SSD drives.

How Will You Use Your HDD?

Hard drive manufacturers build drive models for different use cases; that is, a given drive model is optimized for a given purpose. For example, a consumer drive may spin more slowly to save energy and provide little, if any, access to tools that can adjust the firmware settings on the drive. An enterprise class drive, on the other hand, is typically much faster and provides the user with access to features they can tweak to adjust performance and/or power usage.

Each drive manufacturer has their own criteria for these use cases, but in general there are five categories: consumer, NAS (network attached storage), archiving/video recording, enterprise, and more recently, data center. The different drive manufacturers have different variations on these categories, so the first thing you should do is decide what you are going to do with the drive before you start looking.

Hard Drive Recording Technologies

For a long time, the recording technology a drive manufacturer used was not important. Then SMR (shingled magnetic recording) drives appeared a couple of years ago.

Let’s explain:

PMR: Perpendicular Magnetic Recording
This is the technology inside most hard drives. With PMR, data is written to and read from circular tracks on a spinning platter.
SMR: Shingled Magnetic Recording
This type of drive overlaps recording tracks to store data at a lower cost than PMR technology. The downside occurs when data is deleted and that space is reused. If existing data overlaps the space you want to reuse, this can mean delays in writing the new data. These drives are great for archive storage (write once, read many) use cases, but if your files turn over with some regularity, stick with PMR drives.

That sounds simple, but here are two things you should know:

  1. SMR drives are often the least expensive drives available when you consider the cost per gigabyte. If you are price sensitive, you may believe you are getting a great deal, but you may be buying the wrong drive for your use case. For example, buying SMR drives for your NAS device running RAID 6 would be ugly because of all the rewrites that may be involved.
  2. It is sometimes really hard to figure out if the drive you want to buy is an SMR or PMR drive. For example, based on the cost per gigabyte, the 8TB Seagate external drive (model: STEB8000100) is one of the least expensive external drives out there right now. But, the 8TB drive inside is an SMR drive, and that fact is not obvious to the buyer. To be fair, the manufacturers try to guide buyers to the right drive for their use case, but a lot of that guiding information is lost on reseller sites such as Amazon and Newegg, where the buyer is often blinded by price.

Over the next couple of years, HAMR (heat-assisted magnetic recording) by Seagate and MAMR (microwave-assisted magnetic recording) by Western Digital will be introduced, making the drive selection process even more complicated.

What About Refurbished Drives?

Refurbished drives are hard drives that have been returned to the manufacturer and repaired in some way to make them operational. Given the cost, repairs are often limited to what can be done in the software or firmware of the failed drive. For example, the repair may consist of identifying a section of bad media on a drive platter and telling the drive to read and write around it.

Once repaired, refurbished drives are tested and often marked certified by the manufacturer, e.g. “Certified Refurbished.” Refurbished drives are typically less expensive and come with a limited warranty, often one year or less. You can decide if you want to use these types of drives in your environment.

Helium-Filled versus Air-Filled Drives

Helium-filled drives are finally taking center stage after spending years as an experimental technology. Backblaze has used helium-filled drives in part of our drive fleet since 2015, and over the years we’ve compared helium-filled drives to air-filled drives. Here’s what we know so far.

The first commercial helium-filled drives were 6TB; the transition to helium took hold at 8TB as we started seeing helium-filled 8TB drives from every manufacturer. Today, helium-filled 12TB and 14TB drives are available at a reasonable price per terabyte.

Helium drives have two advantages over their air-filled cohorts: they create less heat and they use less power. Both of these are important in data centers, but may be less important to you, especially when you consider the two primary disadvantages: higher cost and a limited track record. The street-price premium for a helium-filled drive is roughly 20% right now versus an air-filled drive of the same size. That premium is expected to decrease as time goes on.

While price is important, the limited track record of helium-filled drives may matter more, as these drives have been in the field in quantity for only a little over four years. That said, we have had helium-filled drives in service for 3.5 years. They are solid performers with a 1.2% annualized failure rate and show no signs of hitting the wall.

Enterprise versus Consumer Drives

In our Q2 2018 Hard Drive Stats report we delved into this topic, so let’s just summarize some of the findings below.

We have both 8TB consumer and enterprise models to compare. Both models are from Seagate. The consumer drive is model ST8000DM002 and the enterprise drive is model ST8000NM0055. The chart below, from the Q2 2018 report, shows the failure rates for each of these drive models at the same average age of all of the drives of the specified model.

Annualized Hard Drive Failure Rates by Time table

When you control for the average age of each of the drive models, the AFR (annualized failure rate) of the enterprise drive is consistently below that of the consumer drive for these two drive models — albeit not by much. By the way, conducting the same analysis at an average age of 15 months showed little change, with the consumer drive recording a 1.10% AFR and the enterprise drive holding at 0.97% AFR.

Whether every enterprise model is better than every corresponding consumer model is unknown, but below are a few reasons you might choose one class of drive over another:

Enterprise Class Drives

  • Longer Warranty: 5 years vs. 2 years
  • More Accessible Features, e.g., Seagate PowerChoice technology
  • Faster reads and writes

Consumer Class Drives

  • Lower Price: Up to 50% less
  • Similar annualized failure rates as enterprise drives
  • Uses less power and produces less heat

Hard Drive Failure Rates

As many of you know, each quarter Backblaze publishes our Hard Drive Stats report for the hard drives in our data centers. Here’s the lifetime chart from our most recent Q3 2018 report.

Backblaze Lifetime Hard Drive Failure Rates table

Along with the report, we also publish the data we used to create the reports. We are not alone. Let’s look at the various ways you can find hard drive failure rates for the drive you wish to purchase.

Backblaze AFR (annualized failure rate)
The failure rate of a given hard drive model based on the number of days a drive model is in use and the number of failures of that drive model. Here’s the formula (a short worked example in Python follows these definitions):

( ( Drive Failures / ( Drive Days / 365 ) ) * 100 )
MTBF (mean time between failures)
MTBF is the term some disk drive manufacturers use to quantify disk drive average failure rates. It is the average number of service hours between failures. This is similar to MTTF (mean time to failure), which is the average time to the first failure. MTBF has been superseded by AFR for some drive vendors as described below.
AFR (Seagate and Western Digital)
These manufacturers have decided to replace MTBF with AFR. Their definition of AFR is the probable percent of failures per year, based on the manufacturer’s total number of installed units of similar type. While Seagate and WD don’t give the specific formula for calculating AFR, Seagate notes that AFR is similar to MTBF and differs only in units. One way for converting MTBF to AFR can be found here.
Comparing Backblaze AFR to the Seagate/WD AFR
The Backblaze environment is a closed system, meaning we know with a high degree of certainty the variables we need to compute the Backblaze AFR percentage. We also know most, if not all, of the mitigating factors. The Seagate/WD AFR environment is made up of potentially millions of drives in the field (home, office, mobile, etc.) where the environmental variables can be quite varied and in some cases unknown. Either of the AFR calculations can be considered as part of your evaluation if you are comfortable with how they are calculated.
CDL (component design life)
This term is used by Western Digital in their support knowledge base although we don’t see it in their technical specifications yet. The example provided in the knowledge base article is, “The Component Design Life of the drive is 5 years and the Annualized Failure Rate is less than 0.8%.” With those two numbers you can calculate that no more than four out of 100 drives will die in a five-year period. This is really good information, but it is not readily available yet.
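
To make these definitions concrete, here is a minimal sketch in Python of the Backblaze AFR formula above, plus one common approximation for converting a manufacturer’s MTBF figure to an AFR. The conversion assumes an exponential failure distribution and is not necessarily the exact method any given vendor uses.

```python
import math

def backblaze_afr(drive_failures: int, drive_days: int) -> float:
    """Annualized failure rate (%): (failures / (drive days / 365)) * 100."""
    return (drive_failures / (drive_days / 365)) * 100

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Approximate AFR (%) from MTBF in hours, assuming an exponential
    failure distribution and an 8,766-hour (365.25-day) year."""
    return (1 - math.exp(-8766 / mtbf_hours)) * 100

# 60 failures over 1,168,000 drive days works out to roughly a 1.87% AFR.
print(f"{backblaze_afr(60, 1_168_000):.2f}%")

# A drive rated at 1,000,000 hours MTBF corresponds to roughly a 0.87% AFR.
print(f"{afr_from_mtbf(1_000_000):.2f}%")
```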

Which Hard Drive Do I Need?

While hard drive failure rates are interesting, we believe that our Hard Drive Stats reports are just one of the factors to consider in your hard drive buying decision. Here are some things you should think about, in no particular order:

  • Your use case
    • What you will do with the drive.
  • What size drive do you need?
    • Using it as a Time Machine backup? It should be 3-4 times the size of your internal hard drive. Using it as an archive for your photo collection? Bigger is better.
  • How long do you want the drive to last?
    • Forever is not a valid answer. We suggest starting with the warranty period and subtracting a year if you move the drive around a lot or if you fill it up and stuff it in the closet.
  • The failure rate of the drive
    • We talked about that above.
  • What your friends think
    • You might get some good advice.
  • What the community thinks
    • reddit, Hacker News, Spiceworks, etc.
  • Product reviews
    • I read them, but only to see if there is anything else worth investigating via other sources.
  • Product review sites
    • These days, many review sites on the internet are pay-to-play, although not all. Pay-to-play means the vendor pays the site either for the review itself or when the review leads to a sale. Sometimes, whoever pays the most gets to the top of the list. This isn’t true for all sites, but it is often really hard to tell the good guys from the bad. One of our favorite sites, Tom’s Hardware, has stopped doing HDD reviews, so if you have a site you trust for such reviews, share it in the comments; we’d all like to know.
  • The drive manufacturer
    • Most drive manufacturer websites provide information that can help you determine the right drive for your use case. Of course, they are also trying to sell you a drive, but the information, especially the technical specs, can be useful.

What about price? We left that out of our list as many people start and end their evaluation with just price and we wanted to mention a few other things we thought could be important. Speaking of price…

What’s a Good Price for a Hard Drive?

Below is our best guess as to what you could pay over the next couple of months for different sized internal drives. Of course, there are bound to be some great discounts on Black Friday, Cyber Monday, Hanukkah, Christmas, Kwanzaa, Boxing Day, Winter Solstice, and Festivus — to name a few holiday season reasons for a sale on hard disk drives.

| Drive Size | Price | Cost per GB |
| --- | --- | --- |
| 1TB | $35 | $0.035 |
| 2TB | $50 | $0.025 |
| 3TB | $75 | $0.025 |
| 4TB | $100 | $0.025 |
| 6TB | $170 | $0.028 |
| 8TB | $250 | $0.031 |
| 10TB | $300 | $0.030 |
| 12TB | $380 | $0.032 |
| 14TB | $540 | $0.039 |

How Much Do External Hard Drives Cost?

We wanted to include the same information about external hard drives, but there is just too much unclear information to feel good about doing it. While researching this topic, we came across multiple complaints about a wide variety of external drive systems containing refurbished or used drives. In reviewing the advertisements and technical specs, we found that the fact that the HDD inside an external drive is not always new often gets left off the specifications. In addition, on Amazon and similar sites, many of the complaints were from purchases made via third party sellers and not the original external drive manufacturers, so check the “by” tag before buying.

Let’s make it easy: an external hard drive should have at least a two-year warranty and be available from a trusted source. The list price for the external drive should be about 10-15% higher than the same sized internal drive. What you will actually pay, the street price, is based on supply and demand and a host of other factors. Don’t be surprised if the cost of an external drive is sometimes less than a corresponding internal drive — that’s just supply and demand at work. Following this guidance doesn’t mean the drive won’t fail, it just means you’ll have better odds at getting a good external drive for your money.

One More Thing Before You Buy

The most important thing to consider when buying a hard drive is the value of the data on the drive and what it would cost to replace that data. If you have a good backup plan and practice the 3-2-1 backup strategy, then the value of a given drive is low and limited to the time and cost it takes to replace the drive that goes bad. That’s annoying, yes, but you still have your data. In other words, if you want to get the most for your money when buying a hard drive, have a good backup plan.

The post Buying a Hard Drive this Holiday Season? These Tips Will Help appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Modern Storage Workflows in the Age of Cloud

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/cloud-based-video-production-workflows/

Video Production Workflow

Not too long ago, hardware storage vendors held an iron grip on what kinds of storage underpinned your creative, film, and broadcast workflows. This storage took many complex forms — RAIDs, JBODs, SANs, NAS systems, tape robots, and more. All of it was expensive, deeply complex, and carried fat vendor margins and high support costs.

How Storage Can Make Your Video Production Workflow More Efficient

But when you’re considering storage in today’s technology environment — whether it’s cloud, on-site storage, or a USB stick — the guiding principle in choosing storage for your creative production should simply be to choose the storage that best fits each workflow step.

Production Storage Maxim: Choose the storage that best fits each workflow step

Doing your best creative work is what builds your customer base, boosts your reputation, and earns you revenue and royalties. So any time sunk into legacy storage solutions, wrestling with complexity, unneeded production steps, refereeing competing vendors, and overpaying for, well, everything, just gets in the way of what you really want to do: create.

The right answer for your specific production needs is a function of the size of your production team and the complexity of your operating environment. Whatever that answer is, it should be as frictionless an environment as possible that helps you get your work done more efficiently and gives you the most flexibility.

An independent filmmaker can follow this production storage evaluation process for each stage of their workflow and decide to make do with a small deskside RAID system for primary production storage, and depend on the cloud for everything else.

A large, global production team will probably need multiple SANs in each production office and a complex series of cloud and dedicated playout applications and systems. If your environment falls somewhere between those two extremes, then your ideal solution mix does as well.

Traditional Content Production Workflow - Ingest > Work-in-Process > Deliver > Archive

The traditional content production workflow is thought of as a linear process. Content is ingested as raw camera files pulled into a shared work-in-process storage for editors, the final cut is then delivered to the client, and when the project is finished all files are saved off to an archive.

Simplified Production Workflow Steps

Let’s look at what the storage requirements and needs are for each of the common steps in a production workflow and where cloud can add value. Along the way, we’ll call out concrete examples of cloud capabilities at each stage with B2 cloud storage.

Ingest Stage - Ingest Stage Goals: Safely retrieve and protect files from capture media and move to production environment. Ingest Stage Needs: File data protection - Easy path to Production Storage. Where Cloud Can Add Value: Ingest and archive in one step

The Ingest Stage

Media copied in the ingest phase typically needs to get off of camera carts and flash drives as quickly and safely as possible and transported to the editing environment. Since those camera carts need to be used again for the next shot, pressure to get files copied over quickly (but safely) is intense.

Any time that critical content exists only in one place is dangerous. At this stage, lost or corrupted files mean a reshoot, which may not be practical or even possible.

Storage Needs for Ingest

Storage at the ingest stage can be very rudimentary and is often satisfied by just copying files from camera carts to an external drive, then to another drive as a safety, or by putting a RAID system on a crash cart on-set. Every team tends to come up with a different solution.

Where Cloud Can Add Value to Ingest

But even if your data wranglers aren’t ready to give up external hard drives, one way cloud can help in the ingest stage is by combining your ingest and archive-for-safety steps.

Instead of carrying carts from the shoot location to the production environment and copying them over to production storage, you could immediately start uploading content via the internet to your cloud storage, simultaneously copying over those files safely, and making them available to your entire team immediately.

When you restructure your workflow like this, you’ll get better than RAID-level protection for your content in the cloud. And by checking content into your archive first, your asset manager tools can immediately start processing those files by adding tags and generating lighter weight proxies. As soon as the files hit cloud storage, your entire team can start working on them. They can immediately begin tagging and reviewing files, and even mark edit points before handing off to editors, thereby speeding up production dramatically.

Some creatives have hit a roadblock in trying to take advantage of the cloud. Data transfer has historically been gated by the available upload bandwidth at your given location, but our customers have solved this in some interesting ways.

Producers, editors, and reporters are finding that even cellular 4G internet connections make it feasible to immediately start uploading raw shots to their cloud storage. Others make it routine to stop off at a data center or affiliate with excellent upload speeds on their way in from the field.

Either way, even novice shooters and freelancers can safely get content into your system quickly, with a workflow that can be as simple as an upload bucket in your B2 account and your media or project manager tools configured to watch those upload points.
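
One way to implement such an upload point is with the B2 native API. Below is a minimal sketch, using Python’s requests library and the v1 API, of copying a single camera file into a hypothetical ingest bucket. The key ID, application key, bucket ID, and file names are placeholders, and a real data-wrangling tool would add retries, large-file chunking, and verification.

```python
import hashlib
import requests

KEY_ID, APP_KEY = "YOUR_KEY_ID", "YOUR_APPLICATION_KEY"  # placeholder credentials
BUCKET_ID = "YOUR_UPLOAD_BUCKET_ID"  # the ingest bucket your media manager watches

# Step 1: authorize the account; returns the API URL and an auth token.
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    auth=(KEY_ID, APP_KEY),
).json()

# Step 2: request an upload URL for the ingest bucket.
upload = requests.post(
    auth["apiUrl"] + "/b2api/v1/b2_get_upload_url",
    headers={"Authorization": auth["authorizationToken"]},
    json={"bucketId": BUCKET_ID},
).json()

# Step 3: upload one camera file; B2 verifies the SHA1 checksum we send.
with open("A001_C002_0415.mov", "rb") as f:
    data = f.read()
resp = requests.post(
    upload["uploadUrl"],
    headers={
        "Authorization": upload["authorizationToken"],
        "X-Bz-File-Name": "ingest/2019-04-15/A001_C002_0415.mov",
        "Content-Type": "b2/x-auto",  # let B2 infer the content type
        "X-Bz-Content-Sha1": hashlib.sha1(data).hexdigest(),
    },
    data=data,
)
print(resp.json()["fileId"])
```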

Cloud Capability Example — Use a Backblaze Fireball to Rapidly Ingest Content

Backblaze offers a Rapid Ingest Service to help get large amounts of your content into your Backblaze account quickly. Backblaze ships you a 70TB storage system that you connect to your network and copy content to. When the system is shipped back to Backblaze, it is quickly moved directly into your B2 account, dramatically reducing ingest times.


Cloud Capability Example — Share Files Directly From Cloud

Archive.zip file in B2

An example of navigating to a file-review bucket in the B2 web interface to copy the direct sharing link to send to a reviewer

In addition to the archive on ingest technique, many customers share files for approval review or dailies directly from their Backblaze B2 account’s web interface.

If your B2 bucket for finished files is public, you can get a direct share link from the Backblaze account management website and simply send that to your customer, thereby eliminating a copy step.

You can even snapshot a folder of your content in B2, and have Backblaze ship it directly to your customer.
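
Behind the scenes, the direct share link for a file in a public bucket follows a predictable pattern, so you can also build these links in a script. A minimal sketch, assuming a hypothetical public bucket named client-review; the download URL comes from the b2_authorize_account response (f002.backblazeb2.com is just an example):

```python
from urllib.parse import quote

def share_link(download_url: str, bucket_name: str, file_name: str) -> str:
    """Build the friendly URL for a file in a *public* B2 bucket."""
    return f"{download_url}/file/{bucket_name}/{quote(file_name)}"

print(share_link("https://f002.backblazeb2.com",
                 "client-review",
                 "dailies/day-12/cut-03.mp4"))
# -> https://f002.backblazeb2.com/file/client-review/dailies/day-12/cut-03.mp4
```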

Work in Process Stage - WIP Stage Goals: Support collaborative, simultaneous editing of source files to finished content. WIP Stage Needs: Performance to support shared, collaborative editing access for many users. Very large file support. Where Cloud Can Add Value: Keeping expensive primary production storage running efficiently.

The Work-In-Process Stage

Work-in-process or primary production storage is the main storage used to support collaborative editing and production of content. The bulk of what’s thought of as collaborative editing happens in this stage.

For simplicity, we’re combining several steps (craft editing, voiceover, sound, ADR, special effects, and even color grading and finishing) under a single, simpler work-in-process step.

As the audio, color grading, and SFX steps get more complex, they sometimes need to be broken out onto separate, extremely high performance storage, such as more exotic (and expensive) flash-based storage, that then feeds the result back to WIP storage.

Work-in-Process Stage Storage Needs

Storage performance requirements in this stage are extremely hard to meet, demanding the ability to serve multiple editors, each pulling multiple, extremely large streams of video files as they edit raw shots into a complex, visual story. Meeting this need usually requires either an equipment intensive SAN, or a NAS that scales to eye-watering size and price.

Many production environments have gotten in the habit of keeping older projects and media assets on the shared production environment alongside current production files, knowing that if those files are needed they can be retrieved quickly. But this also means that production storage fills up quickly, and it’s tempting to let more and more users not involved in primary production have access to those files as well, both of which can slow down production storage and creation of your content.

Having to make a rush purchase to expand or add to your SAN is not fun, especially in the middle of a project, so regularly moving any files not needed for current production to your content archive is a great strategy to keep your production storage as light and small as possible so that it can last over several seasons.

Where Cloud Can Add Value to Work-in-Process

By regularly moving content from your production storage you keep it light, fast, and simpler to manage. But that content still needs to be readily available. Cloud is an excellent choice here as content is both immediately available and stored on highly resilient object storage. In effect, you’re lightening the burden on your primary storage, and using cloud as an always ready, expanding store for all of your content. We’ll explore this concept more in the archive stage.

Deliver Stage - Deliver Stage Goals: Securely deliver finished files to upstream/downstream clients. Deliver Stage Needs: High reliability. Separation from primary production storage. Where Cloud Can Add Value: Share files directly and securely from cloud without copying.

The Deliver Stage

The deliver stage, where your finished work is handed off to your customer, varies depending on what type of creative you are. Broadcast customers will almost always need dedicated playout server appliances, and others will simply copy files to where they’re needed by downstream customers, or upstream to a parent organization for distribution. But, at some level, we all have to deliver our work when it’s done.

Deliver Stage Storage Needs

Files for delivery should be moved off of your primary production storage and delivered in a separate workflow available to dedicated workflow or playout tools. Whatever the workflow, this storage needs to be extremely reliable and available for your customers whenever it is needed.

Where Cloud Can Add Value to Deliver

Whether content delivery in your workflow is met by copying files to a playout server or giving a finished file to a customer, cloud can help cut down on the number of steps to get the content to its final destination while giving you extreme reliability.

Cloud Capability Example — Serve Time-Limited Links to Content

Many customers use the Backblaze B2 API to add expiration limits that can last from seconds to a week to shared links:

B2 command-line

An example of using the B2 command-line tool to generate time-expiring tokens for content sharing and delivery

If your team is comfortable writing scripts to automate your workflow, this can be a powerful way to directly share files simply and quickly with tools provided by Backblaze.

For more information see this B2 Article: Get Download Authorization
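
If your team prefers calling the API directly rather than the command-line tool, the underlying call is b2_get_download_authorization. A minimal sketch in Python, with credentials, bucket ID, and file names as placeholders; the returned token simply gets appended to a normal download URL:

```python
import requests

KEY_ID, APP_KEY = "YOUR_KEY_ID", "YOUR_APPLICATION_KEY"  # placeholder credentials

auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    auth=(KEY_ID, APP_KEY),
).json()

# Ask for a token authorizing downloads under a prefix for the next 24 hours
# (validDurationInSeconds can range from 1 second up to one week).
grant = requests.post(
    auth["apiUrl"] + "/b2api/v1/b2_get_download_authorization",
    headers={"Authorization": auth["authorizationToken"]},
    json={
        "bucketId": "YOUR_BUCKET_ID",        # placeholder
        "fileNamePrefix": "finished/",       # only files under this prefix
        "validDurationInSeconds": 24 * 3600,
    },
).json()

# The reviewer can fetch the file until the token expires.
url = (auth["downloadUrl"] + "/file/your-bucket-name/finished/final-cut.mp4"
       + "?Authorization=" + grant["authorizationToken"])
print(url)
```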


Cloud Capability Example — Move Content Directly to Your Delivery and Distribution Servers

Serving your content to a wide audience via your website, content channel, or app is an increasingly popular way to deliver content. And thanks to our recent Cloudflare agreement, you can now move content from your B2 storage over to Cloudflare’s content delivery network at zero transfer cost for your content application or website. For more information see this B2 article: How to Allow Cloudflare to Fetch Backblaze B2 Content

Archive Stage - Archive Stage Goals: Safely preserve all project files and content for long-term reuse. Archive Stage Needs: High reliability. Separation from primary production storage. Where Cloud Can Add Value: Serve as your content backplane across all workflow steps.

The Archive Stage

At last, we come to the archive stage of content creation, traditionally thought of as the end of the content creation chain, the source of the most frustration for creatives, and the hardest storage to size properly.

Traditionally, when a project or season of a show is finished, all of the files used to create the content are moved off of expensive primary production storage and stored on separate, highly reliable storage in case they are needed again.

Archive Stage Storage Needs

Archive storage needs to be a safe repository for all of the content that you’ve created. It should scale well at a sustainable price, and make all archived content available immediately when requested by your users and workflow tools like asset managers.

Tape was often chosen to store these archive files because it was cheaper than disk-based storage and offered good reliability. But choosing tape required a large investment in specialized tape systems, tape media, and the associated support contracts and maintenance.

Tape-based archiving strategies usually rely on compressing content as it’s written to tape to hit the advertised storage capacity of tape media. But video content is already stored in a compressed container, so compressing those files as they’re written and retrieved from tape offers no advantage and only slows the process down.

Here we find the chief drawback of tape-based content archives for many customers: the time required to retrieve content from those tape systems. As the pace of production has increased, many customers find they can no longer wait for tape systems to return archive sets or unarchive files.

Where Cloud Can Add Value to Archive

The archive stage is where cloud has the most impact on your entire workflow. The benefits of cloud itself are familiar: the ability to scale up or down instantly as your needs change, paying only for the storage you actually use, extremely high object storage file reliability, and availability anywhere there is a network connection.

Modern Content Production Workflow - Ingest > Archive as a Cloud Content Backplane <-> Work-in-Process

Creating The Cloud Content Backplane

Having all of your content immediately available to your production storage and your asset management systems is emerging as the killer feature of cloud for production environments. By adding cloud, your content production goes from a linear process to a highly active one where content can freely check in and out of all of your other workflow steps as you’re producing content.

By shifting your content archives to cloud like Backblaze B2, you are creating, in effect, a cloud content backplane that supports your entire content creation and delivery process with these new capabilities:

  • New productions now have access to every file you might possibly need without waiting, letting you explore more creative choices
  • A single, authoritative content repository backing all of your creative production lets you phase out other storage and the associated management headaches and expense
  • You can now serve and deliver files directly from your cloud-based content archive with no impact on production storage
  • Having content in a single place means that your workflow tools like asset managers work better. You can find files across your entire content store instantly, and even archive or move files from your production storage to your cloud content archive automatically

The content not needed on your work-in-process storage is both highly protected and immediately available wherever you need it. Your entire workflow can get much simpler with fewer steps, and you can phase out storage you no longer need on-site.

Above all, you’ll have fewer steps between you and creating great content, and you’ll be able to explore new creative options faster while shifting to a pay-as-you-use-it model for all of your content storage.

In part two, we’ll explore the ways your new cloud-delivered content archive backplane can dramatically improve how you create, deliver, and monetize content with other cloud-based technologies in the age of cloud.

The post Modern Storage Workflows in the Age of Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Stats for Q3 2018: Less is More

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/2018-hard-drive-failure-rates/

Backblaze Drive Stats Q3 2018

As of September 30, 2018 Backblaze had 99,636 spinning hard drives. Of that number, there were 1,866 boot drives and 97,770 data drives. This review looks at the quarterly and lifetime statistics for the data drive models in operation in our data centers. In addition, we’ll say goodbye to the last of our 3TB drives, hello to our new 12TB HGST drives, and we’ll explain how we have 584 fewer drives than last quarter, but have added over 40 petabytes of storage. Along the way, we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

Hard Drive Reliability Statistics for Q3 2018

At the end of Q3 2018, Backblaze was monitoring 97,770 hard drives used to store data. For our evaluation, we remove from consideration those drives that were used for testing purposes and those drive models for which we did not have at least 45 drives (see why below). This leaves us with 97,600 hard drives. The table below covers what happened in Q3 2018.

Backblaze Q3 2018 Hard Drive Failure Rates chart

Notes and Observations

  • If a drive model has a failure rate of 0%, it only means there were no drive failures of that model during Q3 2018.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of Drive Days.
  • There were 170 drives (97,770 minus 97,600) that were not included in the list above because we did not have at least 45 of a given drive model. We use 45 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics.

When to Replace a Hard Drive

As noted, at the end of Q3 we had 584 fewer drives, but over 40 petabytes more storage space. We replaced 3TB, 4TB, and even a handful of 6TB drives with 3,600 new 12TB drives using the very same data center infrastructure, i.e. racks of Storage Pods. The drives we are replacing are about 4 years old, plus or minus a few months depending on how much we paid for the drive and a number of other factors. Keeping lower density drives in service when higher density drives are both available and efficiently priced does not make economic sense.

Why Drive Migration Will Continue

Over the next several years, data growth is expected to explode. Hard drives are still expected to store the bulk of that data, meaning cloud storage companies like Backblaze will have to increase capacity by increasing existing storage density, building new data centers, or building out existing ones. Drive manufacturers, like Seagate and Western Digital, are looking at HDD storage densities of 40TB as early as 2023, just 5 years away. It is significantly less expensive to replace lower density operational drives in a data center versus building a new facility or even building out an existing facility to house the higher density drives.

Goodbye 3TB WD Drives

For the last couple of quarters, we had 180 Western Digital 3TB drives (model: WD30EFRX) remaining — the last of our 3TB drives. In early Q3, they were removed and replaced with 12TB drives. These 3TB drives were purchased in the aftermath of the Thailand drive crisis, installed in mid-2014, and still hard at work when we replaced them. Sometime over the next couple of years we expect to say goodbye to all of our 4TB drives and upgrade them to 14, 16, or even 20TB drives. After that it will be time to “up-density” our 6TB systems, then our 8TB systems, and so on.

Hello 12TB HGST Drives

In Q3 we added 79 HGST 12TB drives (model: HUH721212ALN604) to the farm. While 79 may seem like an unusual number of drives to add, it represents “stage 2” of our drive testing process. Stage 1 uses 20 drives, the number of hard drives in one Backblaze Vault tome. That is, there are 20 Storage Pods in a Backblaze Vault, and there is one “test” drive in each Storage Pod. This allows us to compare the performance, etc., of the test tome to the remaining 59 production tomes (which are running already-qualified drives). There are 60 tomes in each Backblaze Vault. In stage 2, we fill an entire Storage Pod with the test drives, adding 59 test drives to the one already being tested in one of the 20 Storage Pods in the Backblaze Vault.

To date, none of the 79 HGST drives have failed, but as of September 30th, they had been installed for only 9 days. Let’s see how they perform over the next few months.

A New Drive Count Leader

For the last 4 years, the drive model we’ve deployed the most has been the 4TB Seagate drive, model ST4000DM000. In Q3 we had 24,208 of this drive model, which is now only good enough for second place. The 12TB Seagate drive, model ST12000NM0007, became our new drive count leader with 25,101 drives in Q3.

Lifetime Hard Drive Reliability Statistics

While the quarterly chart presented earlier gets a lot of interest, the real test of any drive model is over time. Below is the lifetime failure rate chart for all the hard drive models in operation as of September 30th, 2018. For each model, we compute their reliability starting from when they were first installed.

Backblaze Lifetime Hard Drive Failure Rates Chart

Notes and Observations

  • The failure rates of all of the larger drives (8, 10, and 12 TB) are very good: 1.21% AFR (Annualized Failure Rate) or less. In particular, the Seagate 10TB drives, which have been in operation for over 1 year now, are performing very nicely with a failure rate of 0.48%.
  • The overall failure rate of 1.71% is the lowest we have ever achieved, besting the previous low of 1.82% from Q2 of 2018.

The Hard Drive Stats Data

The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone. It is free.
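
If you’d like to reproduce the failure-rate tables from the raw data yourself, here is a minimal sketch using Python and pandas. It assumes you’ve unpacked a quarter’s worth of the daily CSV files (one row per drive per day) into a directory; the date, model, and failure column names match the published data set, but check them against the README included in the download.

```python
import glob
import pandas as pd

# Each CSV file is one day of the quarter; each row is one drive on that day.
frames = [pd.read_csv(path, usecols=["date", "model", "failure"])
          for path in glob.glob("data_Q3_2018/*.csv")]  # hypothetical directory
days = pd.concat(frames, ignore_index=True)

# Drive days = row count per model; failures = sum of the 0/1 failure column.
stats = days.groupby("model").agg(drive_days=("failure", "size"),
                                  failures=("failure", "sum"))

# Annualized failure rate, per the Backblaze formula.
stats["afr_pct"] = stats["failures"] / (stats["drive_days"] / 365) * 100

print(stats.sort_values("afr_pct").head(10))
```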

If you just want the summarized data used to create the tables and charts in this blog post you can download the ZIP file containing the MS Excel spreadsheet.

Good luck and let us know if you find anything interesting.

The post Hard Drive Stats for Q3 2018: Less is More appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Move Even Your Largest Archives to B2 with Fireball and Archiware P5

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/archiware-p5-cloud-backup/

Archiware P5 and Fireball

Backblaze B2’s reliability, scalability, and affordable, “pay only for what you use” pricing means that it’s an increasingly popular storage option for all phases of content production, and that’s especially true for media archiving.

By shifting storage to B2, you can phase out hard-to-manage and expensive local backup storage and clear space on your primary storage. Having all of your content in a single place — and instantly available — can transform your production and keep you focused on the creative process.

Fireball Rapid Ingest to Speed Your First Migration to Backblaze B2

Once you sign up for Backblaze B2, one tool that can speed an initial content migration tremendously is Backblaze’s Fireball rapid ingest service. As part of the service, Backblaze ships you a 70TB storage system. You then copy over all the content that you want in B2 to the Fireball system: all at local network speeds. Once the system is shipped to Backblaze, it’s quickly moved to your B2 account, a process far faster than uploading those files over the internet.

Setting Up Your Media Archive

Since manually moving files to archive and backing up project folders can be very time-consuming, many customers choose software like Archiware P5 that can manage this automatically. In P5’s interface you can choose files to add to archive libraries, restore individual files to your local storage from B2, and even browse all of your archive content on B2 with thumbnail previews, and more.

However, many media and entertainment customers have terabytes and terabytes of content in “archive” — that is, project files and content not needed for a current production, but necessary to keep nearby, ready to pull into a new production.

They’d love to get that content into their Backblaze B2 account and then manage it with an archive, sync, backup solution like Archiware P5. But the challenge facing too many is how to get all these terabytes up to B2 through the existing bandwidth in the office. Once the large, initial archive is loaded, the incrementals aren’t a problem, but getting years of backlog pushed up efficiently is.

For anyone facing that challenge, we’re pleased to announce the Archiware P5 Fireball Integration. Our joint solution provides any customer with an easy way to get all of their archives loaded into their B2 account without having to worry about bandwidth bottlenecks.

Archiware P5 Fireball Integration

A backup and archive manager like Archiware P5 is a great way to get your workflow under control and automated while ensuring that your content is safely and reliably stored. By moving your archives offsite, you get the highest levels of data protection while keeping your data immediately available for use anytime, anywhere.

With the newest release, Archiware P5 can archive directly to Fireball at fast, local network speeds. Then, once your Fireball content has been uploaded to your Backblaze account, a few clicks are all that is needed to point Archiware at your Backblaze account as the new location of your archive.

Finally, you can clear out those closets of hard drives and tape sets!

Archiware P5 to B2 workflow

Archiware P5 can now archive directly to Fireball at local network speeds; once the Fireball’s contents have been uploaded, the archived files are linked to their new locations in your B2 account. With a few clicks you can get your entire archive uploaded to the B2 cloud without suffering any downtime or bandwidth issues.

For detailed information about configuring Archiware to archive directly to Fireball:

For more information about Backblaze B2 Fireball Rapid Ingest Service:

Archiware on Synology and QNAP NAS Devices

Archiware, NAS and B2

Archiware P5 can also now run directly on several Synology, QNAP, and G-Tech NAS systems to archive and move content to your Backblaze B2 account over the internet.

With its most recent releases, Archiware now supports several NAS devices from QNAP, Synology, and G-Tech as P5 clients or servers.

The P5 software is installed as an application from the NAS vendor’s app store and runs directly on the NAS system itself without having to install additional hardware.

This means that all of your offices or departments with these NAS systems can now fully participate in your sync, archive, and backup workflows, and each of them can archive off to your central Backblaze B2 account.

For more information:

Archiware plus Backblaze: A Complete Front-to-Back Media Solution

Archiware P5, Fireball, and Backblaze B2 are all important parts of a great backup, archive, and sync plan. By getting all of your content into archive and B2, you’ll know that it’s highly protected, instantly available for new production workflows, and also readily discoverable through thumbnail and search capability.

With the latest version of P5, you not only have your entire production and backup workflows managed; with Fireball, you can also get even the largest and hardest-to-move archives safely and quickly into B2!

For more information about the P5 Software Suite: Archiware P5 Software Suite

And to order a Fireball as part of our Rapid Ingest Service, start here: Backblaze B2 Fireball


You might also be interested in reading our recent guest post written by Marc N. Batschkus of Archiware about how to save time, money, and gain peace of mind with an archive solution that combines Backblaze B2 and Archiware P5.

Creating a Media Archive Solution with Backblaze B2 and Archiware P5


The post Move Even Your Largest Archives to B2 with Fireball and Archiware P5 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/iconik-and-backblaze-cloud-production-solution/

Cantemo iconik Plus Backblaze B2 for Media Cloud Production

Many of our customers are archiving media assets in Backblaze B2, from long-running television productions, media distributors, AR/VR video creators, corporate video producers, houses of worship, and many more.

They are emptying their closets of USB hard drives, clearing off RAID arrays, and migrating LTO tapes to cloud storage. B2 has been proven to be the least expensive storage for their media archives, while keeping the archives online and accessible. Gone are the days of Post-its, clipboards, and cryptic drive labels defining whether old video footage can be found or not. Migrating archives from one form of storage to another will no longer suck up weeks and weeks of time.

So now that their archives are limitless, secure, always active, and available, the next step is making them actionable.

Our customers have been asking us — how can I search across all of my archives? Can I preview clips before I download the hi-res master, or share portions of the archive with collaborators around the world? Why not use the latest AI tools to intelligently tag my footage with metadata?

To meet all of those needs and more, we are excited to announce that Cantemo’s iconik cloud media management service now officially supports Backblaze B2.

iconik — A Media Management Service

iconik is an affordable and simple-to-use media management service that can read a Backblaze B2 bucket full of media and make it actionable. Your media assets are findable, sortable with full previews, and ready to pull into a new project or even right into your editor, such as Adobe Premiere, instantly.

Cantemo iconik user interface

iconik — Cantemo’s new media management service with AI features to find, sort, and even suggest assets for your project across your entire library

As a true media management service, iconik is priced as a pay-as-you-go service, transparently billed per user, per month. There are no minimum purchases, no servers to buy, and no large licensing fees. To use iconik, all your users need is a web browser.

iconik Pricing

To get an idea of what “priced-per-user” might look like, most organizations will need at least one administrative user ($89/month), standard users ($49/month) who can organize content, create workflows, and ingest new media, and browse-only users ($19/month) who can search and download what they need. There’s also a “share-only” level, with no monthly charge, that lets you incorporate customer and reviewer comments. This should accommodate teams of all kinds and sizes.
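
As a hypothetical example of how the per-user pricing adds up, a team with one admin user, three standard users, and five browse-only users would pay $89 + (3 × $49) + (5 × $19) = $331 per month in user licenses, before any consumption fees.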

Best of all, iconik is intelligent about how it uses storage, and while iconik charges small consumption fees for proxy storage, bandwidth, etc., they have found that for customers that bring media from Backblaze B2 buckets, consumption charges should be less than 5% of the monthly bill for user licenses.

As part of their launch promotion, if you get started in October, Cantemo will give Backblaze customers a $300 getting started credit!

You can sign up and get started here using the offer code of BBB22018.

Everwell’s Experience with iconik and Backblaze

One of the first customers to adopt iconik with Backblaze is Everwell, a video production company that creates a constant stream of videos for medical professionals to show in their waiting rooms. Rather than continuously buying upgrades for its in-house asset management system and local storage, Everwell uses iconik to shift its production to the cloud for all of its users. The new solution allows Everwell to manage its growing library of videos as new content constantly comes online, and to kick off longer form productions with full access to all the assets they need across a fast-moving team that can be anywhere their production takes them.

collage of Everwell video images

Everwell is a fast-growing medical content developer for healthcare providers

To speed up their deployment of iconik, Everwell started with Backblaze’s data ingestion service, Fireball. Everwell copied their content to Fireball, and once back in the Backblaze data center, the data from Fireball was quickly added directly to Everwell’s B2 buckets. iconik could immediately start ingesting the content in place and make it available to every user.

Learn more about Backblaze B2 Fireball

With iconik and Backblaze, Everwell dramatically simplified their workflow as well, collapsing several critical workflow steps into one. For example, by uploading source files to Backblaze B2 as soon as they’re shot, Everwell not only reduces the need to stage local production storage at every site, they ingest and archive in a single step. Every user can immediately start work on their part of the project.

“The ‘everyone in the same production building’ model didn’t work for us any longer as our content service grew, with more editors and producers checking in content from remote locations that our entire team needed to use immediately. With iconik and Backblaze, we have what feels like the modern cloud-delivered production tool we’ve always wanted.”

— Loren Goldfarb, COO, Everwell

See iconik in Action at NAB NYC October 17-18

NAB Show New York - Media In Action October 17-18 2018

Backblaze is at NAB New York. Meet us there!

We’re excited to bring you several chances to see iconik and Backblaze working together.

The first is the NAB New York show, held October 17-18 at the Javits Center. iconik will be shown by Professional Video Technology in Booth N1432, directly behind Backblaze, Booth N1333.

Have you signed up for NAB NY yet? You can still receive a free exhibits pass by entering Backblaze’s Guest Code NY8842.

And be sure to book time with the Backblaze team at NAB by signing up on our calendar.

Attend the iconik and B2 Webinar on November 20

Soon after NAB NY, Backblaze and iconik will host a webinar demoing the solution, called “3 Steps to Making Your Cloud Media Archive ‘active’ With iconik and Backblaze B2.” The webinar will be presented on November 20 and will be available on demand afterward. Be sure to sign up for that too!

3 Steps Demo with: iconik and Backblaze B2 Cloud Storage

Sign up for the iconik/B2 Webinar

Don’t Miss the iconik October Launch Promotion

The demand for creative content is growing exponentially, putting more demands on your creative team. With iconik and B2, you can make all of your media instantly accessible within your workflows while adopting an infinitely scalable, pay-only-for-what-you-use storage solution.

To take advantage of the iconik October launch promotion and receive $300 free credit with iconik, sign up using the BBB22018 code.

The post iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze and Cloudflare Partner to Provide Free Data Transfer

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/backblaze-and-cloudflare-partner-to-provide-free-data-transfer/

 Backblaze B2 Free Data Transfer to Cloudflare

Today we are announcing that beginning immediately, Backblaze B2 customers will be able to download data stored in B2 to Cloudflare for zero transfer fees. This happens automatically once Cloudflare is configured to distribute your B2 files. This means that Backblaze B2 can now be used as an origin store for the Cloudflare CDN and edge network, providing customers enhanced performance and access to their content stored on B2. The result is that customers can save up to 75% on storage versus Amazon S3 to store their content in the cloud and deliver it worldwide.

The zero B2 transfer fees are available to all Cloudflare customers using any plan. Cloudflare customers can also use paid add-ons such as Argo and Workers to enhance the routing and security of the B2 files being delivered over the Cloudflare CDN. To implement this service, Backblaze and Cloudflare have directly connected, thereby allowing near-instant data transfers from B2 to Cloudflare.

Backblaze has prepared a guide on “Using Backblaze B2 storage with Cloudflare.” This guide provides step-by-step instructions on how to set up Backblaze B2 with Cloudflare to take advantage of this program.

The Bandwidth Alliance

The driving force behind the free transfer program is the Bandwidth Alliance. Backblaze and Cloudflare are two of the founding members of this group of forward-thinking cloud and networking companies that are committed to providing the best and most cost-efficient experience for our mutual customers. Additional founding members of the Bandwidth Alliance include Automattic (WordPress), DigitalOcean, IBM Cloud, Microsoft Azure, Packet, and other leading cloud and networking companies.

How Companies Can Leverage the Bandwidth Alliance

Below are examples of how Bandwidth Alliance partners can work together to save customers on their data transfer fees.

Hosting Website Assets

Whether you are a professional webmaster or just run a few homegrown sites, you’ve lived the frustration of having a slow website. Over the past few years these challenges have become more acute as video and other types of rich media have become core to the website experience. This new content has also translated to higher storage and bandwidth costs. That’s where Backblaze B2 and Cloudflare come in.

diagram of zero cost data transfer from Backblaze B2 to Cloudflare CDN

Customers can store their videos, photos, and other assets in Backblaze B2’s pay-as-you-go cloud storage and serve the site with Cloudflare’s CDN and edge services. The result is an amazingly affordable cloud-based solution that dramatically improves web site performance and reliability. And customers pay each service for what they do best.

“I am extremely happy with my experience serving html/css/js and over 17 million images from B2 via Cloudflare Workers. Page load time has been great and costs are minimal.”

— Jacob Hands, Lead Developer, FactorioMaps.com

Media Content Distribution

The ability to download content from B2 cloud storage to the Cloudflare CDN for zero transfer cost is just the beginning. A company needing to distribute media can now store original assets in Backblaze B2, send them to a compute service to transcode and transmux them, and forward the finished assets to be served up by Cloudflare. Backblaze and Packet previously announced zero transfer fees between Backblaze B2 storage and Packet compute services. This enabled customers to store data in B2 at 1/4th the price of competitive offerings and then process data for transcoding, AI, data analysis, and more inside of Packet without worrying about data transfer fees. Packet is also a member of the Bandwidth Alliance and will deliver content to Cloudflare for zero transfer fees as well.

diagram of zero cost data transfer flow from Backblaze B2 to Packet Compute to Cloudflare CDN

Process Now, Distribute Later

A variation of the example above is for a company to store the originals in B2, transcode and transmux the files in Packet, put those versions back into B2, and finally serve them up via Cloudflare. All of this is done with zero transfer fees between Backblaze, Packet, and Cloudflare. The result: all originals and transmuxed versions are stored at 1/4th the price of other storage and served up efficiently via Cloudflare.

diagram of data transfer flow between B2 to Packet back to B2 to Cloudflare
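A rough sketch of that round trip, assuming it runs on a Packet compute instance with ffmpeg installed and reuses the authorize and upload steps from the earlier upload sketch (bucket and file names are illustrative):

```python
import subprocess
import requests

def transcode_round_trip(auth):
    """`auth` is the parsed b2_authorize_account response from the earlier sketch."""
    # 1. Pull the original master from B2 -- free transfer into Packet.
    src = requests.get(
        auth["downloadUrl"] + "/file/masters/input.mov",
        headers={"Authorization": auth["authorizationToken"]},
    )
    src.raise_for_status()
    with open("input.mov", "wb") as f:
        f.write(src.content)

    # 2. Transcode locally on the compute instance.
    subprocess.run(
        ["ffmpeg", "-i", "input.mov", "-c:v", "libx264", "-crf", "23", "output.mp4"],
        check=True,
    )

    # 3. Re-upload output.mp4 to a "web-ready" bucket (the same three-step
    #    upload as before) -- free transfer back to B2 -- and let Cloudflare
    #    serve it worldwide.
```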

In all cases, you pay only for the services you use, not for the cost of moving data between them. The result is a predictable and affordable cost for a given project built on industry-leading, best-of-breed services.

Moving Forward

The members of the Bandwidth Alliance are committed to enabling the best and most cost-efficient cloud services when it comes to working with data stored in the cloud. Backblaze has committed to a transfer fee of $0 to move content from B2 to either Cloudflare or Packet. We think that’s a great step in the right direction. And if you are a cloud provider, let us know if you’d be interested in taking a similar step with Backblaze.


Backblaze B2 API Version 2 Beta is Now Open

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-api-version-2-beta-is-now-open/


Since B2 cloud storage was introduced nearly 3 years ago, we’ve been adding enhancements and new functionality to the B2 API, including capabilities like CORS support and lifecycle rules. Today, we’d like to introduce the beta of version 2 of the B2 API, which formalizes rules on application keys, provides a consistent structure for all API calls returning information about files, and cleans up outdated request parameters and returned data. All version 1 B2 API calls will continue to work as is, so no changes are required to existing integrations and applications.

The API Versions section of the B2 documentation on the Backblaze website provides the details on how the V1 and V2 APIs differ; in the meantime, here’s an overview of the what, why, and how of the V2 API.

What Has Changed Between the B2 Cloud Storage Version 1 and Version 2 APIs?

The most obvious difference between a V1 and V2 API call is the version number in the URL. For example:

https://apiNNN.backblazeb2.com/b2api/v1/b2_create_bucket

https://apiNNN.backblazeb2.com/b2api/v2/b2_create_bucket

In addition, a V2 API call may have different required request parameters and/or required response data. For example, the V2 version of b2_hide_file always returns accountId and bucketId, while V1 returns only accountId.

The documentation for each API call will show whether there are any differences between API versions for a given API call.
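In practice, switching versions is just a change to one URL path segment. A tiny helper (hypothetical, not part of the B2 docs) makes the point:

```python
import requests

def b2_call(auth, api_name, body, version="v2"):
    """POST to a B2 native API endpoint; the API version is just a URL segment.
    `auth` is the parsed b2_authorize_account response."""
    resp = requests.post(
        f"{auth['apiUrl']}/b2api/{version}/{api_name}",
        headers={"Authorization": auth["authorizationToken"]},
        json=body,
    )
    return resp.json()

# The same logical call against both versions; the per-call documentation
# notes any differences in required parameters or returned fields.
# v1 = b2_call(auth, "b2_hide_file", {"bucketId": bid, "fileName": name}, "v1")
# v2 = b2_call(auth, "b2_hide_file", {"bucketId": bid, "fileName": name}, "v2")
```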

No Change is Required For V1 Applications

With the introduction of V2 of the B2 API, there will be V1 and V2 versions of every B2 API call. All applications using V1 API calls will continue to work with no change in behavior. In some cases, a given V2 API call will differ from its V1 companion, as noted in the B2 API documentation. For the remaining API calls, a V1 call and its V2 companion are identical: same parameters, same returned data, same errors. This gives a B2 developer the flexibility to choose how to upgrade to the V2 API.

Obviously, if you want to use the functionality associated with a V2 API version, then you must use the V2 API call and update your code accordingly.

One last thing: beginning today, if we create a new B2 API call it will be created in the current API version (V2) and most likely will not be created in V1.

Standardizing B2 File Related API Calls

As requested by many B2 developers, the V2 API now uses a consistent structure for all API calls returning information about files. To enable this, some V2 API calls return additional fields, such as the accountId and bucketId now included with b2_hide_file.
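As an illustration (the field names follow the V2 file structure; the values are made up), every V2 call that returns file information uses a shape along these lines:

```python
# Illustrative only -- a sketch of the consistent file structure that V2
# calls return; values are placeholders.
file_version = {
    "accountId": "a1b2c3d4e5f6",
    "action": "upload",          # one of "upload", "hide", "start", "folder"
    "bucketId": "bucketId",
    "contentLength": 1048576,    # replaces the deprecated "size" field
    "contentSha1": "40-hex-character-sha1-placeholder",
    "contentType": "image/png",
    "fileId": "4_z...fileId-placeholder...",
    "fileInfo": {},              # user-defined metadata, if any
    "fileName": "images/logo.png",
    "uploadTimestamp": 1535395200000,
}
```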

Restricted Application Keys

In August we introduced the ability to create restricted application keys using the B2 API. This capability gives an account owner the ability to restrict who can access the data in a given bucket, and how and when it can be accessed. The change affected the behavior of multiple B2 API calls, such that a user could create a restricted application key that would break a 3rd party integration with Backblaze B2. We subsequently updated the affected V1 API calls so they could continue to work with existing 3rd party integrations.

The V2 API fully implements the expected behavior when it comes to working with restricted application keys. The V1 API calls continue to operate as before.

Here is an example of how the V1 API and the V2 API will act differently as it relates to restricted application keys.

Set-up

  • The B2 account owner has created 2 public buckets, “Backblaze_123” and “Backblaze_456”
  • The account owner creates a restricted application key that allows the user to read the files in the bucket named “Backblaze_456”
  • The account owner uses the restricted application key in an application that uses the b2_list_buckets API call

In Version 1 of the B2 API

  • Action: The account owner uses the restricted application key (for bucket Backblaze_456) to access/list all the buckets they own (2 public buckets).
  • Result: The results returned are just for Backblaze_456 as the restricted application key is just for that bucket. Data about other buckets is not returned.

While this result may seem appropriate, the data returned did not match the question asked, i.e. list all buckets. V2 of the API ensures the data returned is responsive to the question asked.

In Version 2 of the B2 API

  • Action: The account owner uses the restricted application key (for bucket Backblaze_456) to access/list all the buckets they own (2 public buckets).
  • Result: A “401 unauthorized” error is returned, as the request for access to “all” buckets does not match the restricted application key, which is scoped to bucket Backblaze_456. To achieve the desired result, the account owner can specify the name of the bucket matching the restricted application key in the API call, as in the sketch below.
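Reusing the b2_call helper sketched earlier (bucketName is a documented V2 parameter on b2_list_buckets; the names come from the example above):

```python
# With a key restricted to Backblaze_456, asking for everything fails in V2:
all_buckets = b2_call(auth, "b2_list_buckets", {"accountId": auth["accountId"]})
# -> a 401 unauthorized error body, since the key does not grant "all" buckets

# Naming the bucket the key is scoped to succeeds:
scoped = b2_call(auth, "b2_list_buckets", {
    "accountId": auth["accountId"],
    "bucketName": "Backblaze_456",  # matches the key's restriction
})
# -> returns information for Backblaze_456 only
```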

Cleaning up the API

There are a handful of API calls where V2 drops fields that were deprecated in V1 of the B2 API but were still being returned; a short sketch follows the list. So in V2:

  • b2_authorize_account: The response no longer contains minimumPartSize. Use recommendedPartSize and absoluteMinimumPartSize instead.
  • b2_list_file_names: The response no longer contains size. Use contentLength instead.
  • b2_list_file_versions: The response no longer contains size. Use contentLength instead.
  • b2_hide_file: The response no longer contains size. Use contentLength instead.
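For instance, code that sizes large-file uploads should now read the two newer fields from the V2 b2_authorize_account response (a sketch; the credentials are placeholders):

```python
import requests

auth = requests.get(
    "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
    auth=("applicationKeyId", "applicationKey"),  # placeholders
).json()

# minimumPartSize is gone in V2; use the two fields that replace it.
part_size = auth["recommendedPartSize"]      # what B2 suggests you use
floor = auth["absoluteMinimumPartSize"]      # the hard lower bound
```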

Support for Version 1 of the B2 API

As noted previously, V1 of the B2 API continues to function. There are no plans to stop supporting V1. If at some point in the future we do deprecate the V1 API, we will provide advance notice of at least one year before doing so.

The B2 Java SDK and the B2 Command Tool

Neither the B2 Java SDK nor the B2 Command Line Tool currently supports Version 2 of the B2 API. Both are being updated and will support the V2 API when it exits beta and becomes generally available. These tools, and more, can be found in the Backblaze GitHub repository.

More About the Version 2 Beta Program

We introduced Version 2 of the B2 API as a beta so that developers can give us feedback before V2 goes into production. With every B2 integration coded differently, we want to hear from as many developers as possible. Give the V2 API a try, and if you have any comments, email our B2 beta team at b2beta@backblaze.com or contact Backblaze B2 support. Thanks.
