All posts by Skip Levens

Rclone Power Moves for Backblaze B2 Cloud Storage

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/rclone-power-moves-for-backblaze-b2-cloud-storage/

Rclone is described as the “Swiss Army chainsaw” of storage movement tools. While it may seem, at first, to be a simple tool with two main commands to copy and sync data between two storage locations, deeper study reveals a great deal more. True to the image of a “Swiss Army chainsaw,” rclone contains an extremely deep and powerful feature set that empowers smart storage admins and workflow scripters everywhere to meet almost any storage task with ease and efficiency.


Rclone—rsync for cloud storage—is a powerful command line tool to copy and sync files to and from local disk, SFTP servers, and many cloud storage providers. Rclone’s Backblaze B2 Cloud Storage page has many examples of configuration and options with Backblaze B2.

Continued Steps on the Path to rclone Mastery

In our in-depth webinar with Nick Craig-Wood, developer and principal maintainer of rclone, we discussed a number of power moves you can use with rclone and Backblaze B2. This post takes it a number of steps further with five more advanced techniques to add to your rclone mastery toolkit.
Have you tried these and have a different take? Just trying them out for the first time? We hope to hear more and learn more from you in the comments.

Use --track-renames to Save Bandwidth and Increase Data Movement Speed

If you’re moving files constantly from disk to the cloud, you know that your users frequently re-organize and rename folders and files on local storage. That means that when it’s time to back up those renamed folders and files again, your object storage will see them as new objects and expect you to re-upload them all over again.

Rclone is smart enough to take advantage of Backblaze B2 Native APIs for remote copy functionality, which saves you from re-uploading files that are simply renamed and not otherwise changed.

By specifying the --track-renames flag, rclone will keep track of file size and hashes during operations. When source and destination files match, but the names are different, rclone will simply copy them over on the server side with the new name, saving you having to upload the object again. Use the --progress or --verbose flags to see these remote copy messages in the log.

rclone sync /Volumes/LocalAssets b2:cloud-backup-bucket \
--track-renames --progress --verbose

2020-10-22 17:03:26 INFO : customer artwork/145.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/159.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/163.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/172.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/151.jpg: Copied (server side copy)

With the --track-renames flag, you’ll see messages like these when renamed files are simply copied on the server side instead of being re-uploaded.

 

Easily Generate Formatted Storage Migration Reports

When migrating data to Backblaze B2, it’s good practice to inventory the data about to be moved, then afterwards get reporting that confirms every byte made it over properly.
For example, you could use the rclone lsf -R command to recursively list the contents of your source and destination storage buckets, compare the results, then save the reports as a simple comma-separated values (CSV) list. This list is then easily parsed and processed by your reporting tool of choice.

rclone lsf --csv --format ps amzns3:/customer-archive-source
159.jpg,41034
163.jpg,29291
172.jpg,54658
173.jpg,47175
176.jpg,70937
177.jpg,42570
179.jpg,64588
180.jpg,71729
181.jpg,63601
184.jpg,56060
185.jpg,49899
186.jpg,60051
187.jpg,51743
189.jpg,60050

rclone lsf --csv --format ps b2:/customer-archive-destination
159.jpg,41034
163.jpg,29291
172.jpg,54658
173.jpg,47175
176.jpg,70937
177.jpg,42570
179.jpg,64588
180.jpg,71729
181.jpg,63601
184.jpg,56060
185.jpg,49899
186.jpg,60051
187.jpg,51743
189.jpg,60050

Example CSV output of file names and file sizes in source and destination folders.
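
One quick way to confirm that the two listings match is to compare them directly in the shell. A minimal sketch, assuming bash or another shell with process substitution:

diff <(rclone lsf --csv --format ps amzns3:/customer-archive-source) \
<(rclone lsf --csv --format ps b2:/customer-archive-destination) \
&& echo "source and destination match"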

 
You can even feed the results of regular storage operations into a system dashboard or reporting tool by specifying JSON output with the --use-json-log flag.

In the following example, we want to build a report listing missing files in either the source or the destination location.
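
A check command along these lines produces that report—check and the --use-json-log flag are standard rclone options, though the bucket paths here are inferred from the log output that follows:

rclone check b2:travel_posters_source b2:customer_archive_destination \
--use-json-log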

The resulting log messages make it clear that the comparison failed. The JSON format lets me easily select log warning levels, timestamps, and file names for further action.

{"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"216.jpg","objectType":"*b2.Object","source":"operations/check.go:100","time":"2020-10-23T16:07:35.005055-05:00"}
{"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"219.jpg","objectType":"*b2.Object","source":"operations/check.go:100","time":"2020-10-23T16:07:35.005151-05:00"}
{"level":"error","msg":"File not in parent bucket path travel_posters_source","object":".DS_Store","objectType":"*b2.Object","source":"operations/check.go:78","time":"2020-10-23T16:07:35.005192-05:00"}
{"level":"warning","msg":"12 files missing","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:225","time":"2020-10-23T16:07:35.005643-05:00"}
{"level":"warning","msg":"1 files missing","object":"parent bucket path travel_posters_source","objectType":"*b2.Fs","source":"operations/check.go:228","time":"2020-10-23T16:07:35.005714-05:00"}
{"level":"warning","msg":"13 differences found","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:231","time":"2020-10-23T16:07:35.005746-05:00"}
{"level":"warning","msg":"13 errors while checking","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:233","time":"2020-10-23T16:07:35.005779-05:00"}
{"level":"warning","msg":"28 matching files","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:239","time":"2020-10-23T16:07:35.005805-05:00"}
2020/10/23 16:07:35 Failed to check with 14 errors: last error was: 13 differences found

Example: JSON output from rclone check command comparing two data locations.

 

Use a Static Exclude File to Ban File System Lint

While rclone has a host of flags you can specify on the fly to match or exclude files for a data copy or sync task, it’s hard to remember all the operating system or transient files that can clutter up your cloud storage. Who hasn’t had to laboriously delete macOS’s hidden folder view settings (.DS_Store), or Windows’ ubiquitous thumbnails database, from your pristine cloud storage?

By building your own customized exclude file of all the files you never want to copy, you can effortlessly exclude all such files in a single flag to consistently keep your storage buckets lint free.
In the following example, I’ve saved a text file under my user directory’s .rclone folder and call it with --exclude-from rather than using --exclude (as I would if filtering on the fly):

rclone sync /Volumes/LocalAssets b2:cloud-backup-bucket \
--exclude-from ~/.rclone/exclude.conf

.DS_Store
.thumbnails/**
.vagrant/**
.gitignore
.git/**
.Trashes/**
.apdisk
.com.apple.timemachine.*
.fseventsd/**
.DocumentRevisions-V100/**
.TemporaryItems/**
.Spotlight-V100/**
.localization/**
TheVolumeSettingsFolder/**
$RECYCLE.BIN/**
System Volume Information/**

Example of exclude.conf listing all of the files you never want to sync or copy, including Apple storage system tags, Trash files, git files, and more.
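
Before trusting a new exclude file with a real sync, you can preview exactly which files rclone would still transfer—a quick sanity check reusing the same paths as above:

rclone ls /Volumes/LocalAssets --exclude-from ~/.rclone/exclude.conf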

 

Mount a Cloud Storage Bucket or Folder as a Local Disk

Rclone takes your cloud-fu to a truly new level with these last two moves.

Since Backblaze B2 is active storage (all contents are immediately available) and extremely cost-effective compared to other media archive solutions, it’s become a very popular archive destination for media.

Mounting extremely large archives as if they were massive external disks on your server or workstation makes visual searching through object storage—along with a whole host of other possibilities—a reality.

For example, suppose you are tasked with keeping a large network of digital signage kiosks up-to-date. Rather than trying to push from your source location to each and every kiosk, let the kiosks pull from your single, always up-to-date archive in Backblaze!

With FUSE installed on your system, rclone can mount your cloud storage to a mount point on your system or server’s OS. It will appear instantly, and your OS will start building thumbnails and let you preview the files normally.

rclone mount b2:art_assets/video ~/Documents/rclone_mnt/

Almost immediately after mounting this cloud storage bucket of HD and 4K video, macOS has built thumbnails, and even lets me preview these high-resolution video files.

 
Behind the scenes, rclone’s clever use of VFS and caching makes this magic happen. You can tweak settings to more aggressively cache the object structure for your use case.
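
For example, a mount meant for browsing a large, mostly static media archive might hold directory listings longer and read files in bigger chunks. These are standard rclone VFS flags, though the values shown are just starting points to tune for your own use case:

rclone mount b2:art_assets/video ~/Documents/rclone_mnt/ \
--vfs-cache-mode full \
--dir-cache-time 24h \
--vfs-read-chunk-size 128M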

Serve Content Directly From Cloud Storage With a Pop-up Web or SFTP Server

Many times, you’re called on to give users temporary access to certain cloud files quickly. Whether it’s for an approval or a file handoff, this requires thinking about how to get the file to a place where the user can access it with tools they know how to use. Trying to email a 100GB file is no fun, and downloading it and moving it to another system the user can access can eat up a lot of time.

Or perhaps you’d like to set up a simple, uncomplicated way to let users browse a large PDF library of product documents. Instead of moving files to a dedicated SFTP or web server, simply serve them directly from your cloud storage archive with rclone using a single command.

Rclone’s serve command can present your content stored with Backblaze over a range of protocols—including FTP, SFTP, WebDAV, HTTP, and HTTPS—making it as easy for users to access as browsing the web.

In the following example, I export the contents of the same folder of high-resolution video used above and present it using the WebDAV protocol. With zero HTML or complicated server setups, my users instantly get web access to this content, and even a searchable interface:

rclone serve webdav b2:art_assets/video
2020/10/23 17:13:59 NOTICE: B2 bucket art_assets/video: WebDav Server started on http://127.0.0.1:8080/

Immediately after exporting my cloud storage folder via WebDAV, users can browse to my system and search for all “ProRes” files and download exactly what they need.

 
For more advanced needs, you can choose the HTTP or HTTPS option and specify custom data flags that populate web page templates automatically.
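
For instance, rclone’s HTTP server accepts a listen address and a custom page template—both real flags, though the template path here is hypothetical:

rclone serve http b2:art_assets/video --addr :8080 --template ~/templates/catalog.html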

Continuing Your Study

Combined with our rclone webinar, these five moves will place you well on your path to rclone storage admin mastery, letting you confidently take on complicated data migration tasks with an ease and efficiency that will amaze your peers.

We look forward to hearing of the moves and new use cases you develop with these tools.

The post Rclone Power Moves for Backblaze B2 Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Not So Suite: Dealing With Google’s New 2TB Caps

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/not-so-suite-dealing-with-googles-new-2tb-caps/

It’s easy to get used to “all you can eat” data plans—and one of the biggest justifications to use G Suite until now was that users could store as much as they wanted. But when we have unlimited data, we tend to forget about how much our content is growing until someone tells us our unlimited data plan is now… limited.

So it was a bit of a shock for lots of G Suite users to learn that they now only get 2TB per user for their $12 per user per month plan.

Hat tip to Jacob Hands who alerted us about this on Twitter!

G Suite users have to upgrade to the Enterprise class of service to retain unlimited storage. It’s unclear how much that costs because their pricing chart refers you to a sales representative if you want to get a quote. But as is true in restaurants: If you need to ask, it’s probably more expensive than you’d care to know.

If you’ve been using G Suite for long, and especially if you work with large data sets or rich media, you’re probably using more than 2TB per user. You’re going to need a plan to not only reduce your storage footprint on Google, but also safely store the content you’re forced to move while making it available and useful for your users. What do you do?

Side Note: Backblaze has proudly offered unlimited backup plans at a fixed price for close to 14 years, and we’ll continue to do so. This article focuses on solutions for teams using G Suite for collaboration. If you just need a solid backup, check out our guide on backing up your G Suite data. If you’re looking for an incredible cloud storage offering, read on to learn about Backblaze B2 Cloud Storage.

Take Control of Your Shared User Content

You can make the largest reduction quickly by shifting videos, image libraries, and data sets out of Google Drive and into Backblaze B2 Cloud Storage!

Backblaze B2, of course, is our easy-to-use cloud storage. It stores everything you want to protect at only $5/TB per month, and makes everything immediately available the instant you need it.

Getting started is as simple as signing up. From there, you can upload files and browse them in Backblaze’s web interface, or use any one of hundreds of solutions that incorporate Backblaze B2 seamlessly, such as the popular (and free) Cyberduck file transfer app.

With your Backblaze B2 account set up, it’s time to start pruning files in Google Drive and preparing them for transfer!

Step One: Take an Inventory of What You Have in Google Drive

Back in Google Drive, organize your efforts by file size—in other words, move the biggest stuff first. The simplest way to uncover large files is to use a not-so-obvious search feature to filter by file type: ZIP archives, videos, and photos will almost surely be filling the most space. To search by type, click the tiny triangle at the right of the search field to reveal a file type dropdown.

Using Google Drive’s search field, and the dropdown triangle, you can specify large files to move manually.
Advanced Tips: If you’re reasonably proficient and have a ton of shared files to dig through, there are a few handy tools you can use to tackle this step.

Cyberduck: In Cyberduck, for example, you can add a new Bookmark for Google Drive, and follow the wizard to authenticate to G Suite. Now you can browse Team Drives, Shared Documents, and My Drive contents easily, identify the largest files, or simply move all of this content to local storage, then into your Backblaze B2 account.

Rclone: Rclone offers another way to mount and copy all of this content off of Google Drive as well, though that path is only recommended for more advanced users. You can then start moving the largest files, or even copy your entire Google Drive folder of content with rclone.
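
As a sketch of what that path looks like—assuming you’ve already run rclone config to define a Google Drive remote named gdrive and a Backblaze B2 remote named b2, and that the destination bucket exists:

rclone copy gdrive: b2:gsuite-archive-bucket --progress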

Step Two: Migrate Your Data to Backblaze B2

Now it’s time to carry out your plan: download from G Suite, copy to Backblaze B2 Cloud Storage Buckets, and organize your data! Once your files are safely downloaded and then uploaded to your Backblaze B2 account, it’s safe to remove them from Google Drive.

Step Three: Give Your Users Remote-friendly, yet Managed Storage

If you’re a solo operator, you should be all set. But if you’re working with a team, you’re going to want to take one more step to help out your users. The best way to present storage to your team is a solution that is both accessible for remote workers, yet also easily administered by you.

For Small Teams (3-20 Users): In Backblaze B2, you can define your users into workgroups, create Buckets of storage for them, and issue app keys for users that correspond to those Buckets—all from your account management page. Users can apply those app keys to a number of tools to make their storage available on their laptops, such as Cyberduck, Mountain Duck, or ExpanDrive.

For Larger Teams (21-Infinity): You can present fully managed storage to your users that offers the same experience as corporate, shared NAS storage with a pairing of LucidLink and Backblaze B2. This approach can be set up quickly and can tie into your company’s directory services for provisioning, without having to buy and set up your own hardware NAS systems.

Welcome!

We hope you’ll join us—we look forward to protecting your content and helping you serve your users!

The post Not So Suite: Dealing With Google’s New 2TB Caps appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon Drive and Third Parties—Derailed

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/amazon-drive-and-third-parties-derailed/


If you ever used Amazon Drive for storing files or photos, it’s a good time to think about how to transition your content to a new platform—especially if you had been backing up your Synology NAS system!

Today, Amazon notified Amazon Drive and Amazon Photos customers that beginning November 1st, only Amazon’s proprietary web and mobile apps will be able to access their files.

This means, for example, that Synology users who had relied on Synology Cloud Sync or HyperBackup to back up their systems to Amazon Drive will have their access shut off via those tools.

Getting Back on Track

If you’re still using Amazon Drive to store general files, and Amazon Photos to store photos, you might be wondering how to protect that content with a tool that you prefer before the November 1st deadline hits.

1. Recover Your Content

Your first task will be to recover all of your content from Amazon Drive. From Amazon’s website, download the Amazon Photos app, install it, and select “Download” to download all of your Amazon Drive content locally. To download photos stored with Amazon, you may find it helpful to click “Home,” then “Photos Backed Up,” which takes you to a webpage that lets you download photos directly—say, by year.

First, recover all of your content from Amazon Photos and Amazon Drive.

2. Welcome to Your New Platform

With your content stored locally, we invite you to try Backblaze B2 Cloud Storage: unlimited storage for all of your files and photos at a better price than Amazon Drive.

Sign up for your Backblaze B2 account first. Your first 10GB of storage every month is free, and beyond that is only $5 per terabyte of storage per month, vs. Amazon’s $6.99 for a terabyte of storage.

3. Choose the Tool That Fits How You Work

Best of all, with Backblaze B2 you have a choice of over 60 solutions to connect to your new account!

If you’re a Synology user, you can keep using Synology Cloud Sync or HyperBackup to back up your files—simply select your new Backblaze account instead of Amazon Drive.

If you prefer graphical tools that help present your cloud storage as files and folders, Cyberduck is a great choice, and Mountain Duck will even mount your Backblaze B2 account as drives on your Mac or Windows system.

You can browse our guides for all integration tools here.

Cyberduck connected to your Backblaze B2 account makes it as simple as browsing files and folders to upload and download your files.
Mountain Duck will even mount your Backblaze B2 cloud storage on your computer as a drive—here showing a thumbnail of an 8K video clip.

And if you prefer command-line tools, rclone is an excellent choice, as is Backblaze’s own command-line tool.

For more information about using rclone, join our webinar, “Tapping the Power of Cloud Copy & Sync with Rclone” on September 17th. Rclone’s creator, Nick Craig-Wood, will explain how to use its simple command line interface to:

  • Optimize your copy/sync in line with best practices
  • Mirror storage for security without adding complexity
  • Transfer data reliably despite limited bandwidth and/or intermittent connection

Whichever tool you choose, getting it set up is as simple as visiting your Backblaze B2 Account Page, generating an Application Key, and entering the Application Key ID and Application Key in your new tool’s configuration settings.
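
With rclone, for example, that configuration is a single command once you have your key in hand—the remote name is your choice, and the key values shown are placeholders:

rclone config create myb2 b2 account YOUR_APPLICATION_KEY_ID key YOUR_APPLICATION_KEY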

4. Protect and Access Your Content Freely

With your tool of choice configured, it’s time to move your local content to your new Backblaze B2 Cloud Storage.

Welcome!

We hope you’ll join us—we look forward to protecting your files and photos!

The post Amazon Drive and Third Parties—Derailed appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Media Stats 2019: Top Takeaways From iconik’s New Report

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/media-stats-2019-top-takeaways-from-iconiks-new-report/

Recently, the team at iconik, a popular cloud-based content management and collaboration app, released a stats-driven look at how their business has grown over the past year. Given that we just released our Q1 Hard Drive Stats, we thought now was a good time to salute our partners at iconik for joining us in sharing business intelligence to help our industries grow and progress.

Their report is a fascinating look inside a disruptive business that is a major driver of growth for Backblaze B2 Cloud Storage. With that in mind, we wanted to share our top takeaways from their report and highlight key trends that will dramatically impact businesses soon—if they haven’t already.

Takeaway 1: Workflow Applications in the Cloud Unlock Accelerated Growth

iconik doubled all assets in the final quarter of 2019 alone.

Traditional workflow apps thrive in the cloud when paired with active, object storage.

We’ve had many customers adopt iconik with Backblaze B2, including Everwell, Fin Films, and Complex Networks, among several others. Each of these customers not only converted quickly to an agile, cloud-enabled workflow, but also immediately grew their use of cloud storage as the capacity it unlocked fueled new business. As such, it’s no surprise that iconik is growing fast, doubling all assets in Q4 2019 alone.

iconik is a prime example of an application that was traditionally installed on physical servers and storage in a facility. A longtime frustration with such systems is trying to ‘right-size’ the amount of server horsepower and storage to allocate to the system. Given how quickly content grows, making the wrong storage choice could be incredibly costly, or incredibly disruptive to your users as the system ‘hits the wall’ of capacity and the storage needs to be expanded frequently.

By moving the entire application to the cloud, users get the best of all worlds: a responsive and immersive application that keeps them focused on collaboration and production tasks, protection for the entire content library while keeping it immediately retrievable, and seamless growth to any size needed without any disruptions.

And these are only the benefits of moving your storage solution to the cloud. Almost every other application in your workflow that traditionally needs on-site servers and storage can be similarly shifted to the cloud, lending benefits like “pay-as-you-use-it” cost models, access from everywhere, and the ability to extend features with other cloud delivered services like transcoding, machine learning, AI services, and more. (Our own B2 Cloud Storage service just launched S3 Compatible APIs, which allows infinitely more solutions for diverse workflows.)

Takeaway 2: Now, Every Company Is a Media Company

41% of iconik’s customer base are not from traditional media and entertainment entities.

Every company benefits by leveraging the power of collaboration and content management in their business.

Every company generates massive amounts of rich content, including graphics, video, product and sales literature, training videos, social media clips, and more. And every company fights ‘content sprawl’ as documents are duplicated, stored on different departments’ servers, and different versions crop up. Keeping that content organized, and ensuring that your entire organization has ready access to up-to-the-minute changes across all of it, is easily done in iconik—and such businesses now account for 41% of its customers.

Even if your company is not an ad agency, or involved in film and television production, thinking and moving like a content producer and organizing around efficient and collaborative storytelling can transform your business. By doing so, you will immediately improve how your company creates, organizes, and updates the content that carries your image and story to your end users and customers. The end result is faster, more responsive, and cleaner messaging to your end users.

Takeaway 3: Solve For Video First

Video is 17.67% of all assets in iconik—but 78.36% of storage used.

Make sure your workflow tools and storage are optimized for video first to head off future scaling challenges.

Despite being a small proportion of the content in iconik’s system, video takes up the most storage.
While most customers have large libraries of HD or even SD content now, 4K video is rapidly gaining ground as it becomes the default resolution.

Video files have traditionally been the hardest element of a workflow to balance. Most shared storage systems can serve several editors working on HD streams, but only one or two 4K editors. So a system that proves that it can handle larger video files seamlessly will be able to scale as these resolution sizes continue to grow.

If you’re evaluating changes in your content production workflow, make sure that it can handle 4K video sizes and above, even if you’re predominantly managing HD content today.

Takeaway 4: Hybrid Cloud Needs to Be Transparent

47% of content stored locally, 53% in cloud storage.

Great solutions transparently bridge on-site and cloud storage, giving you the best features of each.

iconik’s report calls out the split of the storage location for assets it stores—whether on-site, or in the cloud. But the story behind the numbers reveals a deeper message.

Where assets are stored as part of a hybrid-cloud solution is a bit more complex. Assets in heavy use may exist locally only, while others might be stored on both local storage and the cloud, and the least often used assets might exist only in the cloud. And then, many customers choose to forego local storage completely and only work with content stored in the cloud.

While that may sound complex, the power of iconik’s implementation is that users don’t need to know—and shouldn’t need to know—about all that complexity. iconik keeps a single reference to the asset no matter how many copies there are, or where they are stored. Creative users simply use the solution as their interface as they move their content through production, internal approval, and handoff.

Meanwhile, admin users can easily make decisions about shifting content to the cloud, or move content back from cloud storage to local storage. This means that current projects are quickly retrieved from local storage, then when the project is finished the files can move to the cloud, freeing up space on local storage for other active projects.

For customers working with Backblaze B2, the cloud storage expands to whatever size is needed on a simple, transparent pricing model. And it is fully active—in other words, it’s immediately retrievable within the iconik interface. In this way it functions as a “live” archive as opposed to offline content archives like LTO tape libraries, or a cold storage cloud which could require days for file retrieval. As such, using ‘active’ cloud storage like Backblaze B2 eases the admin’s decision-making process about what to keep, and where to keep it. With transparent cloud storage, they have the insight needed to effectively scale their data.

Looking into Your (Business) Future

iconik’s report confirms a number of trends we’ve been seeing as every business comes to terms with the full potential and benefits of adopting cloud-based solutions:

  • The dominance of video content.
  • The need for transparent reporting and visibility of the location of data.
  • The fact that we’re all in the media business now.
  • And that cloud storage will unlock unanticipated growth.

Given all we can glean from this first report, we can’t wait for the next one.

But don’t take our word for it—dig into their numbers and let us and iconik know what you think. Tell us how these takeaways might help your business in the coming year, or where we might have missed something. We hope to see you in the comments.

The post Media Stats 2019: Top Takeaways From iconik’s New Report appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

TV Insider Tips to Upscale Your Next Video Conference

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/tv-insider-tips-to-upscale-your-next-video-conference/


These tips were compiled with the help of:

Video conferencing is absolutely everywhere now, and for good reason! Social distancing appears to be helping “flatten the curve” around the world, and the ability to meet over video has certainly played its part.

Seeing a ring of friendly faces on our calls has proven a fantastic way to bring teams, families, and friends together—even if we’re all calling from our kitchen tables instead of our offices, or chatting from the couch instead of together at a coffee shop. And it’s not just us normal folks—it’s now common to see chat show presenters and cable TV interviews delivered from laptops and iPads as well.

We’re lucky to work with a lot of folks on staff and among our customers who’ve spent careers helping people look good on camera. So we figured, since everyone’s thinking about how to put their best faces forward, we’d reach out to some of our friends who produce beautiful video and images for a living and collect their top tips to bring TV studio magic to your own video conferencing, or even prepare for your own remote TV interview.

As we cover each area where you can upgrade your setup, we’ll provide tips that cost little or nothing, and we’ll also call out options where you can get more dramatic results for modest expense.

The Expert’s Guide to Upping your Video Conference Game

Before you start to worry about lighting your “good side,” let’s make sure you’ve got the fundamentals dialed in first. You want to be sure that your “stage”—including baselines like network bandwidth, power, and your computer’s desktop—is set for success. No matter how well you set the scene, if your shot is blurry because the rest of your housemates are streaming 4K video, your work will have been for nothing.

Equipment Check

Mount

Most of us use our laptops for video conferencing, but iPhones and iPads are great self-contained video streaming devices too. Whichever you use, make sure that you have them on a stable platform. Even mild shaking as you type away or move at your desk can translate to jittery video for everyone else, distracting your audience from you and what you’re presenting.

Network

Having a solid network connection is one of the most important factors in your setup. If you’re on WiFi, see if you can prioritize your connection using your router’s Quality of Service feature, or move everyone not presenting to their own network. Use a 5GHz network for work, and a 2.4GHz network for, well, everything else.

If you’re on a laptop, try to get a wired ethernet connection for the biggest and most forgiving bandwidth. If you’re like most people and run a 100% WiFi household, don’t worry, this isn’t as hard as it sounds. Your WiFi router likely has an ethernet port you can use to get a direct, wired connection to get the maximum bandwidth. Of course, depending on the computer you have, you may need an adapter and possibly a long ethernet cable to complete this fix.

HowToGeek’s Home Router Guide to Quality of Service.

Power

It may seem obvious, but make sure everything’s plugged into power or at least has a healthy charge—your computer, Bluetooth keyboard and mouse, everything—before starting that critical video conference or webinar. And be sure to temporarily set your laptop to “no sleep mode” to avoid suddenly seeing a lock screen on your computer while you’re talking!

Good Housekeeping

If there’s even a chance you’ll be sharing your screen during your presentation, you should clean up your desktop icons (you should really do this anyway) and swap in a solid color desktop background, which can actually reduce video compression artifacts (all of that weird stuff you see on the screen when your connection is bad).

Also make sure to close out applications and web browser tabs you don’t need for your presentation, and turn off notifications so the whole office doesn’t see you playing Buzzword Bingo during your next video conference!

Turn off notifications on a Mac: Option-Click on notifications in the menu bar.

Camouflage on Apple App Store.

Camera’s Ready, Set… Rolling!

Those tiny cameras in laptops, phones, and tablets sit on the edge of the device, but since you tend to look at the middle of your screen during a web conference—where your colleagues’ faces are—you’ll want to raise your laptop so that the camera is at eye level while you’re speaking. This way, you’re not looking down on, or up at, everyone else in the conference. When you’re talking or presenting, speak to the camera as if it actually is the person you’re speaking to; it may help to put a smiley face on a Post-it under your camera to help you remember.

Or, you can actually use your iPhone or iPad as an external camera instead of using the one in your laptop thanks to some clever apps like EpocCam by Kinoni. As long as your iPhone is on the same WiFi network as your laptop, or connected via your USB cable, you can select it as a new camera in your video conferencing app.

For the best results possible, add a dedicated external camera. While some go for purpose-built webcams like the Logitech C920, or even action cameras like the GoPro Hero, you might be able to turn your existing DSLR camera into a TV studio quality workhorse by mounting it on a mini-tripod on your desk, connecting it to your laptop with an HDMI cable, and putting the camera into streaming mode. You may also need to download an app from your camera’s manufacturer to put it in streaming mode; specific steps vary for each camera platform.

The new breed of mirrorless cameras makes a fantastic external camera for your video conferencing needs. They also double as a full-featured still photo and movie camera you can take anywhere.

YouTube and Twitch streaming stars find this to be the single best investment for making their streams look as professional as possible. Some popular choices here are the Canon M200 and the Canon M6 Mark II, and Nikon and Sony have similar offerings. And when not connected to your laptop for conferencing, you have an amazing still and video camera—something you can’t say of dedicated streaming video cameras.

Soundcheck

For the most part, the sound and microphone in your laptop do a great job of picking up your voice and playing back the conference audio. If you’re in a noisy environment, you’ll want to use your headphones to isolate the conference audio, and be sure to hit that mute button when you’re not speaking. Run a test with a friend and check for room audio problems like the background noise of a fan or refrigerator, or boominess. Moving just a few feet to a different location can make a world of difference.

Podcaster style microphones plug into your laptop’s USB port and deliver warm, rich radio-style sound.

But to really upgrade the sound of your voice, you’ll want a much better microphone than the tiny one buried in your laptop or phone. Here, we take a top tip from podcasters and make a modest investment in podcast style microphones. Blue makes superb microphones such as the Yeti and Snowball that help make you sound like a radio star.

Blue Microphones.

Check Your Six

Take a cue from TV studio set designers: it pays to think about what else you’re showing on the screen. What’s behind you? To keep the focus on you and what you’re saying, situate yourself against a plain background and reduce visible clutter. Some find that room dividers or Shoji-style screens work well, too. Or, you may want to decorate your office “stage” with a more professional background, like a bookcase tastefully arranged with awards and recognition. For the most part, a simple, non-distracting background is what you’re going for.

Digital backgrounds can drop in a professional background, or transform your image into any number of fun avatars—just be sure to practice how to toggle it on and off before your next big video conference!

Or, make the leap into digital set design. Some conferencing apps like Zoom let you swap in a picture to replace your background, digitally. (Just be sure to practice turning it off and on at the risk of delivering your next status report as a potato.)

If you have the room, you can pair a simple photographer’s backdrop and frame to turn your home office into a home studio suitable for filming testimonials and interviews. If you also add a greenscreen, your Zoom backgrounds will look as good as the cable news channels.

Zoom Virtual Backgrounds.

Snap Camera to choose custom backgrounds and filters that work for Zoom, Google Hangouts, etc.

How to Set Up Your Collapsible Background.

Lights

Good lighting takes your web conference video from flat to dramatic. Your room’s interior lights are designed to light your room, not your face. Any lights beaming directly into your laptop camera will confuse it, and it will constantly shift exposure trying to compensate for video hot spots.

If you have good natural daylight available from a window, try repositioning your laptop with different angles to make sure that the sun shines on you, and not in your viewer’s eyes. An indirect angle on your face works best. Sheer drapes will soften any harsh light.

Inexpensive LED ring lights have many uses and help deliver studio-quality face lighting.

Even with good daylight, a large LED light ring can deliver “I’m ready for my close-up” lighting. For the most flattering image that really pops on-screen, you can easily adopt a classic two or three light photographer’s setup. You can arrange desk lamps and floor lamps, or buy inexpensive clamp lights to create a high key light angled at your face, a lower angled fill or low key light, and optionally, another light behind you that shines up to help fill your background.

The results with even common household lights are dramatic. For modest expense, you can add clamp lights with fluorescent or LED “daylight” bulbs, and you can easily soften shadows by clamping translucent paper over the front of the bulb.

Entry level three point lighting kits are surprisingly inexpensive and give your video conferencing or video blogging setup a dramatic look.

If you’ll be doing a lot of streaming, you can get an inexpensive three point lighting kit that can be moved and set up anywhere quickly.

RocketJump’s excellent eight minute Lighting 101 Tutorial:

Ready for Your Close-Up?

Now, what to wear? One top tip: don’t wear black, lest you appear as a floating head. Try to position your camera far enough away so that your head fills about two thirds of the screen and the tops of your shoulders are visible. Then pull your shirt down in the back to smooth out wrinkles.

And finally, for the full TV star treatment, eye-brightening or allergy relief drops will make your eyes their whitest, simple bronzer or self-tanner will add color to your face, and if you see lights bouncing off bright spots on your forehead, video producers everywhere swear by Neutrogena Matte finish applied with foam wedges.

How to Apply Camera-Ready Makeup for Men and Women.

PhotoJoseph’s Makeup Tutorial:

3, 2, 1…

Finally, it’s good practice to prepare for the unexpected. Write down important dial-in codes or URLs in case your internet or power conks out and you have to rejoin quickly from your phone. Have a glass of water within reach, and place a note on your door that you’re in a conference.

And, Action!

Yet no matter how carefully we prepare, life happens. So when your dog jumps in your lap, or your suddenly homeschooled office mate gives you their art project to look at—it’s ok! We’re all going through it too, and we’re all in it together!

What tips or stories can you share? Comment below!

The post TV Insider Tips to Upscale Your Next Video Conference appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Speed of Collaboration: How Industrious Films Adopted Cloud Storage

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/how-industrious-films-adopted-cloud-storage/

Industrious Films video collaboration

For Additional Information: This post is one of a series focusing on solutions for professionals in media and entertainment. If you’d like to learn more, we’re hosting a series of webinars about these solutions over the coming months. Please join us for our first on March 26 with iconik!

Jeff Nicosia, owner and founder of Industrious Films, will be the first to tell you that his company of creatives is typical of the new, modern creative agency. They make their name and reputation every day by delivering big advertising agency creative services directly to clients without all the extra expense of big advertising agency overhead and excess.

Part of this lighter approach includes taking a more flexible attitude towards building their team. With decades of experience at some of the best-known agencies in LA and New York, Industrious Films knows that the best people for their projects are spread out all over the country, so they employ a distributed set of creatives on almost every job.

But with entire teams working remotely, they need tools that boost collaboration and reduce the inefficiency of sending huge media files back and forth as everyone pushes to meet client delivery deadlines.

Backblaze hired Industrious Films to produce videos for our NAB booth presence last year, and during our collaborative process we introduced Backblaze B2 Cloud Storage to their team. For this group of road-tested veterans, the potential for our cloud storage to help their process was eye-opening indeed.

How Cloud Storage Has Impacted Industrious Films Workflow

As we re-engaged with Industrious Films to work on new projects this year, we wanted to hear Jeff’s thoughts on what cloud storage has accomplished for his team, and what it was like before they started using B2.

Industrious Films Shooting a Video for Quantum

Skip Levens: Jeff, can you tell me about the niche that Industrious Films has carved out, and what your team is like?

Jeff Nicosia: Industrious Films brings the best of advertising agency video production directly to companies by eliminating the middleman. We tell customer and company stories really well, with craft, and have found a way to do it at a price that lets companies do a lot more video in their marketing vs. a once-a-year luxury. We’ve really found our niche in telling company stories and customer videos, working for companies like Quantum, Backblaze (of course), DDN, Thanx, Unisan, ExtraHop and tons more.

We’re all creatives that worked at ad agencies, design studios, post houses, etc. We’re spread out but come together for projects all over the country, and actually the world. Right now I’m in Manhattan Beach (Los Angeles, CA) while our main editor is on the other side of LA—25 minutes or 2 hours by car away depending on time of day—and our main graphics editor is in Iowa. Oh, and our colorist is either in Los Angeles or Brazil, depending on the time of year.

As for shooting we use sound guys, shooters, PA’s, etc., either from LA, or we hire locally wherever we’re shooting the video. We have crews we have collaborated with on multiple occasions in LA, Seattle, New York, London, and San Francisco. I actually shot a timelapse of a fairly typical shoot day: “A 14-Hour Shoot in 60s” to give you an idea of what it’s like.

SL: Jeff, before we talk about how you adopted Backblaze B2 and cloud storage in general, can you paint a picture of what it’s usually like to shoot and deliver a large video project like the one you created for us?

JN: It’s a never-ending exchange of hard drives and bouncing between Dropbox, Box, Google Drive, and what have you, as everyone is swapping files and sending updates. We’re also chasing customers and asking, “Did you get the file?” Or, “Did you send the file?” All of this was hard enough when video size was HD—now, when everything’s 4K or higher it just doesn’t work at all. A single 4K RAW file of 3-4GB might take up an entire Google Drive allowance, and it gets very expensive to save to Google Drive beyond that size. We’ve spent an entire day trying to upload a single critical file that looks like its uploading, then have it crap out hours later. At that point, we’ve just wasted a day and we’re back to slapping files on a hard drive and trying to make the FedEx cutoff.

“Any small business or creative endeavor has to be remote nowadays. You want to hire the best people, no matter where they are. When people live where they want and work out of a home office they charge less for their rates—that’s how we deliver a full-service ad agency and video production service at the prices we can.”

SL: I remember, from working together on other projects, that we were constantly swapping hard drives and saying, “Is this one yours?” Or finally seeing you again years later, and handing you back your hard drive.

JN: Right! It’s so common. And you can’t just put files on a hard drive and ship it. We’ve had overnight services lose drives on us enough times that we’ve learned to always make extra copies on entirely new hard drives before sending a drive out. It’s always a time crunch and you have to make sure you have a spare drive and that it’s big enough. And you just know that when you send it to a client you’re never going to see that drive again. It’s a cost of business, and hundreds of dollars a month just gone—or at least it used to be. I’ve spent way too much time stalking Best Buy buying extra hard drives when there’s a sale because we were constantly buying drives.

SL: So that was the mindset when we kicked off our NAB Video Project last year (for NAB 2019) and I said, instead of handing you a hard drive with all of our B-roll, logos, etc., let’s use Backblaze B2.

Technical Note: I helped Industrious Films set up three buckets: a private bucket that I issued write-only app keys for (basically a ‘drop bucket’ for any incoming content); a private bucket for everyone on the project to access; and a public bucket for sharing review files directly from Backblaze if needed.

Next, I cut app keys for Industrious Films that spanned all three buckets so that they could easily organize and rearrange content as needed. I entered the app key into a Cyberduck bookmark, and gave Industrious Films the bookmark to drop into their own copies of Cyberduck.
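
For the curious, a write-only key like the one described for the drop bucket can be cut with Backblaze’s b2 command line tool—the bucket and key names here are illustrative:

b2 create-key --bucket incoming-drop-bucket drop-only-key listBuckets,writeFiles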

Industrious Films on Set

JN: Well, we work for technical clients, but I’m not really a technical guy. And my team are all creatives, not techies, so anything we use has to be incredibly simple. I wasn’t sure how it was going to work. Most of us were familiar with FTP clients, and this interface looks like the files and folders we’d see on a typical shared storage server, so it was very easy to adapt.

“Even though I have a background in tech, I’ve worked in technology companies, and my customers are tech companies, I’m not a tech savvy guy at all and I don’t want to be. So the tools I use have to be simple and let me get on with telling my customer’s story.”

Everyone on my team works out of their home offices, or shared workspaces. I’ve got a 100 Megabit connection, up and down, and our graphics guy has the same—and he’s in the middle of Iowa. We each started uploading files in Cyberduck, then we jumped on a Skype call together and watched 6GB files fly across and we were just blown away. We just couldn’t believe that this was cloud storage, and it seemed like the more we put in, the faster it got. Our graphics guy was just raving about it, trying out bigger and bigger file uploads. He was freaking out—he kept saying, “What kind of secret sauce do these guys have!?”

SL: Can you tell me how the team adjusted to using a shared bucket? What did collaboration look like?

JN: First of all, since we had a files and folders interface, I jumped right in and did the usual organization of assets. One folder for Backblaze customer video reels, one for Backblaze B-roll, one for logos, one for audio, one for storyboards, motion graphics templates, etc. Then everyone downloads what they need from the folder to work locally, and puts changed and finished files back up in the shared bucket for everyone to see. That way we can review on the fly.

I sync everything to a local RAID array, but most of the time my focus is only on the shared bucket with the team. I don’t use an asset manager or project manager solution—I can always drop in something like iconik later if we’re doing overlapping large projects simultaneously. This works for our team for now and is exactly what we need.

“My graphics lead moved from North Hollywood to Iowa. And whether he’s 25 miles away from me or 2000, if we’re not in the same room, we need a way to send files to each other quickly to work together. So if the tools are good enough, it doesn’t matter where the team is anymore.”

Industrious Films Video Production

SL: I seem to remember we needed some of those files for tweaks and changes as we were deploying on the NAB show floor?

JN: Right, since we had the entire project and all the source files online, in all the chaos of NAB booth building before the show opened, as we played our video on the huge screens—we realized we could still swap in a better graphic. So, we just pulled from the Backblaze web interface and dropped it in right there. Otherwise, we’d have had to track down the new file and have someone deliver it to us, or more likely not make the change at all.

“Speed is collaboration for us as a small team. When uploads and downloads are fast and we’re not fighting to move files around, then we can try new ideas quickly, zero in on the best approach, and build and turn projects faster. It’s how we punch way, way above our weight as a small shop and compete with the biggest agencies.”

SL: What advice would you give creatives who want to try to rely less on dragging hard drives around? Any final thoughts?

JN: Well, first of all, hard drives are never totally going away. At least not until something very simple and very cheap comes along. I might work for technical customers, but sometimes their marketing leads will hand me hard drives, or when I want to deliver a file or have them review a file they’ll ask me to put it on a private YouTube or Vimeo link. They want to review on their phone or at lunch, so it needs to be simple for them, too. But at least we can organize everything we do on Backblaze and there’s a lot fewer hard drives in my life at least.

One of the biggest revelations I’ve had is not just for editors and producers working on projects like we did, but for shooters too. On a shoot, everyone takes a copy of the raw files and no one leaves the shoot until there are two copies. If there’s a problem with the camera cards (storage cards) this whole process can be agonizingly slow. If only more people knew they could upload a copy to something like Backblaze that would not only function as a shared copy but also allow everyone to start reviewing and editing files right away instead of waiting until they got back to the shop.

And finally, everyone can do what we’ve done. The way we’ve thrived and how creatives find their niche and thrive in a gig economy is to use simple, easy to use tools that let you tell those stories, offer better service, and compete with bigger agencies with higher overhead. We did it, anyone else can too.

SL: Absolutely! Thanks Jeff, for taking time out to talk to us. We really appreciate your team’s work and look forward to working together on our next project!

The post The Speed of Collaboration: How Industrious Films Adopted Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Metadata: Your File’s Hidden DNA and You

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/metadata-your-files-hidden-dna-and-you/

A Photo Overlaid with Metadata Information

The files you use every day on your Mac or PC, whether at home or at work, carry around a slew of hidden data that can be incredibly useful to you… or problematically revealing to others. For example, the image in the header reveals latitude and longitude details in an iPhone photo that you could use to organize the photo along with others taken in the same place. But anyone else can access the same data and enter it directly into Google Maps to discover exactly where that picture was taken! Not quite as useful.

But if you know what this hidden information is—and how to use it—it can be incredibly helpful in diagnosing problems with files, organizing or protecting data, and even removing information you don’t want revealed! If you don’t, it can be a huge annoyance, and potentially even dangerous.

“It” is “metadata” and it’s something everyone works with, even if they don’t know it. Whenever you move a file—through email, into or out of a sync or cloud storage service, or to another device—you’re likely altering its metadata. It’s something we work with at Backblaze every day. And because moving files into and out of computer backup and cloud storage services can affect metadata, we thought we’d take a high-level look at how this information works in common file types to help you understand how to optimize its use in your own file management.

You can follow along as we walk through several examples, then tackle some real world file mysteries with the power of metadata. At the end of the post, you will find a list of several tools for Macs, PC’s, and command line to test out and add to your own ‘metadata toolbox.’

What is file metadata?

A great way to think of file metadata is as extra information about a file, carried along with that file, that makes it easier to use and find. So it’s not the actual document or photo itself, it’s information about it—like the file’s name, thumbnail image, or creation date. This information is embedded in or associated with the file, and helps make it easier for you, your applications, and your computer to actually use those files.

Information about a File for Humans

The most obvious kind of metadata is a file’s name, extension, icon, and the timestamp of its creation date. This simple metadata alone makes searching across an entire hard drive of files and folders as easy as typing part of the name into the Finder or search bar, sorting the results by date, then singling out the file you want by the proper thumbnail or filename.

Information about a File for Computers

A less well-known example of file metadata is meant to make working with files easier or safer for your operating system. Your files might carry notes for the operating system that they should be opened with a specific application. Or a flag might be set on a file you’ve downloaded from the internet or mail attachment warning your OS that it may not be safe to use.

Examples of Different File Previews
An example of basic file information on macOS and Windows.

Other critical information about a file is the permissions, or privilege levels, extended to users on that computer:

An Example of Mac OS User Permissions Metadata
An example of permissions settings on a file in macOS.

For example, files on UNIX-like systems, like Linux and macOS, are marked with the name of the user account that created them (the ‘owner’), the computer account group they belong to, and the permissions for the owner and other users to open and view that file, or make changes to it.

When permissions on files are set correctly, you rarely need to think about them as a user. But if this permissions information changes, users could lose access to files, or files could be opened by users that shouldn’t have access.
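
You can see this ownership and permissions metadata from any terminal on macOS or Linux—the file name and output below are purely illustrative:

ls -l quarterly-report.pdf
-rw-r--r--  1 skip  staff  48231 Oct 23 16:07 quarterly-report.pdf

Here “skip” is the owner, “staff” is the group, and the leading flags show that the owner can read and write the file, while everyone else can only read it.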

Information about a File for Applications

Another category of information is human-readable, but really intended for your applications to use. Some of this information can be incredibly detailed. The best-known example of 'application metadata' is the camera and location data embedded in images when you take pictures, such as the camera model and the lens and shutter settings in effect when the particular picture was taken.

Application Specific Metadata
Application metadata in an iPhone picture reveals the camera model and settings, and even GPS coordinates.

All this information is read by your image editing software to enable new features. For example, in iPhoto you can search for all images taken in the same location, or find all images shot with the same camera. These files are a trove of interesting information: the camera type, shutter speed, and even the GPS coordinates where the picture was taken.

Information You Won’t Want to Share

You may already know that you do not want to broadcast the location of photos you share, but even plain old documents can have information embedded in them that you’d rather keep to yourself.

A file unknowingly containing personally identifiable data
Inspecting a file’s metadata that contains personal identification information.

In the image above, you’ll see the file metadata of an old word processing document that happily includes names and email addresses for anyone to see! It’s common for files to include information like usernames, email addresses, GPS coordinates, or server mount paths. This is the kind of information you might want to delete before making a file public.

How Metadata Changes as You Move Files from Place to Place

As your files move around—copied from user to user and system to system—all of this useful metadata is vulnerable to being changed or lost. This has implications for your workflow, especially when you inevitably need to reconcile different versions and copies of files.

Unfortunately, the operating-system-specific tags or comments you place on files are the first to be lost when they move from location to location, and system to system.

For example, if I carefully color tag a folder of images on my Mac, then send them to be reviewed by a colleague who works on a PC, all those tags are gone when I get the files back. For this reason, true workflow-specific tags are usually applied in an external system that is dedicated to managing this kind of metadata for files—like a photo manager or a digital asset manager.

File Permissions Can Change Across Macs, Windows, and Linux

It’s also common for files received on one OS to come over with non-standard permissions set. For whatever reason, documents saved on a PC end up having the executable bit set when they are moved to a Mac. The files will still open, but there’s no reason for them to be marked like an application.

File Creation and Modification Dates Can Change, Too

When you create or change a file on your computer, the time is recorded as part of the file's metadata. But what happens when the time on one computer differs from another? Most modern OSes do a good job of syncing to special time servers and adjusting for universal time based on location, but changes can still creep in that make sorting files by time a challenge.

Permissions and Timestamps Can Change on Network Servers and Cloud Storage

When files are copied to network servers, or the cloud, things can change completely. Depending on how the file is moved, and how the storage provider handles files, your modification dates could get blown away entirely. And since the 'old' file you're uploading is new to the storage system, it becomes a new file with an entirely new creation date.

Individually, these changes are annoying, but collectively they threaten a death by a thousand cuts. As timestamps, tags, and permissions change, your carefully organized file hierarchy or valuable archival information could end up in tatters.

A Real World Example of Changing File Metadata

To see how metadata changes, let's follow a single file downloaded to a Mac and a PC, then uploaded to and downloaded from different cloud storage options, noting what changes get introduced along the way.

First: A Computer-to-Computer Test

In this test I downloaded a PDF from Backblaze’s website to a Mac. On the Mac, I added color tags, and even comments using the Finder’s preview pane. Next, I downloaded that same file on a Windows system, then copied it over to the Mac.

The two downloads appear to be the exact same PDF file, but let's fire up a terminal window on the Mac to inspect them further and make sure.

To follow along, navigate to the folder of files you want to inspect so that it's handy. Then open another Finder window and double-click the 'Terminal' application, which is found in the Utilities folder inside your Applications folder. The Terminal application will launch, and you're placed at the 'prompt,' ready for your command.

To navigate to the folder you want to work with, type in ‘cd’ at the terminal prompt to change directory, enter a space, then drag the folder of files you want to work with into the terminal window and drop it. You’ll see that the path to the folder is automatically resolved to that folder’s location, saving you a lot of typing.

Now that I'm in the proper folder, the tool I want to use is the humble 'ls' command to list a folder's files. Type 'ls', then a space, then a dash immediately followed by 'l@'. The 'l' flag retrieves the long form of results, and the '@' flag explicitly shows extended metadata on the Mac.

Comparing Two Files' Metadata
Detailed ls results comparison of the two files reveals extended attributes metadata and file permissions mismatches.

As you can already see, the following changes have been introduced:

  1. The Windows file has non-standard permissions: the PDF is marked as executable, as if it were an application. You can tell by the asterisk at the end of the file name, and by the 'x' in each permissions set, indicating the file would be treated like an application or command instead of a document.
  2. The Mac’s Finder shows that the file color tag and comments that I’ve entered are missing in the Windows version.
  3. The Mac has flagged the files downloaded on the Mac for its file Quarantine, which is part of the Gatekeeper security feature on macOS that flags downloads that could pose malware or security risks to your system. Quarantine was completely bypassed when copying the file over from Windows, so no Quarantine flags were set on that copy.

Next Stop, the Cloud

Now, I’ll move these files to and from three different types of cloud storage—Backblaze B2 Cloud Storage, Google Drive, and Dropbox—and see how they change.

To move the files to Backblaze B2, I used rclone, which is an extremely popular tool to copy and sync files from any mix of storage and cloud systems. For Google Drive, I used their web interface, and for Dropbox I uploaded via the web, then retrieved the files as a compressed file.

Now, when I compare all the files side by side I can see how different all of the file metadata is.

Comparing the Files Post-Cloud Download
Slight metadata differences emerge post download from different cloud storage services.
Command Line File Comparison
The downloaded test files’ differing metadata information as returned by the ls command.

First, all of my user-entered metadata, like tags and comments, was not picked up by cloud storage, as expected. Second, the Mac's Gatekeeper security feature promptly labeled every downloaded file with the 'Quarantine' flag. Backblaze B2 returned files with proper file permissions (644, or read/write for the user, read for the group, and read for all others) and preserved the creation date of the original file.

Both GDrive and Dropbox applied new file creation and file modification timestamps—and bizarrely, the files returned by Dropbox have a “modified date” 8 hours in the future! Does Dropbox know something we don’t?

You can see how searching and sifting through all of these copies on my Mac has become tremendously complicated now.

Solving Metadata Workflow Mysteries and Challenges

Hopefully it's clear that unless your files live only on your local system, the metadata they carry will change as they move from system to system.

Workflow Example 1: Using Metadata Tools to Learn About a ‘Mystery’ File

Let's apply what we've learned to some common examples: how metadata gets changed in files, how to inspect it, and some suggestions to correct it.

Inspecting a file's metadata can be helpful in diagnosing misnamed files, or files that have lost their file extension. The operating system usually trusts the file extension blindly: it will try to open any file named with a .pdf extension as a PDF, even if it's really something else!

MacOS file information for a mystery file
MacOS File Information for a “Mystery File.”

Above, I have a file from a very old backup that is missing an extension. The Mac is having trouble interpreting the way the original Windows OS file system encoded the date, so my Mac thinks the file was created December 31, 1969! (I’m pretty sure I wasn’t using MS Office in 1969.)

Without an extension, my Mac assumes this file must be a text file, and offers to open it in TextEdit, the default app for opening text files. When I double click on the file, the OS tries to open it but throws an error.

Solving the Mystery Using Exiftool
Mystery solved by inspecting the hidden file metadata: It’s an old Word doc backup!

Reaching into the toolbox, I use a command-line program called exiftool, a powerful tool to reveal a file's embedded metadata. (See the bottom of the post for more about exiftool and where to learn how to use it.) By calling exiftool from the terminal and passing in the name of the file I want to inspect, all is revealed! This is, in fact, a Microsoft Word file.

Looking closer, I can even see that this isn’t the original file, it was autosaved from the original file, which has an entirely different name. Mystery solved! I can now safely add the ‘.doc’ extension to the file, and it will open properly with my word processor that can still import this version of Microsoft Word.

Workflow Example 2: Uncovering Duplicate Files

Next, let’s take this entire folder of PDF copies that I used for upload tests. After all that uploading and downloading, my single original file has 8 copies. I ‘know’ that I only need one of these, so let’s try de-duping them!

De-Dupe Confusion in Gemini
Due to the file permissions and extended attributes differences, I might accidentally delete file versions I want to keep.

When I try to dedupe this folder using Gemini 2, a duplicate file finding tool, I'm presented with several choices of duplicates to remove. In other words, Gemini 2 was able to determine that there are duplicates, but isn't sure which set of files it should keep.

If I select by 'oldest' duplicates, it leaves me with the Dropbox versions; by 'newest,' it leaves me with the GDrive versions, etc. In this particular case, the 'automatic' selection tool lets me mark the GDrive and Dropbox versions as the duplicates I will delete. However, the differences in file permissions and extended attributes are preventing these files from being de-duped any further.

I still have two files—the ‘original’ files downloaded to my Mac and PC. Gemini insists they are different files, but we know they are not, so let’s meet some new tools.

Setting Proper Permissions

I could, of course, use Mac’s Finder to reset the permissions of this single file downloaded from Windows. But what if I’m faced with having to reset permissions on thousands of files at once?

Chaining Two Commands Together
In this more advanced example, I'm chaining two commands together to first find, then reset permissions on, all documents at once.

To show how you can combine several tools, I chain the 'find' and 'chmod' commands together to first find all documents in my current folder, then change permissions on all of them at once.

Cleaning Mac Extended Attributes

Next, I’ve decided that I want to clear all of the extended attributes that the Mac has set on these files. For this task, I’ll use Apple’s xattr tool.

xattr Code Snippet
Here, I’m using Apple’s xattr tool to remove all Finder extended attributes like comments, color tags, and Quarantine flags, etc.

Now, when I rerun Gemini 2 on this folder, I identify the last duplicate, delete it, and I'm back to one file again.

The Final Results of the Gemini Test
With permissions fixed and macOS extended attributes removed, I can now fully de-dupe these files.

File Metadata Takeaways

As we’ve seen, the metadata carried by the files you use every day changes over the life of the file as it moves from system to system, and server to server. And those changes can be problematic when it comes to the usefulness and security of your data.

You now have the power to see that information, inspect it, and—with the tools listed below—change it, solve the mysteries that crop up when reconciling those changes, and clean up metadata you don't want made widely known when you share the files.

Do you have more questions about file metadata and how it affects how you use and save your files? Let us know! Meanwhile, the tools listed below are excellent starting points to aid in further exploration.

Addendum: Tools Reference

Here is a list of tools referenced in the article, and other interesting command-line and GUI tools to move, dedupe, and rename files:

exiftool—Hands-down the most widely used metadata exploration tool, which lets you inspect and manipulate standard EXIF and other associated metadata. The latest Windows and macOS downloads are available on the exiftool.org website, via your Linux distribution's package manager, or on a Mac with 'brew install exiftool.' There are many GUI ports available from the website as well.

rclone—Uses rsync style syntax to copy and sync file locations to and from the widest variety of destinations including almost every known cloud storage choice.

xattr—A macOS system tool to inspect, create, or remove file extended attributes.

ranger—An old school 'file commander' that includes an embedded metadata pane. Binaries are available, you can build from source, or on a Mac you can install it with 'brew install ranger.'

MacPaw Gemini2—Still one of the most widely-used GUI de-dupe tools on the Mac.

fdupes—One of several available command-line de-duping tools.

A Better Finder Rename—A GUI tool to rename batches of files, and even rename according to parent folder structure and EXIF information.

Bulk Rename Utility—A Windows analogue of ‘A Better Finder Rename’ on the Mac.

rename—(or 'brew install rename') A truly impressive tool to rename entire batches of files with regex, or simple text replacement or addition. Be sure to use the "--dry-run" flag to test what changes it will make first!

The post Metadata: Your File’s Hidden DNA and You appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Digital Nomad: Amphibious Filmmaker Chris Aguilar

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/digital-nomad-amphibious-filmmaker-chris-aguilar/

Map showing Digital Nomad Chris Aguilar at Work in the Ka'iwi Channel

The Digital Nomads Series

In this blog series, we explore how you can master the nomadic life—whether for a long weekend, an extended working vacation, or maybe even the rest of your career. We profile professionals we’ve met who are stretching the boundaries of what (and where) an office can be, and glean lessons along the way to help you to follow in their footsteps. In our first post in the series, we provided practical tips for working on the road. In this edition, we profile Chris Aguilar, Amphibious Filmmaker.


There are people who do remote filming assignments, and then there’s Chris, the Producer/Director of Fin Films. For him, a normal day might begin with gathering all the equipment he’ll need—camera, lenses, gear, cases, batteries, digital storage—and securing it in a waterproof Pelican case which he’ll then strap to a paddleboard for a long swim to a race boat far out on the open ocean.

This is because Chris, a one-man team, is the preeminent cinematographer of professional paddleboard racing. When your work day involves operating from a beachside hotel, and being on location means bouncing up and down in a dinghy some 16 miles from shore, how do you succeed? We interviewed Chris to find out.

Chris filming on the water during Molokai 2 Oahu Paddleboard World Championships
Chris filming on the water during Molokai 2 Oahu Paddleboard World Championships — Photo by Troy Nebecker

Getting Ready for a Long Shoot

To save time in the field, Chris does as much prep work as he can. Knowing that he needs to be completely self-sufficient all day—he can’t connect to power or get additional equipment—he gathers and tests all of the cameras he’ll need for all the possible shots that might come up, packs enough SD camera cards, and grabs an SSD external drive large enough to store an entire day’s footage.

Chris edits in Adobe Premiere, so he preloads a template on his MacBook Pro to hold the day’s shots and orders everything by event so that he can drop his content in and start editing it down as quickly as possible. Typically, he chooses a compatible format that can hold all of the different content he’ll shoot. He builds a 4K timeline at 60 frames per second that can take clips from multiple cameras yet can export to other sizes and speeds as needed for delivery.

Map of the Molokai 2 Oahu Paddleboard Route
The Molokai 2 Oahu Paddleboard Race Route

Days in the Life

Despite being in one of the most exotic and glamorous locations in the world (Hawaii), covering a 32-mile open-ocean race is grueling. Chris’s days start as early as 5AM with him grabbing shots as contestants gather, then filming as many as 35 interviews on race-day eve. He does quick edits of these to push content out as quickly as possible for avid fans all over the world.

The next morning, before race time, he double-checks all the equipment in his Pelican case, and, when there’s no dock, he swims out to the race- or camera boat. After that, Chris shoots as the race unfolds, constantly swapping out SD cards. When he’s back on dry land his first order of business is copying over all of the content to his external SSD drive.

Even after filming the race’s finish, awards ceremonies, and wrap-up interviews, he’s still not done: By 10PM he’s back at the hotel to cut a highlight reel of the day’s events and put together packages that sports press can use, including the Australian press that needs content for their morning sports shows.

For streaming content in the field, Chris relies on Google Fi through his phone because it can piggyback off of a diverse range of carriers. His backup network solution is a Verizon hotspot that usually covers him where Google Fi cannot. For editing and uploading, he’s found that he can usually rely on his hotel’s network. When that doesn’t work, he defaults to his hotspot, or a coffee shop. (His pro tip is that, for whatever reason, the Starbucks in Hawaii typically have great internet.)

A paddleboarder

Building a Case

After years of shooting open-ocean events, Chris has settled on a tried and true combination of gear—and it all fits in a single, waterproof Pelican 1510 case. His kit has evolved to be as simple and flexible as possible, allowing him to cover multiple shooting roles in a hostile environment including sand, extreme sun-glare on the water, haze, fog, and of course, the ever-present ocean water.

At the same time, his gear needs to accommodate widely varied shooting styles: Chris needs to be ready to capture up close and personal interviews; wide, dramatic shots of the pre-race ceremonies; as well as a combination of medium shots of several racers on the ocean and long, telephoto shots of individuals—all from a moving boat bobbing on the ocean. Here’s his “Waterproof Kit List”:

The Case
Pelican 1510

Pelican 1510 Waterproof Case

The Cameras

Chris likes compact, rugged camcorders from Panasonic. They have extremely long battery life, and the latest generation has large sensors, wide dynamic range, and even built-in ND filter wheels to compensate for the glare on the water. He'll also bring other cameras for special shots, like an 8mm film camera for artistic shots, or a GoPro for the classic 'from under the sea to the waterline' shots.

Primary Interview Camera

  • Panasonic EVA1 5.7K Compact Cinema Camcorder 4K 10b 4:2:2 with EF lens-mount (with rotating lens kit depending on the event)

Panasonic EVA1 5.7K Compact Cinema Camcorder 4K 10b 4:2:2 with EF lens-mount

Action Camera and B-Roll

  • Panasonic AG-CX350 (or EVA1 kitted out similarly if the CX350 isn’t available)

Panasonic AG-CX350 Camera

Stills and Video

 

  • Panasonic GH5 20.3MP and 4K 60fps 4:2:2 10-b Mirrorless ILC camera

    Panasonic GH5 20.3MP and 4K 60fps 4:2:2 10-b Mirrorless ILC camera

 

Special Purpose and B-Roll Shots

  • Eumig Nautica Super 8 film self-sealed waterproof camera

    Eumig Nautica Super 8 film self-sealed waterproof camera

  • 4K GoPro in a waterproof dome housing

    4K GoPro in a waterproof dome housing

    Storage

    As a one-person show, Chris invests in enough SD cards for his cameras to cover the entire day's shooting without having to reuse cards. Chris then copies all of those cards' content to a bus-powered SSD drive.

  • 8-12 64GB or 128GB SD cards

    Panasonic SD Card

  • 1 TB SSD Glyph or G-Tech SSD drive

    1 TB SSD Glyph drive

    Other Equipment

    • Multiple Neutral Density filters. These filters reduce the intensity of all wavelengths without affecting color. With ND filters the operator can dial in combinations of aperture, exposure time, and sensor sensitivity without overexposing, delivering more 'filmic' looks: setting the aperture to a low value for sharper images, or wide open for a shallow depth of field.
    • Extra batteries. Needless to say having extra batteries for his cameras and his phone is critical when he may not be able to recharge for 12 hours or more.

    Now, The Real Work Begins

    When wrapping up an event’s coverage, all of the content captured needs to be stored and managed. Chris’s previous workflow required transferring the raw and finished files to external drives for storage. That added up to a lot of drives. Chris estimates that over the years he had stored about 20 terabytes of footage on paddleboarding alone.

    Managing all those drives proved to be too big of a task for someone who is rarely in his production office. Chris needed access to his files from wherever he was, and a way to view, catalog, and share the content with collaborators.

    As remote broadband speeds, inexpensive cloud storage, and cloud-based digital asset management systems matured, putting all of his content into the cloud became a real option for Chris, replacing the storage drive wrangling. Using Backblaze's B2 Cloud Storage along with iconik content management software, what used to take several days in the office searching through hard drives for specific footage now takes just a few keyword searches and a matter of minutes to share via iconik.

    For a digital media nomad like Chris, digitally native solutions based in the cloud make a lot of sense. Plus, Chris knows that the content is safely and securely stored, and not exposed to transport challenges, accidents (including those involving water), and other difficulties that could spoil both his day and that of his clients.

    Learn More About How Chris Works Remotely

    You can learn more about Chris, Fin Film Company, and how he works from the road in our case study on Fin Films. We’ve also linked to Chris’s Kit Page for those of you who just can’t get enough of this gear…


    We’d Love to Hear Your Digital Nomad Stories

    If you consider yourself a digital nomad and have an interesting story about using Backblaze Cloud Backup or B2 Cloud Storage from the road (or wherever), we’d love to hear about it, and perhaps feature your story on the blog. Tell us what you’ve been doing on the road at mailbag@backblaze.com.

    You can view all the posts in this series on the Digital Nomads page in our Blog Archives.

The post Digital Nomad: Amphibious Filmmaker Chris Aguilar appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Shocking Truth — Managing for Hard Drive Failure and Data Corruption

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/managing-for-hard-drive-failures-data-corruption/

hard disk drive covered in 0s, 1s, ?s

Ah, the iconic 3.5″ hard drive, now approaching a massive 16TB of storage capacity. Backblaze storage pods fit 60 of these drives in a single pod, and with well over 750 petabytes of customer data in our data centers, we have a lot of hard drives under management.

Yet most of us have just one, or only a few of these massive drives at a time storing our most valuable data. Just how safe are those hard drives in your office or studio? Have you ever thought about all the awful, terrible things that can happen to a hard drive? And what are they, exactly?

It turns out there are a host of obvious physical dangers, but also other, less obvious, errors that can affect the data stored on your hard drives, as well.

Dividing by One

It's tempting to store all of your content on a single hard drive. After all, the capacity of these drives gets larger and larger, and they offer great performance of up to 150 MB/s. It's true that flash-based drives are far faster, but their dollars-per-gigabyte price is also higher, so for now, the traditional 3.5″ hard drive holds most of the world's data.

However, having all of your precious content on a single, spinning hard drive is a true tightrope-without-a-net experience. Here's why.

Drivesaver Failure Analysis by the Numbers

Drive failures by possible external force

I asked our friends at Drivesavers, specialists in recovering data from drives and other storage devices, for some analysis of the hard drives brought into their labs for recovery. What were the primary causes of failure?

Reason One: Media Damage

The number one reason, accounting for 70 percent of failures, is media damage, including full head crashes.

Modern hard drives stuff multiple, ultra thin platters inside that 3.5 inch metal package. These platters spin furiously at 5400 or 7200 revolutions per minute — that’s 90 or 120 revolutions per second! The heads that read and write magnetic data on them sweep back and forth only 6.3 micrometers above the surface of those platters. That gap is about 1/12th the width of a human hair and a miracle of modern technology to be sure. As you can imagine, a system with such close tolerances is vulnerable to sudden shock, as evidenced by Drivesavers’ results.

This damage occurs when the platters receive shock, i.e. physical damage from impact to the drive itself. Platters have been known to shatter, or have damage to their surfaces, including a phenomenon called head crash, where the flying heads slam into the surface of the platters. Whatever the cause, the thin platters holding 1s and 0s can’t be read.

It takes a surprisingly small amount of force to generate a lot of shock energy to a hard drive. I’ve seen drives fail after simply tipping over when stood on end. More typically, drives are accidentally pushed off of a desktop, or dropped while being carried around.

A drive might look fine after a drop, but the damage may already be done. Because drives are rigidly constructed, heavy, and so often dropped onto hard, unforgiving surfaces, a fall can easily deliver the equivalent of hundreds of g-forces to the delicate internals of a hard drive.

To paraphrase an old (and morbid) parachutist joke, it’s not the fall that gets you, it’s the sudden stop!

Reason Two: PCB Failure

The next largest cause is circuit board failure, accounting for 18 percent of failed drives. Printed circuit boards (PCBs), those tiny green boards seen on the underside of hard drives, can fail in the presence of moisture or static electric discharge like any other circuit board.

Reason Three: Stiction

Next up is stiction (a portmanteau of friction and sticking), which occurs when the armatures that drive those flying heads actually get stuck in place and refuse to operate, usually after a long period of disuse. Drivesavers found that stuck armatures accounted for 11 percent of hard drive failures.

It seems counterintuitive that hard drives sitting quietly in a dark drawer might actually contribute to their failure, but I've seen many older hard drives pulled from a drawer and popped into a drive carrier or connected to power just go thunk. It does appear that hard drives like to be connected to power and constantly spinning, and the numbers seem to bear this out.

Reason Four: Motor Failure

The last, and least common cause of hard drive failure, is hard drive motor failure, accounting for only 1 percent of failures, testament again to modern manufacturing precision and reliability.

Mitigating Hard Drive Failure Risk

So now that you’ve seen the gory numbers, here are a few recommendations to guard against the physical causes of hard drive failure.

1. Have a physical drive handling plan and follow it rigorously

If you must keep content on single hard drives in your location, make sure your team follows a few guidelines to protect against moisture, static electricity, and drops during drive handling. Keeping the drives in a dry location, storing the drives in static bags, using static discharge mats and wristbands, and putting rubber mats under areas where you’re likely to accidentally drop drives can all help.

It's worth reviewing how you physically store drives, as well. Drivesavers tells us that the sudden impact of a heavy drawer of hard drives slamming home or yanked open quickly can damage hard drives!

2. Spread failure risk across more drives and systems

Improving physical hard drive handling procedures is only a small part of a good risk-reducing strategy. You can immediately reduce the exposure of a single hard drive failure by simply keeping a copy of that valuable content on another drive. This is a common approach for videographers moving content from cameras shooting in the field back to their editing environment. By simply copying content over from one fast drive to another, you make it far less likely that a single failure costs you the content. This is certainly better than keeping content on only a single drive, but definitely not a great long-term solution.

Multiple drive NAS and RAID systems reduce the impact of failing drives even further. A RAID 6 system composed of eight drives not only has much faster read and write performance than a single drive, but two of its drives can fail and still serve your files, giving you time to replace those failed drives.

Mitigating Data Corruption Risk

The Risk of Bit Flips

Beyond physical damage, there’s another threat to the files stored on hard disks: small, silent bit flip errors often called data corruption or bit rot.

Bit rot errors occur when individual bits in a stream of data in files change from one state to another (positive or negative, 0 to 1, and vice versa). These errors can happen to hard drive and flash storage systems at rest, or be introduced as a file is copied from one hard drive to another.

While hard drives automatically correct single-bit flips on the fly, larger bit flips can introduce a number of errors. This can either cause the program accessing them to halt or throw an error, or perhaps worse, lead you to think that the file with the errors is fine!

Bit Flip Errors by the Book

In a landmark study of data failures in large systems, Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?, Bianca Schroeder and Garth A. Gibson reported that "a large number of the problems attributed to CPU and memory failures were triggered by parity errors, i.e. the number of errors is too large for the embedded error correcting code to correct them."

Flash drives are not immune either. Bianca Schroeder recently published a similar study of flash drives, Flash Reliability in Production: The Expected and the Unexpected, and found that "…between 20-63% of drives experienced at least one of the [unrecoverable read errors] during the time it was in production. In addition, between 2-6 out of 1,000 drive days were affected."

“These UREs are almost exclusively due to bit corruptions that ECC cannot correct. If a drive encounters a URE, the stored data cannot be read. This either results in a failed read in the user’s code, or if the drives are in a RAID group that has replication, then the data is read from a different drive.”

Exactly how prevalent bit flips are is a controversial subject, but if you’ve ever retrieved a file from an old hard drive or RAID system and see sparkles in video, corrupt document files, or lines or distortions in pictures, you’ve seen the results of these errors.

Protecting Against Bit Flip Errors

There are many approaches to catching and correcting bit flip errors. From a system designer standpoint they usually involve some combination of multiple disk storage systems, multiple copies of content, data integrity checks and corrections, including error-correcting code memory, physical component redundancy, and a file system that can tie it all together.

Backblaze has built such a system, and uses a number of techniques to detect and correct file degradation due to bit flips and deliver extremely high data durability and integrity, often in conjunction with Reed-Solomon erasure codes.

Thanks to the way object storage and Backblaze B2 works, files written to B2 are always retrieved exactly as you originally wrote them. If a file ever changes from the time you’ve written it, say, due to bit flip errors, it will either be reproduced from a redundant copy of your file, or even mathematically reconstructed with erasure codes.

So the simplest, and certainly least expensive way to get bit flip protection for the content sitting on your hard drives is to simply have another copy on cloud storage.

The Ideal Solution — Performance and Protection

With some thought, you can apply these protection steps to your environment and get the best of both worlds: the performance of your content on fast, local hard drives, and the protection of having a copy on object storage offsite with the ultimate data integrity.

The post The Shocking Truth — Managing for Hard Drive Failure and Data Corruption appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Profound Benefits of Cloud Collaboration for Business Users

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/cloud-collaboration-for-business-users/

The Profound Benefits of Cloud Collaboration for Business Users

Apple’s annual WWDC is highlighting high-end desktop computing, but it’s laptop computers and the cloud that are driving a new wave of business and creative collaboration

WWDC, Apple's annual megaconference for developers, kicks off this week, and Backblaze has team members on the ground to bring home insights and developments. Yet while everyone is drooling over the powerful new Mac Pro, we know that the majority of business users rely on a portable computer as their primary system for business and creative work.

The Rise of the Mobile, Always On, Portable Workstation

Analysts confirm this trend towards the use of portable computers and the cloud. IDC's 2019 Worldwide Quarterly Personal Computing Device Tracker report shows that desktop form-factor systems comprise only 22.6% of new systems, while laptops and portables are chosen almost twice as often, at 42.4%.

After all, these systems are extremely popular with users and the DevOps and IT teams that support them. Small and self-contained, with massive compute power, modern laptops have fast SSD drives and always-connected Wi-Fi, helping users be productive anywhere: in the field, on business trips, and at home. Surprisingly, companies today can deploy massive fleets of these notebooks with extremely lean staff. At the inaugural MacDevOps conference a few years ago Google’s team shared that they managed 65,000 Macs with a team of seven admins!

Laptop Backup is More Important Than Ever

With the trend towards leaner IT staffs, and the dangers of computers in the field being lost, dropped, or damaged, having a reliable backup system that just works is critical. Despite the proliferation of teams using shared cloud documents and email, all of the other files on your laptop — the massive presentation due next week or the project that's not quite ready to share on Google Drive — have no protection without backup, which is of course why Backblaze exists!

Cloud as a Shared Business Content Hub is Changing Everything

When a company is comfortably backing up users' files to the cloud, the next natural step is to adopt cloud-based storage like Backblaze B2 for its teams. With over 750 petabytes of customer data under management, Backblaze has worked with businesses of every size as they adopt cloud storage, and each does so for different reasons.

In the past, a business department typically would get a share of a company's NAS server and was asked to keep all of the department's shared documents there. But outside the corporate firewall, these systems turn out to be hard to access remotely from the road: they require VPNs and a constant network connection to mount a corporate shared drive via SMB or NFS. And, of course, running out of space and storing large files was an ever-present problem.

Sharing Business Content in the Cloud Can be Transformational for Businesses

When considering a move to cloud-based storage for your team, some benefits seem obvious, but others are more profound and show that cloud storage is emerging as a powerful, organizing platform for team collaboration.

Shifting to cloud storage delivers these well-known benefits:

  • Pay only for storage you actually need
  • Grow as large and as quickly as you might need
  • Service, management, and upgrades are built into the service
  • Pay for service as you use it out of operating expenses vs. onerous capital expenses

But shifting to shared, cloud storage yields even more profound benefits:

Your Business Content is Easier to Organize and Manage: When your team’s content is in one place, it’s easier to organize and manage, and users can finally let go of stashing content all over your organization or leaving it on their laptops. All of your tools to mine and uncover your business’s content work more efficiently, and your users do as well.

You Get Simple Workflow Management Tools for Free: Cloud storage can fit your business processes much more easily, and adapt on the fly. If you ever need to set up separate storage for teams of users, or define read/write rules for specific buckets of content, it's easy to configure with cloud storage.

You Can Replace External File-Sharing Tools: Since most email services balk at sending large files, it's common to use a file sharing service to share big files with other users on your team or outside your organization. Typically this means having to download a massive file, re-upload it to a file-sharing service, and publish that file-sharing link. When your files are already in the cloud, sharing one is as simple as retrieving its URL.

In fact, this is exactly how Backblaze organizes and serves PDF content on our website like customer case studies. When you click on a PDF link on the Backblaze website, it’s served directly from one of these links from a B2 bucket!

You Get Instant, Simple Policy Control over Your Business or Shared Content: B2 offers simple-to-use tools to keep every version of a file as it’s created, keep just the most recent version, or choose how many versions you require. Want to have your shared content links time-out after a day or so? This and more is all easily done from your B2 account page:

B2 Lifecycle Settings
An example of setting up shared link rules for a time-sensitive download: The file is available for 3 days, then deleted after 10 days

You’re One Step Away from Sharing That Content Globally: As you can see, beyond individual file-sharing, cloud storage like Backblaze B2 can serve as your origin store for your entire website. With the emergence of content delivery networks (CDN), you’re now only a step away from sharing and serving your content globally.

To make this easier, Backblaze joined the Bandwidth Alliance, and offers no-cost egress from your content in Backblaze B2 to Cloudflare’s global content delivery network.

Customers that adopt this strategy can dramatically slash the cost of serving content to their users.

"The combination of Cloudflare and Backblaze B2 Cloud Storage saves Nodecraft almost 85% each month on the data storage and egress costs versus Amazon S3." - James Ross, Nodecraft Co-founder/CTO

Read the Nodecraft/Backblaze case study.

Get Sophisticated Content Discovery and Compliance Tools for Your Business Content: With more and more business content in cloud storage, finding the content you need quickly across millions of files, or surfacing content that needs special storage consideration (for GDPR or HIPAA compliance, for example) is critical.

Ideally, you could have your own private, customized search engine across all of your cloud content, and that’s exactly what a new class of solutions provide.

With Acembly or Aparavi on Backblaze, you can build content indexes and offer deep search across all of your content, and automatically apply policy rules for management and retention.

Where Are You in the Cloud Collaboration Trend?

The trend to mobile, always-on workers building and sharing ever more sophisticated content around cloud storage as a shared hub is only accelerating. Users love the freedom to create, collaborate and share content anywhere. Businesses love the benefits of having all of that content in an easily managed repository that makes their entire business more flexible and less expensive to operate.

So, while device manufacturers like Apple may announce exciting Pro-level workstations, the need for companies and teams to collaborate and be effective on the move is a more important and compelling issue than ever before. The cloud is an essential element of that trend, and its importance can't be overstated.

•  •  •

Upcoming Free Webinars

Wednesday, June 5, 10am PT
Learn how Nodecraft saved 85% on their cloud storage bill with Backblaze B2 and Cloudflare.
Join the Backblaze/Nodecraft webinar.

Thursday, June 13, 10am PT
Want to learn more about turning content in Backblaze B2 into searchable content with powerful policy rules?
Join the Backblaze/Aparavi webinar.

The post The Profound Benefits of Cloud Collaboration for Business Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Migrating 23TB from Amazon S3 to Backblaze B2 in Just Seven Hours

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/migrating-23tb-from-amazon-s3-to-backblaze-b2-in-just-seven-hours/

flowchart of data transfer - Cloudflare - Bandwidth Alliance FTW! - Backblaze B2 Cloud Storage - Free Bandwidth - Nodecraft

Like many Backblaze customers, Nodecraft realized they could save a fortune by shifting their cloud storage to Backblaze and invest the savings elsewhere in growing their business. In this post that originally appeared on Nodecraft's blog, Gregory R. Sudderth, Nodecraft's Senior DevOps Engineer, shares the steps they took to first analyze, test, and then move that storage.
— Skip Levens

Overview

TL;DR: Nodecraft moved 23TB of customer backup files from AWS S3 to Backblaze B2 in just 7 hours.

Nodecraft.com is a multiplayer cloud platform, where gamers can rent and use our servers to build and share unique online multiplayer servers with their friends and/or the public. In the course of server owners running their game servers, backups are generated, including the servers' files, game backups, and other files. It goes without saying that backup reliability is important for server owners.

In November 2018, it became clear to us at Nodecraft that we could improve our costs if we re-examined our cloud backup strategy. After looking at the current offerings, we decided to move our backups from Amazon's S3 to Backblaze's B2 service. This article describes how our team approached it, why, and what happened, specifically so we could share our experiences.

Benefits

With S3, B2, and many other providers being at least nearly equally* accessible, reliable, and available, our primary reason for moving our backups came down to pricing. As we started into the effort, other factors such as variety of API, quality of API, real-life workability, and customer service started to surface.

After looking at a wide variety of considerations, we decided on Backblaze's B2 service. A big part of the cost of an operation like this is bandwidth, and Backblaze's is amazingly affordable.

The price gap between the two object storage systems comes from the Bandwidth Alliance between Backblaze and Cloudflare, a group of providers that have agreed to waive (or heavily discount) charges for data moving between their networks ("egress" charges). We at Nodecraft use Cloudflare extensively, and so this left only the egress charges from Amazon to Cloudflare to worry about.

In normal operations, our customers both constantly make backups as well as access them for various purposes and there has been no change to their abilities to perform these operations compared to the previous provider.

Considerations

As with any change in providers, the change-over must be thought out with great attention to detail. When there were no quality issues with the previous provider, and circumstances allow a wide field of new providers to be considered, the final selection must be carefully evaluated. Our list of concerns included these:

  • Safety: we needed to move our files and ensure they remain intact, in a redundant way
  • Availability: the service must both be reliable but also widely available ** (which means we needed to “point” at the right file after its move, during the entire process of moving all the files: different companies have different strategies, one bucket, many buckets, regions, zones, etc)
  • API: we are experienced, so we are not crazy about proprietary file transfer tools
  • Speed: we needed to move the files in bulk and not brake on rate limitations, and…

…improper tuning could turn the operation into our own DDoS.

All these factors individually are good and important, but taken together they can add up to a significant service disruption. If things can move easily, quickly, and reliably, improper tuning could turn the operation into our own DDoS. We took thorough steps to make sure this wouldn't happen, so an additional requirement was added:

Tuning: Don’t down your own services, or harm your neighbors

What this means to the lay person is “We have a lot of devices in our network, we can do this in parallel. If we do it at full-speed, we can make our multiple service providers not like us too much… maybe we should make this go at less than full speed.”

Important Parts

To embrace our own cloud processing capabilities, we knew we would have to take a two-tier approach, working at both the Tactical (move a file) and Strategic (tell many nodes to move all the files) levels.

Strategic

Our goals here are simple: we want to move all the files, move them correctly, and only once, but also make sure operations can continue while the move happens. This is key because if we had used one computer to move the files, it would have taken months.

The first step to making this work in parallel was to build a small web service to allow us to queue a single target file at a time to each worker node. This service provided a locking mechanism so that the same file wouldn't be moved twice, whether concurrently or later. The timer for the lock to expire (with an error message) was set to a couple of hours. This service was intended to be accessed via simple tools such as curl.

We deployed each worker node as a Docker container, spread across our Docker Swarm. Using the parameters in a docker stack file, we were able to define how many workers per node joined the task. This also ensured more expensive bandwidth regions like Asia Pacific didn’t join the worker pool.

Tactical

Nodecraft has multiple fleets of servers spanning multiple datacenters, and our plan was to use spare capacity on most of them to move the backup files. We have experienced a consistent pattern of access of our servers by our users in the various data centers across the world, and we knew there would be availability for our file moving purposes.

Our goals in this part of the operation are also simple, but have more steps (a rough shell sketch of this worker loop follows the diagram below):

  • Get the name/ID/URL of a file to move which…
    • locks the file, and…
    • starts the fail timer
  • Get the file info, including size
  • DOWNLOAD: Copy the file to the local node (without limiting the node’s network availability)
  • Verify the file (size, ZIP integrity, hash)
  • UPLOAD: Copy the file to the new service (again without impacting the node)
  • Report “done” with new ID/URL location information to the Strategic level, which…
    • …releases the lock in the web service, cancels the timer, and marks the file DONE
diagram of Nodecraft data migration from AWS S3 to Backblaze B2 Cloud Storage
Diagram illustrating how the S3 to B2 move was coordinated

The Kill Switch

In the case of a potential runaway, where even the in-band Docker Swarm commands themselves might not respond, we decided to make sure we had a kill switch handy. In our case, it was our intrepid little web service: we made sure we could pause it. Looking back, it would be better if it used a consumable resource, such as a counter or a value in a database cell; if we didn't refresh the counter, it would stop all on its own. More on "runaways" later.

Real Life Tuning

Our business has daily, weekly, and other cycles of activity that are predictable. Most important is our daily cycle, which trails after the Sun. We decided to use our nodes in low-activity areas to carry the work, and after testing, we found that if we tune correctly this doesn't affect the relatively light loads of the servers in that low-activity region. This was backed up by verifying no change in customer service load using our metrics and those of our CRM tools. Back to tuning.

Initially we tuned the DOWN file transfer speed to the equivalent of 3/4ths of what wget(1) could do. We thought "oh, the network traffic to the node will fit in-between this so it's ok." That's mostly true, but only mostly, and it's a problem in two ways. The cause of the problems is that isolated node tests are just that—isolated. When a large number of nodes in a datacenter are doing the actual production file transfers, there is a proportional impact that builds as the traffic is concentrated towards the egress point(s).

Problem 1: you are being a bad neighbor on the way to the egress points. Ok, you say "well we pay for network access, let's use it," but of course there's only so much to go around: all the ports of a switch together have more bandwidth than its uplink ports, so there are limits waiting to be hit.

Problem 2: you are being a bad neighbor to yourself. Again, if your machines end up network-near to each other in a network-coordinates kind of way, your attempts to "use all that bandwidth we paid for" will be throttled by the closest choke point, impacting only (or nearly only) yourself. If you're going to use most of the bandwidth you CAN use, you might as well be mindful of it and choose where to put the chokepoint that the entire operation will create. If one is not cognizant of this concern, one can take down entire racks of one's own equipment by choking the top-of-rack switch, or other networking in between.

By reducing our 3/4ths-of-wget(1) tuning to 50% of what wget could do for a single file transfer, we saw our nodes still functioning properly. Your mileage will absolutely vary, and there are hidden concerns in the details of how your nodes might or might not be near each other, and their impact on the hardware in between them and the Internet.

Old Habits

Perhaps this is an annoying detail: based on previous experience in life, I put in some delays. We scripted these tools up in Python, with a Bourne shell wrapper to detect fails (there were some), and also because, for our upload step, we ended up going against our DNA and used the Backblaze upload utility. By the way, it is multi-threaded and really fast. But in the wrapping shell script's main loop, which first talks to our API, I put in a sleep 2 statement as a matter of course. This creates a small pause "at the top" between files.

This ended up being key, as we’ll see in a moment.

How It (The Service, Almost) All Went Down

What's past is sometimes not prologue. Independent testing on a single node, or even a few nodes, was not totally instructive as to what would really happen as we throttled up the test. Now when I say "test," I really mean "operation."

Our initial testing was conducted "Tactically" as above, for which we used test files, and we were very careful in the verification thereof. In general, we were sure that we could manage copying a file down (Python loop), verifying it (unzip -t), and operating the Backblaze b2 utility without getting into too much trouble…but it's the Strategic level that taught us a few things.

Remembering back to a foggy past where "6% collisions on a 10BASE-T network and it's game over"…yeah, that 6%. We throttled up the number of replicas in the Docker Swarm, and didn't have any problems. Good. "Alright." Then we moved the throttle, so to speak, to the last detent.

We had nearly achieved self-DDoS.

It wasn't all that bad, but we were suddenly very, very happy with our 50%-of-wget(1) tuning, our 2 second delays between transfers, and most of all, our kill switch.

Analysis

TL;DR — Things went great.

There were a couple files that just didn’t want to transfer (weren’t really there on S3, hmm). There were some DDoS alarms that tripped momentarily. There was a LOT of traffic…and, then, the bandwidth bill.

Your mileage may vary, but there are some things to think about with regards to your bandwidth bill. When I say "bill" it's actually a few bills.

diagram of bandwidth data flow savings switching away from AWS S3 to Backblaze B2 cloud storage
Cloudflare Bandwidth Alliance cost savings

As per the diagram above, moving the file can trigger multiple bandwidth charges, especially as our customers began to download the files from B2 for instance deployment, etc. In our case, we now only had the S3 egress bill to worry about. Here’s why that works out:

  • We have group (node) discount bandwidth agreements with our providers
  • B2 is a member of the Bandwidth Alliance…
  • …and so is Cloudflare
  • We were accessing our S3 content through our (not free!) Cloudflare account public URLs, not by the (private) S3 URLs.

Without saying anything about our confidential arrangements with our service partners, the following are both generally true: you can talk to providers and sometimes work out reductions. Also, they especially like it when you call them (in advance) and discuss your plans to run their gear hard. For example, on another data move, one of the providers gave us a way to “mark” our traffic a certain way, and it would go through a quiet-but-not-often-traveled part of their network; win win!

Want More?

Thanks for your attention, and good luck with your own byte slinging.

by Gregory R. Sudderth
Nodecraft Senior DevOps Engineer

* Science is hard, blue keys on calculators are tricky, and we don’t have years to study things before doing them

Free Webinar
Nodecraft’s Data Migration From S3 to B2

Wednesday, June 5, 2019 at 10am PT
Cloud-Jitsu: Migrating 23TB from AWS S3 to Backblaze B2 in 7 hours

The post Migrating 23TB from Amazon S3 to Backblaze B2 in Just Seven Hours appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze’s Must See List for NAB 2019

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/what-not-to-miss-nab2019/

Collage of logos from Backblaze B2 cloud storage partners

With NAB 2019 only days away, the Backblaze team is excited to launch into the world’s largest event for creatives, and our biggest booth yet!

Must See — Backblaze Booth

This year we’ll be celebrating some of the phenomenal creative work by our customers, including American Public Television, Crisp Video, Falcons’ Digital Creative, WunderVu, and many more.

We’ll have workflow experts standing by to chat with you about your workflow frustrations, and how Backblaze B2 Cloud Storage can be the key to unlocking efficiency and solving storage challenges throughout your entire workflow: From Action! To Archive. With B2, you can focus on creating and managing content, not managing storage.

Create: Bring Your Story to Life

Stop by our booth and we can show you how you can protect your content from ingest through work-in-process by syncing seamlessly to the cloud. We can also detail how you can improve team collaboration and increase content reuse by organizing your content with one of our MAM integrations.

Distribute: Share Your Story With the World

Our experts can show you how B2 can help you scale your content library instantly and indefinitely, and avoid the hassle and expense of on-premises storage. We can demonstrate how everything in your content library can be served directly from your B2 account or through our content delivery partners like Cloudflare.

Preserve: Make Sure Your Story Lives Forever

Want to see the math behind the first cloud storage that’s more affordable than LTO? We can step through the numbers. We can also show you how B2 will keep your archived content accessible, anytime, and anywhere, through a web browser, API calls, or one of our integrated applications listed below.

Must See — Workflow Integrations You Can Count On

Our fantastic workflow partners are a critical part of your creative workflow backed by Backblaze — and there’s a lot of partner news to catch up on!

Drop by our booth to pick up a handy map to help you find Backblaze partners on the show floor including:

Backup and Archive Workflow Integrations

Archiware P5, booth SL15416
SyncBackPro, Wynn Salon — J

File Transfer Acceleration, Data Wrangling, Data Movement

FileCatalyst, booth SL12116
Hedge, booth SL14805

Asset and Collaboration Managers

axle ai, booth SL15116
Cantemo iconik, booth SL6021
Cantemo (Portal), booth SL6021
CatDV, booth SL5421
Cubix (Ortana Media Group), booth SL5922
eMAM, booth SL10224

Workflow Storage

Facilis, booth SL6321
GB Labs, booth SL5324
ProMAX, booth SL6313
Scale Logic, booth SL11109
Tiger Technology, booth SL8505
QNAP, booth SL15716
Seagate, booth SL8511
StorageDNA, booth SL11810

Must See — Backblaze Events during NAB

Monday morning we’re delivering a presentation in the Scale Logic Knowledge Zone, and Tuesday night of NAB we’re honored to help sponsor the all-new Faster Together event that replaces the long-standing Las Vegas Creative User Supermeet event.

We’ll be raffling off a Hover2 4K drone powered by AI to help you get that perfect drone shot for your next creative film! So after the NAB show wraps up on Tuesday, head over to the Rio main ballroom for a night of mingling with creatives and amazing talks by some of the top editors, colorists, and VFX artists in the industry.

ProVideoTech and Backblaze at Scale Logic Knowledge Zone
Monday April 8 at 11 AM
Scale Logic Knowledge Zone, NAB Booth SL11109
Monday of NAB, Backblaze and PVT will deliver a live presentation for NAB attendees on how to build hybrid-cloud workflows with Cantemo and Backblaze.
Scale Logic Media Management Knowledge Zone

Backblaze at The Faster Together Stage
Tuesday, April 9
Rio Las Vegas Hotel and Casino
Doors open at 4:30 PM, stage presentations begin at 7:00 PM
Reserve Tickets for the Faster Together event

If you haven’t yet, be sure to sign up and reserve your meeting time with the Backblaze team, and add us to your Map My Show NAB plan and we’ll see you there!

NAB 2019 is just a few days away. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post Backblaze’s Must See List for NAB 2019 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Workflow Playbook for Migrating Your Media Assets to a MAM

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/workflow-playbook-migrating-your-media-assets-to-a-mam/

Asset > Metadata > Database > Media Asset Manager > Backblaze Fireball > Backblaze B2 Cloud Storage

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Whatever your creative venture, the byproduct of all your creative effort is assets. Whether you produce music, images, or video, as you produce more and more of these valuable assets, they tend to pile up and become difficult to manage, organize, and protect. As your creative practice evolves to meet new demands, and the scale of your business grows, you’ll often find that your current way of organizing and retrieving assets can’t keep up with the pace of your production.

For example, if you’ve been managing files by placing them in carefully named folders, getting those assets into a media asset management system will make them far easier to navigate and much easier to pull out exactly the media you need for a new project. Your team will be more efficient and you can deliver your finished content faster.

As we’ve covered before, putting your assets in a type of storage like B2 Cloud Storage ensures that they will be protected in a highly durable and highly available way that lets your entire team be productive.

You can learn about some of the new capabilities of the latest cloud-based collaboration tools here:

With some smart planning, and a little bit of knowledge, you can be prepared to get the most of your assets as you move them into an asset management system, or when migrating from an older or less capable system into a new one.

Assets and Metadata

Before we can build some playbooks to get the most from your creative assets, let’s review a few key concepts.

Asset — a rich media file with intrinsic metadata.

An asset is simply a file that is the result of your creative operation, and most often a rich media file like an image or a video. Typically, these files are captured or created in a raw state, then your creative team adds value to that raw asset by editing it together with other assets to create a finished story that in turn, becomes another asset to manage.

Metadata — Information about a file, either embedded within the file itself or associated with the file by another system, typically a media asset management (MAM) application.

The file carries information about itself that can be understood by your laptop or workstation’s operating system. Some of these seem obvious, like the name of the file, how much storage space it occupies, when it was first created, and when it was last modified. These would all be helpful ways to try to find one particular file you are looking for among thousands just using the tools available in your OS’s file manager.

File Metadata

There’s usually another level of metadata embedded in media files that is not so obvious but potentially enormously useful: metadata embedded in the file when it’s created by a camera, film scanner, or output by a program.

Results of a file inspected by an operating system's file manager
An example of metadata embedded in a rich media file

For example, this image taken in Backblaze’s data center a few years ago carries all kinds of interesting information. When I inspect the file with Get Info in the macOS Finder, a wealth of information is revealed. I can tell not only the image’s dimensions and when it was taken, but also exactly what kind of camera took the picture and the lens settings that were used.

As you can see, this metadata could be very useful if you want to find all images taken on that day, or even images taken with that same camera, focal length, F-stop, or exposure.
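If you want to pull that embedded metadata out from the command line instead of the Finder, here’s a quick sketch using mdls, the macOS counterpart to Get Info (the filename is a placeholder; exiftool is a popular third-party alternative):

mdls -name kMDItemPixelWidth -name kMDItemPixelHeight \
     -name kMDItemAcquisitionModel -name kMDItemFNumber datacenter.jpg

# or, if exiftool is installed:
exiftool -Model -FNumber -FocalLength -CreateDate datacenter.jpg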

When a File and Folder System Can’t Keep Up

Inspecting files one at a time is useful, but a very slow way to determine if a file is the one you need for a new project. Yet many creative environments that don’t have a formal asset management system get by with an ad hoc system of file and folder structures, often kept on the same storage used for production or even on an external hard drive.

Teams quickly outgrow that system when they find that their work spills over to multiple hard drives, or takes up too much space on their production storage. Worst of all, assets kept on a single hard drive are vulnerable to disk damage, or to being accidentally copied or overwritten.

Why Your Assets Need to be Managed

To meet this challenge, creative teams have often turned to a class of application called a Media Asset Manager (MAM). A MAM automatically extracts all their assets’ inherent metadata, helps move files to protected storage, and makes them instantly available to their entire team. In a way, these media asset managers become a private media search engine where any file attribute can be a search query to instantly uncover the file they need in even the largest media asset libraries.

Beyond that, asset management systems are rapidly becoming highly effective collaboration and workflow tools. For example, tagging a series of files as Field Interviews — April 2019, or flagging an edited piece of content as HOLD — do not show customer can be very useful indeed.

The Inner Workings of a Media Asset Manager

When you add files into an asset management system, the application inspects each file, extracting every available bit of information about the file, noting the file’s location on storage, and often creating a smaller stand-in or proxy version of the file that is easier to present to users.

To keep track of this information, asset manager applications employ a database and keep information about your files in it. This way, when you’re searching for a particular set of files among your entire asset library, you can simply make a query of your asset manager’s database in an instant rather than rifling through your entire asset library storage system. The application takes the results of that database query and retrieves the files you need.

The Asset Migration Playbook

Whether you need to move from a file and folder based system to a new asset manager, or have been using an older system and want to move to a new one without losing all of the metadata that you have painstakingly developed, a sound playbook for migrating your assets can help guide you.

Play 1 — Getting Assets in Files and Folders Protected Without an Asset Management System

In this scenario, your assets are in a set of files and folders, and you aren’t ready to implement your asset management system yet.

The first consideration is for the safety of the assets. Files on a single hard drive are vulnerable, so if you are not ready to choose an asset manager your first priority should be to get those files into a secure cloud storage service like Backblaze B2.

We invite you to read our post: How Backup and Archive are Different for Professional Media Workflows

Then, when you have chosen an asset management system, you can simply point the system at your cloud-based asset storage to extract the metadata of the files and populate the asset information in your asset manager.

  1. Get assets archived or moved to cloud storage
  2. Choose your asset management system
  3. Ingest assets directly from your cloud storage
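As a concrete sketch of step 1, here’s what the archive step could look like with rclone (covered at the top of this collection); the local path and bucket name are placeholders:

rclone copy /Volumes/LocalAssets b2:asset-archive-bucket --progress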

Play 2 — Getting Assets in Files and Folders into Your Asset Management System Backed by Cloud Storage

In this scenario, you’ve chosen your asset management system, and need to get your local assets in files and folders ingested and protected in the most efficient way possible.

You’ll ingest all of your files into your asset manager from local storage, then archive them to cloud storage. Once your asset manager has been configured with your cloud storage credentials, it can automatically move a copy of local files to the cloud for you. Later, when you have confirmed that the file has been copied to the cloud, you can safely delete the local copy.

  1. Ingest assets from local storage directly into your asset manager system
  2. From within your asset manager system archive a copy of files to your cloud storage
  3. Once safely archived, the local copy can be deleted
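Before deleting local copies in step 3, it’s worth confirming that everything actually landed in the cloud. A minimal sketch with rclone (names are placeholders); the --one-way flag checks only that every source file exists at the destination:

rclone check /Volumes/LocalAssets b2:asset-archive-bucket --one-way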

Play 3 — Getting a Lot of Assets on Local Storage into Your Asset Management System Backed by Cloud Storage

If you have a lot of content (more than, say, 20 terabytes), you will want to use a rapid ingest service similar to Backblaze’s Fireball system. You copy the files to Fireball, Backblaze puts them directly into your asset management bucket, and the asset manager is then updated with each file’s new location in your Backblaze B2 account.

This can be a manual process, or can be done with scripting to make the process faster.
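The scripting here usually amounts to producing an inventory of the uploaded objects that your asset manager can use for relinking. A minimal sketch using rclone (the bucket name is a placeholder):

rclone lsf -R --format "ps" --csv b2:asset-archive-bucket > b2-inventory.csv

Each row lists an object’s path and size, ready to be matched against the asset records in your system.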

You can read about one such migration using this play here:
iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted

  1. Ingest assets from local storage directly into your asset manager system
  2. Archive your local assets to Fireball (up to 70 TB at a time)
  3. Once the files have been uploaded by Backblaze, relink the new location of the cloud copy in your asset management system

You can read more about Backblaze Fireball on our website.

Play 4 — Moving from One Asset Manager System to a New One Without Losing Metadata

In this scenario you have an existing asset management system and need to move to a new one as efficiently as possible to not only take advantage of your new system’s features and get files protected in cloud storage, but also to do it in a way that does not impact your existing production.

Some asset management systems will allow you to export the database contents in a format that can be imported by a new system. Some older systems may not have that luxury and will require the expertise of a database expert to manually extract the metadata. Either way, you can expect to need to map the fields from the old system to the fields in the new system.

Making a copy of the old database is a must. Don’t work on the primary copy, and be sure to conduct tests on small groups of files as you migrate from the older system to the new one. You need to ensure that the metadata is correct in the new system, paying special attention that the actual file locations are mapped properly. It’s wise to keep the old system up and running for a while before completely phasing it out.

  1. Export the database from the old system
  2. Import the records into the new system
  3. Ensure that the metadata is correct in the new system and file locations are working properly
  4. Make archive copies of your files to cloud storage
  5. Once the new system has been running through a few production cycles, it’s safe to power down the old system
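Field mapping is often just a matter of reshaping one CSV into another. Here’s a sketch using awk, assuming the old system exports simple columns (filename,location,keywords) and the new system imports (title,path,tags); both layouts are hypothetical, and real exports with quoted commas deserve a proper CSV parser:

awk -F',' 'BEGIN { OFS=","; print "title,path,tags" }
           NR > 1 { print $1, $2, $3 }' old-export.csv > new-import.csv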

Play 5 — Moving Quickly from an Asset Manager System on Local Storage to a Cloud-based System

In this variation of Play 4, you can move content to object storage with a rapid ingest service like Backblaze Fireball at the same time that you migrate to a cloud-based system. This step benefits from scripting to create records in your new system with all of your metadata, then relink them to the actual file locations in your cloud storage, all in one pass.

You should test that your asset management system can recognize a file already in the system without creating a duplicate copy of the file. This is done differently by each asset management system.

  1. Export the database from the old system
  2. Import the records into the new system while creating placeholder records with the metadata only
  3. Archive your local assets to Fireball (up to 70 TB at a time)
  4. Once the files have been uploaded by Backblaze, relink the cloud based location to the asset record

Wrapping Up

Every production environment is different, but we all need the same thing: to be able to find and organize our content so that we can be more productive and rest easy knowing that our content is protected.

These plays will help you take that step and be ready for any future production challenges and opportunities.

If you’d like more information about media asset manager migration, join us for our webinar on March 15, 2019:

Backblaze Webinar:  Evolving for Intelligence: MAM to MAM Migration

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post A Workflow Playbook for Migrating Your Media Assets to a MAM appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloud-based Tools Combined with AI Can Make Workflows More Powerful and Increase Content Value

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/increase-content-archive-value-via-cloud-tools/

CPU + Metadata Mining + Virtual Machines & Apps + AI in the cloud

This is part two of a series. You can read part one at Modern Storage Workflows in the Age of Cloud.

Modern Storage Workflows in the Age of Cloud, Part 2

In Modern Storage Workflows in the Age of Cloud, Part One, we introduced a powerful maxim to guide content creators (anyone involved in video or rich media production) in choosing storage for the different parts of their content creation workflows:

Choose the storage that best fits each workflow step.

It’s true that every video production environment is different, with different needs, and the ideal solution for an independent studio of a few people is different than the solution for a 50-seat post-production house. But the goal of everyone in the business of creative storytelling is to tell stories and let your vision and craft shine through. Anything that makes that job more complicated and more frustrating keeps you from doing your best work.

Given how prevalent, useful, and inexpensive cloud technologies are, almost every team today is rapidly finding they can jettison whole classes of storage that are complicating their workflow and instead focus on two main types of storage:

  1. Fast, shared production storage to support editing for content creation teams (with no need to oversize or overspend)
  2. Active, durable, and inexpensive cloud storage that lets you move all of your content in one protected, accessible place — your cloud-enabled content backplane

It turns out there’s another benefit unlocked when your content backplane is cloud enabled, and it’s closely tied to another production maxim:

Organizing content in a single, well managed repository makes that content more valuable as you use it.

When all content is in a single place, well-managed and accessible, content gets discovered faster and used more. Over time it will pick up more metadata, with sharper and more refined tags. A richer context is built around the tags, making it more likely that the content you already have will get repurposed for new projects.

Later, when you come across a large content repository to acquire, or contemplate a digitization or preservation project, you know you can bring it into the same content management system you’ve already refined, concentrating and increasing value further still.

Having more content that grows increasingly valuable over time becomes a monetization engine for licensing, content personalization, and OTT delivery.

You might think that these benefits already present a myriad of new possibilities, but cloud technologies are ready to accelerate the benefits even further.

Cloud Benefits — Pay as You Need It, Scalability, and Burstability

It’s worth recapping the familiar cost-based benefits of the cloud: 1) pay only for the resources you actually use, and only as long as you need them, and, 2) let the provider shoulder the expense of infrastructure support, maintenance, and continuous improvement of the service.

The cost savings from the cloud are obvious, but the scalability and flexibility of the cloud should be weighed heavily when comparing the cloud against handling infrastructure yourself. If you were responsible for a large server and storage system, how would you cope with a business doubling every quarter, or merging with another team for a big project?

Too many production houses end up disrupting their production workflow (and their revenue) when they are forced to beef up servers and storage capability to meet new production demands. Cloud computing and cloud storage offer a better solution. It’s possible to instantly bring on new capacity and capability, even when the need is unexpected.

Cloud Delivered Compute Horsepower on Demand

Let’s consider the example of a common task like transcoding content and embedding a watermark. You need to process the 172,800 frames of a two-hour movie (at 24 frames per second) to resize each frame and add a watermark, and that compute workload takes 100 minutes and ties up a single server.

You could adapt that workflow to the cloud by pulling the high resolution frames from cloud storage and feeding them to 10 cloud servers in parallel, completing the same job in 10 minutes. Another option is to spin up 100 servers and get the job done in one minute.
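Here’s a local sketch of that fan-out idea, assuming the movie has been exported as numbered TIFF frames and that ffmpeg and GNU parallel are installed; the filenames and 4K output size are placeholders, and in the cloud each worker would be its own compute instance:

mkdir -p out
ls frames/*.tif | parallel -j 10 '
  ffmpeg -loglevel error -i {} -i watermark.png \
    -filter_complex "[0:v]scale=3840:2160[s];[s][1:v]overlay=W-w-40:H-h-40" \
    out/{/}'

# -j 10 runs ten workers at once; {} is the input frame and {/} its basename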

The cloud provides the flexibility to cut workflow steps that used to take hours down to minutes by adding the compute horsepower that’s needed for the job, then turn it off when it’s no longer needed. You don’t need to worry about planning ahead or paying for ongoing maintenance. In short, compute adapts to your workflow rather than the other way around, which empowers you to make workflow choices that instead prioritize the creative need.

Your Workflow Applications Are Moving to the Cloud, Too

More and more of the applications used for content creation and management are moving to the cloud, as well. Modern web browsers are gaining astonishing new capabilities and there is less need for dedicated application servers accompanying storage.

What’s important is that the application helps you in the creative process, not the mechanics of how the application is served. Increasingly, this functionality is delivered by virtual machines that can be spun up by the thousands as needed or by cloud applications that are customized for each customer’s specific needs.

iconik media workflow management screenshot

An example of a cloud-delivered workflow application — iconik asset discovery and project collaboration

iconik is one example of such a service. iconik delivers cloud-based asset management and project collaboration as a service. Instead of dedicated servers and storage in your data center, each customer has their own unique installation of iconik’s service that’s ready in minutes from first signup. The installation is exclusive to your organization and tailored to your needs. The result is a system of virtual machines, compute, and storage that matches your workflow with just the resources you need. Those resources are instantly available whenever and wherever your team is using the system, and consume no compute or storage resources when they are not.

Here’s an example. A video file can be pumped from Backblaze B2 to the iconik application running on a cloud compute instance. The proxies and asset metadata are stored in one place and available to every user. This approach scales to as many assets and productions as you can throw at it, and to as many people as are collaborating on the project.

The service is continuously upgraded and updated with new features and improvements as they become available, without the delay of rolling out enhancements and patches to different customers and locations.

Given the advantages of the cloud, we can expect that more steps in the creative production workflow that currently rely on dedicated on-site servers will move to the highly agile and adaptable environment offered by the cloud.

The Next Evolution — AI Becomes Content-Aware

Having your content library in a single content backplane in the cloud provides another benefit: ready access to a host of artificial intelligence (AI) tools.

Examples of AI Tools That Can Improve Creative Production Workflows:

  • Text to speech transcription
  • Language translation
  • Object recognition and tagging
  • Celebrity recognition
  • Brand use recognition
  • Colorization
  • High resolution conversion
  • Image stabilization
  • Sound correction

AI tools can be viewed as compute workers that develop processing rules by training for a desired result on a data set. An AI tool can be trained by having it process millions of images until it can tell the difference between sky and grass, or pick out a car in a frame of video. Once such a tool has been trained, it provides an inexpensive way to add valuable metadata to content, letting you find, for example, every video clip across your entire library that has sky, or grass, or a car in it. Text keywords with an associated timecode can be automatically added to aid in quickly zeroing in on a specific section of a long video clip. That’s something that’s not practical for a human content technician over thousands of files, but is easy, repeatable, and scalable for an AI tool.

Let AI Breathe New Life into Existing Content

AI tools can breathe new life in older content and intelligently clean up older format source video by removing film scratches or upresing content to today’s higher resolution formats. They can be valuable for digital restoration and preservation projects, too. With AI tools and source content in the cloud, it’s now possible to give new life to analog source footage. Digitize it, let AI clean it up, and you’ll get fresh, monetizable assets in your library.

axle ai automatically tags content

An example of the time-synched tags that can be generated with an AI tool

Many workflow tools, such as asset and collaboration tools, can use AI tools for speech transcription or smart object recognition, which brings additional capabilities. axle.ai, for example, can connect with a visual search tool to highlight an object in the frame like a wine bottle, letting you subsequently find every shot of a wine bottle across your entire library.

Visual search for brands and products is also possible. Just highlight a brand logo and find every clip where the camera panned over that logo. It’s smart enough to get results even when only part of the logo is shown.

We’ve barely touched on the many tools that can be applied to content on ingest or content already in place. Whichever way they’re applied, they can deliver on the promise of making your workflows more efficient and powerful, and your content more valuable.

All Together Now

Taken together, these trends are great news for creatives. They can serve your creative vision by making your workflow more agile and more efficient. Cloud-enabled technologies enable you to focus on adding value and repurposing content in fresh new ways, resulting in new audiences and better monetization.

By placing your content in a cloud content backplane, and taking advantage of applications as a service, including the latest AI tools, it becomes possible to continually grow your content collection while increasing its value — a desirable outcome for any creative production enterprise.

If you could focus only on delivering great creative content, and had a host of AI tools to automatically make your content more valuable, what would you do?

The post Cloud-based Tools Combined with AI Can Make Workflows More Powerful and Increase Content Value appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Modern Storage Workflows in the Age of Cloud

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/cloud-based-video-production-workflows/

Video Production Workflow

Not too long ago, hardware storage vendors held an iron grip on what kinds of storage underpinned your creative, film, and broadcast workflows. This storage took many complex forms — RAIDs, JBODs, SANs, NAS systems, tape robots, and more. All of it was expensive, deeply complex, and carried fat vendor margins and high support costs.

How Storage Can Make Your Video Production Workflow More Efficient

But when you’re considering storage in today’s technology environment — whether it’s cloud, on-site storage, or a USB stick — the guiding principle in choosing storage for your creative production should simply be to choose the storage that best fits each workflow step.

Production Storage Maxim: Choose the storage that best fits each workflow step

Doing your best creative work is what builds your customer base, boosts your reputation, and earns you revenue and royalties. So any time sunk into legacy storage solutions (wrestling with complexity, unneeded production steps, refereeing competing vendors, and overpaying for, well, everything) just gets in the way of what you really want to do: create.

The right answer for your specific production needs is a function of the size of your production team and the complexity of your operating environment. Whatever that answer is, it should be as frictionless an environment as possible that helps you get your work done more efficiently and gives you the most flexibility.

An independent filmmaker can follow this production storage evaluation process for each stage of their workflow and decide to make do with a small deskside RAID system for primary production storage, and depend on the cloud for everything else.

A large, global production team will probably need multiple SANs in each production office and a complex series of cloud and dedicated playout applications and systems. If your environment falls somewhere between those two extremes, then your ideal solution mix does as well.

Traditional Content Production Workflow - Ingest > Work-in-Process > Deliver > Archive

The traditional content production workflow is thought of as a linear process. Content is ingested as raw camera files pulled into a shared work-in-process storage for editors, the final cut is then delivered to the client, and when the project is finished all files are saved off to an archive.

Simplified Production Workflow Steps

Let’s look at what the storage requirements and needs are for each of the common steps in a production workflow and where cloud can add value. Along the way, we’ll call out concrete examples of cloud capabilities at each stage with B2 cloud storage.

Ingest Stage - Ingest Stage Goals: Safely retrieve and protect files from capture media and move to production environment. Ingest Stage Needs: File data protection - Easy path to Production Storage. Where Cloud Can Add Value: Ingest and archive in one step

The Ingest Stage

Media copied in the ingest phase typically needs to get off of camera carts and flash drives as quickly and safely as possible and transported to the editing environment. Since those camera carts need to be used again for the next shot, pressure to get files copied over quickly (but safely) is intense.

Any time that critical content exists only in one place is dangerous. At this stage, lost or corrupted files mean a reshoot, which may not be practical or even possible.

Storage Needs for Ingest

Storage at the ingest stage can be very rudimentary and is often satisfied by just copying files from camera carts to an external drive, then to another drive as a safety, or by putting a RAID system on a crash cart on-set. Every team tends to come up with a different solution.

Where Cloud Can Add Value to Ingest

But even if your data wranglers aren’t ready to give up external hard drives here, one way cloud can help in the ingest stage is to help combine your ingest and archive for safety steps.

Instead of carrying carts from the shoot location to the production environment and copying them over to production storage, you could immediately start uploading content via the internet to your cloud storage, simultaneously copying over those files safely, and making them available to your entire team immediately.

When you restructure your workflow like this, you’ll get better than RAID-level protection for your content in the cloud. And by checking content into your archive first, your asset manager tools can immediately start processing those files by adding tags and generating lighter weight proxies. As soon as the files hit cloud storage, your entire team can start working on them. They can immediately begin tagging and reviewing files, and even mark edit points before handing off to editors, thereby speeding up production dramatically.

Some creatives have hit a roadblock in trying to take advantage of the cloud. Data transfer has historically been gated by the available upload bandwidth at your given location, but our customers have solved this in some interesting ways.

Producers, editors, and reporters are finding that even cellular 4G internet connections make it feasible to immediately start uploading raw shots to their cloud storage. Others make it routine to stop off at a data center or affiliate with excellent upload speeds on their way in from the field.

Either way, even novice shooters and freelancers can safely get content into your system quickly; the setup can be as simple as an upload bucket in your B2 account, with your media or project manager tools configured to watch those upload points.
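A sketch of what feeding such an upload point can look like with rclone (the cart path and bucket name are placeholders); the same command works from a field laptop over a 4G connection:

rclone copy /Volumes/CameraCart01 b2:field-ingest-bucket --progress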

Cloud Capability Example — Use a Backblaze Fireball to Rapidly Ingest Content

Backblaze offers a Rapid Ingest Service to help get large amounts of your content into your Backblaze account quickly. Backblaze ships you a 70TB storage system that you connect to your network and copy content to. When the system is shipped back to Backblaze, your content is quickly moved directly into your B2 account, dramatically reducing ingest times.

 

Cloud Capability Example — Share Files Directly From Cloud

Archive.zip file in B2

An example of navigating to a file-review bucket in the B2 web interface to copy the direct sharing link to send to a reviewer

In addition to the archive on ingest technique, many customers share files for approval review or dailies directly from their Backblaze B2 account’s web interface.

If your B2 bucket for finished files is public, you can get a direct share link from the Backblaze account management website and simply send that to your customer, thereby eliminating a copy step.
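Those direct links follow a predictable pattern, so a customer (or a script) can fetch the file with nothing more than the URL. A sketch with curl; the f002 download host, bucket, and filename below are examples only, and your account’s download host is shown in the B2 web interface:

curl -O https://f002.backblazeb2.com/file/finished-files/final-cut-v3.mp4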

You can even snapshot a folder of your content in B2, and have Backblaze ship it directly to your customer.

Work in Process Stage - WIP Stage Goals: Support collaborative, simultaneous editing of source files to finished content. WIP Stage Needs: Performance to support shared, collaborative editing access for many users. Very large file support. Where Cloud Can Add Value: Keeping expensive primary production storage running efficiently.

The Work-In-Process Stage

Work-in-process or primary production storage is the main storage used to support collaborative editing and production of content. The bulk of what’s thought of as collaborative editing happens in this stage.

For simplicity, we’re combining several steps (craft editing, voiceover, sound, ADR, special effects, even color grading and finishing) under a single work-in-process umbrella.

As audio, color grading, and SFX steps get more complex, they sometimes need to be broken out onto separate, extremely high-performance storage, such as more exotic (and expensive) flash-based storage that then feeds the result back to WIP storage.

Work-in-Process Stage Storage Needs

Storage performance requirements in this stage are extremely hard to meet, demanding the ability to serve multiple editors, each pulling multiple, extremely large streams of video files as they edit raw shots into a complex visual story. Meeting this requirement usually requires either an equipment-intensive SAN or a NAS that scales to eye-watering size and price.

Many production environments have gotten in the habit of keeping older projects and media assets on the shared production environment alongside current production files, knowing that if those files are needed they can be retrieved quickly. But this also means that production storage fills up quickly, and it’s tempting to let more and more users not involved in primary production have access to those files as well, both of which can slow down production storage and creation of your content.

Having to make a rush purchase to expand or add to your SAN is not fun, especially in the middle of a project, so regularly moving any files not needed for current production to your content archive is a great strategy to keep your production storage as light and small as possible so that it can last over several seasons.

Where Cloud Can Add Value to Work-in-Process

By regularly moving content from your production storage you keep it light, fast, and simpler to manage. But that content still needs to be readily available. Cloud is an excellent choice here as content is both immediately available and stored on highly resilient object storage. In effect, you’re lightening the burden on your primary storage, and using cloud as an always ready, expanding store for all of your content. We’ll explore this concept more in the archive stage.
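A sketch of that housekeeping with rclone (paths and bucket are placeholders); note that rclone move deletes the local files once they have been safely copied:

rclone move /Volumes/ProductionSAN/Projects/2018-completed b2:content-archive/2018-completed --progress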

Deliver Stage - Deliver Stage Goals: Securely deliver finished files to upstream/downstream clients. Deliver Stage Needs: High reliability. Separation from primary production storage. Where Cloud Can Add Value: Share files directly and securely from cloud without copying.

The Deliver Stage

The deliver stage, where your finished work is handed off to your customer, varies depending on what type of creative you are. Broadcast customers will almost always need dedicated playout server appliances, and others will simply copy files to where they’re needed by downstream customers, or upstream to a parent organization for distribution. But, at some level, we all have to deliver our work when it’s done.

Deliver Stage Storage Needs

Files for delivery should be moved off of your primary production storage and handled in a separate workflow, available to dedicated delivery or playout tools. Whatever the workflow, this storage needs to be extremely reliable and available to your customers whenever it is needed.

Where Cloud Can Add Value to Deliver

Whether content delivery in your workflow is met by copying files to a playout server or giving a finished file to a customer, cloud can help cut down on the number of steps to get the content to its final destination while giving you extreme reliability.

Cloud Capability Example — Serve Time-Limited Links to Content

Many customers use the Backblaze B2 API to add expiration limits that can last from seconds to a week to shared links:

B2 command-line

An example of using the B2 command-line tool to generate time-expiring tokens for content sharing and delivery
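As a concrete sketch of that command (flag spellings vary between versions of the b2 command-line tool, and the bucket name, prefix, and one-day duration are placeholders):

b2 get-download-auth --prefix "deliverables/" --duration 86400 finished-files-bucket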

If your team is comfortable writing scripts to automate your workflow, this can be a powerful way to directly share files simply and quickly with tools provided by Backblaze.

For more information see this B2 Article: Get Download Authorization

 

Cloud Capability Example — Move Content Directly to Your Delivery and Distribution Servers

Serving your content to a wide audience via your website, content channel, or app is an increasingly popular way to deliver content. And thanks to our recent Cloudflare agreement, you can now move content from your B2 storage over to Cloudflare’s content delivery network at zero transfer cost for your content application or website. For more information, see this B2 article: How to Allow Cloudflare to Fetch Backblaze B2 Content

Archive Stage - Archive Stage Goals: Keep all of the content you’ve created safe and immediately available. Archive Stage Needs: High reliability. Scalability at a sustainable price. Where Cloud Can Add Value: Serve as your content backplane across all workflow steps.

The Archive Stage

At last, we come to the archive stage of content creation: traditionally thought of as the end of the content creation chain, the source of the most frustration for creatives, and the hardest storage to size properly.

Traditionally, when a project or season of a show is finished, all of the files used to create the content are moved off of expensive primary production storage and stored on separate, highly reliable storage in case they are needed again.

Archive Stage Storage Needs

Archive storage needs to be a safe repository for all of the content that you’ve created. It should scale well at a sustainable price, and make all archived content available immediately when requested by your users and workflow tools like asset managers.

Tape was often chosen to store these archive files because it was cheaper than disk-based storage and offered good reliability. But choosing tape required a large investment in specialized tape systems, tape media, and the associated support contracts and maintenance.

Tape-based archiving strategies usually rely on compressing content as it’s written to tape to hit the advertised storage capacity of tape media. But video content is already stored in a compressed container, so compressing those files as they’re written to and retrieved from tape offers no advantage and only slows the process down.

Here we find the chief drawback of tape-based content archives for many customers: the time required to retrieve content from those tape systems. As the pace of production has increased, many customers find they can no longer wait for tape systems to return archive sets or unarchive files.

Where Cloud Can Add Value to Archive

The archive stage is where cloud has the most impact on your entire workflow. The benefits of cloud itself are familiar: the ability to scale up or down instantly as your needs change, paying only for the storage you actually use, extremely high object storage file reliability, and availability anywhere there is a network connection.

Modern Content Production Workflow - Ingest > Archive as a Cloud Content Backplane <> Work-In-Process

Creating The Cloud Content Backplane

Having all of your content immediately available to your production storage and your asset management systems is emerging as the killer feature of cloud for production environments. By adding cloud, your content production goes from a linear process to a highly active one where content can freely check in and out of all of your other workflow steps as you’re producing content.

By shifting your content archives to cloud like Backblaze B2, you are creating, in effect, a cloud content backplane that supports your entire content creation and delivery process with these new capabilities:

  • New productions now have access to every file you might possibly need without waiting, letting you explore more creative choices
  • A single, authoritative content repository backing all of your creative production lets you phase out other storage and the associated management headaches and expense
  • You can now serve and deliver files directly from your cloud-based content archive with no impact on production storage
  • Having content in a single place means that your workflow tools like asset managers work better. You can find files across your entire content store instantly, and even archive or move files from your production storage to your cloud content archive automatically

The content not needed on your work-in-process storage is both highly protected and immediately available wherever you need it. Your entire workflow can get much simpler with fewer steps, and you can phase out storage you no longer need on-site.

Above all, you’ll have fewer steps between you and creating great content, and you’ll be able to explore new creative options faster while shifting to a pay-as-you-use-it model for all of your content storage.

In part two, we’ll explore the ways your new cloud-delivered content archive backplane can dramatically improve how you create, deliver, and monetize content with other cloud-based technologies in the age of cloud.

The post Modern Storage Workflows in the Age of Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Move Even Your Largest Archives to B2 with Fireball and Archiware P5

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/archiware-p5-cloud-backup/

Archiware P5 and Fireball

Backblaze B2’s reliability, scalability, and affordable, “pay only for what you use” pricing mean that it’s an increasingly popular storage option for all phases of content production, and that’s especially true for media archiving.

By shifting storage to B2, you can phase out hard-to-manage and expensive local backup storage and clear space on your primary storage. Having all of your content in a single place — and instantly available — can transform your production and keep you focused on the creative process.

Fireball Rapid Ingest to Speed Your First Migration to Backblaze B2

Once you sign up for Backblaze B2, one tool that can speed an initial content migration tremendously is Backblaze’s Fireball rapid ingest service. As part of the service, Backblaze ships you a 70TB storage system. You then copy over all the content that you want in B2 to the Fireball system: all at local network speeds. Once the system is shipped to Backblaze, it’s quickly moved to your B2 account, a process far faster than uploading those files over the internet.

Setting Up Your Media Archive

Since manually moving files to archive and backing up project folders can be very time-consuming, many customers choose software like Archiware P5 that can manage this automatically. In P5’s interface you can choose files to add to archive libraries, restore individual files to your local storage from B2, and even browse all of your archive content on B2 with thumbnail previews, and more.

However, many media and entertainment customers have terabytes and terabytes of content in “archive” — that is, project files and content not needed for a current production, but necessary to keep nearby, ready to pull into a new production.

They’d love to get that content into their Backblaze B2 account and then manage it with an archive, sync, backup solution like Archiware P5. But the challenge facing too many is how to get all these terabytes up to B2 through the existing bandwidth in the office. Once the large, initial archive is loaded, the incrementals aren’t a problem, but getting years of backlog pushed up efficiently is.

For anyone facing that challenge, we’re pleased to announce the Archiware P5 Fireball Integration. Our joint solution provides any customer with an easy way to get all of their archives loaded into their B2 account without having to worry about bandwidth bottlenecks.

Archiware P5 Fireball Integration

A backup and archive manager like Archiware P5 is a great way to get your workflow under control and automated while ensuring that your content is safely and reliably stored. By moving your archives offsite, you get the highest levels of data protection while keeping your data immediately available for use anytime, anywhere.

With the newest release, Archiware P5 can archive directly to Fireball at fast, local network speeds. Then, once your Fireball content has been uploaded to your Backblaze account, a few clicks are all that is needed to point Archiware at your Backblaze account as the new location of your archive.

Finally, you can clear out those closets of hard drives and tape sets!

Archiware P5 to B2 workflow

Archiware P5 can now archive directly to Fireball at local network speeds; once the Fireball’s contents are uploaded, the archives are linked to their new locations in your B2 account. With a few clicks you can get your entire archive uploaded to the B2 cloud without suffering any downtime or bandwidth issues.

For detailed information about configuring Archiware to archive directly to Fireball:

For more information about Backblaze B2 Fireball Rapid Ingest Service:

Archiware on Synology and QNAP NAS Devices

Archiware, NAS and B2

Archiware P5 can also now run directly on several Synology, QNAP, and G-Tech NAS systems to archive and move content to your Backblaze B2 account over the internet.

With their most recent releases Archiware now supports several NAS system devices from QNAP, Synology, and G-Tech as P5 clients or servers.

The P5 software is installed as an application from the NAS vendor’s app store and runs directly on the NAS system itself without having to install additional hardware.

This means that all of your offices or departments with these NAS systems can now fully participate in your sync, archive, and backup workflows, and each of them can archive off to your central Backblaze B2 account.

For more information:

Archiware plus Backblaze: A Complete Front-to-Back Media Solution

Archiware P5, Fireball, and Backblaze B2 are all important parts of a great backup, archive, and sync plan. By getting all of your content into archive and B2, you’ll know that it’s highly protected, instantly available for new production workflows, and also readily discoverable through thumbnail and search capability.

With the latest version of P5, you not only have your entire production and backup workflows managed, with Fireball you can get even the largest and hardest to move archive safely and quickly into B2, as well!

For more information about the P5 Software Suite: Archiware P5 Software Suite

And to order a Fireball as part of our Rapid Ingest Service, start here: Backblaze B2 Fireball


You might also be interested in reading our recent guest post written by Marc N. Batschkus of Archiware about how to save time, money, and gain peace of mind with an archive solution that combines Backblaze B2 and Archiware P5.

Creating a Media Archive Solution with Backblaze B2 and Archiware P5

 

The post Move Even Your Largest Archives to B2 with Fireball and Archiware P5 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/iconik-and-backblaze-cloud-production-solution/

Cantemo iconik Plus Backblaze B2 for Media Cloud Production

Many of our customers are archiving media assets in Backblaze B2, from long-running television productions, media distributors, AR/VR video creators, corporate video producers, houses of worship, and many more.

They are emptying their closets of USB hard drives, clearing off RAID arrays, and migrating LTO tapes to cloud storage. B2 has been proven to be the least expensive storage for their media archives, while keeping the archives online and accessible. Gone are the days of Post-its, clipboards, and cryptic drive labels defining whether old video footage can be found or not. Migrating archives from one form of storage to another will no longer suck up weeks and weeks of time.

So now that their archives are limitless, secure, always active, and available, the next step is making them actionable.

Our customers have been asking us — how can I search across all of my archives? Can I preview clips before I download the hi-res master, or share portions of the archive with collaborators around the world? Why not use the latest AI tools to intelligently tag my footage with metadata?

To meet all of those needs and more, we are excited to announce that Cantemo’s iconik cloud media management service now officially supports Backblaze B2.

iconik — A Media Management Service

iconik is an affordable and simple-to-use media management service that can read a Backblaze B2 bucket full of media and make it actionable. Your media assets are findable, sortable with full previews, and ready to pull into a new project or even right into your editor, such as Adobe Premiere, instantly.

Cantemo iconik user interface

iconik — Cantemo’s new media management service with AI features to find, sort, and even suggest assets for your project across your entire library

As a true media management service, iconik’s pricing model is a pay-as-you-go service, transparently priced per-user, per month. There are no minimum purchases, no servers to buy, and no large licensing fees to pay. To use iconik, all your users need is a web browser.

iconik Pricing

To get an idea of what “priced-per-user” might look like, most organizations will need at least one administrative user ($89/month); standard users ($49/month), who can organize content, create workflows, and ingest new media; and browse-only users ($19/month), who can search and download what they need. There’s also a “share-only” level with no monthly charge that lets you incorporate customer and reviewer comments. This should accommodate teams of all kinds and sizes.

Best of all, iconik is intelligent about how it uses storage, and while iconik charges small consumption fees for proxy storage, bandwidth, etc., they have found that for customers that bring media from Backblaze B2 buckets, consumption charges should be less than 5% of the monthly bill for user licenses.

As part of their launch promotion, if you get started in October, Cantemo will give Backblaze customers a $300 getting started credit!

You can sign up and get started here using the offer code of BBB22018.

Everwell’s Experience with iconik and Backblaze

One of the first customers to adopt iconik with Backblaze is Everwell, a video production company. Everwell creates a constant stream of videos for medical professionals to show in their waiting rooms. Rather than continuously buying upgrades for their in-house asset management system and local storage, iconik allows Everwell to shift their production to the cloud for all of their users. Their new solution will allow Everwell to manage their growing library of videos as new content constantly comes online, and kick off longer form productions with full access to all the assets they need across a fast-moving team that can be anywhere their production takes them.

collage of Everwell video images

Everwell is a fast-growing medical content developer for healthcare givers

To speed up their deployment of iconik, Everwell started with Backblaze’s data ingestion service, Fireball. Everwell copied their content to Fireball, and once back in the Backblaze data center, the data from Fireball was quickly added directly to Everwell’s B2 buckets. iconik could immediately start ingesting the content in place and make it available to every user.

Learn more about Backblaze B2 Fireball

With iconik and Backblaze, Everwell dramatically simplified their workflow as well, collapsing several critical workflow steps into one. For example, by uploading source files to Backblaze B2 as soon as they’re shot, Everwell not only reduces the need to stage local production storage at every site, they ingest and archive in a single step. Every user can immediately start work on their part of the project.

“The ‘everyone in the same production building’ model didn’t work for us any longer as our content service grew, with more editors and producers checking in content from remote locations that our entire team needed to use immediately. With iconik and Backblaze, we have what feels like the modern cloud-delivered production tool we’ve always wanted.”

— Loren Goldfarb, COO, Everwell

See iconik in Action at NAB NYC October 17-18

NAB Show New York - Media In Action October 17-18 2018

Backblaze is at NAB New York. Meet us there!

We’re excited to bring you several chances to see iconik and Backblaze working together.

The first is the NAB New York show, held October 17-18 at the Javits Center. iconik will be shown by Professional Video Technology in Booth N1432, directly behind Backblaze, Booth N1333.

Have you signed up for NAB NY yet? You can still receive a free exhibits pass by entering Backblaze’s Guest Code NY8842.

And be sure to sign up to meet with the Backblaze team at NAB by signing up on our calendar.

Attend the iconik and B2 Webinar on November 20

Soon after NAB NY, Backblaze and iconik will host a webinar to demo the solution called “3 Steps to Making Your Cloud Media Archive ‘active’ With iconik and Backblaze B2.” The webinar will be presented on November 20 and available on demand after November 20. Be sure to sign up for that too!

3 Steps Demo with: iconik and Backblaze B2 Cloud Storage

Sign up for the iconik/B2 Webinar

Don’t Miss the iconik October Launch Promotion

The demand for creative content is growing exponentially, putting ever more pressure on your creative team. With iconik and B2, you can make all of your media instantly accessible within your workflows while adopting an infinitely scalable, pay-only-for-what-you-use storage solution.

To take advantage of the iconik October launch promotion and receive $300 in free credit with iconik, sign up using the code BBB22018.

The post iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Moving Tape Content to Backblaze Fireball with Canister

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/moving-tape-content-to-cloud-storage/


Canister for Fireball: LTO tape to Backblaze B2 migration made 'drag and drop' easy
If you shoot video on the run and wrangle footage from multiple sources, you know that reliably offloading files from your camera carts, storage cards, or pluggable SSDs can be a logistical challenge. All of your source files need to be copied over, verified, and backed up before you can begin the rest of your post-production work. It’s arguably the most critical step in your post-production workflow.

Knowing how critical this step is, videographers and data wranglers alike have long relied on an app for Mac and Windows called Hedge to take charge of their file copy and verification needs.


Hedge source and target progress

Hedge for Mac and Windows — drag and drop source file copy and verify tool

With an intuitive drag and drop interface, Hedge makes it simple to select your cards, disks, or other sources, identify your destination drives, then copy and verify using a custom “Fast Lane” engine to speed transfers dramatically. You can log when copies were completed, and even back up to multiple destinations in the same action, including your local SAN, NAS, or Backblaze Fireball, then on to your Backblaze B2 cloud storage.
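If you’re curious what a copy-and-verify pass does under the hood, here is a minimal sketch using standard command line tools rather than Hedge’s own engine (the volume paths are placeholders, and Hedge’s actual implementation differs):

# Copy a camera card to the destination, preserving file attributes
rsync -a /Volumes/CAMERA_CARD/ /Volumes/DESTINATION/shoot-20181017/

# Verify the copy: checksum every file on both sides and compare the lists
(cd /Volumes/CAMERA_CARD && find . -type f -exec shasum -a 256 {} + | sort) > /tmp/source.sha
(cd /Volumes/DESTINATION/shoot-20181017 && find . -type f -exec shasum -a 256 {} + | sort) > /tmp/dest.sha
diff /tmp/source.sha /tmp/dest.sha && echo "Copy verified"

Hedge wraps this whole copy, checksum, and compare cycle, plus logging and multiple simultaneous destinations, behind its drag and drop interface.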

But How Do You “Data-Wrangle” Tape Content to the Cloud?

But what if you have content, backup sets, or massive media archives on LTO tape?

You may find yourself in one of these scenarios:

  • You may have “inherited” an older LTO tape system that is having a hard time keeping up with your daily workflow, and you aren’t ready to sign up for more capital expense and support contracts.
  • You may have valuable content “stuck” on tape that you can’t easily access, and want it in the cloud for content monetization workflows that would overwhelm your tape system.
  • Your existing tape-based workflow is working fine for now, but you want a solution as simple as Hedge to get all of that content into the cloud quickly, ready for future growth and new customers.

While many people decide to move tape workflows to the cloud for simple economic reasons, having all of that content securely stored in the cloud means that individual files and entire folders can be instantly pulled into workflows and shared directly from Backblaze B2, with no need for copying, moving, restoring, or waiting.

For more information about how Backblaze B2 can replace LTO solutions, including an LTO calculator:  Backblaze LTO Replacement Calculator

Whichever scenario fits your need, getting tape content into the cloud involves moving a lot of content at once, and in a perfect world it would be as easy as dragging and dropping that content from tape to Backblaze B2!

Meet Canister for Fireball

To meet this exact need, the team that developed Hedge has created an “LTO tape content to Fireball” solution called Canister for Fireball.

Fireball is Backblaze’s solution to help you quickly get massive amounts of data into Backblaze B2 Cloud Storage. When you sign up for the service, Backblaze sends you a 70TB Fireball that is yours to use for 30 days. Simply attach it to your local network and copy content over at the speed of that network. You’re free to fill up and send in your Fireball device as many times as needed. When Backblaze receives your Fireball with your files, all of the content is ingested directly into Backblaze’s data centers and appears in your Backblaze B2 online storage.

Backblaze B2 Fireball Rapid Ingest Service

Canister for Fireball makes it incredibly easy to move your content and archives from your tape device to your Backblaze B2 Fireball. With an intuitive interface similar to Hedge, Canister copies over and verifies files read from your tapes.

Using Canister with B2

flow chart for moving data from tape to the cloud

Insert LTO tapes in your tape system and Canister for Fireball will move them to your Backblaze B2 Fireball for rapid ingest into your B2 Cloud Storage


Canister for Fireball user interface

Select from any tape device with LTO media…

Canister copy progress screenshot

…and watch the files on the tape copy and verify to your Backblaze B2 Fireball

Here’s how the solution works:

Steps to Migrate Your LTO Content to the Cloud with Canister for Fireball

  1. Order a Fireball system: As part of the signup step you will choose a B2 bucket that you’d like your Fireball content moved to.
  2. Connect your Fireball system to your network, making sure that the workstation that connects to your tape device can also mount the storage volume presented by your Backblaze Fireball.
  3. Install Canister for Fireball on your Mac workstation.
  4. Connect your tape device. Any tape system that can read your tapes and mount them as an LTFS volume will work. Canister will automatically mount tapes inside the app for you.
  5. Launch Canister for Fireball. You can now select the tape device volume as your source, the Fireball as your target, and copy the files over to your Fireball.
  6. Repeat as needed until you have copied and verified all of your tapes securely to your Fireball. You can fill and send in your Fireball as many times as needed during your 30 day period. (And you can always extend your loaner period.)

LTFS, or Linear Tape File System, is an industry-adopted way to make the contents of an entire tape cartridge available as if it were a single volume of files. Typically, the tape stores a list of the files and their locations in a section at the beginning, or header, of the tape. When a tape is loaded into your tape device, that directory section is read in, and the tape system presents it to you as a volume of files and folders. Say you want to copy an individual file from that LTFS volume to your desktop: the tape spools out to wherever that file is stored, reads the entire stream of tape containing the file, and finally copies it over. It can be a very slow process indeed, which is why many people choose to store content in cloud storage like Backblaze B2 instead, where they get instant access to every file.
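To make that concrete, here is roughly what working with a tape as an LTFS volume looks like from a Linux command line with the open source LTFS tools (the device name and mount point are assumptions that vary by platform; Canister does this mounting for you automatically):

# Mount the tape in drive /dev/nst0 as a regular file system
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/nst0 /mnt/ltfs

# The tape now browses like a disk, but every read spools tape
ls /mnt/ltfs
cp /mnt/ltfs/projects/promo.mov ~/Desktop/  # can take minutes while the tape seeks and streams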

Now — Put Your LTO Tape Ingest Plan Into Action

If you have content on tape that needs to get into your Backblaze B2 storage, Canister for Fireball and a Backblaze B2 Fireball are the perfect solution.

Canister for Fireball can be licensed for 30 days of use for $99, including priority support. The full version is $199. If you decide to upgrade from the 30-day license, you’ll pay only the difference for the full version.

Get more information about Canister for Fireball

And of course, make sure that you’ve ordered your Fireball:

Order a Backblaze B2 Fireball

Now, with your content and archives no longer “trapped” on tape, you can browse them in your asset manager, share links directly from Backblaze B2, and have your content ready to pull into new content creation workflows by a team located anywhere in the world.
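As a simple illustration of that last point, pulling a single archived file back down with Backblaze’s b2 command line tool is a one-line operation once you’re authorized (the bucket and file names below are placeholders):

# Authorize once with your B2 credentials
b2 authorize-account <applicationKeyId> <applicationKey>

# Download one file straight from the bucket, no restore step and no tape spooling
b2 download-file-by-name cloud-tape-archive projects/2018/promo-final.mov ./promo-final.mov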

The post Moving Tape Content to Backblaze Fireball with Canister appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Creating a Media Archive Solution with Backblaze B2 and Archiware P5

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/creating-a-media-archive-solution/

Backblaze B2 Cloud Storage + Archiware P5 = 7 Ways to Save

B2 + P5 = 7 Ways to Save Time and Money and Gain Peace of Mind with an Archive Solution of Backblaze B2 and Archiware P5

by Dr. Marc M. Batschkus, Archiware

This week’s guest post comes to us from Marc M. Batschkus of Archiware, who is well known to media and entertainment customers as a trusted authority and frequent speaker and writer on data backup and archiving.

— Editor

Archiving has been around almost forever.

Roman "Archivum"Roman “Archivum” where scrolls were stored for later reference.

The Romans used the word “archivum” for the building that stored scrolls no longer needed for daily work. Since then, files have replaced scrolls, but the process has stayed the same: today, files that are no longer needed for daily production can be moved to an archive.

Backup and Archive

Backblaze and Archiware complement each other in accomplishing this, and we’ll show you how to get the most from this solution. But before we look at the benefits of archiving, let’s take a step back and review the difference between backup and archive.

A backup of your production storage protects your media files by replicating them to secondary storage. This is a cyclical process, continually checking for changed and new files, and overwriting files after the specified retention time is reached.

Archiving, on the other hand, is a data migration: moving files that are no longer needed for daily production to (long-term) storage, yet keeping them easily retrievable. This way, all completed productions are collected in one place and kept for later reference, compliance, and re-use.

Think of backup as a spare tire, archive as winter tires

Think of BACKUP as a spare tire, in case you need it, and ARCHIVE as a stored set of tires for different needs.

To use an analogy:

  • Think of backup as the spare tire in the trunk.
  • Think of archive as the winter tires in the garage.

Both are needed!

Editor’s note: For more insight on “backup vs archive” have a look at What’s the Diff: Backup vs Archive.

Building a Media Archive Solution with Archiware P5 and Backblaze B2

Now that the difference between backup and archive is clear, let’s have a look at what an archive can do to make your life easier.

Archiware archive catalog transferring to B2 cloud storage

Archiware P5 can be your interface to locate and manage your files, with Backblaze B2 as your ready storage for all of those files

P5 Archive connects to Backblaze B2 and offers the interface for locating files.

B2 + P5 = 7 Ways to Save Time and Money and Gain Peace of Mind

  1. Free up expensive production storage
  2. Archive from macOS, Windows, and Linux
  3. Browse and search the archive catalog with thumbnails and proxies
  4. Re-use, re-purpose, reference and monetize files
  5. Customize the metadata schema to fit your needs and speed up search
  6. Reduce backup size and runtime by moving files from production storage
  7. Protect precious assets from local disaster and for the long-term (no further migration/upgrade needed)

Archive as Mini-MAM

The “Mini-MAM” features of Archiware P5 help you browse and find files more easily than ever. Browse the archive visually using the thumbnails and proxy clips in the archive catalog, or search for specific criteria, or a combination of criteria, such as location or description.

Since P5 Archive lets you easily expand and customize metadata fields and menus, you can build the individual metadata schema that works best for you.

Technical metadata (e.g., camera type, resolution, lens) can be automatically imported from the file header into the metadata fields of P5 Archive using a script.
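The exact script depends on your environment, but as a sketch of the idea, a tool like exiftool can read those fields straight out of the file headers; mapping the output into P5 Archive’s metadata fields is then up to your own configuration (the file and folder names below are placeholders):

# Read camera, lens, and resolution fields from a single clip as JSON
exiftool -json -Model -LensModel -ImageWidth -ImageHeight clip0001.mov

# Or dump one CSV row per file for a whole project folder
exiftool -csv -r -Model -LensModel -ImageWidth -ImageHeight /Volumes/Production/ProjectX > projectx-metadata.csv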

The archive becomes the file memory of the company, saving time and energy, because there is now only one place to browse and search for files.

Mini-MAM screenshot

Archiware as “Mini-MAM” — thumbnails, proxies, even metadata, all within Archiware P5

P5 offers maximum flexibility and supports all storage strategies, be it cloud, disk, or tape, or any combination of the above.

For more information on archiving with Archiware: Archiving with Archiware P5

On macOS, P5 Archive offers integration with the Finder and Final Cut Pro X via the P5 Archive App. For more information on integrated archiving with Final Cut Pro X: macOS Finder and Final Cut Pro X Integrated Archiving

You can start building an archive immediately with Backblaze B2 cloud storage, because it lets you do this without any additional storage hardware or upfront investment.

Backblaze B2 is the Best of Cloud

  • ✓  Saves investment in storage hardware
  • ✓  Access from anywhere
  • ✓  Storage on demand
  • ✓  Perpetual storage – no migration or upgrade of hardware
  • ✓  Financially advantageous (OPEX vs CAPEX)
  • ✓  Best price in its category

Backblaze B2 offers flexible access so that the archive can be accessed from several physical locations with no storage hardware needing to be moved.

P5 Archive supports consumable files as an archive format. This makes the individual files accessible even if P5 Archive is not present at the other location, opening up collaborative workflows that were not possible before.

Save Money with OPEX vs CAPEX

CAPEX vs. OPEX

CAPital EXpenditures are the money companies spend to purchase major physical goods that will be used for more than one year. Examples in our field are investments in hardware such as storage and servers.

OPerating EXpenses are the costs for a company to run its business operations on a daily basis. Examples are rent and monthly cost for cloud storage like B2.

By using Backblaze B2, companies avoid CAPEX and instead pay monthly only for the cloud storage they use, while also saving on maintenance and migration costs. Furthermore, migrating files to B2 makes expanding high-performance, costly production storage unnecessary. Over time, this alone can make the archive pay for itself.

Now that you know how to profit from archiving with Archiware P5 and Backblaze B2, let’s look at the steps to build the best archive for you.

Connecting B2 cloud storage screenshot

Backblaze B2 is already a built-in option in P5 and works with P5 Archive and P5 Backup.

For detailed setup and best practice see:

Cloud Storage Setup and Best Practice for Archiware

Steps in Planning a Media Archive

Depending on the size of the archive, the number of people working with and using it, and the number of files being archived, planning can be extremely important. Thinking ahead and asking the right questions ensures that the archive later delivers the value it was built for.

Including the people who will configure, operate, and use the system guarantees a high level of acceptance and avoids blind spots in your planning.

  1. Define users: who administers, who uses, and who archives?
  2. Decide and select: what goes into the archive, and when?
  3. What metadata is needed to describe the data (i.e., what will be searched for)?
  4. Actual security: on what operating system, hardware, software, infrastructure, interfaces, network, and medium will the archive be stored?
  5. What security requirements should be fulfilled: off-site storage, duplication, storage duration, test cycles of media, generation migration, etc.?
  6. Retrieval:
    • Who searches?
    • With what criteria?
    • Who is allowed to restore?
    • On what storage?
    • For what use?

Metadata is the key to the archive and enables complex searches for technical and descriptive criteria.

Naming Conventions or “What’s in a File Name?”

The most robust metadata you can have is the file name. It travels through different operating systems and file systems, and it is the only metadata that is available all the time, independent of any database, catalog, MAM system, application, or other mechanism that keeps or reads metadata. With it, someone can instantly make sense of a file that gets isolated, left over, misplaced, or transferred to another location.

Building a solid and intelligent naming convention for media files is therefore crucial. Consistency is key: metadata is a solid foundation for the workflow, for searching, and for sharing files with other parties, and the file name is the starting point.
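As one illustration (the pattern here is an example, not a standard), a convention like PROJECT_S##_T##_YYYYMMDD_CAM.ext can be checked mechanically before files ever enter the archive:

# List any file that does not match PROJECT_S##_T##_YYYYMMDD_CAM.ext
find /Volumes/Production/ProjectX -type f |
  grep -Ev '/[A-Z0-9]+_S[0-9]{2}_T[0-9]{2}_[0-9]{8}_[A-Z][0-9]*\.[A-Za-z0-9]+$' > nonconforming.txt

A check like this, run at ingest time, keeps the one piece of metadata that always survives, the file name, consistent across the entire archive.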

Wrapping Up

There is much more that can make a media archive extremely worthwhile and efficient. For further reading, I’ve made a free eBook available with more tips on planning and implementation.

eBook:  Data Management, Backup and Archive for Media Professionals — How to Protect Valuable Video Data in All Stages of the Workflow by Marc M. Batschkus

Start looking into the benefits an archive can bring you today. There is a 30-day fully featured trial license for Archiware P5 that can be combined with the Backblaze B2 free trial storage.

Trial License:  About Archiware P5 and 30-Day Trial

And of course, if you’re not already a Backblaze B2 customer, sign up instantly at the link below.

B2 Cloud Storage:  Instant Signup

— Dr. Marc M. Batschkus, Archiware

The post Creating a Media Archive Solution with Backblaze B2 and Archiware P5 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.