Tag Archives: Backing Up

How Reliable are SSDs?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-reliable-are-ssds/

an exploded view of a Samsung Solid State Drive

What’s not to love about solid state drives (SSDs)? They are faster than conventional hard disk drives (HDDs), more compact, have no moving parts, are immune to magnetic fields, and can withstand more shocks and vibration than conventional magnetic platter disks. And, they are becoming available in larger and larger capacities while their cost comes down.

If you’ve upgraded an older computer with an SSD, you no doubt instantly saw the benefits. Your computer booted in less time, your applications loaded faster, and even when you ran out of memory, and apps and data had to be swapped to disk, it felt like everything was much snappier.

We’re now seeing SSDs with capacities that used to be reserved for HDDs and at prices that no longer make our eyes water. 500 GB SSDs are now affordable (under $100), and 1 TB drives are reasonably priced ($100 to $150). Even 2 TB SSDs fall into a budget range for putting together a good performance desktop system ($300 to $400).

We’ve written a number of times on this blog about SSDs, and considered the best uses for SSDs compared to HDDs. We’ve also written about the future of SSDs and how we use them in our data centers and whether we plan on using more in the future.

Reliability

In this post we’re going to consider the issue of SSD reliability. For all their merits, can SSDs be trusted with your data and will they last as long or longer than if you were using an HDD instead? You might have read that SSDs are limited to a finite number of reads and writes before they fail. What’s that all about?

The bottom line question is: do SSDs fail? Of course they do, as all drives eventually do. The important questions we really need to be asking are 1) do they fail faster than HDDs, and 2) how long can we reasonably expect them to last?

Backing Up Is Great To Do

Of course, as a data storage and backup company, you know what we’re going to say right off. We always recommend that no matter which storage medium you use, you should always have a backup copy of your data. Even if the disk is reliable and in good condition, it won’t do you any good if your computer is stolen, consumed by a flood, or lost in a fire or other act of nature. You might have heard that water damage is the most common computer accident, and few computer components can survive a thorough soaking, especially when powered.

SSD Reliability Factors to Consider

Generally, SSDs are more durable than HDDs in extreme and harsh environments because they don’t have moving parts such as actuator arms. SSDs can withstand accidental drops and other shocks, vibration, extreme temperatures, and magnetic fields better than HDDs. Add to that their small size and lower power consumption, and you can understand why they’re a great fit for laptop computers and mobile applications.

First, let’s cover the basics. Almost all types of today’s SSDs use NAND flash memory. NAND isn’t an acronym like a lot of computer terms. Instead, the name comes from the logic gate the memory is built on: NOT AND.

SSD part diagram including Cache, Controller, and NAND Flash Memory

The term following NAND, flash, refers to non-volatile solid state memory that retains data even when the power source is removed. NAND storage has specific properties that affect how long it will last. When data is written to a NAND cell (also known as programming), the cell must be erased before new data can be written to it. NAND is programmed and erased by applying a voltage to send electrons through an insulator. The location and quantity of those electrons determine the voltage at which current will flow between a source and a sink (the voltage threshold), and that in turn encodes the data stored in the cell (the 1s and 0s).

Each program and erase cycle sends electrons through the insulator and back, and the insulator gradually wears; the exact number of cycles an individual cell can endure varies by NAND design. Eventually, the insulator wears to the point where it has difficulty keeping the electrons in their correct (programmed) location, which makes it increasingly difficult to determine whether the electrons are where they should be or have migrated on their own.

This means that flash memory cells can only be programmed and erased a limited number of times, measured in P/E (program/erase) cycles.

P/E cycles are an important measure of SSD reliability, but they aren’t the only one. The three specifications to watch are P/E cycles, TBW (terabytes written), and MTBF (mean time between failures).

The SSD manufacturer will have these specifications available for their products and they can help you understand how long your drive can be expected to last and whether a particular drive is suited to your application.

P/E cycles — A solid-state-storage program-erase cycle is a sequence of events in which data is written to a solid-state NAND flash memory cell, then erased, and then rewritten. How many P/E cycles an SSD can endure varies with the technology used, from roughly 500 to 100,000 cycles.
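As a rough illustration of how a P/E rating translates into endurance, multiplying cell endurance by drive capacity gives a ceiling on total writes. This is a back-of-the-envelope sketch with hypothetical figures; real drives are also affected by write amplification, over-provisioning, and wear leveling:

```python
# Rough endurance implied by a P/E rating (hypothetical figures).
# Real drives are affected by write amplification, over-provisioning,
# and wear leveling, so treat this as an upper-bound sketch.

def implied_tbw(capacity_tb, pe_cycles, write_amplification=1.0):
    """Terabytes that could be written before the cells wear out."""
    return capacity_tb * pe_cycles / write_amplification

# A 1 TB drive whose NAND is rated for 600 P/E cycles:
print(implied_tbw(1, 600))        # 600.0 TBW with ideal wear leveling
print(implied_tbw(1, 600, 2.0))   # 300.0 TBW if each host write costs two NAND writes
```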

TBW — Terabytes written is the total amount of data that can be written to an SSD before it is likely to fail. For example, here are the TBW warranties for the popular Samsung 860 EVO SSD: 150 TBW for the 250 GB model, 300 TBW for the 500 GB model, 600 TBW for the 1 TB model, 1,200 TBW for the 2 TB model, and 2,400 TBW for the 4 TB model. Note: these models are warrantied for five years or the TBW figure, whichever comes first.
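To turn a TBW rating into calendar time, divide it by your daily write volume. A quick sketch (the daily write volumes here are illustrative, not typical values):

```python
# Estimate drive lifetime in years from a TBW rating and a daily
# write volume. Illustrative numbers only, not a warranty.

def years_of_life(tbw, gb_written_per_day):
    days = (tbw * 1000) / gb_written_per_day  # TB -> GB
    return days / 365

# A drive rated for 600 TBW (e.g., a 1 TB class SSD):
print(round(years_of_life(600, 20), 1))   # 82.2 years at 20 GB/day
print(round(years_of_life(600, 500), 1))  # 3.3 years at a heavy 500 GB/day
```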

MTBF — MTBF (mean time between failures) is a measure of how reliable a hardware product or component is over its expected lifetime. For most components, the measure is typically in thousands or even tens of thousands of hours between failures. For example, a hard disk drive may have a mean time between failures of 300,000 hours, while an SSD might have 1.2 million hours.

This doesn’t mean that your SSD will last that many hours. Rather, it means that, given a large sample of that model of SSD, failures will occur at a certain rate. A 1.2 million hour MTBF means that if the drives are used an average of 8 hours a day, a sample of 1,000 SSDs would be expected to see one failure every 150 days, or about twice a year.
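The arithmetic behind that fleet-level estimate can be sketched in a few lines:

```python
# Expected time between failures across a fleet, given an MTBF rating.
# MTBF describes a failure *rate* over many drives, not the lifetime
# of any single drive.

def days_per_failure(mtbf_hours, fleet_size, hours_per_day):
    fleet_hours_per_day = fleet_size * hours_per_day
    return mtbf_hours / fleet_hours_per_day

# 1.2 million hour MTBF, 1,000 drives, 8 hours/day each:
print(days_per_failure(1_200_000, 1000, 8))  # 150.0 days between failures
print(round(365 / 150, 1))                   # 2.4 failures per year
```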

SSD Types

There are a number of different types of SSD, and advancements to the technology continue at a brisk pace. Generally, SSDs are based on four different NAND cell technologies:

  • SLC (Single Level Cell) — one bit per cell
  • When one bit is stored (SLC), it’s not necessary to keep close tabs on electron locations, so a few electrons migrating isn’t much of a concern. Because only a 1 or a 0 is being stored, it’s necessary only to accurately determine whether current flows or not.

  • MLC (Multi-Level Cell) — two bits per cell
  • MLC stores two bits per cell, so more precision is needed (determining voltage threshold is more complex). It’s necessary to distinguish among 00, 01, 10 or 11. Migrating electrons have more of an impact, so the insulator cannot be worn as much as with SLC.

  • TLC (Triple Level Cell) — three bits per cell
  • This trend continues with TLC, where three bits are stored and the cell must distinguish among eight values, from 000 up to 111. Migrating electrons have more effect than in MLC, which further reduces tolerable insulator wear.

  • QLC (Quad Level Cell) — four bits per cell
  • QLC stores four bits (16 possible combinations of 1s and 0s). With QLC, migrating electrons have the most significant effect. Tolerable insulator wear is further reduced.

    QLC is a good fit for read-centric workloads because NAND cells wear negligibly when reading data but much more when writing it (programming and erasing). When writing and rewriting a lot of data, the insulator wears more quickly. The more wear a NAND cell can tolerate, the better suited it is to mixed read/write accesses; the less wear it can tolerate, the more it should be reserved for read-centric workloads and applications.

Each subsequent NAND technology stores one more bit per cell. The fewer bits per cell, the faster, more reliable, and more energy efficient the technology is, and also the more expensive per gigabyte. An SLC SSD would technically be the most reliable, as it can endure the most writes, while QLC is the least reliable. If you’re selecting an SSD for an application where it will be written more than read, then the choice of NAND cell technology could be a significant factor in your decision. If your application is general computer use, it likely will matter less to you.
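The reason each extra bit per cell demands more precision is that the number of distinct voltage levels the controller must distinguish doubles with every added bit; a quick sketch:

```python
# Distinct charge levels a NAND cell must hold for each cell type:
# with n bits per cell, the controller must tell apart 2**n levels.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s) per cell -> {levels} voltage levels")
# SLC: 1 bit(s) per cell -> 2 voltage levels
# MLC: 2 bit(s) per cell -> 4 voltage levels
# TLC: 3 bit(s) per cell -> 8 voltage levels
# QLC: 4 bit(s) per cell -> 16 voltage levels
```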

How Reliability Factors Affect Your Choice of SSD

How important these factors are to you depends on how the SSD is used. The right question to ask is how a drive will perform in your application. There are different performance and reliability criteria depending on whether the SSD will be used in a home desktop computer, a data center, or an exploration vehicle on Mars.

Manufacturers sometimes specify the type of application workload for which an SSD is designed, such as write-intensive, read-intensive or mixed-use. Some vendors allow the customer to select the optimal level of endurance and capacity for a particular SSD. For instance, an enterprise user with a high-transaction database might opt for a higher number of drive writes at the expense of capacity. Or a user operating a database that does infrequent writes might choose a lower drive writes number and a higher capacity.

Signs of SSD Failure

SSDs will eventually fail, but there usually are advance warnings of when that’s going to happen. You’ve likely encountered the dreaded clicking sound that emanates from a dying HDD. Since an SSD has no moving parts, you won’t get an audible warning that it’s about to fail. Instead, watch for a number of indicators that your SSD is nearing its end of life, and replace the drive before it fails completely.

1) Errors Involving Bad Blocks

Much like bad sectors on HDDs, there are bad blocks on SSDs. This is typically a scenario where the computer attempts to read or save a file, but it takes an unusually long time and ends in failure, so the system eventually gives up with an error message.

2) Files Cannot Be Read or Written

There are two ways in which a bad block can affect your files: 1) the system detects the bad block while writing data to the drive and refuses to write, or 2) the system detects the bad block after the data has been written and refuses to read it.

3) The File System Needs Repair
Getting an error message on your screen can happen simply because the computer was not shut down properly, but it also could be a sign of an SSD developing bad blocks or other problems.

4) Crashing During Boot
A crash during the computer boot is a sign that your drive could be developing a problem. You should make sure you have a current backup of all your data before it gets worse and the drive fails completely.

5) The Drive Becomes Read-Only
Your drive might refuse to write any more data to disk and can only read data. Fortunately, you can still get your data off the disk.

SSDs Generally Will Last As Long As You Need Them To

Let’s go back to the two questions we asked above.

Q: Do SSDs fail faster than HDDs?

A: That depends on the technology of the drives and how they’re used. HDDs are better suited for some applications and SSDs for others. SSDs can be expected to last as long or longer than HDDs in most general applications.

and

Q: How long can we reasonably expect an SSD to last?

A: An SSD should last as long as its manufacturer expects it to last (e.g. five years), provided that the use of the drive is not excessive for the technology it employs (e.g. using a QLC in an application with a high number of writes). Consult the manufacturer’s recommendations to ensure that how you’re using the SSD matches its best use.

SSDs are a different breed of animal than HDDs, and they have their strengths and weaknesses relative to other storage media. The good news is that their strengths — speed, durability, size, power consumption, etc. — are backed by pretty good overall reliability.

SSD users are far more likely to replace their storage drive because they’re ready to upgrade to a newer technology, higher capacity, or faster drive, than having to replace the drive due to a short lifespan. Under normal use we can expect an SSD to last years. If you replace your computer every three years, as most users do, then you probably needn’t worry about whether your SSD will outlast your computer. What’s important is whether the SSD will be sufficiently reliable that you won’t lose your data.

As we saw above, if you’re paying attention to your system, you will get ample warning of an impending drive failure, and you can replace the drive before the data becomes unreadable.

It’s good to understand how the different SSD technologies affect their reliability, and whether it’s worth it to spend extra money for SLC over MLC or QLC. However, unless you’re using an SSD in a specialized application with more writes than reads as we described above, just selecting a good quality SSD from a reputable manufacturer should be enough to make you feel confident that your SSD will have a useful life span.

Keep an eye out for any signs of failure or bad sectors, and, of course, be sure to have a solid backup plan no matter what type of drive you’re using.

The post How Reliable are SSDs? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Subscription Updates for Computer Backup

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/backblaze-computer-backup-pricing-change/

Backblaze laptop

Since 2008, we have offered unlimited Computer Backup for $5 per month. Today, after more than a decade of providing unlimited backup at that same price while also continuing to add features and functionality, we are announcing a price increase.

Effective for new purchases and renewals after March 11, 2019 at 5PM Pacific, our prices will change to:

2019 pricing

More than ten years ago, a friend’s computer crashed, taking with it all her writing and other files. Since she had no backup, she lost everything. As a result, we asked friends, family, and co-workers what they did for backup. The answers were primarily “nothing” or “not enough.” Five of us decided to quit our jobs and commit to working on this problem for a year with no salary in the hopes that we could help save a few people from this type of loss.

A lot has changed since then. Apple’s Time Machine, iPhone, iPad, Watch, and iCloud didn’t exist when we first started; Google Drive, Google Cloud, and Microsoft Azure were years away from being announced. Even for techies, the arrival of clouds mostly meant the need to bring an umbrella. Maxtor, at the time the third-largest hard drive vendor, had just been acquired by Seagate, and HGST was still a stand-alone hard drive company. The 1TB hard drive was a breakthrough in capacity.

Why The Change?

The short answer is that we have enhanced the service in many ways and storage costs have gone up. We have continually removed impediments to getting data backed up, eliminating file size restrictions and speeding up uploads, all while data sets have grown larger and larger. We’ve worked hard to avoid raising our prices, which resulted in some great storage innovations and has allowed us to keep our original prices for more than a decade. By making this decision now, we are ensuring we can continue to offer unlimited backup and keep improving our Computer Backup service. I’d like to go into further detail on the two primary sources of our increased costs: 1) enhancements to the service, and 2) the market cost of storage.

1) Enhancements to the service

When we launched our service, we were (and still are) committed to providing unlimited backup. In addition, over the years, we’ve introduced many enhancements to improve the product in ways that have increased our costs. As consumer data has expanded, we have made sure that we continually back up all data as quickly as possible.

When we say unlimited, we mean unlimited. Here are a few examples of that commitment:

  • Removed all limits on what can be backed up. Originally, individual files were capped at 4 GB, and VM images, ISOs, and other file types that aren’t typically user data were excluded.
  • Sped up backups. Combined small files into bundles, added threading to allow 30 backup processes at once, and added automatic thread management. This means your data gets backed up as fast as your setup allows.
  • Expanded restore options. Expanded the maximum size of Restore by Mail from 0.5 TB to 8 TB on a hard drive, and from a 4 GB DVD to a 256 GB flash drive. We also introduced the Restore Return Refund program. It’s a program our customers love but most other players in the industry have abandoned due to the costs of shipping, packaging, drive replacement, etc.
  • A bunch of other features. Locate My Computer, Preview/Access/Share, two-factor verification, iOS/Android apps, network management, Save to B2, and many of the other features/functions not only incurred development costs but have ongoing server/bandwidth expenses.

Other services have moved away from unlimited plans in favor of tiered pricing options (and different feature sets for different customers). Our customers tell us they love simplicity and predictability. While we are changing our prices, we remain fully committed to providing simple, reliable computer backup.

2) Market cost of storage

The volume of personal data has been skyrocketing for the last decade. In many ways our daily lives generate more data. We now carry an HD video camera in our pockets, music/video downloads are ubiquitous, and no event goes by without memorializing it with a photo or a social media post.

Historically, Backblaze benefitted from hard drives growing in capacity and decreasing in price. Over our first few years, these two trends approximately canceled each other out (customer data grew at approximately the same rate as hard drives decreased in price). Unfortunately, the 2011 floods in Thailand caused a step-function increase in the cost of drives that the market has still not recovered from, and the rate of price decreases on hard drives has slowed down.

Our team works aggressively to reduce our cost of storage year over year. And we have managed to create enough efficiencies to have kept our 2008 pricing. We designed our own Storage Pods, wrote our cloud storage file system, used consumer hard drives and analyzed which had the best price/reliability mix for our use-case, built client-side deduplication, went to crazy extremes during the Thailand drive crisis, and continue working proactively every day to drive down the cost of storage.

As a result, we believe that we have the lowest cost of storage in the industry. (An indicator of this is that we offer our infrastructure-as-a-service cloud storage at 1/4th the price of Amazon, Google, or Microsoft.) Despite that, the amount of storage per customer has grown faster than the reduction in costs.

Going Forward

A lot has changed in the decade since we founded Backblaze. We now offer backup for consumers and businesses, as well as raw object storage. We store over 750 petabytes of data for hundreds of thousands of customers in over 150 countries, and have helped customers recover over 35 billion files. What hasn’t changed is our desire to continue providing a service we’re proud of.

With all of that, we determined that it was important for us to take this step. It was not a decision we took lightly. We are committed to unlimited backup and want to be able to continue to invest in the service. We spent months making sure that we made this change the right way, including providing something for our existing and loyal customers.

To say thank you, we are offering existing customers the ability to extend existing Computer Backup licenses by one year for $50 per computer (the price of our original annual plan from 2008). Please read the Subscription Extension Program FAQ to learn more about this program and how you can extend your existing license for one year at the current pricing.

Thank you for being a customer and we look forward to protecting your data for many years to come.

The post Subscription Updates for Computer Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Save Data Directly to B2 With Backblaze Cloud Backup 6.0

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/save-data-directly-to-cloud-storage/

Save Restores to B2 screenshot

Customers have often told us that they’d love a way to save data directly from their Backblaze Computer Backup account to B2 Cloud Storage. Some want to freeze a set of records in time, others want to preserve the state of a directory or system as it existed at a specific moment. Still others simply want to remove data from their local drive but have the assurance that it is safely stored in the cloud.

We listened to these requests and are happy to say that we’ve added this capability in our just released 6.0 update of Backblaze Computer Backup. Users can now select B2 Cloud Storage as a destination to save Snapshots from their backup account during the restore process.

This capability lets customers do a number of new things, like keep a copy of their old computer’s data even when migrating to a new one, save a collection of files (e.g. last year’s emails, a completed work project, your novel draft, tax returns) in the cloud as an archive, or free up space on a hard drive by moving data to a Snapshot in B2 and then deleting the original copy. Just like files in Computer Backup, the B2 Snapshot can be downloaded over the internet or delivered anywhere on a USB flash or hard drive.

No More Connecting Your External Drives Every 30 Days

This new feature can particularly benefit users who have been using Computer Backup to back up data from multiple external drives. These external drives are often disconnected from their computers, and to maintain the backups, users have been required to connect each drive at least once every 30 days so that it stays active and remains in the backup — a task they tell us they’d rather avoid.

Now, with the ability to save a restore to B2, these customers can take a Snapshot of the data already backed up from these drives and save it to a B2 account. They can save as many Snapshots as they wish, thereby saving the state of the drive as it existed in one moment for as long as they wish to retain it.

Snapshots are stored at economical B2 rates: $0.005 per gigabyte per month for storage and $0.01 per gigabyte for downloads. Customers get an instant cost estimate when a Snapshot is prepared from Backblaze Backup to B2.
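Using those published rates, a rough Snapshot cost can be worked out as follows (a sketch only; the dashboard’s instant estimate is authoritative):

```python
# Rough B2 Snapshot cost at published rates:
# $0.005/GB per month for storage, $0.01/GB for downloads.

STORAGE_PER_GB_MONTH = 0.005
DOWNLOAD_PER_GB = 0.01

def snapshot_cost(size_gb, months_stored, downloads=0):
    storage = size_gb * months_stored * STORAGE_PER_GB_MONTH
    egress = size_gb * downloads * DOWNLOAD_PER_GB
    return storage + egress

# A 500 GB Snapshot kept for a year and downloaded once:
print(round(snapshot_cost(500, 12, downloads=1), 2))  # 35.0 ($30 storage + $5 download)
```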

What is B2 Cloud Storage?

B2 is Backblaze’s low cost and high performance cloud storage. It can be used to store data for as short or as long a period as you require. The data in B2 is retrievable without delay from anywhere at any time.

B2 is different from Backblaze Computer Backup in that B2 can be used to store whatever data you want and you have complete control of how long it is retained. Our Computer Backup service offers unlimited backup of the data on your Mac or Windows computer using the Backblaze client software. B2, in contrast, can be accessed through the account dashboard or used with any of a number of applications chosen by the user, or accessed through various programming interfaces or from a computer’s command line. For more on pricing, see our pricing page and calculator for B2.

How Does Saving a Restore to B2 Work?

Files in your Computer Backup can be zipped and archived to a Snapshot that is stored in B2 Cloud Storage. These selected files will be safe in B2 until the Snapshot is removed by the user, even if the files have been deleted from the computer and the backup.

screenshot of the View/Restore Files options

Creating a Restore Snapshot in Backup account

The user gets an instant estimate of the cost to store the Snapshot in B2.

Name this Snapshot screenshot

Preparing Snapshot from Computer Backup account

The user receives a notice when the Snapshot is created and stored.

Your B2 Snapshot is Ready!

Notice that Snapshot has been created

An unlimited number of restores can be saved and retained as B2 Snapshots for any length of time desired. The user’s account dashboard shows all the Snapshots that have been created, and gives options to download or remove the Snapshot. A Snapshot can be downloaded directly from B2 to a user’s computer or shipped to customers on a USB flash or hard drive. And, when returned within 30 days, the cost of the flash or hard drive is completely refundable, just like with regular restores.

screenshot of user B2 Snapshots

User account page showing status of Snapshots in B2

Let Us Know How You’re Using Snapshots

We hope you’ll try out this new capability and let us know how you’re using it.

For more tips on saving data to B2 Snapshots, read our help article, Saving Files to B2 from Computer Backup, or sign up for our free webinar on Backblaze Backup v6.0 on January 30, 2019, at 11am PST.

The post Save Data Directly to B2 With Backblaze Cloud Backup 6.0 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Cloud Backup v6.0: Larger Longer Faster Better

Post Syndicated from Yev original https://www.backblaze.com/blog/backblaze-cloud-backup-v6/

Backblaze 6.0 -- The Larger Longer Faster Better Release
Announcing Backblaze Cloud Backup 6.0: The Larger Longer Faster Better Release!

This release for consumers and businesses brings a lot of new functionality to Backblaze Cloud Backup: Restore by Mail drives that are twice the size, archiving with Backblaze B2 Cloud Storage, up to 50% faster backups, and a network blocklist feature to help avoid data caps. All that plus more efficient and performant Mac and Windows applications along with mobile enhancements and SSO support with Google. We hope you like it!

Backblaze Restores — Now With The Power of B2 Cloud Storage

Larger Restores — Twice the Size

The amount of data individuals accumulate each year keeps growing. As you store more data, you need bigger hard drives to restore that data. Backblaze is increasing the capacity of our restore hard drives by 100% for our Restore By Mail feature. Flash keys can now hold up to 256GB and hard drives can now hold up to 8TB in restore data. Best of all, you can still use our Restore Return Refund feature to return those restore drives for a full refund.
2x 8TB USB Hard Drive Restore / 2x 256 GB USB Flash Drive Restore

Saving Data To B2 Snapshots

Backed up files can now be zipped and archived to a Snapshot in B2 Cloud Storage. These selected files will be available until you delete the Snapshot, even if the files have been deleted from your computer and backup. This capability lets customers do new things like keep a copy of all your old computer’s data even when migrating to a new one, save a collection of files (e.g. tax returns) in the cloud as an archive, or free up space on your hard drive by moving data to a Snapshot and then deleting the original copy. Just like files in Computer Backup, your B2 Snapshot can be downloaded over the internet or delivered on a USB hard drive. Learn more about Saving Data to B2!
Save Files to B2

Keep Restores Longer

Extend the life of your .zip restore by archiving it to B2 Cloud Storage. Your restore will be kept in a private B2 Cloud Storage Snapshot bucket as a .zip file until you delete the Snapshot. Use this feature if you need more time to download your restore or want to keep a permanent copy. Get the data later by downloading it directly to your computer or using our Restore by Mail service. Learn more about Keeping Restores Longer!
V6 -- Keep Restores Longer

Mac and Windows Application Updates

Performance — a 50% Boost

We’ve increased the maximum upload threads to 30, creating speed increases up to 50% (depending on your computer and upload bandwidth). More threads allow more uploads to run in parallel thereby dramatically increasing backup speeds. Learn more about backup threads on Mac and Windows.
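The effect of more upload threads can be illustrated with a toy example. This is not Backblaze’s actual client code, just a sketch of network-bound parallelism using a thread pool:

```python
# Toy illustration of parallel uploads; not Backblaze's client code.
# Network-bound work barely uses the CPU, so 30 threads let 30
# bundles be "in flight" at once instead of one after another.
import time
from concurrent.futures import ThreadPoolExecutor

def upload_bundle(bundle_id):
    time.sleep(0.1)  # stand-in for the time a network transfer takes
    return bundle_id

bundles = list(range(30))

start = time.time()
with ThreadPoolExecutor(max_workers=30) as pool:
    done = list(pool.map(upload_bundle, bundles))
elapsed = time.time() - start

# All 30 simulated uploads overlap, so this finishes in roughly
# 0.1 seconds rather than the ~3 seconds a serial loop would take.
print(len(done), elapsed < 1.0)
```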

Efficiency

Logging and system resource usage have been streamlined so Backblaze continues to be nearly invisible on your computer.

Network Management

We’re not big fans of data caps here at Backblaze, and one bit of feedback we’ve received over the last year or so was that people were blowing past their ISP’s monthly bandwidth allotment while backing up using their hotspot or mobile device-tethered internet connection. With that in mind, we’ve added a blocklist feature so you can choose to prevent backups from occurring while you are connected to specific Wi-Fi networks of your choosing. Backblaze will still transmit little bits of data (we call them heartbeats) to let us know your computer is still active, but no backups will be transmitted. Learn more about Network Management!
Block chosen WiFi Networks

Mobile Overhaul

Increased File Download Size

In the spirit of our increased maximum Restore by Mail hard drive and flash drive sizes, we’ve also increased the maximum size for downloads on our iOS and Android mobile apps. You can now download larger files, but keep in mind that your phone or tablet needs to have space available to hold them!

Security Enhancements

We’ve spent the last few months enhancing our sign-in security choices, and with the newest versions of our mobile apps, we’ve added 2FV via TOTP, biometric login, and SSO support.

Ease Of Use

We’ve cleaned up the mobile apps and made them a bit more intuitive to enable faster navigation and increased speed for browsing and downloading files.
V6 -- Increased File Download Size and Security Enhancements

SSO Support with Google

We’re rolling out SSO support for Gmail. Our Backblaze Groups have had SSO support for G Suite businesses for a few months, and now everyone can use this alternate sign-in method. You can enable SSO login from the My Settings page in your account and we’ll change your login preferences to SSO with the Gmail address associated with your account. New accounts can also be created using SSO on account creation. Learn more about Enabling SSO!

Backblaze 6.0 Available: January 17th, 2019

We will be slowly auto-updating all users in the coming weeks. To update now:

This version is now the default download on www.backblaze.com.

Want to Learn More? January 30th, 2019 at 11am PT

Want to learn more? Join Yev on a webinar where he’ll go over version 6.0 features and answer viewer questions! The webinar will be available on BrightTALK (registration is required), and you can sign up by visiting the Backblaze BrightTALK channel.

We hope you enjoy Backblaze Cloud Backup v6.0!

The post Backblaze Cloud Backup v6.0: Larger Longer Faster Better appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Much Photo & Video Data Do You Have Stored?

Post Syndicated from Jim Goldstein original https://www.backblaze.com/blog/how-much-photo-video-data-do-you-have-stored/

How Much Photo and Video Data Do You Have?

Backblaze’s Director of Marketing Operations, Jim, is not just a marketing wizard, he’s worked as a professional photographer and run marketing for a gear rental business. He knows a lot of photographers. We thought that our readers would be interested in the results of an informal poll he recently conducted among his media friends about the amount of media data they store. You’re invited to contribute to the poll, as well!

— Editor

I asked my circle of professional and amateur photographer friends how much digital media data they have stored. It was a quick survey, and not in any way scientific, but it did show the range of data use by photographers and videographers.

Jim's media data storage poll

I received 64 responses. The answers ranged from less than 5 TB (17 users) to 2 petabytes (1 user). The most popular response was 10-19 TB (18 users). Here are the results.

Jim's digital media storage poll results

How Much Digital Media Do You Have Stored?

I wondered if the results would be similar if I expanded our survey to a wider audience.

The poll below replicates what I asked of my circle of professional and non-professional photographer and videographer friends. The poll results will be updated in real-time. I ask that you respond only once.

Backblaze is interested in the results as it will help us write blog articles that will be useful to our readership, and also offer cloud services suitable for the needs of our users. Please feel free to ask questions in the comments about cloud backup and storage, and about our products Backblaze Backup and Backblaze B2 Cloud Storage.

I’m anxious to see the results.

Our Poll — Please Vote!

How much photo/video data do you have in total (TB)?

Thanks for participating in the poll. If you’d like to provide more details about the data you store and how you do it, we’d love to hear from you in the comments.

The post How Much Photo & Video Data Do You Have Stored? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Ever Wish You Had a Backup Brain? The Mars Rover Has One

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/mars-rover-backup-brain/

Mars Curiosity Rover at JPL in Pasadena

Have you ever had one of those days when even a second cup of coffee can’t jump-start your thinking and you just wished you had another brain you could switch to? If you’re the Mars Curiosity Rover, you do.

A recent glitch in its main computer required the Curiosity Rover team at NASA’s Jet Propulsion Laboratory (JPL) to switch to another computer in the rover while they worked to resolve problems with its main computer. The problem started around September 15 with the rover “failing to store science and some key engineering data,” according to NASA. The rover continued to send limited engineering data stored in short-term memory when it connected to a relay orbiter — it was otherwise healthy and receiving commands. But whatever was preventing Curiosity from storing science data in long-term memory was also preventing the storage of the rover’s event records, a journal of all its actions that engineers need in order to make a diagnosis. The computer swap allowed data and event records to be stored on the Curiosity‘s other computer, improving the rover’s operations and helping the engineers diagnose the problem.

Tweet from Mars Curiosity Rover @MarsCuriosity on October 3, 2018

Two Brains Are Better Than One

Like most spacecraft, Curiosity is outfitted with twin computers for redundancy in case any problems arise with the main one. Curiosity‘s paired computers are called Side-A and Side-B. The rover began its stay on Mars in August of 2012 using Side-A but switched to Side-B in February of 2013 when a problem developed in the computer’s flash memory that caused it to continuously reboot in a loop. Engineers working from 33.9 million miles away on Earth were eventually able to get the Side-A computer back in working order. That’s the computer Curiosity switched back to this past October while engineers continued to investigate the memory errors in the Side-B machine.

Curiosity continues to operate using its Side-A computer. According to Steven Lee, Curiosity‘s deputy project manager at JPL, “At this point, we’re confident we’ll be getting back to full operations, but it’s too early to say how soon. It’s certainly possible to run the mission on the Side-A computer if we really need to, but our plan is to switch back to Side-B as soon as we can fix the problem to utilize its larger memory size.”

Tweet from @MarsCuriosity on October 17, 2018

The computer problems haven’t prevented Curiosity from continuing to pursue its mission objectives, which include an investigation of the Martian climate and geology; assessment of whether the selected field site inside Gale Crater has ever offered environmental conditions favorable for microbial life, including investigation of the role of water; and planetary habitability studies in preparation for human exploration.

Inside Curiosity’s Brains

Even though Curiosity‘s computers are specialized for space use, the circuit board and operating system will be familiar to many. The CPU is a RAD750, a version of the IBM PowerPC 750, which was used in many computers from Apple, including the original iMac. The datasheet for the RAD750 states that the processor, “is the best space microprocessor available today by any selection criterion — performance, cost, availability, or flight heritage.”

RAD750 radiation-hardened PowerPC space microprocessor

On-board memory includes 256 MB of DRAM and 2 GB of flash memory (roughly eight times as much as the rovers Spirit and Opportunity had), both with error detection and correction, plus 256 kB of EEPROM. The microprocessor operates at up to 200 megahertz, 10 times the speed of the earlier microprocessors in Spirit and Opportunity.

Two BAE Systems RAD750 single board computers as used aboard the Curiosity rover

For Curiosity‘s software, NASA stuck to proven solutions, selecting the VxWorks operating system. VxWorks, developed by Wind River Systems, is a real-time operating system used in a huge number of embedded systems. The previous Mars rovers (Sojourner, Spirit, Opportunity), the Mars Reconnaissance Orbiter, and the SpaceX Dragon spacecraft all use VxWorks. VxWorks also powers many Earth-bound devices and vehicles, including BMW’s iDrive, the Apache Longbow helicopter, and the Apple AirPort Extreme and Linksys WRT54G routers.

Shortly after landing on Mars, on August 8, 2012, NASA Mission Control began upgrading the rover’s dual computers by deleting the entry-descent-landing software, then uploading and installing the surface operation software. The switchover to the new software was completed by August 15.

Note: some of the software developed for the rovers is available from NASA on GitHub.

The Right Stuff for Space Exploration

It might sound like these units resemble what we use every day at home or in offices, but they are designed to withstand the harsh environments encountered by satellites and space exploration vehicles. The RAD750 can withstand temperatures between −55°C and +70°C and radiation levels up to 1,000 gray (a gray is defined as the absorption of one joule of radiation energy per kilogram of matter). Safely protected within Curiosity, the temperature and radiation should remain well below these levels.

The units are priced differently than their cousins on Earth, too — in 2002, the RAD750 microprocessor was listed at $200,000, quite a bit more than the PowerPC used at the time in iMacs, which sold in quantity for about $520 each. The high price of the RAD750 is mainly due to the radiation-hardening revisions to the PowerPC 750 architecture, manufacturing costs, stringent quality control requirements, and extended testing of each processor chip produced.

Each of the pair of rover computers is inside a module called The Rover Compute Element (RCE). The RCEs are protected from exposure in the middle of the rover body.

Curiosity Rover Compute Elements (highlighted)

Curiosity Rover Compute Elements (highlighted)

Sojourner, Spirit, Opportunity, Curiosity, and Beyond

The Mars Rover family, clockwise from bottom left: Sojourner (1997), Spirit/Opportunity (2004), Curiosity (2012)

Curiosity has had a long sojourn on Mars since landing on Aeolis Palus in Gale Crater on August 6, 2012, following the success of the earlier Mars explorers Sojourner, Spirit, and Opportunity. Though it started out with only a two-year mission, Curiosity proved durable enough that in December 2012 NASA extended its mission indefinitely.

Curiosity‘s design will serve as the basis for the planned Mars 2020 rover, which is scheduled to launch in July/August of 2020. The new rover will have a few upgrades, however, including more sophisticated hardware and new instruments to conduct geological assessments of the rover’s landing site, which will determine the potential habitability of the environment and directly search for signs of ancient Martian life.

We don’t have to wait that long for another exciting Mars landing like we had with Curiosity, however. NASA InSight is scheduled to land on Mars in less than two weeks, on November 26, 2018. Following that, ExoMars and NASA Mars 2020 will head to Mars in 2020 to continue a search for evidence of existing and past life.

2018 NASA InSight Mission: InSight is a robotic explorer designed to study Mars’ crust, mantle, and core. InSight will land on Mars on November 26, 2018. NASA InSight
2020 ESA ExoMars Rover Mission: ExoMars, a joint mission of the European Space Agency and the Russian space agency Roscosmos, will search for evidence of life on Mars. NASA is providing critical elements for the astrobiology instrument on the rover. ESA ExoMars Rover
NASA 2020 Rover Mission: Mars 2020 seeks to answer key questions about the potential for life on Mars. It will cache samples for possible future return to Earth. Mars 2020 Rover

Tweet from @NASAJPL on Nov 12 re InSight Mars landing on November 26, 2018


A Backup is a Good Idea on Both Earth and Mars

It turns out that having a backup doesn’t apply just to data or computing. Sometimes, a second brain can come in handy, too, especially when you’re on Mars.

Do you follow Curiosity‘s advice to always have redundant systems? Have you ever switched to using your Side-A brain? Would you like to go to Mars? (I would.) Let’s hear your thoughts in the comments.

Don’t forget to catch InSight’s landing on Mars on Monday, November 26. We’ll be watching!

The post Ever Wish You Had a Backup Brain? The Mars Rover Has One appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze’s Custom Data Center PDU

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backblazes-custom-data-center-pdu/

Backblaze PDU
When Jon needed to open a Backblaze Storage Pod for maintenance in our Phoenix data center, it wasn’t as straightforward as one might think. With a steel case, 60 hard drives, backplanes, a pair of power supplies and other components, each pod can weigh up to 150 pounds.

However, there was an even bigger challenge than the pod’s weight. A Storage Pod is divided into two main sections, the drive section and the processing section, each with separate access panels. To replace a drive, you need to open the access panel at the front, which requires sliding the Storage Pod out of the front of the cabinet. To replace a power supply or perhaps reseat a SATA card or cable, you’d prefer to slide the pod out the back of the cabinet because that gives you better access to the panel at the rear of the pod.

Backblaze’s 6.0 Storage Pod with 60 drives (front)

The problem was that doing so was difficult, if not impossible, with all the power cables connecting the pods to the power distribution unit (PDU) at the rear of the cabinet. That left Jon with only one choice: slide the pod out of the front of the cabinet even when he wanted to access the rear access panel, which took more time and often required two people.

Identifying the Problem — the PDU

As Backblaze’s Phoenix data center site manager, Jon realized that the job would be much easier if he could change one component: the PDU. The Phoenix data center used vertically mounted PDUs at the back of the cabinets that ran all the way from the top to the bottom of the cabinet. All the cables from the ten pods to the PDU blocked access to the backs of the pods in the cabinet.

Vertically-mounted PDU blocking rear access to Storage Pods

What’s a PDU?

A power distribution unit (PDU) is a device fitted with multiple outputs designed to distribute electric power to racks of computers and networking equipment located within a data center. Some PDUs have additional capabilities, including power filtering, intelligent load balancing, and remote monitoring and control by LAN or SNMP.

Data center IT managers remotely monitor PDU performance to ensure continual service, improve efficiency, and plan for growth.

Jon knew that the vertical PDU forced his team to spend more time than needed getting into the pods for service. If he could find a better option, everyone on the team would have more time to focus on other data center matters, like setting up more cabinets to fill with Storage Pods and customers’ data.

The Backblaze Storage Pods and Cabinets

Backblaze’s Storage Pod racks are standard full size data center cabinets that are 42 rack units (U or RU) high — a rack unit is 44.50 millimeters (1.75 inches). Equipment that fits into these racks is typically 1U, 2U, 3U, or 4U high. Backblaze’s Storage Pods are 4U high, so ten of them can fit into a single rack. With a small switch at the top occupying one of those rack units, that leaves just 1U of space.
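The space budget described above is easy to sanity-check. This tiny Python sketch just redoes the arithmetic from the paragraph (all numbers come straight from the post):

```python
# Rack budget from the post: a 42U cabinet holding 4U Storage Pods.
RACK_UNITS = 42   # full-size data center cabinet
POD_U = 4         # each Backblaze Storage Pod
SWITCH_U = 1      # small switch at the top of the cabinet

pods = 10
used = pods * POD_U + SWITCH_U
free = RACK_UNITS - used
print(f"{free}U free")  # 1U free in the cabinet
```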

If Jon could use that 1U of space in the cabinet for a horizontally-mounted PDU, he could get rid of the vertically-mounted PDU that was causing the access problem. The PDU had more power outlets than needed, anyway, as well as extra monitoring circuitry that wasn’t required for Zabbix, the software monitoring suite we use to track the health of all the components in our data centers.

The vertically mounted PDU was more complex and expensive than necessary for the task — two factors that go against Backblaze’s philosophy of keeping things as simple and inexpensive as possible in order to keep costs low for our customers. (For a bit of history on this, see this post on how Backblaze got started.)

A Better PDU

Jon made a list of the requirements he wanted in a PDU that would fit Backblaze’s needs. It didn’t seem to him that it would be that hard to find one ready to drop into the cabinet.

Jon’s PDU Requirements

  • 1 rack unit high
  • 3-phase power
  • Horizontally mounted
  • Metering to remotely monitor circuit loads
  • 12 C13 power outlets
    • 10 outlets for Storage Pods
    • 1 outlet for small switch
    • 1 outlet for crash cart to service the pods

Finding a PDU that fit the list turned out to be harder than he expected. Jon searched to see if anyone made a 3-phase 1U horizontal mount PDU, and the only one he could find didn’t have the right type of power outlets (C13) or monitoring circuitry.

The only remaining option was to design a custom PDU. Jon remembered that he and Larry, Backblaze’s data center manager, had run into a PDU manufacturer, Geist, at an IT trade show in San Jose. Jon contacted our vendor, Mirapath, with whom he had successfully worked on other projects for Backblaze. Mirapath got the project rolling with Geist, worked out all the kinks, and was instrumental in bringing the project to completion.

The Custom PDU

The result is a custom PDU that fits Jon’s requirements. The PDU fits horizontally in the center-back of the cabinets and doesn’t block access from the back of the cabinet. It takes up only 1U of cabinet space, which allows Jon to put ten Storage Pods in each cabinet — five above the PDU in the center of the cabinet and five below. It has the correct type (C13) and number (12) of power outlets, which support the ten pods, one switch, and the crash cart. It also contains the power monitoring circuitry needed to collect data for Zabbix.


Custom PDU

Custom PDU (back)

Custom PDU display

Guido, A Valued Member of Backblaze’s Operations Team


Sometimes we do have to completely remove heavy pods from a cabinet, but a special member of the team helps with that challenge. Our server lift, Guido, has no trouble lifting and moving 150 pound Storage Pods and IT gear when needed.

Our server lift, Guido (on the right), helping Joe with the heavy lifting in our Phoenix data center

The custom PDU enables Jon and his team to access the Storage Pods from the back of the cabinet. Jon estimates that the new PDU enables him to complete a boot drive replacement in a Storage Pod in half the time it used to take with the previous PDU, and he doesn’t need the help of our server lift Guido for the job. That saved time adds up, especially when you need to replace boot drives in forty Storage Pods, as Jon did recently.

Custom PDU in a cabinet between two Storage Pods

Storage Pod open at rear of cabinet

Storage Pod open at top

We Value Our Culture of Doing Things Differently

If you’re a regular reader of this blog, you’re already familiar with Backblaze’s history. Backblaze’s founders started the company because they thought people should back up their computers and it could be done at $5 per month. The problem was that no storage system available at the time would enable a sustainable business at that price. They did what they had to do: designed and built their own solution. The Backblaze Storage Pods, vault architecture, and Reed-Solomon encoding enabled a globally scalable storage system. After eleven years, three data centers, and seven hundred petabytes of customer data, we’re still able to sustainably offer the most affordable storage available anywhere.

Continuing the Backblaze Tradition

Hardworking innovators like Jon and our operations team find new ways every day to make our operations more efficient. This allows us to continuously reduce our costs while driving our growing, global footprint.

Thanks Jon. Well done!

Jon with two Backblaze cabinets, each with 10 Storage Pods, one switch, and one custom PDU


Editor’s Note:  Anyone interested in obtaining information about availability and pricing for the PDU described above can contact Mirapath.

The post Backblaze’s Custom Data Center PDU appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

New for Business Backup: Single Sign-On (SSO)

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/new-for-business-backup-single-sign-on-sso/

Single Sign-On (SSO) for Backblaze
In 2017, we relaunched our Business Backup platform with a focus on providing administrators better tools for managing their teams. We’ve been busy enhancing the platform since the addition of Groups and thought we’d take a moment to review some of our latest enhancements.

The most recent is our support of Single Sign-On (SSO) using G Suite by Google Cloud. This has been one of our most requested features and we’re happy to be able to launch it today.

SSO Support for Groups

Effective immediately, SSO via G Suite is available for all Groups. There is no fee for turning on SSO or for creating a Group.

We created our Business Backup platform to help make managing your team’s backups easier. Whether your team is inside your household or a globally distributed workforce like charity: water (or somewhere in between), we want to make the process of getting your data backed up astonishingly easy and affordable.

As your team uses more and more software-based solutions, the challenge of managing all the logins gets more difficult. And as your team grows, so do the security issues. In addition, the simple act of administering the services can get complex. Administrators want to know they can onboard and offboard easily and efficiently.

SSO can be enabled for a specific Group, a collection of Groups, or all of your Groups. The flexibility, coupled with the ability to control access privileges at a Group level, provides administrators more tools to accomplish their goals.

You can enable SSO for your Groups inside of your preferences panel where you control all of your Group level customizations.

Groups Preferences Pane — Enable SSO

For more detail & FAQs, please visit our Knowledge Base article on Enabling Single Sign-On (SSO) In Backblaze Groups.

With the addition of SSO, we provide one more tool to manage your teams as you wish. With this rollout we are supporting G Suite based credentials. In Q1 of 2019, we’ll add support for organizations using Office 365 credentials.

Mass Install with Microsoft Group Policy & SCCM

Another set of challenges for administrators is the deployment of software. Most users would prefer that their IT team simply “took care of everything.” For the user, a world in which your machine is always updated and working flawlessly sounds glorious. But anyone who’s been in the IT admin role knows that things aren’t quite that simple.

Many administrators seek what’s known as Mass Silent Install (MSI). This allows the deployment of software without any end user interaction. Some do the scripting themselves, while others use Remote Monitoring and Management (RMM) tools. We recently added SCCM support to our existing list of MSI options.

We are constantly striving to make getting your data backed up as easy as possible, and adding SSO is another strong step in that direction.

The post New for Business Backup: Single Sign-On (SSO) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Credential Stuffing Attacks: What They Are and How to Protect Yourself

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/how-to-protect-yourself-from-credential-stuffing-attacks/

a hacker wearing a hoodie running a credential stuffing attack
While we often see warnings about password best practices (different passwords for different services, change passwords frequently, 123456 is never a good password), we rarely get into why we need to do these things. Incremental security comes at a cost: usually convenience. Every individual must decide her personal tradeoffs. Today, we want to share one of the ways malicious actors try to take advantage of online services and poorly-crafted passwords: credential stuffing attacks.

What is a Credential Stuffing Attack?

A credential stuffing attack occurs when an attacker takes a set of stolen user credentials and automates the entry of those credentials into popular websites. Let’s unpack that:

Credentials: A user name and password combination used for logging in to service x.

Breached credentials: A list of user name/password combinations that have become public in some form. For example, an enterprising cybercriminal might compile credentials leaked from Adobe, Coachella, Dropbox, LinkedIn, Ticketfly, Yahoo, and other sites that have exposed personal information for over 500 million accounts.

Automated entry: The cybercriminal goes to the login page on service x and systematically cycles through each user name and password combination, hoping to get lucky enough to find a match. Some go even further by taking one email address and cycling through all the passwords in the database — the logic being that users tend to come up with similar passwords, such as 123456 or Pa$$word$.

What is Backblaze Doing to Defend Against Credential Stuffing Attacks?

Every service of scale, including Backblaze, has defense mechanisms to inhibit this sort of activity. For instance, when you see “too many attempts, try again later” on a popular site, what is likely happening behind the scenes is something called rate limiting. This is when a web page has a rule akin to: if there are x login attempts in y seconds, it’s probably a robot; we should cut it off.

The problem is balancing security with the user experience. If we limited every account to two login attempts per hour, that would hamstring the efforts of any automated attack. However, it would also impede the efforts of legitimate users who made a simple typo when they were entering their password.

Revealing our exact rate limiting policies would pose a security risk to our users, allowing the attackers to fine-tune an attack. That said, we do have rate limiting, we do constantly monitor our systems, and we also have algorithms and humans that will adjust our rate limiting depending on a number of environmental variables that our security team monitors.
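As a rough illustration of the general idea (not Backblaze’s actual implementation, whose policies are deliberately unpublished), a sliding-window rate limiter can be sketched in a few lines of Python:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_attempts login attempts per window seconds per account."""

    def __init__(self, max_attempts=5, window=60.0):
        self.max_attempts = max_attempts
        self.window = window
        self.attempts = defaultdict(deque)  # account -> timestamps of recent attempts

    def allow(self, account, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[account]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # "too many attempts, try again later"
        q.append(now)
        return True

limiter = RateLimiter(max_attempts=3, window=60.0)
results = [limiter.allow("alice@example.com", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

A production service would track attempts per IP address as well as per account, and would back the counters with a shared store rather than process memory, but the tradeoff described above is the same: a tighter window blocks robots sooner and locks out fumbling humans sooner.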

The Three Steps We Tell Everyone In Our Family to Take

With the large number of data breaches over the past few years, it’s more likely than not that you’ve been exposed. If you’ve been using the same email and password combination for three years and have a Comcast account that old, you could be exposed. It’s the same story for Ticketfly accounts older than May of 2018. We mention these not to single out any particular service, but to point out how prevalent these things are.

If you have different passwords for every website, however, you effectively protect yourself from being hacked as a result of leaks like these. While that’s true, trying to remember and manage all those different combinations is cumbersome.

How to Fight Back Against Credential Stuffing

Protecting yourself from credential stuffing attacks can be as simple as adopting the following three tactics:

1 — Monitor Your Email Addresses

Troy Hunt runs a phenomenal service called haveibeenpwned.com. He tracks major breaches and will let you know if your credentials were included in them. It’s free, although you can donate to the service. Signing up is one of the easiest ways to take control of your own security.
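For the curious, the Pwned Passwords side of the service exposes a k-anonymity range API: your password is hashed locally with SHA-1 and only the first five hex characters are sent, so the password itself never leaves your machine. A minimal Python sketch of the client-side hashing (the network call itself is omitted here; see the API documentation for the response format):

```python
import hashlib

def pwned_prefix(password: str):
    """Split a password's SHA-1 hex digest for a k-anonymity range query.

    Only the 5-character prefix is sent to the API
    (GET https://api.pwnedpasswords.com/range/<prefix>); the service
    returns every known hash suffix for that prefix, and the client
    checks locally whether its own suffix appears in the response.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_prefix("password")
print(prefix)  # 5BAA6 -- the only part that would be transmitted
```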

2 — Use Two Factor Verification

2FV, as it’s commonly called, is when you are asked for an incremental authentication — usually numbers generated by a dedicated app (including a password manager) — after you enter your password. Backblaze offers it as a complimentary service as do many other service providers. 2FV is a good defense mechanism against credential stuffing.

3 — Use a Password Manager

We highly recommend using a password manager such as Bitwarden, LastPass, or 1Password. Those services can help create new account credentials for every website you frequent, and help you manage those credentials when you visit those sites. Many people at Backblaze use these services and are quite happy with them.

One of the advantages of password managers is that they let you create passwords you can’t possibly remember. You just need to remember the master password to your password manager; it does the rest. That means you can set a complicated password for any service. Each of these password managers integrates well into all major browsers and into Android and iOS devices. Not only will a password manager make your life more secure, it makes your login experience much faster.

The Best Protection Against Credential Stuffing Is…

Of course, the best protection in the world is never being exposed in the first place. We encourage everyone to do business with vendors that can articulate how they protect their customers and have a sustained investment in doing so. At Backblaze, we’ve outlined our approach to security on our website.

All that said, the reality is we’ve all created accounts with service providers that may not have the best security practices. And even a website with the best intentions can be felled by a skilled attacker, which is why the need to protect ourselves and follow credential best practices is very real. We hope, and strongly recommend, that everyone follow the three steps mentioned here.

If you have other tips for the community, please feel free to share in the comments below!

The post Credential Stuffing Attacks: What They Are and How to Protect Yourself appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Stories of Camera and Data Catastrophes

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/stories-of-camera-and-data-catastrophes/

Salt water damaged camera

This is the third post in a series of post exchanges with our friends at Lensrentals.com, a popular online site for renting photography, videography, and lighting equipment. Seeing as how Halloween is just a few days away, we thought it appropriate to offer some scary tales of camera and data catastrophes. Enjoy.

Note: You can read all of Lensrentals’ posts on our blog. Find all of our posts on the Lensrentals blog.

— Editor

Stories of Camera and Data Catastrophes
by Zach Sutton, Editor-in-chief, Lensrentals.com

As one of the largest photo and video gear rental companies in the world, Lensrentals.com ships out thousands of pieces of gear each day. It would be impossible to expect that all of our gear would return to us in the same condition it was in when we rented it out. More often than not, the damage is the result of things being dropped, but now and then some pretty interesting things happen to the gear we rent out.

We have an incredible customer base, and when this kind of damage happens, they’re more than happy to pay the necessary repair fees. Stuff happens, mistakes are made, and we have a full-service repair center to keep the costs low. And while we have insurance policies for accidental damage such as drops, dings, and other accidents, it doesn’t cover neglect, which accounts for the stories we’re going to share with you below. Let’s take a look at some of our more exciting camera and data catastrophe stories.

Camera Data Catastrophes

Data catastrophes happen more often than anything else, but they aren’t exactly the most exciting stories we’ve collected over the years. They usually go the same way: someone rents a memory card or SSD from us, uses it, then sends it back without pulling the footage off of it. When we receive gear back into our warehouse, we inspect and format all the media. If you realize your mistake and call or email us before that happens, we can usually put a hold on the media and ship it back to you so you can pull the data off of it. If we’ve already formatted the media, we will attempt a recovery using software such as TestDisk and PhotoRec and let you know whether we had any success. We then give you the option of renting the product again to have it shipped to you so you can pull the files.

The Salty Sony A7sII

A common issue we run into — and have addressed a number of times on our blog — is the dubious term “weather resistant.” The term is often used by equipment marketers, but it doesn’t provide the protection that people might assume from its name.

One example of that came last year, when we received a nonfunctioning Sony a7sII back from the California coast and had to disassemble it to determine what was wrong. Upon opening the camera, it was quite apparent that it had been submerged in salt water. Water isn’t good for electronics, but the real killer is impurities, such as salt. Salt builds up on electronics, conducts electricity, and will fry them in no time when power is applied. So, once we saw the salt corrosion, we knew that the camera was irreparable. Still, we disassembled it fully, if for no other reason than to provide evidence to others of what salt water can do to your electronics. You can read more about this and see the full breakdown in our post, About Getting Your Camera Wet… Teardown of a Salty Sony A7sII.

Sony A7sII disassembled into parts Sony A7sII salt water damage

The Color Run Cleanup

Color runs are 5K running events that happen all over the world. If you haven’t seen one, participants and spectators toss colorful powders throughout the run, so that by the time the runners reach the finish line, they’re covered head to toe in colorful powder. This event sounds like a lot of fun, and one would naturally want to take photos of the spectacle, but any camera gear used for the event will definitely require a deep cleaning.

Color run damage to camera lens

Color run damage to camera

We’ve asked our clients multiple times not to take our cameras to color runs, but each year we get another system back that is covered in pink, green, and blue dust. The dust used for these events is incredibly fine, making it easy to get into every nook and cranny within the camera body and lenses. This requires the gear to be completely disassembled, cleaned, and reassembled. We have two photos in this post of the results of a color run, but you can view more on the post we did about Color runs back in 2013, How to Ruin Your (or Our) Gear in 5 Minutes (Without Water).

The Eclipse That Killed Cameras

About a year ago, we had the incredible phenomenon here in the United States of a total solar eclipse. It was the first total solar eclipse to occur in the continental United States since 1979, hence a pretty exciting moment for all of us, but we braced ourselves for the damage it would do to cameras.

Eclipse camera lens damage

For weeks leading up to the event, we sent out fliers with our rentals that encouraged people to not only wear eye protection, but to protect their camera lenses with high-density ND filters. Despite that, in the days following the eclipse, we had gear coming back to us with aperture blades melted and holes burnt into sensors.

Eclipse camera damage

Eclipse camera shutter damage

As one would expect, it’s not a good idea to point your camera directly at the sun, especially for long periods of time. Most of the damage done from the eclipse was caused by people who had set up their camera and lens on a tripod pointing at the sun while waiting for the eclipse. This prolonged exposure causes a lot of heat to build up and will eventually start burning through apertures, shutters, sensors and anything else in its way. Not only do we recommend ND filters for the front of your lens, but also black cards to stop light from entering the camera until it’s go time for the total eclipse. You can read about the whole experience in our blog post on the topic, Rental Camera Gear Destroyed by the Solar Eclipse of 2017.

Damage from Burning Man

While we have countless stories of gear being destroyed, we figured it’d be best to just leave you with this one. Burning Man is an annual event that takes place in the deserts of Nevada. Touted as an art installation and experience, tens of thousands of people spend a few days living in the remote desert with fellow Burners to create and participate in a wide range of activities. And where there is a desert, there are always sand, dust, and dust storms.

Burning Man camera damage

Burning Man dust damage

One might think that sand is the biggest nuisance for camera gear at Burning Man, but it’s actually the fine dust that the wind picks up. One of the more interesting phenomena during Burning Man is the dust storm. Dust storms occur with little warning, kicking up the fine dust buried within the sand, which can quickly damage your electronics, your skin, and your lungs. Because it is so fine, it easily works its way into your cameras and lenses.

Burning Man damage to Nikon camera

While Burning Man doesn’t always totally destroy gear, it does result in a lot of cleaning and disassembling of gear after the event. This takes time and patience and costs the customer money. While there are stories of people who bring camera gear to Burning Man wrapped in nothing more than plastic and gaffer tape, we don’t recommend that for good gear. It’s best to just leave your camera at home, or buy an old camera for cheap to document the week. To see more of what can happen to gear at Burning Man, you can read our blog post on the topic, Please, Don’t Take Our Photography and Video Gear to Burning Man.

Those are just a few stories of some of the data and camera catastrophes that we’ve experienced over the years. We hope this serves as a warning to those who might be considering putting their gear through some of the experiences above and hopefully sway them against it. If you have some of your own stories on data or gear catastrophes, feel free to share them below in the comments.

— Zach Sutton, Editor-in-chief, Lensrentals.com

The post Stories of Camera and Data Catastrophes appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Migrating from CrashPlan: Arq and B2

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/migrating-crashplan-arq-backup-b2/

Arq and Backblaze B2 logos on a computer screen

Many ex-CrashPlan for Home users have moved to Backblaze over the last year. We gave them a reliable, set-and-forget backup experience for the amazing price of $5/month per computer. Yet some people wanted features such as network share backup and CrashPlan’s rollback policy, and Arq Backup can provide those capabilities. So we asked Stefan Reitshamer of Arq to tell us about his solution.

— Andy

Migrating from CrashPlan
by Stefan Reitshamer, Founder, Arq Backup

CrashPlan for Home is gone — no more backups to CrashPlan and no more ability to restore from your old backups. Time to find an alternative!

Arq + Backblaze B2 = CrashPlan Home

If you’re looking for many of the same features as CrashPlan plus affordable storage, Arq + B2 cloud storage is a great option. MacWorld’s review of Arq called it “more reliable and easier to use than CrashPlan.”

Just like CrashPlan for Home, Arq lets you choose your own encryption password. Everything is encrypted before it leaves your computer, with a password that only you know.

Also just like CrashPlan for Home, Arq keeps all backups forever by default. Optionally you can tell it to “thin” your backup records from hourly to daily to weekly as they age, similar to the way Time Machine does it. And/or you can set a budget and Arq will periodically delete the oldest backup records to keep your costs under control.

With Arq you can back up whatever you want — no limits. Back up your external hard drives, network shares, etc. Arq won’t delete backups of an external drive no matter how long it’s been since you’ve connected it to your computer.

The license for Arq is a one-time cost and, if you use multiple Macs and/or PCs, one license covers all of them. B2 storage costs a fraction of what other large-scale cloud storage providers charge — just $0.005/GB per month, and the first 10 GB is free. To put that in context, that’s one-quarter the price of Amazon S3. The savings become more pronounced if/when you need to restore your files. B2 charges a flat rate of $0.01/GB for data download, and you get 1 GB of downloads free every day. By contrast, Amazon S3 has tiered download pricing that starts at nine times that of B2.
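To make those numbers concrete, here's a small back-of-the-envelope calculation using the rates quoted above. This is a sketch only, using the rates and free tiers current as of this post:

```python
# B2 rates as quoted above (check current pricing before relying on these).
B2_STORAGE_PER_GB = 0.005    # $ per GB per month
B2_DOWNLOAD_PER_GB = 0.01    # $ per GB downloaded

def b2_monthly_storage_cost(gigabytes):
    """Monthly storage cost, ignoring the 10 GB free tier for simplicity."""
    return gigabytes * B2_STORAGE_PER_GB

def b2_restore_cost(gigabytes):
    """One-time cost to download a full restore (first 1 GB per day is free)."""
    return max(gigabytes - 1, 0) * B2_DOWNLOAD_PER_GB

backup_gb = 500
print(f"Storing {backup_gb} GB: ${b2_monthly_storage_cost(backup_gb):.2f}/month")
print(f"Restoring {backup_gb} GB: ${b2_restore_cost(backup_gb):.2f} one time")
```

For a 500 GB backup, that works out to $2.50/month of storage and about $4.99 to download the whole thing once.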

Arq’s Advanced Features

Arq is a mature product with plenty of advanced features:

  • You can tell Arq to pause backups whenever you’re on battery.
  • You can tell Arq to pause backups during a certain time window every day.
  • You can tell Arq to keep your computer awake until it finishes the backup.
  • You can restrict which Wi-Fi networks and which network interfaces Arq uses for backup.
  • You can restrict how much bandwidth Arq uses when backing up.
  • You can configure Arq to send you email every time it finishes backing up, or only if there were errors during backup.
  • You can configure Arq to run a script before and/or after backup.
  • You can configure Arq to back up to multiple B2 accounts if you wish. Back up different folders to different B2 accounts, configure different schedules for each B2 account, etc.

Arq is fully compatible with B2. You can configure it with your B2 account ID and master application key, or you can use B2’s new application keys feature to restrict which bucket(s) Arq can write to.
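As a rough sketch of what that restriction looks like in practice, here's how a bucket-limited application key could be created with Backblaze's `b2` command-line tool. The bucket name, key name, and capability list below are hypothetical, and the command and flag names have varied across CLI versions, so treat this as an outline and check `b2 create-key --help` on your install:

```shell
# Authorize the CLI with your master credentials first.
b2 authorize-account <accountId> <masterApplicationKey>

# Create a key that can only touch one bucket, with just the
# capabilities a backup tool like Arq needs (names are illustrative).
b2 create-key --bucket arq-backups arq-key \
    listBuckets,listFiles,readFiles,writeFiles,deleteFiles
```

Giving Arq this key instead of your master application key means a compromised backup machine can't reach anything else in your account.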

Privacy and Control

With Arq and B2 storage, you keep control of your data because it’s your B2 account and your encryption password — even if an attacker got access to the B2 data they wouldn’t be able to read your encrypted files. Your backups are stored in an open, documented format. There’s even an open-source restore tool.

The post Migrating from CrashPlan: Arq and B2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Disk Drive (HDD) vs Solid State Drive (SSD): What’s the Diff?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/hdd-versus-ssd-whats-the-diff/

whats the diff? SSD vs. HDD

HDDs and SSDs have changed in the two years since Peter Cohen wrote the original version of this post on March 8 of 2016. We thought it was time for an update. We hope you enjoy it.

— Editor

In This Corner: The Hard Disk Drive (HDD)

The traditional spinning hard drive has been a standard for many generations of personal computers. Constantly improving technology has enabled hard drive makers to pack in more storage capacity than ever, at a cost per gigabyte that still makes hard drives the best bang for the buck.

IBM Ramac

As sophisticated as they’ve become, hard drives have been around since 1956. The ones back then were two feet across and could store only a few megabytes of information, but technology has improved to the point where you can cram 10 terabytes into something about the same size as a kitchen sponge.

Inside a hard drive is something that looks more than a bit like an old record player: There’s a platter, or stacked platters, which spin around a central axis — a spindle — typically at about 5,400 to 7,200 revolutions per minute. Some hard drives built for performance work faster.

Hard drive exploded view

Information is written to and read from the drive by changing the magnetic fields on those spinning platters using an armature called a read-write head. Visually, it looks a bit like the arm of a record player, but instead of being equipped with a needle that runs in a physical groove on the record, the read-write head hovers slightly above the physical surface of the disk.

The two most common form factors for hard drives are 2.5-inch, common for laptops, and 3.5-inch, common for desktop machines. The size is standardized, which makes for easier repair and replacement when things go wrong.

The vast majority of drives in use today connect through a standard interface called Serial ATA (or SATA). Specialized storage systems sometimes use Serial Attached SCSI (SAS), Fibre Channel, or other exotic interfaces designed for special purposes.

Hard Disk Drives’ Cost Advantage

Proven technology that’s been in use for decades makes hard disk drives cheap — much cheaper per gigabyte than solid state drives. HDD storage can run as low as three cents per gigabyte. You don’t spend a lot, but you get lots of space. HDD makers continue to improve storage capacity while keeping costs low, so HDDs remain the choice of anyone looking for a lot of storage without spending a lot of money.

The downside is that HDDs can be power-hungry, generate noise, produce heat, and don’t work nearly as fast as SSDs. Perhaps the biggest difference is that HDDs, with all their similarities to record players, are ultimately mechanical devices. Over time, mechanical devices will wear out. It’s not a question of if, it’s a question of when.

HDD technology isn’t standing still, and price per unit stored has decreased dramatically. As we said in our post, HDD vs SSD: What Does the Future for Storage Hold? — Part 2, the cost per gigabyte for HDDs has decreased by two billion times in about 60 years.

HDD manufacturers have made dramatic advances in technology to keep storing more and more information on HD platters — referred to as areal density. As HDD manufacturers try to outdo each other, consumers have benefited from larger and larger drive sizes. One technique is to replace the air in drives with helium, which reduces friction and supports greater areal density. Another technology that should be available soon is heat-assisted magnetic recording (HAMR), which records magnetically using laser-thermal assistance and ultimately could lead to a 20 terabyte drive by 2019. See our post on HAMR by Seagate’s CTO Mark Re, What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

The continued competition and race to put more and more storage in the same familiar 3.5” HDD form factor means that it will be a relatively small, very high capacity choice for storage for many years to come.

In the Opposite Corner: The Solid State Drive (SSD)

Solid State Drives (SSDs) have become much more common in recent years. They’re standard issue across Apple’s laptop line: the MacBook, MacBook Pro, and MacBook Air all come standard with SSDs. So does the Mac Pro.

Inside an SSD

Solid state is industry shorthand for an integrated circuit, and that’s the key difference between an SSD and an HDD: there are no moving parts inside an SSD. Rather than disks, motors, and read/write heads, SSDs use flash memory — that is, computer chips that retain their information even when the power is turned off.

SSDs work in principle the same way the storage on your smartphone or tablet works. But the SSDs you find in today’s Macs and PCs work faster than the storage in your mobile device.

The mechanical nature of HDDs limits their overall performance. Hard drive makers work tirelessly to improve data transfer speeds and reduce latency and idle time, but there’s a finite amount they can do. SSDs provide a huge performance advantage over hard drives — they’re faster to start up, faster to shut down, and faster to transfer data.

A Range of SSD Form Factors

SSDs can be made smaller and use less power than hard drives. They also don’t make noise, and can be more reliable because they’re not mechanical. As a result, computers designed to use SSDs can be smaller, thinner, lighter and last much longer on a single battery charge than computers that use hard drives.

SSD conversion kit

Many SSD makers produce SSD mechanisms that are designed to be plug-and-play, drop-in replacements for 2.5-inch and 3.5-inch hard disk drives, because there are millions of existing computers (and many new computers still made with hard drives) that can benefit from the change. They’re equipped with the same SATA interface and power connector you might find on a hard drive.


Intel SSD DC P4500

A wide range of SSD form factors are now available. Memory sticks, once limited to a 128 MB maximum, now come in versions as large as 2 TB. They are used primarily in mobile devices where size and density are the primary factors, such as cameras, phones, drones, and so forth. Other high-density form factors are designed for data center applications, such as Intel’s 32 TB P4500. Resembling a standard 12-inch ruler, the Intel SSD DC P4500 has a 32 terabyte capacity. Stacking 64 extremely thin layers of 3D NAND, the P4500 is currently the world’s densest solid state drive. The price is not yet available, but given that the DC P4500 requires only one-tenth the power and just one-twentieth the space of traditional hard disk storage, once the price comes out of the stratosphere you can be sure that there will be a market for it.

Nimbus ExaDrive 100TB SSD

Earlier this year, Nimbus Data announced the ExaDrive D100 100TB SSD. This SSD by itself holds over twice as much data as Backblaze’s first Storage Pods. Nimbus Data has said that the drive will have pricing comparable to other business-grade SSDs “on a per terabyte basis.” That likely means a price in the tens of thousands of dollars.

SSD manufacturers are also chasing ways to store more data in ever-smaller form factors and at greater speeds. The familiar SSD that looks like a 2.5” HDD is starting to become less common. Given the very high speeds at which data can be read from and written to the memory chips inside SSDs, it’s natural that computer and storage designers want to take full advantage of that capability. Increasingly, storage is plugging directly into the computer’s system board, and in the process taking on new shapes.

Anand Lal Shimpi, anandtech.com -- http://www.anandtech.com/show/6293/ngff-ssds-putting-an-end-to-proprietary-ultrabook-ssd-form-factors

A size comparison of an mSATA SSD (left) and an M.2 2242 SSD (right)

Laptop makers adopted mSATA, and then the M.2 standard, which can be as small as a few squares of chocolate yet hold the same capacity as any 2.5” SATA SSD.

Another interface technology, NVM Express (NVMe), may start to move from servers in the data center to consumer laptops in the next few years. NVMe will push storage speeds in laptops and workstations even higher.

SSDs Fail Too

Just like hard drives, SSDs can wear out, though for different reasons. With hard drives, it’s often simply the mechanical reality of a spinning motor wearing down over time. Although there are no moving parts inside an SSD, each memory bank has a finite life expectancy — a limit on the number of times it can be written to and read from before it stops working. Logic built into the drives dynamically manages these operations to minimize problems and extend the drive’s life.

For practical purposes, most of us don’t need to worry about SSD longevity. An SSD you put in your computer today will likely outlast the computer. But it’s sobering to remember that even though SSDs are inherently more rugged than hard drives, they’re still prone to the same laws of entropy as everything else in the universe.
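To put that write limit in perspective, here's a rough back-of-the-envelope calculation. The TBW (terabytes written) endurance rating and the daily write volume below are hypothetical, chosen just to show the arithmetic rather than taken from any specific drive's spec sheet:

```python
def years_until_rated_endurance(tbw_rating_tb, gb_written_per_day):
    """Years of use before cumulative writes reach the drive's TBW rating."""
    tb_written_per_year = gb_written_per_day * 365 / 1000
    return tbw_rating_tb / tb_written_per_year

# e.g. a consumer SSD rated for 300 TBW, written 30 GB every single day:
years = years_until_rated_endurance(300, 30)
print(f"~{years:.0f} years of writing before the rated limit")  # ~27 years
```

Even with heavy daily use, the rated endurance typically outlasts the useful life of the computer, which is why most users don't need to worry about it.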

Planning for the Future of Storage

If you’re still using a computer with a SATA hard drive, you can see a huge performance increase by switching to an SSD. What’s more, the cost of SSDs has dropped dramatically over the course of the past couple of years, so it’s less expensive than ever to do this sort of upgrade.

Whether you’re using a HDD or an SSD, a good backup plan is essential because eventually any drive will fail. You should have a local backup combined with secure cloud-based backup like Backblaze, which satisfies the 3-2-1 backup strategy. To help get started, make sure to check out our Backup Guide.

Hopefully, we’ve given you some insight about HDDs and SSDs. And as always, we encourage your questions and comments, so fire away!


Editor’s note:  You might enjoy reading more about the future of HDDs and SSDs in our two-part series, HDD vs SSD: What Does the Future for Storage Hold?

The post Hard Disk Drive (HDD) vs Solid State Drive (SSD): What’s the Diff? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Securely Managing Your Digital Media (SD, CF, SSD, and Beyond)

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/securely-managing-your-digital-media-sd-cf-ssd-and-beyond/

3 rows of 3 memory cards

This is the second in our post exchange series with our friends Zach Sutton and Ryan Hill at Lensrentals.com, who have an online site for renting photography, videography, and lighting equipment. You can read our post from last month on their blog, 3-2-1 Backup Best Practices using Cloud Archiving, and all posts on our blog in this series at Lensrentals post series.

— Editor

Managing digital media securely is crucial for all photographers and videographers. At Lensrentals.com, we take media security very seriously: dozens of rented memory cards, hard drives, and other data devices are returned to our facility every day, and all of our media is inspected after each and every rental. Most of the cards returned to us in rental shipments have not been properly reformatted and erased, so it’s part of our usual service to clear all the data from returned media to keep each client’s identity and digital property secure.

We’ve gotten pretty good at the routine of managing data and formatting storage devices for our clients while making sure our media has a long life and remains free from corruption. Before we get too involved in our process of securing digital media, we should first talk fundamentals.

The Difference Between Erasing and Reformatting Digital Media

When you insert a card in the camera, you’re likely given two options: erase the card or format the card. There is an important distinction between the two. Erasing images from a card does just that — erases them. That’s it. It designates the area the prior data occupied on the card as available to be written over and confirms to you that the data has been removed.

The term erase is a bit misleading here. The underlying data, the 1’s and 0’s that are recorded on the media, are still there. What really happens is that the drive’s address table is changed to show that the space the previous file occupied is available for new data.

This is the reason that simply erasing a file does not securely remove it. Data recovery software can be used to recover that old data as long as it hasn’t been overwritten with new data.
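A toy simulation makes this concrete. Nothing below reflects any real filesystem's on-disk layout; it just models the address-table behavior described above, where "erasing" frees the space without touching the bytes:

```python
class ToyCard:
    """A toy memory card: fixed-size blocks plus a file address table."""
    BLOCK = 4

    def __init__(self, num_blocks=8):
        self.blocks = [b"\x00" * self.BLOCK] * num_blocks   # raw media
        self.table = {}                                     # name -> block indexes
        self.free = list(range(num_blocks))                 # unallocated blocks

    def write(self, name, data):
        chunks = [data[i:i + self.BLOCK] for i in range(0, len(data), self.BLOCK)]
        used = [self.free.pop(0) for _ in chunks]
        for idx, chunk in zip(used, chunks):
            self.blocks[idx] = chunk
        self.table[name] = used

    def erase(self, name):
        # "Erasing" only updates the address table; the bytes stay put.
        self.free.extend(self.table.pop(name))

    def raw_scan(self):
        # What recovery software effectively does: read the raw blocks.
        return b"".join(self.blocks)

card = ToyCard()
card.write("IMG_0001.JPG", b"photo bytes!")
card.erase("IMG_0001.JPG")
assert "IMG_0001.JPG" not in card.table       # the file looks gone...
assert b"photo bytes!" in card.raw_scan()     # ...but the data is recoverable
```

Only once `write` hands those freed blocks to a new file does the old data actually disappear.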

Formatting goes further. When you format a drive or memory card, all of the files are erased (even files you’ve designated as “protected”) and a new file system is usually written. This is a more effective method for removing all the data on the drive, since the space previously divided up for specific files gets a brand new structure, unencumbered by whatever files were previously stored. Beware, however, that it’s possible to retrieve older data even after a format. Whether that can happen depends on the formatting method and whether new data has overwritten what was previously stored.

To make sure that the older data cannot be recovered, a secure erase goes further. Rather than simply designating the data that can be overwritten with new data, a secure erase writes a random selection of 1s and 0s to the disk to make sure the old data is no longer available. This takes longer and is more taxing on the card because data is being overwritten rather than simply removed.
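A secure erase can be sketched at the file level like this. This is illustrative code only, not any vendor's tool: on flash media, wear leveling may quietly keep stale copies of your data elsewhere on the chip, so an in-camera secure format or a dedicated utility is the safer real-world choice:

```python
import os
import tempfile

def secure_delete(path):
    """Overwrite a file's contents with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # replace the old 1s and 0s in place
        f.flush()
        os.fsync(f.fileno())        # force the overwrite out to the device
    os.remove(path)

# Demo on a throwaway file:
path = os.path.join(tempfile.mkdtemp(), "clip.mov")
with open(path, "wb") as f:
    f.write(b"private footage" * 1000)
secure_delete(path)
assert not os.path.exists(path)
```

The extra pass over every byte is exactly why a secure erase is slower and harder on the card than a plain format.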

Always Format a Card for the Camera You’re Going to Be Using

If you’ve ever tried to use the same memory card on cameras of different makes without formatting it, you may have seen problems with how the data files are displayed. Each camera system handles its file structure a little differently.

For this reason it’s advisable to format the card for the specific camera you’re using. If this is not done, there is a risk of corrupting data on the card.

Our Process For Securing Data

Our inspection process for recording media varies a little depending on what kind of card we’re inspecting. For standardized media like SD cards or compact flash cards, we simply use a card reader to format the card to exFAT. This is done in Disk Utility on the Apple MacBooks that we issue to each of our Video Technicians. We use exFAT specifically because it’s recognizable by just about every device. Since these cards are used in a wide variety of cameras, recorders, and accessories, and we have no way of knowing at the point of inspection which device they’ll be used with, we have to choose a format that will allow any camera to recognize the card. While our customer may still have to format a card in a camera for file structure purposes, the card will at least always come formatted in a way that the camera can recognize.
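For reference, the same exFAT step can be done from the macOS command line instead of Disk Utility. This is a sketch: the disk identifier and volume name below are hypothetical, and `eraseDisk` destroys everything on its target, so check `diskutil list` carefully on your own machine first:

```shell
# Find the card's disk identifier (e.g. /dev/disk4) -- do NOT guess this.
diskutil list

# Erase the whole card and lay down a fresh exFAT filesystem
# (MBR partition scheme, which most cameras expect on memory cards).
diskutil eraseDisk ExFAT SDCARD MBR /dev/disk4
```

`diskutil listFilesystems` shows the other formats available if exFAT isn't what you need.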

Sony SxS media
For proprietary media — things like REDMAGs, SxS, and other cards that we know will only be used in a particular camera — we use cameras to do the formatting. While the exFAT system would technically work, a camera-specific erase and format process saves the customer a step and allows us to more regularly double-check the media ports on our cameras. In fact, we actually format these cards twice at inspection. First, the Technician erases the card to clear out any customer footage that may have been left on it. Next, they record a new clip to the card, around 30 seconds, just to make sure everything is working as it’s supposed to. Finally, they format the card again, erasing the test footage before sending it to the shelf where it awaits use by another customer.

REDMAG Red Mini-Mag

You’ll notice that at no point in this process do we do a full secure erase. This is both to save time and to prevent unnecessary wear and tear on the cards. About 75% of the media we get back from orders still has footage on it, so we don’t get the impression that many of our customers are overly concerned with keeping their footage private once they’re done shooting. However, if you are in the other 25% and have a personal or professional interest in keeping your footage secure after shooting, we’d recommend that you securely erase the media before returning rented memory cards and drives. Or, if you’d rather we handle it, just send an email or note with your return order requesting that we perform a secure erase rather than simply formatting the cards, and we’ll be happy to oblige.

Managing your digital media securely can be easy if done right. Data management and backing up files, on the other hand, can be more involved and require more planning. If you have any questions on that topic, be sure to check out our recent blog post on proper data backup.

— Zach Sutton and Ryan Hill, lensrentals.com

The post Securely Managing Your Digital Media (SD, CF, SSD, and Beyond) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Protecting Your Data From Camera to Archive

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/protecting-your-data-from-camera-to-archive/

Camera data getting backed up to Backblaze B2 cloud

Lensrentals.com is a highly respected company that rents photography and videography equipment. We’re a fan of their blog and asked Zach Sutton and Ryan Hill of Lensrentals to contribute something for our audience. We also contributed a post to their blog that was posted today: 3-2-1 Backup Best Practices using Cloud Archiving.

Enjoy!

— Editor

At Lensrentals.com we get a number of support calls, and unfortunately some of the most common involve data catastrophes.

The first of the frequent calls is from someone who thought they had transferred their footage or photos before returning their rental, and discovered later that they were missing some images or footage. If we haven’t already inspected those cards, it’s usually not a problem to send them back so the customer can collect their data. But if our techs have inspected the memory cards, there isn’t much we can do. Our team at Lensrentals.com performs a full and secure reformatting of the cards to keep each customer’s data safe from the next renter. Once that footage is gone, it is unrecoverable. This is never a fun conversation to have.

The second scenario is when a customer calls to tell us that they did manage to transfer all the footage over, but one or more of the clips or images were corrupted in the transferring process. Typically, people don’t discover this until after they’ve sent back the memory cards, and after we’ve already formatted the original media. This is another tough phone call to have. On occasion, data corruption happens in camera, but more often than not, the file gets corrupted during the transfer from the media to the computer or hard drive.

These kinds of problems aren’t entirely avoidable and are inherent risks users take when working with digital media. However, as with all risks, you can take proper steps to assure that your data is safe. If a problem arises, there are techniques you can use to work around it.

We’ve summarized our best suggestions for protecting your data from camera to archive in the following sections. We hope you find them useful.

How to Protect Your Digital Assets

Before Your Shoot

The first and most obvious step to keeping your data safe is to use reliable media. We recommend using cards from brands you trust, such as SanDisk, Lexar, or ProGrade Digital (a company that took the reins from Lexar). For hard drives, SanDisk, Samsung, Western Digital, and Intel are all considered incredibly reliable. These brands may be more expensive than bargain brands, but they have proven time and time again to be more reliable. The few extra dollars spent on reliable media will potentially save you thousands in the long run and help assure that your data is safe and free of corruption.

One of the most important things you should do before any shoot is format your memory card in the camera. Formatting in camera is a great way to minimize file corruption, as it keeps the card’s file structure conforming to that camera manufacturer’s specifications, and it should be done before every shoot. Equally important: if the camera gives you the option of a complete or secure format, take it over the quicker formatting options available. In the same vein, take the time to research whether your camera needs to unmount or “eject” the media before you remove it physically. While this applies more to video recording systems, like those found on the RED camera platform and the Odyssey 7Q, it’s always worth checking to avoid corrupting data. More often than not, preventable data corruption happens when users turn off the camera system before the media has been unmounted.

Finally, if you’re shooting for the entire day, make sure you have enough media on hand so that you do not need to back up and reformat cards throughout the shoot. While it’s possible to take footage off of a card, reformat it, and use it again the same day, that is not something you want to be doing in the hectic environment of a shoot day — it’s best to have extra media on hand. We’ve all deleted a file we didn’t mean to, so avoid that mistake by not having to delete or manage files while shooting. Play it safe, and only reformat when you have the time and a clear head to do so.

During Your Shoot

On many modern camera systems, you have the option of dual-recording to two different card slots. If your camera offers this option, we cannot recommend it enough. Doubling the media you’re recording onto protects you against the failure of one of the memory cards. While the added cost may be a hard sell, it’s negligible when compared to all the money spent on lights, cameras, actors, and lousy pizza for the day. Additionally, develop a system that works for you and keeps everything as organized as possible. Spent media shouldn’t be in the same location as unused media, and your file structure should be consistent throughout the entire shoot. A proper file structure not only saves time but assures that none of the footage goes missing after the shoot, lost in some random folder.

Camera memory cards

One of the most critical jobs on set is that of the DIT (Digital Imaging Technician) for video, or the DT (Digital Technician) for photography. Essentially, these positions are responsible for keeping the data archived and organized on a set, as well as metadata logging and the other technical tasks involved in keeping a shoot organized. While it may not be cost effective to have a DIT/DT on every shoot, if the budget allows for it, I highly recommend you hire one. Having someone on set who is solely responsible for safely backing up and organizing footage keeps the rest of the crew focused on their own responsibilities and helps assure nothing goes wrong. When they’re not transferring and archiving data, DIT/DTs also log metadata, color correct footage, and help with other preliminary editing processes. Even if the budget doesn’t allow for this position to be filled, find someone who can handle these tasks exclusively while on set. You don’t want your camera operator to also be in charge of backing up and organizing footage if you can help it.

Ingest Software

If there is one piece of information we’d like for videographers and photographers to take away from this article, it is this: file-moving or ‘offloading’ software is worth the investment and should be used every time you shoot anything. For those who are unfamiliar with offload software, it’s any application designed to make it easier for you to back up footage from one location to another, shoot after shoot. In short, to avoid accidents or data corruption, it’s always best to have your media on a MINIMUM of two different devices. The easiest way to do this is to simply dump media onto two separate hard drives, and keep those drives stored separately. Ideally (if the budget allows), you’ll also keep all of your data on the original media for the day as well, making sure you have multiple copies stored in various locations. Many other options are available and recommended if possible, such as RAID arrays, or even copying the data over to a cloud service such as Backblaze B2. Offloading software automates exactly this process while verifying all the data as it’s transferred.

There are a few different recommendations I give for offloading software, at different price points and with unique features. At the highest end of video production, you'll often see DITs using a piece of software called Silverstack, which offers color grading functionality, LTO tape support, and basic editing tools for creating daily edits. At $600 annually, it is the most expensive in this field and is probably overkill for most users. My own recommendation is a tool called ShotPut Pro. At $129, ShotPut Pro offers all the tools you'd need to build a great archiving process, sacrificing only the color editing tools. It can simultaneously copy and transfer files to multiple locations, build PDF reports, and verify all transfers. If you're looking for something even cheaper, there are additional options such as Offload and Hedge. Both are available for $99 and offer all the tools you'd need within their simple interfaces.

When it comes to photo, the two most obvious choices are Adobe Lightroom and Capture One Pro. While both tools are known more for their editing tools, they also have a lot of archiving functions built into their ingest systems, allowing you to unload cards to multiple locations and make copies on the fly.

workstation with video camera and RAID NAS

When it comes to video, the most crucial feature any of these apps should have is an option called "checksum verification." This subject can get complicated, but all you really need to know is that larger files are more likely to be corrupted during transfers and copies. Checksum verification confirms that the copy is identical to the original, down to the individual byte. It is by far the most reliable and effective way to ensure that entire volumes of data are copied without corruption or loss. Whichever application you choose, make sure checksum verification is an available feature and part of your workflow every time you copy video files. Checksum verification is available in select photo ingest software as well, but corruption is less common with smaller files and is generally less of an issue. Still, if possible, use it.
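For the curious, checksum verification boils down to hashing both files and comparing the results. Here is a minimal Python sketch of the idea; professional offload tools typically hash during the copy itself and may use algorithms such as MD5 or xxHash, and the function names here are just for illustration:

```python
import hashlib
from pathlib import Path

def file_checksum(path: Path, algo: str = "md5") -> str:
    """Hash a file in chunks so even large video files never
    need to fit in memory all at once."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source: str, copy: str) -> bool:
    """True only if the copy hashes to exactly the same value as the source."""
    return file_checksum(Path(source)) == file_checksum(Path(copy))
```

If even a single byte differs between the two files, the hashes will not match and the copy is flagged as bad.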

Post-Production

Once you’ve completed your shoot and all of your data is safely transfered over to external drives, it’s time to look at how you can store your information long term. Different people approach archiving in different ways because none of us will have an identical workflow. There is no correct way to handle how to archive your photos and videos, but there are a few rules that you’ll want to implement.

The first rule is the most obvious. You’ll want to make sure your media is stored on multiple drives. That way, if one of your drives dies on you, you still have a backup version of the work ready to go. The second rule of thumb is that you’ll want to store these backups in different locations. This can be extremely important if there is a fire in your office, or you’re a victim of a robbery. The most obvious way to do this is to back up or archive into a cloud service such as Backblaze B2. In my production experience I’ve seen multiple production houses implement a system where they store their backup hard drives in a safety deposit box at their bank. The final rule of thumb is especially important when you’re working with significant amounts of data, and that is to keep a working drive separate from an archive drive. The reason for this is an obvious one: all hard drives have a life expectancy, and you can prolong that by minimizing drive use. Having a working drive separate from your archive drives means that your archive drives will have fewer hours on them, thereby extending their practical life.

Ryan Hill’s Workflow

To help visualize what we discussed above, I'll lay out my personal workflow for you. Please keep in mind that I'm mainly a one-man band, so my workflow is based on me handling everything. I also work in a wide variety of mediums, so nothing here is video or camera specific: my video projects, photo projects, and graphics projects are all organized the same way. I won't bore you with details of my file structure, except to say that everything in my root folder is organized by job number, followed by sub-folders with the data classified into categories. To keep track of which jobs are which, I maintain a Google Sheet that matches job numbers with descriptions and client information. All of this information is secured within my Google account, and I can access it from anywhere if needed.
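A job-number folder structure like this can be scripted so every new project starts out organized the same way. Here's a minimal sketch in Python; the category names are placeholders for whatever sub-folders your own workflow uses:

```python
from pathlib import Path

# Hypothetical category sub-folders; substitute your own.
CATEGORIES = ["footage", "photos", "graphics", "exports", "documents"]

def create_job(root: str, job_number: int, categories=CATEGORIES) -> Path:
    """Create a zero-padded job folder with one sub-folder per category."""
    job = Path(root) / f"{job_number:04d}"
    for category in categories:
        (job / category).mkdir(parents=True, exist_ok=True)
    return job
```

Zero-padding the job number (0017 rather than 17) keeps folders sorting in job order in any file browser.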

With archiving, my system is pretty simple. I've got a 4-drive RAID array in my office that gets updated every time I work on a new project. The array is set to RAID 1+0, which means I could lose up to two of the four hard drives (one from each mirrored pair) and still recover the data. Usually, I'll put 1 TB drives in each bay, fill them as I work on projects, and replace them when they're full. Once full, I label them with the corresponding job numbers and store them in a plastic case on my bookshelf. By no means am I suggesting that my system is perfect, but for me, it's incredibly adaptable to the variety of projects I work on. If I were robbed, or if my house caught fire, I'd still have all of my work archived in a cloud system, giving me a second level of security.

Finally, to round out my backup solution, I also keep a two-bay Thunderbolt hard drive dock on my desk as my working drive system. Solid state drives (SSDs) and the Thunderbolt connection give me the speed and reliability I need from a drive that I'll be working from and rendering outputs to. For now, there is a single 960 GB SSD in the first bay, with the option to expand into the second bay if I need additional storage. I start work by transferring the job file from my archive to the working drive, do whatever I need to do to the files, then replace the old job folder on my archive with the updated one at the end of the day. This way, if I were to have a drive failure, the worst I'd lose is a day's worth of work. For video projects or anything that involves a lot of data, I usually keep copies of all my source files on both my working and archive drives, and just replace the Adobe Premiere project file as I go. Again, this is just the system that works for me, and I recommend you develop one that suits your workflow while keeping your data safe.

The Takeaway

The critical point you should take away is that these sorts of strategies are things you should be thinking about at every step of your production. How does your camera or codec choice affect your media needs? How are you going to ensure safe data backup in the field? How are you going to work with all of this footage in post-production in a way that’s both secure and efficient? Answering all of these questions ahead of time will keep your media safe and your clients happy.

— Zach Sutton and Ryan Hill, lensrentals.com

The post Protecting Your Data From Camera to Archive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

What’s the Diff: Backup vs Archive

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/data-backup-vs-archive/

Whats the Diff: Backup vs Archive

Backups and archives serve different functions, yet it’s common to hear the terms used interchangeably in cloud storage. It’s important to understand the difference between the two to ensure that your data storage methodology meets your needs in a number of key areas:

  1. retained for the period of time you require
  2. protected from loss or unauthorized access
  3. able to be restored or retrieved when needed
  4. structured or tagged to enable locating specific data
  5. kept current according to your requirements

Our two choices can be broadly categorized:

  • backup is for recovery from hardware failure or recent data corruption or loss
  • archive is for space management and long term retention

What Is a Backup?

A backup is a copy of your data that is made to protect against loss of that data. Typically, backups are made on a regular basis according to a time schedule or when the original data changes. The original data is not deleted, but older backups are often deleted in favor of newer backups.
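That rotation (keeping the newest backups and discarding the oldest) can be sketched in a few lines of Python. This is a simplified illustration under two assumptions stated here, not how any particular backup product works: each backup is a single file, and age is judged by modification time:

```python
from pathlib import Path

def prune_backups(backup_dir: str, keep: int = 5) -> list:
    """Delete the oldest backup files, keeping only the `keep` most
    recent, and return the list of deleted paths."""
    backups = sorted(Path(backup_dir).iterdir(),
                     key=lambda p: p.stat().st_mtime)
    stale = backups[:-keep] if keep else backups
    for old in stale:
        old.unlink()  # the original data still exists; only old copies go
    return stale
```

Real backup software applies far more nuanced retention policies (hourly, daily, and monthly tiers, for example), but the principle is the same: the original data stays put while older copies make way for newer ones.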

Data backup graphic

Desktop computers, servers, VMs, and mobile devices are all commonly backed up. Backups can include data, OS and application files, or a combination of these according to the backup methodology and purpose.

The goal of a backup is to make a copy of anything in current use that can't afford to be lost. A backup of a desktop or mobile device might include just the user data, so that a previous version of a file can be recovered if necessary. On these types of devices, it is often assumed that the OS and applications can easily be restored from original sources if necessary (and that restoring an OS image to a different device could itself cause significant issues). In a virtual server environment, a backup could include .VMDK files containing the OS along with both structured data (databases) and unstructured data (files), so that the system can be put back into service as quickly as possible if something happens to the original VM in a VMware, Hyper-V, or other virtual machine environment.

In the case of a ransomware attack, a solid backup strategy can mean the difference between being able to restore a compromised system and having to pay a ransom in the vague hopes of getting a decryption key to obtain access to files that are no longer available because they were encrypted by the attacker.

Backups can have additional uses. A user might go to a backup to retrieve an earlier version of a file because it contains something no longer in the current file, or, as is possible with some backup services such as Backblaze Backup, to share a file with a colleague or other person.

What Is an Archive?

An archive is a copy of data made for long-term storage and reference. The original data may or may not be deleted from the source system after the archive copy is made and stored, though it is common for the archive to be the only copy of the data.

Data archive graphic

In contrast to a backup whose purpose is to be able to return a computer or file system to a state it existed in previously, an archive can have multiple purposes. An archive can provide an individual or organization with a permanent record of important papers, legal documents, correspondence, and other matters. Often, an archive is used to meet information retention requirements for corporations and businesses. If a dispute or inquiry arises about a business practice, contract, financial transaction, or employee, the records pertaining to that subject can be obtained from the archive.

An archive is frequently used to ease the burden on faster and more frequently accessed data storage systems. Older data that is unlikely to be needed often is put on systems that don’t need to have the speed and accessibility of systems that contain data still in use. Archival storage systems are usually less expensive, as well, so a strong motivation is to save money on data storage.

Archives are often created based on the age of the data or whether the project the data belongs to is still active. An archiving program might send data to an archive if it hasn’t been accessed in a specified amount of time, when it has reached a certain age, if a person is no longer with the organization, or the files have been marked for storage because the project has been completed or closed.
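An age-based archiving pass like the one described above can be sketched in Python. This is an illustration only (real archiving software adds verification, cataloging, and policy controls), and it uses modification time as the staleness criterion:

```python
import shutil
import time
from pathlib import Path

def archive_stale_files(active_dir: str, archive_dir: str,
                        max_age_days: float) -> list:
    """Move files not modified within `max_age_days` out of active
    storage and into the archive, preserving folder structure."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    # Materialize the listing first, since we move files as we go.
    for src in list(Path(active_dir).rglob("*")):
        if src.is_file() and src.stat().st_mtime < cutoff:
            target = Path(archive_dir) / src.relative_to(active_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), target)
            moved.append(target)
    return moved
```

A production system would typically record each moved file in a catalog at the same time, so it can be located again later.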

Archives also can be created using metadata describing the project. An archiving program can automatically add relevant metadata, or the user can tag data manually to aid in future retrieval. Common metadata added can be business information describing the data, or in the case of photos and videos, the equipment, camera settings, and geographical location where the media was created. Artificial intelligence (AI) can be used to identify and catalog subject matter in some data such as photos and videos to make it easier to find the data at a later date. AI tools will become increasingly important as we archive more data and need to be able to find it based on parameters that might not be known at the time the data was archived.

What’s the Diff?

| Backup | Archive |
| --- | --- |
| Enables rapid recovery of live, changing data | Stores unchanging data that is no longer in use but must be retained |
| One of multiple copies of data | Usually the only remaining copy of data |
| Restore speed: crucial | Retrieval speed: not crucial |
| Short-term retention: kept for as long as data is in active use | Long-term retention: kept for the required period or indefinitely |
| Duplicate copies are periodically overwritten | Data cannot be altered or deleted |

What’s the Difference Between Restore and Retrieve?

In general, backup systems restore and archive systems retrieve. The tools needed to perform these two functions are different.

If you are interested in restoring something from a backup, it usually is a single file, a server, or structured data such as a database that needs to be restored to a specific point in time. You need to know a lot about the data, such as where it was located when it was backed up, the database or folder it was in, the name of the file, when it was backed up, and so forth.

When you retrieve data from an archive, the data is connected in some manner, such as date, email recipient, period of time, or other set of parameters that can be specified in a search. A typical retrieval query might be to obtain all files related to a project name, or all emails sent by a person during a specific period of time.
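A retrieval query of this kind is essentially a filtered search over the archive's metadata. Here is a toy sketch in Python, with a hypothetical in-memory index standing in for the catalog or database a real archive system would maintain:

```python
from datetime import date

def retrieve(index: list, project: str = None,
             start: date = None, end: date = None) -> list:
    """Return archive entries matching a project name and/or date range.
    `index` is a list of dicts with 'project' and 'date' keys."""
    results = []
    for entry in index:
        if project and entry["project"] != project:
            continue  # wrong project
        if start and entry["date"] < start:
            continue  # before the requested window
        if end and entry["date"] > end:
            continue  # after the requested window
        results.append(entry)
    return results
```

Note how the query describes the data ("everything for this project, from this period") rather than naming a specific file in a specific folder, which is the essential difference from a restore.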

Trying to use a backup as an archive can present problems. You would need to keep rigorous records of where and when the files were backed up, what medium they were backed up to, and myriad other pieces of information that would need to be recorded at the time of backup. By definition, backup systems keep copies of data currently in use, so maintaining backups for lengthy periods of time goes beyond the capabilities of backup systems and would require manual management.

The bottom line is: don't use a backup as an archive. Select the approach that suits your needs: a backup to keep additional copies of data currently in use in case something happens to your primary copy, or an archive to keep a permanent (and perhaps only) record of important data you wish to retain for personal, business, or legal reasons.

Why You Need Both Backup and Archive

It’s clear the a backup and an archive have different uses. Do you need both?

If you’re a business, the wise choice is yes. You need to make sure that your active business data is protected from accidental or malicious loss, and that your important records are maintained as long as necessary for business and legal reasons. If you are an individual or a small business with documents, photos, videos, and other media, you also need both backup and archive to ensure that your data is protected both short and long term and available and retrievable when you need it.

Data backup graphic & Data archive graphic

Selecting the right tools and services for backup and archiving is essential. Each has a feature set that makes it suited to its task. Trying to use backup for archiving, or archiving for backup, is like trying to fit a square peg into a round hole. It's best to use the right tool and service for the data storage function you require.

The post What’s the Diff: Backup vs Archive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Cloud Backup Release 5.3 & EOL Announcements

Post Syndicated from Yev original https://www.backblaze.com/blog/backblaze-cloud-backup-release-5-3-eol-announcements/

Backblaze Cloud Backup Release 5.3

Backblaze announces an update to Backblaze Online Backup: Version 5.3! This is a smaller release, but does improve stability, security, and how Backblaze handles systems without a lot of RAM. This release also signals the beginning of the end of our support for a few older operating systems: Mac OS X 10.5, Mac OS X 10.6, Mac OS X 10.7, Windows XP, and Windows Vista.

What’s New in Release 5.3:

  • Better communication with the data centers when checking for connectivity
  • Increased security during communication with the data centers
  • Improved handling of temporary Backblaze log files when RAM is running low
  • Minor changes and bug fixes

Release Version Number:

  • Mac — 5.3.0
  • PC — 5.3.0

Availability:
July 19th, 2018

Upgrade Methods:

  • Immediately when performing a “Check for Updates” (click on the Backblaze icon and then select “Check for Updates”).
  • Immediately as a download from: https://secure.backblaze.com/update.htm.
  • Immediately as the default download from: www.backblaze.com.
  • Auto-update will begin in a couple of weeks.

Cost:
This is a free update for all Backblaze Cloud Backup consumer and business customers and active trial users.

Announcing an End of Life Process:

We’ve made the tough decision to end support for some older operating systems:

  • Mac OS X 10.5: Apple officially stopped security patching this OS in 2012
  • Mac OS X 10.6: Apple officially stopped security patching 10.6 in 2014
  • Mac OS X 10.7: Apple is no longer supporting this OS as of 2015
  • Windows XP: Microsoft officially ended support for XP in 2014
  • Windows Vista: Microsoft announced the end of support for Vista in 2017

It has been a while since we announced the end of life process for Mac OS X 10.4 and sent out DVDs with Mac OS X 10.6 on them to users who were still on Tiger. There aren’t DVDs to send out this time, but we’d still like to make this as smooth a transition as possible for people using the affected operating systems.

What This Means:

  • Customers still using Mac OS X 10.5, Mac OS X 10.6, Mac OS X 10.7, Windows XP, and Windows Vista will be able to continue backing up and restoring data from those systems. Support for the backup functionality will end on August 1, 2019.
  • The above mentioned operating systems will not be receiving new features and will not auto-update to the latest client versions.
  • Once support officially ends on August 1, 2019, the Backblaze client will no longer be able to back up data to Backblaze.

We strongly encourage people on those operating systems to update to the latest and greatest that Microsoft and Apple have to offer (and officially support). If you have any questions, please reach out to Backblaze support at: https://www.backblaze.com/help.html.

The post Backblaze Cloud Backup Release 5.3 & EOL Announcements appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Computer Backup Awareness in 2018: Getting Better and Getting Worse

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/computer-backup-awareness-in-2018/

Backup Frequency - 10 Years of History

Back in June 2008, Backblaze launched our first Backup Awareness Survey. Beginning with that survey and each year since, we’ve asked the folks at The Harris Poll to conduct our annual survey. For the last 11 years now, they’ve asked the simple question, “How often do you backup all the data on your computer?” Let’s see what they’ve found.

First, a Little History

While we did the first survey in 2008, it wasn’t until 2009, after the second survey was conducted, that we declared June as Backup Awareness Month, making June 2018 the 10th anniversary of Backup Awareness Month. But, why June? You’re probably thinking that June is a good time to remind people about backing up their computers. It’s before summer vacations in the northern hemisphere and the onset of winter down under. In truth, back in 2008 Backblaze was barely a year old and the survey, while interesting, got pushed aside as we launched the first beta of our cloud backup product on June 4, 2008. When June 2009 rolled around, we had a little more time and two years worth of data. Thus, Backup Awareness Month was born (PS — the contest is over).

More People Are Backing Up, But…

Fast forward to June 2018, and the folks at The Harris Poll have diligently delivered another survey. You can see the details about the survey methodology at the end of this post. Here’s a high level look at the results over the last 11 years.
Computer Backup Frequency

The percentage of people backing up all the data on their computer has steadily increased over the years, from 62% in 2008 to 76% in 2018. That’s awesome, but at the other end of the time spectrum it’s not so pretty. The percentage of people backing up once a day or more is 5.5% in 2018. That’s the lowest percentage ever reported for daily backup. Wouldn’t it be nice if there were a program you could install on your computer that would back up all the data automatically?

Here’s how 2018 compares to 2008 for how often people back up all the data on their computers.

Computer Data Backup Frequency in 2008
Computer Data Backup Frequency in 2018

A lot has happened over the last 11 years in the world of computing, but at least people are taking backing up their computers a little more seriously. And that’s a good thing.

A Few Data Backup Facts

Each survey provides interesting insights into the attributes of backup fiends and backup slackers. Here are a few facts from the 2018 survey.

Men

  • 21% of American males have never backed up all the data on their computers.
  • 11% of American males, 18-34 years old, have never backed up all the data on their computers.
  • 33% of American males, 65 years and older, have never backed up all the data on their computers.

Women

  • 26% of American females have never backed up all the data on their computers.
  • 22% of American females, 18-34 years old, have never backed up all the data on their computers.
  • 36% of American females, 65 years and older, have never backed up all the data on their computers.

When we look at the four regions in the United States, we see that in 2018 the percentage of people who have backed up all the data on their computer at least once was about the same across regions. This was not the case back in 2012 as seen below:

| Year | Northeast | South | Midwest | West |
| --- | --- | --- | --- | --- |
| 2012 | 67% | 73% | 65% | 77% |
| 2018 | 75% | 78% | 75% | 76% |

Looking Back

Here are links to our previous blog posts on our annual Backup Awareness Survey:

Survey Method:

The surveys cited in this post were conducted online within the United States by The Harris Poll on behalf of Backblaze as follows: June 5-7, 2018 among 2,035 U.S. adults, among whom 1,871 own a computer. May 19-23, 2017 among 2048 U.S. adults, May 13-17, 2016 among 2,012 U.S. adults, May 15-19, 2015 among 2,090 U.S. adults, June 2-4, 2014 among 2,037 U.S. adults, June 13–17, 2013 among 2,021 U.S. adults, May 31–June 4, 2012 among 2,209 U.S. adults, June 28–30, 2011 among 2,257 U.S. adults, June 3–7, 2010 among 2,071 U.S. adults, May 13–14, 2009 among 2,185 U.S. adults, and May 27–29, 2008 among 2,761 U.S. adults. In all surveys, respondents consisted of U.S. adult computer users (aged 18+). These online surveys were not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated. For complete survey methodology, including weighting variables and subgroup sample sizes, please contact Backblaze.

The 2018 Survey: Please note sample composition changed in the 2018 wave as new sample sources were introduced to ensure representativeness among all facets of the general population.

The post Computer Backup Awareness in 2018: Getting Better and Getting Worse appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Security Mindfulness Can Help Prevent Data Disasters

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/what-is-cyber-security/

A locked computer screen

A few years ago, I was surprised by a request to consult with the Pentagon on cybersecurity. It surprised me because I have no military background, and it was the Pentagon, whom I suspected already knew a thing or two about security.

I learned that the consulting project was to raise the awareness of cybersecurity among the people who work at the Pentagon and on military bases. The problem they were having was that some did not sufficiently consider the issue of cybersecurity when they dealt with email, file attachments, and passwords, and in their daily interactions with fellow workers and outside vendors and consultants. If these sound like the same vulnerabilities that the rest of us have, you’re right. It turned out that the military was no different than we are in tackling the problem of cybersecurity in their day-to-day tasks.

That’s a problem. These are the people whose primary job requirement is to be vigilant against threats, and yet some were less than vigilant with their computer and communications systems.

But, more than highlighting a problem with just the military, it made me realize that this problem likely extended beyond the military. If the people responsible for defending the United States can’t take cybersecurity seriously, then how can the rest of us be expected to do so?

And, perhaps even more challenging: how do those of us in the business of protecting data and computer assets fix this problem?

I believe that the campaign I created to address this problem for the Pentagon also has value for other organizations and businesses. We all need to understand how to maintain and encourage security mindfulness as we interact with computer systems and other people.

Technology is Not Enough

We continually focus on what we can do with software and hardware to fight against cyber attacks. “Fighting fire with fire” is a natural and easy way of thinking.

The problem is that the technology used to attack us will continually evolve, which means that our technological responses must similarly evolve. The attackers have the natural advantage. They can innovate and we, the defenders, can only respond. It will continue like that, with attacks and defenses leapfrogging each other over and over while we, the defenders, try to keep up. It’s a game where we can never get ahead because the attackers have a multitude of weaknesses to exploit while the defenders have to guess which vulnerability will be exploited next. It’s enough to want to put the challenge out of your mind completely.

So, what’s the answer?

Let’s go back to the Pentagon’s request. It struck me that what the Pentagon was asking me to do was a classic marketing branding campaign. They wanted to make people more aware of something and to think in a certain manner about it. In this case, instead of making people think that using a certain product would make them happier and more successful, the task was to take a vague threat that wasn’t high on people’s list of things to worry about and turn into something that engaged them sufficiently that they changed their behavior.

I didn’t want to try to make cyber attacks more scary — an idea that I rejected outright — but I did want to try to make people understand the real threat of cyber attacks to themselves, their families, and their livelihoods.

Managers and sysadmins face this challenge daily. They make systems as secure as possible, they install security updates, they create policies for passwords, email, and file handling, yet breaches still happen. It’s not that workers are oblivious to the problem, or don’t care about it. It’s just that they have plenty of other things to worry about, and it’s easy to forget about what they should be doing to thwart cyber attacks. They aren’t being mindful of the possibility of intrusions.

Raising Cybersecurity Awareness

People respond most effectively to challenges that are immediate and present. Abstract threats and unlikely occurrences don’t rise sufficiently above the noise level to register in our consciousness. When a flood is at your door, the threat is immediate and we respond. Our long-term health is important enough that we take action to protect it through insurance, check-ups, and taking care of ourselves because we have been educated or seen what happens if we neglect those preparations.

Both of the examples above — one immediate and one long-term — have gained enough mindfulness that we do something about them.

The problem is that there are so many possible threats to us that to maintain our sanity we ignore all but the most immediate and known threats. A threat becomes real once we’ve experienced it as a real danger. If someone has experienced a cyber attack, the experience likely resulted in a change in behavior. A shift in mindfulness made it less likely that the event would occur again due to a new level of awareness of the threat.

Making Mindfulness Work

One way to make an abstract threat seem more real and more possible is to put it into a context that the person is already familiar with. It then becomes more real and more of a possibility.

That’s what I did for the Pentagon. I put together a campaign to raise the level of mindfulness of the threat of cyberattack by associating it with something they were already familiar with considered serious.

I chose the physical battlefield. I branded the threat of cyber attack as the "Silent Battlefield." This took something that was not a visible, physical threat and tied it to a place where actual threats were already understood to exist: the battlefield. Cyber warfare is silent compared to physical combat, of course, so the branding also made the threat more insidious. You don't hear a shell whistling through the air to warn you of the coming damage. When the enemy is silent, your only choice is to be mindful of the threat and, therefore, prepared.

Can this approach work in other contexts, say, a business office, an IT department, a school, or a hospital? I believe it can if the right cultural context is found to increase mindfulness of the problem and how to combat it.

First, find a correlative for the threat that makes it real in that particular environment. For the military, it was the battlefield. For a hospital, the correlative might be a disease attempting to invade a body.

Second, use a combination of messages using words, pictures, audio, and video to get the concept across. This is a branding campaign, so just like a branding campaign for a product or service, multiple exposure and multiple delivery mechanisms will increase the effectiveness of the campaign.

Third, frame security measures as positive rather than negative. Focus on the achievement of a positive outcome rather than the avoidance of a negative result. Examples of positive framing of security measures include:

  • backing up regularly enabled the restoration of an important document that was lost or an earlier draft of a plan containing important information
  • recognizing suspicious emails and attachments avoided malware and downtime
  • showing awareness of various types of phishing campaigns enabled the productive continuation of business
  • creating and using unique and strong passwords and multi-factor verification for accounts avoided having to recreate accounts, credentials, and data
  • showing insight into attempts at social engineering and manipulation was evidence of intelligence and value to the organization

Fourth, demonstrate successful outcomes by highlighting thwarted cyber incursions. Give credit to those who are modeling a proactive attitude. Everyone in the organization should reinforce the messages and give positive reinforcement to effective measures when they are employed.

Other things to do to increase mindfulness are:

  • Reduce stress. A stressful workplace reduces anyone's ability to be mindful. Remove other threats so there are fewer things to worry about.
  • Encourage a "do one thing now" attitude. Be very clear about what's important. Make sure that security mindfulness is considered important enough to devote time to.
  • Show positive results and emphasize victories. Highlight behaviors and actions that defeated attempts to breach security and resulted in good outcomes. Make it personal by giving credit to individuals who have done something specific that worked.

You don’t have to study at a zendō to develop the requisite mindfulness to improve computer security. If you’re the person whose job it is to instill mindfulness, you need to understand how to make the threats of malware, ransomware, and other attack vectors real to the people who must be vigilant against them every day, and find the cultural and psychological context that works in their environment.

If you can find a way to encourage that security mindfulness, you’ll create an environment where a concern for security is part of the culture and thereby greatly increase the resistance of your organization against cyber attacks.

The post How Security Mindfulness Can Help Prevent Data Disasters appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-to-wipe-a-mac-hard-drive/

erasing a hard drive and a solid state drive

What do I do with a Mac that still has personal data on it? Do I take out the disk drive and smash it? Do I sweep it with a really strong magnet? Is there a difference in how I handle a hard drive (HDD) versus a solid-state drive (SSD)? Well, taking a sledgehammer or projectile weapon to your old machine is certainly one way to make the data irretrievable, and it can be enormously cathartic as long as you follow appropriate safety and disposal protocols. But there are far less destructive ways to make sure your data is gone for good. Let me introduce you to secure erasing.

Which Type of Drive Do You Have?

Before we start, you need to know whether you have an HDD or an SSD. To find out, or at least to make sure, click the Apple menu and select “About This Mac.” Once there, select the “Storage” tab to see which type of drive is in your system.

The first example, below, shows a SATA Disk (HDD) in the system.

SATA HDD

The next example shows a Solid State SATA Drive (SSD), plus a Mac SuperDrive.

Mac storage dialog showing SSD

The third screen shot shows an SSD, as well. In this case it’s called “Flash Storage.”

Flash Storage

Make Sure You Have a Backup

Before you get started, you’ll want to make sure that any important data on your hard drive has been backed up somewhere else. OS X’s built-in Time Machine backup software is a good start, especially when paired with Backblaze. You can learn more about using Time Machine in our Mac Backup Guide.

With a local backup copy in hand and secure cloud storage, you know your data is always safe no matter what happens.

Once you’ve verified your data is backed up, roll up your sleeves and get to work. The key is OS X Recovery — a special part of the Mac operating system since OS X 10.7 “Lion.”

How to Wipe a Mac Hard Disk Drive (HDD)

NOTE: If you’re interested in wiping an SSD, see below.

    1. Make sure your Mac is turned off.
    2. Press the power button.
    3. Immediately hold down the command and R keys.
    4. Wait until the Apple logo appears.
    5. Select “Disk Utility” from the OS X Utilities list. Click Continue.
    6. Select the disk you’d like to erase by clicking on it in the sidebar.
    7. Click the Erase button.
    8. Click the Security Options button.
    9. The Security Options window includes a slider that enables you to determine how thoroughly you want to erase your hard drive.

There are four notches to that Security Options slider. “Fastest” is quick but insecure — data could potentially be rebuilt using a file recovery app. Moving that slider to the right introduces progressively more secure erasing. Disk Utility’s most secure level erases the information used to access the files on your disk, then writes zeroes across the disk surface seven times to help remove any trace of what was there. This setting conforms to the DoD 5220.22-M specification.

  1. Once you’ve selected the level of secure erasing you’re comfortable with, click the OK button.
  2. Click the Erase button to begin. Bear in mind that the more secure method you select, the longer it will take. The most secure methods can add hours to the process.
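The idea behind those multiple passes can be illustrated in miniature. The sketch below is a conceptual Python illustration only (the file name and pass count are made up for the example): it overwrites a single file’s contents several times before deleting it. It is not a substitute for Disk Utility’s whole-disk erase, and on an SSD it would not defeat wear leveling.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 7) -> None:
    """Illustrative multi-pass overwrite of one file (not a whole disk)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            f.seek(0)
            # Alternate zeroes and random data, echoing the stronger slider settings.
            pattern = b"\x00" * size if i % 2 == 0 else secrets.token_bytes(size)
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())  # push each pass out to the device
    os.remove(path)

# Example: create a throwaway file, then scrub and delete it.
with open("secret.txt", "wb") as f:
    f.write(b"confidential draft")
overwrite_and_delete("secret.txt", passes=7)
```

As with the slider, more passes mean more time: each pass rewrites every byte, which is why the most secure settings can take hours on a full disk.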

Once it’s done, the Mac’s hard drive will be clean as a whistle and ready for its next adventure: a fresh installation of OS X, donation to a relative or a local charity, or a trip to an e-waste facility. Of course, you can still drill a hole in your disk or smash it with a sledgehammer if it makes you happy, but now you know how to wipe the data from your old computer with much less ruckus.

The above instructions apply to older Macintoshes with HDDs. What do you do if you have an SSD?

Securely Erasing SSDs, and Why Not To

Most new Macs ship with solid state drives (SSDs). Only the iMac and Mac mini ship with regular hard drives anymore, and even those are available in pure SSD variants if you want.

If your Mac comes equipped with an SSD, Apple’s Disk Utility software won’t actually let you zero the hard drive.

Wait, what?

In a tech note posted to Apple’s own online knowledgebase, Apple explains that you don’t need to securely erase your Mac’s SSD:

With an SSD drive, Secure Erase and Erasing Free Space are not available in Disk Utility. These options are not needed for an SSD drive because a standard erase makes it difficult to recover data from an SSD.

In fact, some folks will tell you not to zero out the data on an SSD, since doing so causes wear on the memory cells that, over time, can affect reliability. I don’t think that’s nearly as big an issue as it used to be; SSD reliability and longevity have improved.

If “Standard Erase” doesn’t quite make you feel comfortable that your data can’t be recovered, there are a couple of options.

FileVault Keeps Your Data Safe

One way to make sure that your SSD’s data remains secure is to use FileVault. FileVault is whole-disk encryption for the Mac. With FileVault engaged, you need a password to access the information on your hard drive. Without the password, that data is unreadable.

There’s one potential downside of FileVault — if you lose your password or the encryption key, you’re screwed: You’re not getting your data back any time soon. Based on my experience working at a Mac repair shop, losing a FileVault key happens more frequently than it should.

When you first set up a new Mac, you’re given the option of turning FileVault on. If you don’t do it then, you can turn on FileVault at any time by clicking on your Mac’s System Preferences, clicking on Security & Privacy, and clicking on the FileVault tab. Be warned, however, that the initial encryption process can take hours, as will decryption if you ever need to turn FileVault off.

With FileVault turned on, you can restart your Mac into its Recovery System (by restarting the Mac while holding down the command and R keys) and erase the hard drive using Disk Utility, once you’ve unlocked it (by selecting the disk, clicking the File menu, and clicking Unlock). That deletes the FileVault key, which renders any data remaining on the drive unreadable.
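This “destroy the key and the data is gone” approach is sometimes called crypto-shredding, and it can be sketched conceptually. The snippet below is a toy illustration only, not FileVault’s actual XTS-AES implementation: it derives a throwaway keystream from repeated hashing (an insecure design, used here purely for demonstration) to show that once the key is gone, the ciphertext alone is useless.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from repeated hashing -- for illustration, not real cryptography."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(keystream(key, len(data)), data))

key = secrets.token_bytes(32)               # stands in for the FileVault volume key
plaintext = b"tax returns and passwords"
ciphertext = xor_cipher(key, plaintext)

assert xor_cipher(key, ciphertext) == plaintext  # with the key, the data comes back
key = None  # "erasing" the key: only unreadable ciphertext remains on disk
```

Erasing one small key is far faster than overwriting an entire drive, which is why deleting the FileVault key is an effective erase.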

FileVault doesn’t impact the performance of most modern Macs, though I’d suggest only using it if your Mac has an SSD, not a conventional hard disk drive.

Securely Erasing Free Space on Your SSD

If you don’t want to take Apple’s word for it, if you’re not using FileVault, or if you simply want the extra reassurance, there is a way to securely erase the free space on your SSD. It’s a little more involved, but it works.

Before we get into the nitty-gritty, let me state for the record that this really isn’t necessary, which is why Apple has made it so hard to do. But if you’re set on it, you’ll need to use Apple’s Terminal app. Terminal provides command-line access to the OS X operating system. Terminal lives in the Utilities folder, but you can also access it from the Mac’s Recovery System. Once your Mac has booted into the Recovery partition, click the Utilities menu and select Terminal to launch it.

From a Terminal command line, type:

diskutil secureErase freespace VALUE /Volumes/DRIVE

That tells your Mac to securely erase the free space on your SSD. You’ll need to change VALUE to a number between 0 and 4:

  • 0: a single-pass run of zeroes
  • 1: a single-pass run of random numbers
  • 2: a 7-pass erase
  • 3: a 35-pass erase
  • 4: a 3-pass erase

DRIVE should be changed to the name of your hard drive. To run a 7-pass erase of the SSD named “JohnB-Macbook”, you would enter the following:

diskutil secureErase freespace 2 /Volumes/JohnB-Macbook

And remember: if the name of your Mac’s hard drive contains a space, you need to escape the space with a backslash. For example, to run a 35-pass erase on a hard drive called “Macintosh HD” you enter the following:

diskutil secureErase freespace 3 /Volumes/Macintosh\ HD

Something to remember is that the more extensive the erase procedure, the longer it will take.
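If manual backslash-escaping feels error-prone, a small script can assemble the command with the quoting handled for you. This is a sketch only (the helper function name is my own): Python’s shlex.quote wraps names containing spaces in single quotes, which the shell accepts just as it accepts the backslash form.

```python
import shlex

def secure_erase_command(value: int, volume_name: str) -> str:
    """Build a diskutil secureErase freespace command with safe shell quoting."""
    if not 0 <= value <= 4:
        raise ValueError("VALUE must be between 0 and 4")
    # shlex.quote only adds quotes when the name needs them (e.g. contains a space).
    return f"diskutil secureErase freespace {value} " + shlex.quote(f"/Volumes/{volume_name}")

print(secure_erase_command(3, "Macintosh HD"))
# → diskutil secureErase freespace 3 '/Volumes/Macintosh HD'
```

Copy the printed command into Terminal rather than guessing at the escaping by hand.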

When Erasing is Not Enough — How to Destroy a Drive

If you absolutely, positively need to be sure that all the data on a drive is irretrievable, see this Scientific American article (with contributions by Gleb Budman, Backblaze CEO), How to Destroy a Hard Drive — Permanently.

The post Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.