Tag Archives: Cloud Storage

Channel Partner Program Launch: Ease, Transparency, and Predictability for the Channel

Post Syndicated from Elton Carneiro original https://www.backblaze.com/blog/channel-partner-program-launch-ease-transparency-and-predictability-for-the-channel/

Since the early days of Backblaze B2 Cloud Storage, the advocacy that resellers and distributors have carried out in support of our products has been super important for us. Today, we can start to more fully return the favor: We are excited to announce the launch of our Channel Partner program.

In this program, we commit to delivering greater ease, transparency, and predictability to our Channel Partners through a suite of tools, resources, incentives, and benefits which will roll out over the balance of 2022. We’ve included the details below.

Read on to learn the specifics, or reach out to our Partner team today to schedule a meeting.

“When Backblaze expressed interest in working with CloudBlue Marketplace, we were excited to bring them into the fold. Their ease-of-use and affordable price point make them a great offering to our existing resellers, especially those in the traditional IT, MSP, and media & entertainment space.”
—Jess Warrington, General Manager, North America at CloudBlue

The Program’s Mission

This new program is designed to offer a simple and streamlined way for Channel Partners to do business with Backblaze. In this program, we are committed to three principles:

Ease

We’ll work consistently to simplify the way partners do business with Backblaze, from recruitment to onboarding, and from engagement to deal close. Work can be hard enough; we want working with us to feel easy.

Transparency

Openness and honesty are central to Backblaze’s business, and they will be central to our dealings with partners as well. As we evolve the program, we’ll share our experiences and thoughts early and often, and we’ll keep our doors open to your feedback on how we can continue to improve the Channel Partner experience.

Predictability

Maintaining predictable pricing and a scalable capacity model for our resellers and distributors is central to this effort. We’ll also increasingly bundle additional features to answer all your customers’ cloud needs.

The Program’s Value

Making these new investments in our Channel Partner program is all about opening up the value of B2 Cloud Storage to more businesses. To achieve that, our team will help you to engage more customers, help those customers to build their businesses and accelerate their growth, and ultimately increase your profits.

Engage

Backblaze will drive joint marketing activities, provide co-branded collateral, and establish market development funds to drive demand.

Build

Any technology that supports S3-compatible storage can be paired with B2 Cloud Storage, and we continue to expand our Alliance Partner ecosystem—this means you can sell the industry-leading solutions your customers prefer paired with Backblaze B2.

Accelerate

Our products are differentiated by their ease of adoption and use, meaning they’ll be easy to serve to your customers for backup, archive, or any other object storage use case—growing your top-line revenue.

The Details

To deliver on the mission this program is aligned around, and the value it aims to deliver, our team has developed a collection of benefits, rewards, and resources. Many of these are available today, and some will come later this year (which we’ll clarify below). Importantly, we want to emphasize that this is just the beginning, and we will work to add to each of these lists over the coming months and years.

Benefits:

  • Deal registration.
  • Channel-exclusive product: Backblaze B2 Reserve.
  • Logo promotion on www.backblaze.com.
  • Joint marketing activities.

Rewards:

  • Rebates.
  • Seller incentives.
  • Market development funds (coming soon).

Resources:

  • Partner sales manager to help with onboarding, engagement, and deal close.
  • Partner marketing manager to help with joint messaging, go-to-market, and collateral.
  • A password-protected partner portal (coming soon).
  • Automation of deal registration, lead passing, and seller incentive payments.

Join Us!

We can’t wait to join with our current and future Channel Partners to deliver tomorrow’s solutions to any customer who can use astonishingly easy cloud storage! (We think that’s pretty much everybody.)

If you’re a reseller or distributor, we’d love to hear from you. If you’re a customer interested in benefiting from any of the above, we’d love to connect you with the right Channel Partner team to serve your needs. Either way, the doors are open and we look forward to helping out.


Announcing Developer Quick Starts: Open-source Code You Can Build On

Post Syndicated from Greg Hamer original https://www.backblaze.com/blog/announcing-developer-quick-starts-access-open-source-code-you-can-build-on/

Developing finished applications always requires coding custom functionality, but, as a developer, isn’t it great when you have pre-built, working code you can use as scaffolding for your applications? That way, you can get right to the custom components.

To help you finish building applications faster, we are launching our Developer Quick Start series. This series provides developers with free, open-source code available for download from GitHub. We also built pre-staged buckets containing a browsable media application and sample data, and we’re sharing API key pairs that grant read-only, programmatic access to those buckets. That means you can download the code, run it, and see the results without even having to create a Backblaze account!

Today, we’re debuting the first Quick Start in the series—using Python with the Backblaze S3 Compatible API. Read on to get access to all of the resources, including the code on GitHub, sample data to run it against, a video walkthrough, and guided instructions.

Announcing Our Developer Quick Start for Using Python With the Backblaze S3 Compatible API

All of the resources you need to use Python with the Backblaze S3 Compatible API are linked below:

  1. Sample Application: Get our open-source code on GitHub here.
  2. Hosted Sample Data: Experiment with a media application with Application Keys shared for read-only access here.
  3. Video Code Walk-throughs of Sample Application: Share and rewatch walk-throughs on demand here.
  4. Guided Instructions: Get instructions that guide you through downloading the sample code, running it yourself, and then using the code as you see fit, including incorporating it into your own applications here.

Depending on your skill level, the open-source code may be all that you need. If you’re new to the cloud, or just want a deeper, guided walk-through on the source code, check out the written code walk-throughs and video-guided code walk-throughs, too. Whatever works best for you, please feel free to mix and match as you see fit.


The Quick Start walks you through how to perform create and delete API operations inside your own account, all of which can be completed using Backblaze B2 Cloud Storage—and the first 10GB of storage per month are on us.

With the Quick Start code we are sharing, you can get basic functionality working and interacting with B2 Cloud Storage in minutes.
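
To give you a flavor before you download anything, here’s a minimal sketch (not the Quick Start code itself) that lists the files in a bucket with boto3 via the Backblaze S3 Compatible API. The endpoint, keys, and bucket name are placeholders to replace with your own values:

# Minimal sketch: list files in a B2 bucket via the Backblaze S3 Compatible API.
# The endpoint, key ID, application key, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # your bucket's S3 endpoint
    aws_access_key_id="<your application key ID>",
    aws_secret_access_key="<your application key>",
)

# List up to 10 objects in the bucket and print their names and sizes.
response = s3.list_objects_v2(Bucket="<your bucket name>", MaxKeys=10)
for obj in response.get("Contents", []):
    print(f"{obj['Key']} ({obj['Size']} bytes)")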

Share the Love

Know someone who might be interested in leveraging the power and ease of cloud storage? Feel free to share these resources at will. Also, we welcome your participation in the projects on GitHub via pull requests. If you are satisfied, feel free to star the project on GitHub or like the videos on YouTube.

Finally, please explore our other Backblaze B2 Sample Code Repositories up on GitHub.

Stay Tuned for More

The initial launch of the Developer Quick Start series logic is available in Python. We will be rolling out Developer Quick Starts for other languages in the months ahead.

Which programming languages (or scripting environments) are of most interest for you? Please let us know in the comments down below. We are continually adding more working examples in GitHub projects, both in Python and in additional languages. Your feedback in the comments below can help guide what gets priority.

We look forward to hearing from you about how these Developer Quick Starts work for you!


Data Protection x2: Explore What Cloud Replication Can Do

Post Syndicated from Jeremy Milk original https://www.backblaze.com/blog/data-protection-x2-explore-what-cloud-replication-can-do/

Anyone overwhelmed by their to-do list wishes they could be in two places at once. Backblaze’s newest feature—currently in beta—might not be able to grant that wish, but it will soon offer something similarly useful: The new Cloud Replication feature means data can be in two places at once, solving a whole suite of issues that keep IT teams up at night.

The Background: What Is Backblaze Cloud Replication?

Cloud Replication will enable Backblaze customers to store files in multiple regions, or create multiple copies of files in one region, across the Backblaze Storage Cloud. Simply set replication rules via web UI or API on a bucket. Once the rules are set, any data uploaded to that bucket will automatically be replicated into a destination bucket either in the same region or another region. If it sounds easy, that’s because it is—even the English majors in our Marketing department have mastered this one.
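
For the API route, here’s a rough Python sketch of what adding a replication rule to a source bucket could look like using the B2 Native API’s b2_update_bucket call. Treat the replicationConfiguration field names as assumptions; the exact schema is spelled out in the draft documentation linked later in this post:

# Rough sketch only: the replicationConfiguration field names below are assumptions
# based on the draft Cloud Replication documentation and may differ from the final API.
# This shows only the source-side rule; the destination bucket needs its own configuration.
import requests

# Authorize against the B2 Native API with an application key.
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
    auth=("<application key ID>", "<application key>"),
).json()

# Attach a replication rule to the source bucket pointing at a destination bucket.
requests.post(
    f"{auth['apiUrl']}/b2api/v2/b2_update_bucket",
    headers={"Authorization": auth["authorizationToken"]},
    json={
        "accountId": auth["accountId"],
        "bucketId": "<source bucket ID>",
        "replicationConfiguration": {
            "asReplicationSource": {
                "sourceApplicationKeyId": "<application key ID>",
                "replicationRules": [
                    {
                        "replicationRuleName": "replicate-everything",  # assumed field name
                        "destinationBucketId": "<destination bucket ID>",
                        "fileNamePrefix": "",  # empty prefix: replicate all new files
                        "isEnabled": True,
                        "priority": 1,
                    }
                ],
            },
        },
    },
)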

The Why: What Can Cloud Replication Do for You?

There are three key use cases for Cloud Replication:

  • Protecting data for security, compliance, and continuity purposes.
  • Bringing data closer to distant teams or customers for faster access.
  • Providing version protection for testing and staging in deployment environments.

Redundancy for Compliance and Continuity

This is the top use case for cloud replication, and will likely have value for almost any enterprise with advanced backup strategies.

Whether you are concerned about natural disasters, political instability, or complying with possible government, industry, or board regulations—replicating data to another geographic region can check a lot of boxes easily and efficiently. Especially as enterprises move completely into the cloud, data redundancy will increasingly be a requirement for:

  • Modern business continuity and disaster recovery plans.
  • Industry and board compliance efforts centered on concentration risk issues.
  • Data residency requirements stemming from regulations like GDPR.

The gold standard for backup strategies has long been a 3-2-1 approach. The core principles of 3-2-1, originally developed for an on-premises world, still hold true, and today they are being applied in even more robust ways to an increasingly cloud-based world. Cloud replication is a natural evolution for organizations that are storing much more or even all of their data in the cloud or plan to in the future. It enables you to implement the core principles of 3-2-1, including redundancy and geographic separation, all in the cloud.

Data Proximity

If you have teams, customers, or workflows spread around the world, bringing a copy of your data closer to where work gets done can minimize speed-of-light limitations. Especially for media-heavy teams in game development and postproduction, seconds can make the difference in keeping creative teams operating smoothly. And because you can automate replication and use metadata to track accuracy and process, you can remove some manual steps from the process where errors and data loss tend to crop up.

Testing and Staging

Version control and smoke testing are nothing new, but when you’re controlling versions of large applications or trying to keep track of what’s live and what’s in testing, you might need a tool with more horsepower and options for customization. Cloud Replication can serve these needs.

You can easily replicate objects between buckets dedicated for production, testing, or staging if you need to use the same data and maintain the same metadata. This allows you to observe best practices and automate replication between environments.

The Status: When Can I Get My Hands on Cloud Replication?

Cloud Replication kicked off in beta in early April and our team and early testers have been breaking in the feature since then.

Here’s how things are lined up:

  • April 18: Phase One (Underway)
    Phase one is a limited release that is currently underway. We’ve only unlocked new file replication in this release—meaning testers have to upload new data to test functionality.
  • May 24 (Projected): Phase Two
    We’ll be unlocking the “existing file” Cloud Replication functionality at this time. This means users will be able to set up replication rules on existing buckets to see how replication will work for their business data.
  • Early June (Projected): General Availability
    We’ll open the gates completely on June 7 with full functionality, yeehaw!

Want to Learn More About Cloud Replication?

Stay in the know about Cloud Replication availability—click here to get notified first.

If you want to dig into how this feature works via the CLI and API and learn about some of the edge cases, special circumstances, billing implications, and things to look out for—our draft Cloud Replication documentation can be accessed here. We also have some help articles walking through how to create rules via the web application here.

Otherwise, we look forward to sharing more when this feature is fully baked and ready for consumption.


Developers: Spring Into Action With Backblaze B2 Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/developers-spring-into-action-with-backblaze-b2-cloud-storage/

Spring is in the air here in the Northern Hemisphere, and a developer’s fancy lightly turns to new projects. Whether you’ve already discovered how astonishingly easy it is to work with Backblaze B2 Cloud Storage or not, we hope you find this collection of handy tips, tricks, and resources useful—many of the techniques apply no matter where you are storing data. But first, let’s have a little fun…

Backblaze Developer Meetup

Whether you call yourself a developer, software engineer, or programmer, if you are a Backblaze B2 customer or are just Backblaze B2-curious and want to hang out in person with like-minded folks, here’s your chance. Backblaze is hosting its very first developer meetup on May 24th from 6–8 p.m. in downtown San Mateo, California. We’ll be joined by Gleb Budman, CEO and Co-founder of Backblaze, members of our Engineering team, our Developer Evangelism team, sales engineers, product managers, and more. There’ll be snacks, drinks, prizes, and more. Space is limited, so please sign up for a spot using this Google Form by May 13th and we’ll let you know if there’s space.

Join Us at GlueCon 2022

Are you going to GlueCon 2022? Backblaze will be there! GlueCon is a developer-centric event that will be held in Broomfield, Colorado on May 18th and 19th, 2022. Backblaze is the partner sponsor of the event and Pat Patterson, our chief technical evangelist, will deliver one of the keynotes. There’s still time to learn more and sign up for GlueCon 2022, but act now!

Tips and Tricks

Here’s a collection of tips and tricks we’ve published over the last few months. You can take them as written or use your imagination as to what other problems you can solve.

  • Media Transcoding With Backblaze B2 and Vultr Cloud Compute
    Your task is simple: allow users to upload video from their mobile or desktop device and then make that video available to a wide variety of devices anywhere in the world. We walk you through how we built a very simple video sharing site with Backblaze B2 and Vultr’s Infrastructure Cloud using Vultr’s Cloud Compute instances for the application servers and their new Optimized Cloud Compute instances for the transcoding workers. This includes setup instructions for Vultr and sample code in GitHub.
  • Free Image Hosting With Cloudflare and Backblaze B2
    Discover how the combination of Cloudflare and Backblaze B2 allows you to create your own, personal 10GB image hosting site for free. You start out using Cloudflare Transform Rules to give you access to HTTP traffic at the CDN edge server. This allows you to manipulate the URI path, query string, and HTTP headers of incoming requests and outgoing responses. We provide step-by-step instructions on how to set up both Cloudflare and Backblaze B2 and leave the rest up to you.
  • Building a Multiregion Origin Store With Backblaze B2 and Fastly Compute@Edge
    Compute@Edge is a serverless computing environment built on the same caching platform as the Fastly Deliver@Edge CDN. Serverless computing removes provisioning, configuration, maintenance, and scaling from the equation. One place where this technology can be used is in serving your own data from multiple Backblaze B2 regions—in other words, serve it from the closest or most available location. Learn how to create a Compute@Edge application and connect it to Backblaze B2 buckets, making your data available anywhere.
  • Using a Cloudflare Worker to Send Notifications on Backblaze B2 Events
    When building an application, a common requirement is to be able to send a notification of an event (e.g., a user uploading a file) so that an application can take some action (e.g., processing the file). Learn how you can use a Cloudflare Worker to send event notifications to a wide range of recipients, allowing great flexibility when building integrations with Backblaze B2.


What’s Next?

Coming soon on our blog, we’ll provide a developer quick start kit using Python that you can use with the Backblaze S3 Compatible API to store and access data in B2 Cloud Storage. The quick start kit includes:

  1. A sample application with open-source code on GitHub.
  2. Video code walk-throughs of the sample application.
  3. Hosted sample data.
  4. Guided instructions that walk you through downloading the sample code, running it yourself, and then using the code as you see fit, including incorporating it into your own applications.

Launching in mid-May; stay tuned!

Wrap-up

Hopefully you’ve found a couple of things you can try out using Backblaze B2 Cloud Storage. Join the many developers around the world who have discovered how easy it can be to work with Backblaze B2. If you have any questions, you can visit www.backblaze.com/help.html to use our Knowledge Base, chat with our customer support, or submit a customer support request. Of course, you’ll find lots of other developers online who are more than willing to help as well. Good luck and invent something awesome.


Backblaze Drive Stats for Q1 2022

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2022/

A long time ago, in a galaxy far, far away, Backblaze began collecting and storing statistics about the hard drives it uses to store customer data. As of the end of Q1 2022, Backblaze was monitoring 211,732 hard drives and SSDs in our data centers around the universe. Of that number, there were 3,860 boot drives, leaving us with 207,872 data drives under management. This report will focus on those data drives. We will review the hard drive failure rates for those drive models that were active as of the end of Q1 2022, and we’ll also look at their lifetime failure statistics. In between, we will dive into the failure rates of the active drive models over time. Along the way, we will share our observations and insights on the data presented and, as always, we look forward to you doing the same in the comments section at the end of the report.

“The greatest teacher, failure is.”1

As of the end of Q1 2022, Backblaze was monitoring 207,872 hard drives used to store data. For our evaluation, we removed 394 drives from consideration as they were either used for testing purposes or were drive models which did not have at least 60 active drives. This leaves us with 207,478 hard drives to analyze for this report. The chart below contains the results of our analysis for Q1 2022.

“Always pass on what you have learned.”2

In reviewing the Q1 2022 table above and the data that lies underneath, we offer a few observations and caveats:

  • “The Force is strong with this one.”3 The 6TB Seagate (model: ST6000DX000) continues to defy time with zero failures during Q1 2022 despite an average age of nearly seven years (83.7 months). 98% of the drives (859) were installed within the same two-week period back in Q1 2015. The youngest 6TB drive in the entire cohort is a little over four years old. The 4TB Toshiba (model: MD04ABA400V) also had zero failures during Q1 2022 and the average age (82.3 months) is nearly as old as the Seagate drives, but the Toshiba cohort has only 97 drives. Still, they’ve averaged just one drive failure per year over their Backblaze lifetime.
  • “Great, kid, don’t get cocky.”4 There were a number of padawan drives (in average age) that also had zero drive failures in Q1 2022. The two 16TB WDC drives (models: WUH721816ALEL0 and WUH721816ALEL4) lead the youth movement with an average age of 5.9 and 1.5 months respectively. Between the two models, there are 3,899 operational drives and only one failure since they were installed six months ago. A good start, but surely not Jedi territory yet.
  • “I find your lack of faith disturbing.”5 You might have noticed the AFR for Q1 2022 of 24.31% for the 8TB HGST drives (model: HUH728080ALE604). The drives are young with an average age of two months, and there are only 76 drives with a total of 4,504 drive days. If you find the AFR bothersome, I do in fact find your lack of faith disturbing, given the history of stellar performance in the other HGST drives we employ. Let’s see where we are in a couple of quarters.
  • “Try not. Do or do not. There is no try.”6 The saga continues for the 14TB Seagate drives (model: ST14000NM0138). When we last saw this drive, the Seagate/Dell/Backblaze alliance continued to work diligently to understand why the failure rate was stubbornly high. Unusual it is for this model, and the team has employed multiple firmware tweaks over the past several months with varying degrees of success. Patience.

“I like firsts. Good or bad, they’re always memorable.”7

We have been delivering quarterly and annual Drive Stats reports since Q1 2015. Along the way, we have presented multiple different views of the data to help provide insights into our operational environment and the hard drives in that environment. Today we’d like to offer a different way to visualize the data by comparing the average age of many of the drive models we currently use against the annualized failure rate of each of those models: the Drive Stats Failure Square:

“…many of the truths that we cling to depend on our viewpoint.”8

Each point on the Drive Stats Failure Square represents a hard drive model in operation in our environment as of 3/31/2022 and lies at the intersection of the average age of that model and the annualized failure rate of that model. We only included drive models with a lifetime total of at least one million drive days or a confidence interval of 0.6 or less.

The resulting chart is divided into four equal quadrants, which we will categorize as follows:

  • Quadrant I: Retirees. Drives in this quadrant have performed well, but given their current high AFR level they are first in line to be replaced.
  • Quadrant II: Winners. Drives in this quadrant have proven themselves to be reliable over time. Given their age, we need to begin planning for their replacement, but there is no need to panic.
  • Quadrant III: Challengers. Drives in this quadrant have started off on the right foot and don’t present any current concerns for replacement. We will continue to monitor these drive models to ensure they stay on the path to the winners quadrant instead of sliding off to quadrant IV.
  • Quadrant IV: Muddlers. Drives in this quadrant should be replaced if possible, but they can continue to operate if their failure rates remain at their current rate. The redundancy and durability built into the Backblaze platform protect data from the higher failure rates of the drives in this quadrant. Still, these drives are a drain on data center and operational resources.

“Difficult to see; always in motion is the future.”9

Obviously, the Winners quadrant is the desired outcome for all of the drive models we employ. But every drive basically starts out in either quadrant III or IV and moves from there over time. The chart below shows how the drive models in quadrant II (Winners) got there.

“Your focus determines your reality.”10

Each drive model is represented by a snake-like line (Snakes on a plane!?) which shows the AFR of the drive model as the average age of the fleet increased over time. Interestingly, each of the six models currently in quadrant II has a different backstory. For example, who could have predicted that the 6TB Seagate drive (model: ST6000DX000) would end up in the Winners quadrant, given its less than auspicious start in 2015? And that drive was not alone: the 8TB Seagate drives (models: ST8000NM0055 and ST8000DM002) exhibited the same behavior.

This chart can also give us a visual clue as to the direction of the annualized failure rate over time for a given drive model. For example, the 10TB Seagate drive seems more interested in moving into the Retiree quadrant over the next quarter or so and as such its replacement priority could be increased.

“In my experience, there’s no such thing as luck.”11

In the quarterly Drive Stats table at the start of this report, there is some element of randomness which can affect the results. For example, whether a drive is reported as a failure on the 31st of March at 11:59 p.m. or at 12:01 a.m. on April 1st can have a small effect on the results. The quarterly results are still useful in surfacing unexpected failure rate patterns, but the most accurate information regarding a given drive model is captured in its lifetime annualized failure rate.

The chart below shows the lifetime annualized failure rates of all the drive models in production as of March 31, 2022.

“You have failed me for the last time…”12

The lifetime annualized failure rate for all the drives listed above is 1.39%. That was down from 1.40% at the end of 2021. One year ago (3/31/2021), the lifetime AFR was 1.49%.

When looking at the lifetime failure table above, any drive models with less than 500,000 drive days or a confidence interval greater than 1.0% do not have enough data to be considered an accurate portrayal of their performance in our environment. The 8TB HGST drives (model: HUH728080ALE604) and the 16TB Toshiba drives (model: MG08ACA16TA) are good examples of such drives. We list these drives for completeness as they are also listed in the quarterly table at the beginning of this review.

Given the criteria above regarding drive days and confidence intervals, the best performing drive in our environment for each manufacturer is:

  • HGST: 12TB, model: HUH721212ALE600, AFR: 0.33%
  • Seagate: 12TB, model: ST12000NM001G, AFR: 0.63%
  • WDC: 14TB, model: WUH721414ALE6L4, AFR: 0.33%
  • Toshiba: 16TB, model: MG08ACA16TEY, AFR: 0.70%

“I never ask that question until after I’ve done it!”13

For those of you interested in how we produce this report, the data we used is available on our Hard Drive Test Data webpage. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell the data itself to anyone; it is free.
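
If you want to reproduce the headline numbers yourself, here’s a rough pandas sketch of how an annualized failure rate can be computed from the daily snapshot CSVs. Each row in the data represents one drive on one day, so the row count per model is its drive days; the directory path is a placeholder for wherever you unzip the quarterly download:

# Rough sketch: compute annualized failure rate (AFR) per drive model from the
# daily snapshot CSVs on the Hard Drive Test Data webpage.
# AFR = failures / (drive days / 365) * 100
import glob
import pandas as pd

frames = []
for path in glob.glob("data_Q1_2022/*.csv"):  # placeholder path to the unzipped data
    frames.append(pd.read_csv(path, usecols=["date", "model", "failure"]))
df = pd.concat(frames, ignore_index=True)

# One row per drive per day: row count = drive days, and failure is 0 or 1.
stats = df.groupby("model").agg(drive_days=("failure", "size"), failures=("failure", "sum"))
stats["afr_percent"] = stats["failures"] / (stats["drive_days"] / 365) * 100

print(stats.sort_values("afr_percent").to_string())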

Good luck and let us know if you find anything interesting. And no, it’s not a trap.

Quotes Referenced

  1. “The greatest teacher, failure is.”—Yoda, “The Last Jedi”
  2. “Always pass on what you have learned.”—Yoda, “Return of the Jedi”
  3. “The Force is strong with this one.”—Darth Vader, “A New Hope”
  4. “Great, kid, don’t get cocky.”—Han Solo, “A New Hope”
  5. “I find your lack of faith disturbing.”—Darth Vader, “A New Hope”
  6. “Try not. Do or do not. There is no try.”—Yoda, “The Empire Strikes Back”
  7. “I like firsts. Good or bad, they’re always memorable.”—Ahsoka Tano, “The Mandalorian”
  8. “…many of the truths that we cling to depend on our viewpoint.”—Obi-Wan Kenobi, “Return of the Jedi”
  9. “Difficult to see; always in motion is the future.”—Yoda, “The Empire Strikes Back”
  10. “Your focus determines your reality.”—Qui-Gon Jinn, “The Phantom Menace”
  11. “In my experience, there’s no such thing as luck.”—Obi-Wan Kenobi, “A New Hope”
  12. “You have failed me for the last time…”—Darth Vader, “The Empire Strikes Back”
  13. “I never ask that question until after I’ve done it!”—Han Solo, “The Force Awakens”


Use a Cloudflare Worker to Send Notifications on Backblaze B2 Events

Post Syndicated from Pat Patterson original https://www.backblaze.com/blog/use-a-cloudflare-worker-to-send-notifications-on-backblaze-b2-events/

When building an application or solution on Backblaze B2 Cloud Storage, a common requirement is to be able to send a notification of an event (e.g., a user uploading a file) so that an application can take some action (e.g., processing the file). In this blog post, I’ll explain how you can use a Cloudflare Worker to send event notifications to a wide range of recipients, allowing great flexibility when building integrations with Backblaze B2.

Why Use a Proxy to Send Event Notifications?

Event notifications are useful whenever you need to ensure that a given event triggers a particular action. For example, last month, I explained how a video sharing site running on Vultr’s Infrastructure Cloud could store raw and transcoded videos in Backblaze B2. In that example, when a user uploaded a video to a Backblaze B2 bucket via the web application, the web app sent a notification to a Worker app instructing the Worker to read the raw video file from the bucket, transcode it, and upload the processed file back to Backblaze B2.

A drawback of this approach is that, if we were to create a mobile app to upload videos, we would have to copy the notification logic into the mobile app. As the system grows, so does the maintenance burden. Each new app needs code to send notifications and, worse, if we need to add a new field to the notification message, we have to update all of the apps. If, instead, we move the notification logic from the web application to a Cloudflare Worker, we can send notifications on Backblaze B2 events from a single location, regardless of the origin of the request. This pattern of wrapping an API with a component that presents the exact same API but adds its own functionality is known as a proxy.

Cloudflare Workers: A Brief Introduction

Cloudflare Workers provides a serverless execution environment that allows you to create applications that run on Cloudflare’s global edge network. A Cloudflare Worker application intercepts all HTTP requests destined for a given domain, and can return any valid HTTP response. Your Worker can create that HTTP response in any way you choose. Workers can consume a range of APIs, allowing them to directly interact with the Cloudflare cache, manipulate globally unique Durable Objects, perform cryptographic operations, and more.

Cloudflare Workers often, but not always, implement the proxy pattern, sending outgoing HTTP requests to servers on the public internet in the course of servicing incoming requests. If we implement a proxy that intercepts requests from clients to Backblaze B2, it could both forward those requests to Backblaze B2 and send notifications of those requests to one or more recipient applications.

This example focuses on proxying requests to the Backblaze S3 Compatible API, and can be used with any S3 client application that works with Backblaze B2 by simply changing the client’s endpoint configuration.

Implementing a similar proxy for the B2 Native API is much simpler, since B2 Native API requests are secured by a bearer token rather than a signature. A B2 Native API proxy would simply copy the incoming request, including the bearer token, changing only the target URL. Look out for a future blog post featuring a B2 Native API proxy.

Proxying Backblaze B2 Operations With a Cloudflare Worker

S3 clients send HTTP requests to the Backblaze S3 Compatible API over a TLS-secured connection. Each request includes the client’s Backblaze Application Key ID (access key ID in AWS parlance) and is signed with its Application Key (secret access key), allowing Backblaze B2 to authenticate the client and verify the integrity of the request. The signature algorithm, AWS Signature Version 4 (SigV4), includes the Host header in the signed data, ensuring that a request intended for one recipient cannot be redirected to another. Unfortunately, this is exactly what we want to happen in this use case!

Our proxy Worker must therefore validate the signature on the incoming request from the client, and then create a new signature that it can include in the outgoing request to the Backblaze B2 endpoint. Note that the Worker must be configured with the same Application Key and ID as the client to be able to validate and create signatures on the client’s behalf.

Here’s the message flow:

  1. A user performs an action in a Backblaze B2 client application, for example, uploading an image.
  2. The client app creates a signed request, exactly as it would for Backblaze B2, but sends it to the Cloudflare Worker rather than directly to Backblaze B2.
  3. The Worker validates the client’s signature, and creates its own signed request.
  4. The Worker sends the signed request to Backblaze B2.
  5. Backblaze B2 validates the signature, and processes the request.
  6. Backblaze B2 returns the response to the Worker.
  7. The Worker forwards the response to the client app.
  8. The Worker sends a notification to the webhook recipient.
  9. The recipient takes some action based on the notification.

These steps are illustrated in the diagram below.

The validation and signing process imposes minimal overhead, even for requests with large payloads, since the signed data includes a SHA-256 digest of the request payload, included with the request in the x-amz-content-sha256 HTTP header, rather than the payload itself. The Worker need not even read the incoming request payload into memory, instead passing it to the Cloudflare Fetch API to be streamed directly to the Backblaze B2 endpoint.
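
For reference, the value carried in that header is simply the hex-encoded SHA-256 digest of the request body. A generic Python illustration (not the Worker’s own code, which runs in JavaScript) looks like this:

# Generic illustration: the x-amz-content-sha256 header carries the hex-encoded
# SHA-256 digest of the request payload, not the payload itself.
import hashlib

payload = b"example request body"
headers = {"x-amz-content-sha256": hashlib.sha256(payload).hexdigest()}
print(headers)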

The Worker returns Backblaze B2’s response to the client unchanged, and creates a JSON-formatted webhook notification containing the following parameters:

  • contentLength: Size of the request body, if there was one, in bytes.
  • contentType: Describes the request body, if there was one. For example, image/jpeg.
  • method: HTTP method, for example, PUT.
  • signatureTimestamp: Request timestamp included in the signature.
  • status: HTTP status code returned from B2 Cloud Storage, for example 200 for a successful request or 404 for file not found.
  • url: The URL requested from B2 Cloud Storage, for example, https://s3.us-west-004.backblazeb2.com/my-bucket/hello.txt.

The Worker submits the notification to Cloudflare for asynchronous processing, so that the response to the client is not delayed. Once the interaction with the client is complete, Cloudflare POSTs the notification to the webhook recipient.

Prerequisites

If you’d like to follow the steps below to experiment with the proxy yourself, you will need to:

1. Creating a Cloudflare Worker Based on the Proxy Code

The Cloudflare Worker B2 Webhook GitHub repository contains full source code and configuration details. You can use the repository as a template for your own Worker using Cloudflare’s wrangler CLI. You can change the Worker name (my-proxy in the sample code below) as you see fit:

wrangler generate my-proxy https://github.com/backblaze-b2-samples/cloudflare-b2-proxy
cd my-proxy

2. Configuring and Deploying the Cloudflare Worker

You must configure AWS_ACCESS_KEY_ID and AWS_S3_ENDPOINT in wrangler.toml before you can deploy the Worker. Configuring WEBHOOK_URL is optional—you can set it to empty quotes if you just want a vanity URL for Backblaze B2.

[vars]

AWS_ACCESS_KEY_ID = "<your b2 application key id>"
AWS_S3_ENDPOINT = "<your endpoint - e.g. s3.us-west-001.backblazeb2.com>"
AWS_SECRET_ACCESS_KEY = "Remove this line after you make AWS_SECRET_ACCESS_KEY a secret in the UI!"
WEBHOOK_URL = "<e.g. https://api.example.com/webhook/1>"

Note the placeholder for AWS_SECRET_ACCESS_KEY in wrangler.toml. All variables used in the Worker must be set before the Worker can be published, but you should not save your Backblaze B2 application key to the file (see the note below). We work around these constraints by initializing AWS_SECRET_ACCESS_KEY with a placeholder value.

Use the CLI to publish the Worker project to the Cloudflare Workers environment:

wrangler publish

Now log in to the Cloudflare dashboard, navigate to your new Worker, and click the Settings tab, Variables, then Edit Variables. Remove the placeholder text, and paste your Backblaze B2 Application Key as the value for AWS_SECRET_ACCESS_KEY. Click the Encrypt button, then Save. The environment variables should look similar to this:

Finally, you must remove the placeholder line from wrangler.toml. If you do not do so, then the next time you publish the Worker, the placeholder value will overwrite your Application Key.

Why Not Just Set AWS_SECRET_ACCESS_KEY in wrangler.toml?

You should never, ever save secrets such as API keys and passwords in source code files. It’s too easy to forget to remove sensitive data from source code before sharing it either privately or, worse, on a public repository such as GitHub.

You can access the Worker via its default endpoint, which will have the form https://my-proxy.<your-workers-subdomain>.workers.dev, or create a DNS record in your own domain and configure a route associating the custom URL with the Worker.

If you try accessing the Worker URL via the browser, you’ll see an error message:

<Error>
<Code>AccessDenied</Code>
<Message>
Unauthenticated requests are not allowed for this api
</Message>
</Error>

This is expected—the Worker received the request, but the request did not contain a signature.

3. Configuring the Client Application

The only change required in your client application is the S3 endpoint configuration. Set it to your Cloudflare Worker’s endpoint rather than your Backblaze account’s S3 endpoint. As mentioned above, the client continues to use the same Application Key and ID as it did when directly accessing the Backblaze S3 Compatible API.
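
For example, with a Python boto3 client the change is a single parameter. The Worker URL below is a placeholder for your own endpoint, and the keys are the same ones the client already uses:

# Point an existing S3 client at the proxy Worker instead of the Backblaze S3 endpoint.
# The Worker URL is a placeholder; credentials are unchanged.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://my-proxy.example.workers.dev",  # your Worker endpoint
    aws_access_key_id="<your application key ID>",
    aws_secret_access_key="<your application key>",
)

# Requests now flow through the Worker, which forwards them to Backblaze B2
# and sends a webhook notification for each one.
s3.upload_file("image001.png", "<your bucket name>", "image001.png")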

4. Implementing a Webhook Consumer

The webhook consumer must accept JSON-formatted messages via HTTP POSTs at a public endpoint accessible from the Cloudflare Workers environment. The webhook notification looks like this:

{
"contentLength": 30155,
"contentType": "image/png",
"method": "PUT",
"signatureTimestamp": "20220224T193204Z",
"status": 200,
"url": "https://s3.us-west-001.backblazeb2.com/my-bucket/image001.png"
}
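
As a minimal sketch of such a consumer (assuming Flask, with an arbitrary route and port), a few lines of Python are enough to accept these POSTs and act on the fields:

# Minimal sketch of a webhook consumer; assumes Flask, with an arbitrary route and port.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook/1", methods=["POST"])
def handle_b2_event():
    event = request.get_json()
    # React to the notification, e.g., kick off processing for successful uploads.
    if event.get("method") == "PUT" and event.get("status") == 200:
        print(f"New object: {event['url']} ({event.get('contentLength', 0)} bytes)")
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)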

You might implement the webhook consumer in your own application or, alternatively, use an integration platform such as IFTTT, Zapier, or Pipedream to trigger actions in downstream systems. I used Pipedream to create a workflow that logs each Backblaze B2 event as a new row in a Google Sheet. Watch it in action in this short video:

Put the Proxy to Work!

The Cloudflare Worker/Backblaze B2 Proxy can be used as-is in a wide variety of integrations—anywhere you need an event in Backblaze B2 to trigger an action elsewhere. At the same time, it can be readily adapted for different requirements. Here are a few ideas.

In this initial implementation, the client uses the same credentials to access the Worker as the Worker uses to access Backblaze B2. It would be straightforward to use different credentials for the upstream and downstream connections, ensuring that clients can’t bypass the Worker and access Backblaze B2 directly.

POSTing JSON data to a webhook endpoint is just one of many possibilities for sending notifications. You can integrate the worker with any system accessible from the Cloudflare Workers environment via HTTP. For example, you could use a stream-processing platform such as Apache Kafka to publish messages reliably to any number of consumers, or, similarly, send a message to an Amazon Simple Notification Service (SNS) topic for distribution to SNS subscribers.

As a final example, the proxy has full access to the request and response payloads. Rather than sending a notification to a separate system, the worker can operate directly on the data, for example, transparently compressing incoming uploads and decompressing downloads. The possibilities are endless.

How will you put the Cloudflare Worker Backblaze B2 Proxy to work? Sign up for a Backblaze B2 account and get started!


A Refreshing Partnership: Backblaze and SoDA

Post Syndicated from Jennifer Newman original https://www.backblaze.com/blog/a-refreshing-partnership-backblaze-and-soda/

Editor’s Note: SoDA and Backblaze will be at NAB 2022 and would love to tell you more about our joint solution—offering data analysis and movement, FREE during initial migration—at NAB. Set up a meeting here.

Moving all your stuff is one of the most paralyzing projects imaginable. Which is why professional movers are amazing: one tackles the dishes, a couple folks hit the mattresses and closets. And there’s one guy (probably the rookie) who gets assigned to the junk drawers and odd gadgets. Suddenly, your old house is empty and your life is safe and orderly in boxes moving across the country.

Now imagine moving your business’s most valuable data across the country or the world. Whether it’s the organization, the security, the budgeting, or all of the above and then some, it can be absolutely paralyzing, even when your current data storage approach is holding you back and you know you need to make a change.

This is where SoDA comes in.

Essentially your professional movers in the cloud, the SoDA team analyzes your cloud or on-prem infrastructure and then orchestrates the movement, replication, or syncing of data to wherever you want it to go—limiting any downtime in the process and ensuring your data is secure in flight and structured exactly as you need it in its new home. If deciding where to send data is an issue, they’ll use the analysis of your existing setup to scope the best solution by value for your business.

The Backblaze and SoDA Partnership leverages SoDA’s data movement services to unlock Backblaze B2 Cloud Storage’s value for more businesses. The partnership offers the following benefits:

  • A cost analysis of your existing storage infrastructure.
  • A “dry run” feature that compares existing storage costs to new storage costs and any transfer costs so you “know before you go.”
  • The ability to define policies for how the data should move and where.
  • Flexibility to move, copy, sync, or archive data to Backblaze B2.
  • Migration and management via the Backblaze S3 Compatible API—easily migrate data, and then develop and manage both on-prem and cloud data via the API going forward.

Why Should You Try Backblaze and SoDA?

First: Backblaze will pay for SoDA’s services for any customer who agrees to migrate 10TB or more and commit to maintaining at least 10TB in Backblaze B2 for a minimum of one year.*

People don’t believe this when we tell them, but we’ll say it again: You won’t receive an invoice for your initial data migration, ever.

If that’s not reason enough to run a proof of concept, here’s more to think about:

Moving a couple of files to the cloud is easy peasy. But what happens if you have billions of files structured in multiple folders across multiple storage locations? You could use legacy tools or command line tools, but all of the scripting and resource management for the data in flight will be on you. You’re smart enough to do it, but if someone else is willing to pay the metaphorical movers, why deal with the hassle?

With SoDA, you do not have to worry about any of it. You define your source locations, define your destination Backblaze B2 bucket and start a transfer. SoDA takes care of the rest. That is truly easy peasy.

An Example of the Backblaze and SoDA Value Proposition

One customer we recently worked with was managing data in their own data center and having issues with reliability and SLAs for current customers. They needed availability at 99.999% as well as cost-effectiveness for future scaling. They identified Backblaze B2 as a provider that checked both boxes and Backblaze recommended SoDA for the move. The customer migrated 1PB of data (over a billion files) into B2 Cloud Storage. Other than making the decision and pointing where the data should go, the customer didn’t have to lift a finger.

Try It Today

If you’re not convinced yet, the SoDA and Backblaze teams are ready to make your life easier at any time. You can schedule a meeting here. Or you can check out the Quickstart guide to explore the solution today.

*Conditions may apply.


Ransomware Takeaways From Q1 2022

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/ransomware-takeaways-from-q1-2022/

The impact of the war in Ukraine is evolving in real time, particularly when it comes to the ransomware landscape. Needless to say, it dominated the ransomware conversation throughout Q1 2022. This quarter, we’re digging into some of the consequences from the invasion and what it means for you in addition to a few broader domestic developments.

Why? Staying up to date on ransomware trends can help you prepare your security infrastructure in the short and long term to protect your valuable data. In this series, we share five key takeaways based on what we saw over the previous quarter. Here’s what we observed in Q1 2022.

This post is a part of our ongoing series on ransomware. Take a look at our other posts for more information on how businesses can defend themselves against a ransomware attack, and more.

➔ Download The Complete Guide to Ransomware E-book

1. Sanctions and International Attention May Have Depressed Some Ransomware Activity

Following the ground invasion, ransomware attacks seemed to go eerily quiet, especially when government officials had predicted cyberattacks could be a key tactic. That’s not to say attacks weren’t being carried out (some may simply have gone unreported), but the radio silence was notable enough that a few media outlets wondered why.

International attention may be one reason—cybercriminals tend to be wary of the spotlight. Having the world’s eyes on a region where much cybercrime originates seems to have pushed cybercriminals into the shadows. The sanctions imposed on Russia have made it more difficult for cybercrime syndicates based in the country to receive, convert, and disperse payment from victims. The war also may have caused some chaos within ransomware syndicates and fomented fears that cyberinsurers would not pay for claims. As a result, we’ve seen a slowing of ransomware incidents in the first quarter, but that may not last.

Key Takeaway: While ransomware attacks may be down short-term, no one should be lulled into thinking the threat is gone, especially with government agencies on high alert and warnings from the highest levels that businesses should still be on guard.

2. Long-term Socioeconomic Impacts Could Trigger a New Wave of Cybercrime

As part of their ongoing analysis, cybersecurity consultants Coveware illustrated how the socioeconomic precarity caused by sanctions could lead to a larger number of people turning to cybercrime as a way to support themselves. In their reporting, they estimated the pool of potential new ransomware operators by analyzing the number of trained cybersecurity professionals they’d expect to be out of work given Russia’s rising unemployment rate. They found that only 7% of that newly unemployed workforce would have to convert to cybercrime to double the number of individuals currently acting as ransomware operators.

They note, however, that it remains to be seen what impact a larger labor pool would have since new entrants looking for fast cash may not be as willing to put in the time and effort to carry out big game tactics that typified the first half of 2021. As such, Coveware would expect to see an increase in attacks on small to medium-sized enterprises (which already make up the largest portion of ransomware victims today) and a decline in ransom demands with new operators hoping to make paying up more attractive for victims.

Key Takeaway: If the threat materializes, new entrants to the ransomware game are likely to try to fly under the radar, which means we would expect to see a larger number of small to medium-sized businesses targeted with ransoms that won’t make headlines, but that nonetheless hurt the businesses affected.

3. One Ransomware Operator Paid the Price for Russian Allegiance; Others Declared Neutrality

In February, ransomware group Conti declared their support for Russian actions and threatened to retaliate against Western entities targeting Russian infrastructure. But Conti appears to have miscalculated the loyalty of its affiliates, many of whom are likely pro-Ukraine. The declaration backfired when one of their affiliates leaked chat logs following the announcement. Shortly after, LockBit, another prolific ransomware group, took a cue from Conti’s blunder, declaring neutrality and swearing off any attacks against Russia’s many enemies. Their reasoning? Surprisingly inclusive for an organized crime syndicate:

“Our community consists of many nationalities of the world, most of our pentesters are from the CIS including Russians and Ukrainians, but we also have Americans, Englishmen, Chinese, French, Arabs, Jews, and many others in our team… We are all simple and peaceful people, we are all Earthlings.”

As we know, the ransomware economy is a wide, interconnected network of actors with varying political allegiances. The actions of LockBit may assuage some fears that Russia would be able to weaponize the cybercrime groups that have been allowed to operate with impunity within its borders, but that’s no reason to rest easy.

Key Takeaway: LockBit’s actions and words reinforce the one thing we know for sure about cybercriminals: Despite varying political allegiances, they’re unified by money and they will come after it if it’s easy for the taking.

4. CISA Reports the Globalized Threat of Ransomware Increased in 2021

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) released a statement in March summarizing the trends they saw throughout 2021. They outlined a number of tactics that we saw throughout the year as well, including:

  • Targeting attacks on holidays and weekends.
  • Targeting managed service providers.
  • Targeting backups stored in on-premises devices and in the cloud.

Among others, these tactics pose a threat to critical infrastructure, healthcare, financial institutions, education, businesses, and nonprofits globally.

Key Takeaway: The advisory outlines 18 mitigation strategies businesses and organizations can take to protect themselves from ransomware, including some of the top strategies as we see it: protecting cloud storage by backing up to multiple locations, requiring MFA for access, and encrypting data in the cloud.

5. Russia Could Use Ransomware to Offset Sanctions

Despite our first observation that ransomware attacks slowed somewhat early in the quarter, the Financial Crimes Enforcement Network (FinCEN) issued an alert in March that Russia may employ state-sponsored actors to evade sanctions and bring in cryptocurrency by ramping up attacks. They warned financial institutions, specifically, to be vigilant against these threats to help thwart attempts by state-sponsored Russian actors to extort ransomware payments.

The warnings follow an increase in phishing and distributed denial-of-service (DDoS) attacks that have persisted throughout the year and increased toward the end of February into March as reported by Google’s Threat Analysis Group. In reports from ThreatPost covering the alert as well as Google’s observations, cybersecurity experts seemed doubtful that ransomware payouts would make much of a dent in alleviating the sanctions, and noted that opportunities to use ransomware were more likely on an individual level.

Key Takeaway: The warnings serve as a reminder that both individual actors and state-sponsored entities have ransomware tools at their disposal to use as a means to retaliate against sanctions or simply support themselves, and that the best course of action is to shore up defenses before the anticipated threats materialize.

What This All Means for You

The changing political landscape will continue to shape the ransomware economy in new and unexpected ways. Being better prepared to avoid or mitigate the effects of ransomware makes more and more sense when you can’t be sure what to expect. Ransomware protection doesn’t have to be costly or confusing. Check out our ransomware protection solutions to get started.


Announcing Backblaze B2’s Universal Data Migration

Post Syndicated from Jeremy Milk original https://www.backblaze.com/blog/announcing-backblaze-b2-universal-data-migration/

Your data is valuable. Whether you’re sequencing genomes, managing a media powerhouse, or running your own business, you need fast, affordable, ready access to it in order to achieve your goals. But you can’t get the most out of your data if it’s locked in with a provider that makes it hard to manage or time-consuming to retrieve. Unfortunately, due to egress fees and closed, “all-in-one” platforms, vendor lock-in is currently trapping too many companies.

Backblaze can help: Universal Data Migration, a new service launched today, covers all data transfer costs, including legacy provider egress fees, and manages data migration from any legacy on-premises or cloud source. In short, your migration to Backblaze B2 Cloud Storage is on us.

Many of the businesses we’ve spoken to about this didn’t believe that the service was free at first. But seriously—you will never see an invoice for your transfer fees, any egress fees levied by your legacy vendor when you pull data out, or for the assistance in moving data.

If you’re still in doubt, read on to learn more about how Universal Data Migration can help you say goodbye to vendor lock-in, cold delays, and escalating storage costs, and hello to B2 Cloud Storage gains, all without fears of cost, complexity, downtime, or data loss.

How Does Universal Data Migration Work?

Backblaze has curated a set of integrated services to handle migrations from pretty much every source, including:

  • Public cloud storage
  • Servers
  • Network attached storage (NAS)
  • Storage area networks (SAN)
  • Tape/LTO solutions
  • Cloud drives

We cover data transfer and egress costs and facilitate the migration to Backblaze B2 Cloud Storage. The turnkey service expands on our earlier Cloud to Cloud Migration services as well as transfers via the internet and the Backblaze Fireball rapid ingest devices. These offerings are now rolled up into one universal service.

“I thought moving my files would be the hardest part of the process, and it’s why I never really thought about switching providers before, but it was easy.”
—Tristan Pelligrino, Co-founder, Motion

We do ask that companies who use the service commit to maintaining at least 10TB in Backblaze B2 for a minimum of one year, but we expect that our cloud storage pricing—a quarter the cost of comparable services—and our interoperability with other cloud services, will keep new customers happy for that first year and beyond.

Outside of specifics that will vary by your unique infrastructure and workflows, migration types include:

  • Cloud to cloud: Reads from public cloud storage or a cloud drive (e.g., Amazon S3 or Google Drive) and writes to Backblaze B2 via inter-cloud bandwidth.
  • On-premises to cloud: Reads from a server, NAS, or SAN and writes to Backblaze B2 over optimized cloud pipes or via Backblaze’s 96TB Fireball rapid ingest device.
  • LTO/tape to cloud: Reads tape media, from reel cassettes to cartridges and more, and writes to Backblaze B2 via a high-speed, direct connection.

Backblaze also supports simple internet transfers for moving files over your existing bandwidth—with multi-threading to maximize speed.
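
Strictly as an illustration—you never need to script any of this, since the Universal Data Migration team handles the transfer for you—here is a minimal Python sketch of what a multi-threaded, internet-based copy into Backblaze B2 via the S3 Compatible API can look like. The endpoint, credentials, and bucket names below are placeholders, not real resources:

# Illustrative sketch only — the Universal Data Migration team runs migrations for you.
# The endpoint, credentials, and bucket names are placeholders.
from concurrent.futures import ThreadPoolExecutor

import boto3

b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # example B2 region endpoint
    aws_access_key_id="YOUR_B2_KEY_ID",
    aws_secret_access_key="YOUR_B2_APPLICATION_KEY",
)
source = boto3.client("s3")  # legacy provider credentials come from your environment

def copy_object(key):
    # Stream one object from the legacy bucket straight into the B2 bucket.
    body = source.get_object(Bucket="legacy-bucket", Key=key)["Body"]
    b2.upload_fileobj(body, "b2-destination-bucket", key)

keys = [
    obj["Key"]
    for page in source.get_paginator("list_objects_v2").paginate(Bucket="legacy-bucket")
    for obj in page.get("Contents", [])
]
# Multiple worker threads keep your existing bandwidth saturated.
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(copy_object, keys))

In practice, the Universal Data Migration team runs and monitors the transfer end to end, so none of this scripting is required on your side.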

How Much Does Universal Data Migration Cost?

Not to sound like a broken record, but this is the best part—the service is entirely free to you. You’ll never receive a bill. Backblaze covers all data transfer fees and any legacy vendor egress or download fees for inbound migrations of 10TB or more with a one-year commitment. It’s pretty cool that we can help save you money; it’s even cooler that we can help more businesses build the tech stacks they want using unconflicted providers to truly get the most out of their data.

Fortune Media Reduces Storage Costs by Two-thirds With Universal Data Migration

 
After divesting from its parent company, Fortune Media rebuilt its technology infrastructure and moved many services, including data storage, to the cloud. However, the initial tech stack was expensive, difficult to use, and not 100% reliable.

Backblaze B2 offered a more reliable and cost-effective solution for both hot cloud storage and archiving. In addition, the platform’s ease of use would give Fortune’s geographically-dispersed video editors a modern, self-service experience, and it was easier for the IT team to manage.

Using Backblaze’s Cloud to Cloud Migration, now part of Universal Data Migration, the team transferred over 300TB of data from their legacy provider in less than a week with zero downtime, business disruption, or egress costs, and was able to cut overall storage costs by two-thirds.

“In the cloud space, the biggest complaint that we hear from clients is the cost of egress and storage. With Backblaze, we saved money on the migration, but also overall on the storage and the potential future egress of this data.”
—Tom Kehn, Senior Solutions Architect at CHESA, Fortune Media’s technology systems integrator

Even More Benefits

What else do you get with Universal Data Migration? Additional benefits include:

  • Truly universal migrations: Secure data mobility from practically any source.
  • Support along the way: Simple, turnkey services with solution engineer support to help ensure easy success.
  • Safe and speedy transfers: Proven to securely transfer millions of objects and petabytes of data, often in just days.

Ready to Get Started?

The Universal Data Migration service is generally available now. To qualify, organizations must migrate and commit to maintaining at least 10TB in Backblaze B2 for a minimum of one year. For more information or to set up a free proof of concept, contact the Backblaze Sales team.

The post Announcing Backblaze B2’s Universal Data Migration appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Partner API: The Details

Post Syndicated from Elton Carneiro original https://www.backblaze.com/blog/backblaze-partner-api-the-details/

Last week, we announced enhancements to our partner program that make working with Backblaze even easier for current and future partners. We shared a deep dive into the first new offering—Backblaze B2 Reserve—on Friday, and today, we’re digging into another key element: the Backblaze Partner API. The Backblaze Partner API enables independent software vendors (ISVs) participating in Backblaze’s Alliance Partner program to add Backblaze B2 Cloud Storage as a seamless backend extension within their own platform.

Read on to learn more about the Backblaze Partner API and what it means for existing and potential Alliance Partners.

What Is the Backblaze Partner API?

With the Backblaze Partner API, ISVs participating in Backblaze’s Alliance Partner program can programmatically provision accounts, run reports, and create a bundled solution or managed service which employs B2 Cloud Storage on the back end while delivering a unified experience to their users.

By unlocking an improved customer experience, the Partner API allows Alliance Partners to build additional cloud services into their product portfolio to generate new revenue streams and/or grow existing margin.

Why Develop the Backblaze Partner API?

We heard frequently from our existing partners that they wanted to provide a more seamless experience for their customers when it came to offering a cloud storage tier. Specifically, they wanted to keep customers on their site rather than requiring them to go elsewhere as part of the sign up experience. We built the Partner API to deliver this enhanced customer experience while also helping our partners extend their services and expand their offerings.

“Our customers produce thousands of hours of content daily, and, with the shift to leveraging cloud services like ours, they need a place to store both their original and transcoded files. The Backblaze Partner API allows us to expand our cloud services and eliminate complexity for our customers—giving them time to focus on their business needs, while we focus on innovations that drive more value.”
—Murad Mordukhay, CEO, Qencode

What Does the Partner API Do, Specifically?

To create the Backblaze Partner API, we exposed existing functionality to allow partners to automate tasks like creating and ejecting member accounts, managing Groups, and leveraging system-generated reports to get granular billing and usage information—outlining user tasks individually so users can be billed more accurately for what they’ve used.

The API calls are:

  • Account creation (adding Group members).
  • Organizing accounts in Groups.
  • Listing Groups.
  • Listing Group members.
  • Ejecting Group members.

Once the Partner API is configured, developers can use the Backblaze S3 Compatible API or the Backblaze B2 Native API to manage Group members’ Backblaze B2 accounts, including: uploading, downloading, and deleting files, as well as creating and managing the buckets that hold files.
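
As a simple illustration of that last piece, a partner’s service could manage a member’s storage with a few standard calls against the S3 Compatible API. This is a minimal sketch, not Partner API code; the endpoint, credentials, and bucket name are placeholders:

# Minimal sketch: day-to-day storage operations for a Group member via the
# Backblaze S3 Compatible API. Endpoint, keys, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # varies by region
    aws_access_key_id="MEMBER_KEY_ID",
    aws_secret_access_key="MEMBER_APPLICATION_KEY",
)

s3.create_bucket(Bucket="member-media-bucket")                       # create a bucket for the member
s3.upload_file("render.mp4", "member-media-bucket", "projects/render.mp4")
s3.download_file("member-media-bucket", "projects/render.mp4", "render-copy.mp4")
s3.delete_object(Bucket="member-media-bucket", Key="projects/render.mp4")

Account provisioning, Group management, and usage reporting themselves go through the Partner API calls listed above.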

How to Get Started With the Backblaze Partner API

If you’re familiar with Backblaze, getting started is straightforward:

  1. Create a Backblaze account.
  2. Enable Business Groups and B2 Cloud Storage.
  3. Contact Sales for access to the API.
  4. Create a Group.
  5. Create an Application Key and set up Partner API calls.

Check out our documentation for more detailed information on getting started with the Backblaze Partner API. You can also reach out to us via email at any time to schedule a meeting to discuss how the Backblaze Partner API can help you create an easier customer experience.

The post Backblaze Partner API: The Details appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze B2 Reserve: The Details

Post Syndicated from Elton Carneiro original https://www.backblaze.com/blog/backblaze-b2-reserve-the-details/

Yesterday, we announced enhancements to our partner program that make working with Backblaze even easier for current and prospective partners. Today, we’re digging into a key offering from that program: Backblaze B2 Reserve. Backblaze B2 Reserve brings more value to the Backblaze community of Channel Partners and opens up our easy, affordable cloud storage to many more.

Read on to learn more about Backblaze B2 Reserve and what it means for existing and potential Channel Partners.

What Is Backblaze B2 Reserve?

Predictable, affordable pricing is our calling card, but for a long time our Channel Partners have had a harder time than other customers when it came to accessing this value. Backblaze B2 Reserve brings them a capacity-based, annualized SKU which works seamlessly with channel billing models. The offering also provides seller incentives, Tera-grade support, and expanded migration services to empower the channel’s acceleration of cloud storage adoption and revenue growth.

Why Launch Backblaze B2 Reserve?

Short story, we heard a lot of feedback from our partners about how much they loved working with Backblaze B2 Cloud Storage, except for the service’s pricing model—which limited their ability to promote it to customers. Backblaze B2 is charged on a consumption-based model, meaning you only pay for what you use. This works great for many of our customers who value pay-as-you-go pricing, but not as well for those who value fixed, predictable, monthly or annual bills.

Customers who are more accustomed to planning for storage provisioning want to pay for cloud storage on a capacity-based model similar to how they would for on-premises storage. They buy what they expect to use up front, and their systems and processes are set up to utilize storage in that way. Additionally, the partners who include Backblaze B2 as part of packages they sell to their customers wanted predictable pricing to make things easier in their sales processes.

Backblaze B2 Reserve is a pricing package built to answer these needs—serving the distributors and value-added resellers who want to be able to present B2 Cloud Storage to their current and prospective customers.

How Does Backblaze B2 Reserve Work?

The Backblaze B2 Reserve offering is capacity-based, starting at 20TB, with key features, including:

  • Free egress up to the amount of storage purchased per month.
  • Free transaction calls.
  • Enhanced migration services.
  • No delete penalties.
  • Tera support.

A customer can purchase more storage by buying 10TB add-ons. If you’re interested in participating or just want to learn more, you can reach out to us via email to schedule a meeting.

How Is Backblaze B2 Reserve Different From Backblaze B2?

The main difference between Backblaze B2 Reserve and Backblaze B2 is the way the service is packaged and sold. Backblaze B2 uses a consumption model—you pay for what you use. Backblaze B2 Reserve uses a capacity model—you pay for a specific amount of storage up front.

“Backblaze’s ease and reliability, paired with their price leadership, has always been attractive, but having their pricing aligned with our business model will bring them into so many more conversations we’re having across the types of customers we work with.”
—Mike Winkelmann, Cinesys-Oceana

Ready to Get Started?

If you’re going to NAB, April 23-27th, we’ll be there, and we’d love to see you—click here to book a meeting. We’ll also be at the Channel Partners Conference, April 11-14th. Otherwise, reach out to us via email to schedule a chat. Let’s talk about how the new program can move your business forward.

The post Backblaze B2 Reserve: The Details appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Announcing Partner Program Enhancements

Post Syndicated from Elton Carneiro original https://www.backblaze.com/blog/announcing-partner-program-enhancements/

Here at Backblaze, we can definitively say that we get by with a little (okay, a lot of) help from our friends. We’ve always been committed to building an open, transparent, and interoperable ecosystem which has helped us grow an incredible partner network. We provide easy, affordable, and trusted cloud storage as a neutral partner, and they provide all manner of services, products, and solutions that use our storage. But there’s always room for improvement, right?

Which is why, today, we’re enhancing our partner program with two major new offerings:

  • Backblaze B2 Reserve: A predictable, capacity pricing model to empower our Channel Partners.
  • Backblaze Partner API: A new API that empowers our Alliance Partners to easily integrate and manage B2 Cloud Storage within their products and platforms.

Read on to learn a bit more about each component, and stay tuned throughout this week and next for deeper dives into each element.

Capacity Pricing With Backblaze B2 Reserve

Backblaze B2 Reserve is a new offering for our Channel Partners. Predictable, affordable pricing is our calling card, but for a long time our Channel Partners had a harder time than other customers when it came to accessing this value. Backblaze B2 Reserve brings them a capacity-based, annualized SKU which works seamlessly with channel billing models. The offering also provides seller incentives, Tera-grade support, and expanded migration services to empower the channel’s acceleration of cloud storage adoption and revenue growth.

The key benefits include:

  • Enhanced margin opportunity and a predictable pricing model.
  • Easier conversations with customers accustomed to an on-premises or capacity model.
  • Discounts and seller incentives.

The program is capacity based, starting at 20TB, with key features, including:

  • Free egress up to the amount of storage purchased per month.
  • Free transaction calls.
  • Enhanced migration services.
  • No delete penalties.
  • Tera support.

It’s all of the same great functionality folded in. Partners get more margin, seller incentives, and a predictable growth model for customers.

“Backblaze’s ease and reliability, paired with their price leadership, has always been attractive, but having their pricing aligned with our business model will bring them into so many more conversations we’re having across the types of customers we work with.”
—Mike Winkelmann, Owner of CineSys Inc.

User Management and Usage Reporting With the Backblaze Partner API

The Backblaze Partner API empowers independent software vendors participating in Backblaze’s Alliance Partner Program to add Backblaze B2 Cloud Storage as a seamless backend extension within their own platform, where they can programmatically provision accounts, run reports, and create a bundled solution or managed service for a unified user experience. By unlocking an improved customer experience for the partner, the Partner API allows Alliance Partners to build additional cloud services into their product portfolio to generate new revenue streams and/or grow existing margin.

Features of the Partner API include:

  • Account provisioning.
  • Managing a practically unlimited number of accounts or groups.
  • Comprehensive usage reporting.

In using the Partner API, partners can offer a proprietary branded, bundled solution with a unified bill, or create a solution that is “Powered by Backblaze B2.”

“Our customers produce thousands of hours of content daily, and, with the shift to leveraging cloud services like ours, they need a place to store both their original and transcoded files. The Backblaze Partner API allows us to expand our cloud services and eliminate complexity for our customers—giving them time to focus on their business needs, while we focus on innovations that drive more value.”
—Murad Mordukhay, CEO at Qencode

Other Benefits

To unlock the value inherent in Backblaze B2 Reserve and the Partner API, Backblaze is offering free migration to help customers painlessly copy or migrate their data from practically any source into B2 Cloud Storage.

This service supports truly free data mobility without complexity or downtime, including coverage of all data transfer costs and any egress fees charged by legacy vendors. Stay tuned for more on this feature that benefits both partners and all of our customers.

The addition of Tera support brings the benefit of a four-hour target response time for email support and named customer contacts to ensure that partners and their end users can troubleshoot at speed.

What’s Next?

These are the first of many features and programs that Backblaze will be rolling out this year to make our partners’ experience working with us better. Tomorrow, we’ll dive deeper into the Backblaze B2 Reserve offering. On Monday, we’ll offer more detail on the Backblaze Partner API feature. In the coming months, we’ll be sharing even more. Stay tuned.

Want to Learn More?

Reach out to us via email to schedule a meeting. If you’re going to the Channel Partners Conference, April 11–14th, we’ll be there and we’d love to see you! If not, reach out and we’d be happy to start a conversation about how the new program can move your business forward.

The post Announcing Partner Program Enhancements appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Bring Your File System to the Cloud With Backblaze + CTERA

Post Syndicated from Jennifer Newman original https://www.backblaze.com/blog/bring-your-file-system-to-the-cloud-with-backblaze-ctera/

You know your file system. You love your file system (suspend your disbelief for a moment). You can find what you need (mostly). The hierarchy makes sense (kind of). Anyway—the point is, it’s convenient and it works.

You also know object storage offers scalability and cost savings that you’d love to tap into to either replace or extend your on-premises file servers or NAS. But how do you move your data to the cloud safely and securely without overhauling your entire file system to work with object storage?

Through a new partnership with CTERA, you can extend your corporate file system to Backblaze B2 Cloud Storage while maintaining all of your existing permissions, file structures, and security protocols.

➔ Sign Up for the Webinar

The joint solution unlocks meaningful opportunities for small and medium-sized enterprises. Through the partnership, you can:

  • Store all of your unstructured data in one centralized place, while maintaining instant and reliable access.
  • Retire legacy file servers and NAS devices along with the time and expense required to maintain them.
  • Empower a remote workforce and multi-site collaboration.
  • Establish a resilient disaster recovery, business continuity, and ransomware mitigation plan.
  • Optimize your budget with pay-as-you-go cloud storage pricing that’s a quarter the price of equivalent offerings.

“If you’re tired of buying new equipment every three years; replacing hard drives; paying for maintenance, or power, or space in a data center; and all of the headaches of managing remote user access, then the CTERA and Backblaze partnership is perfect for you. The setup is incredibly easy and the immediate budget and staffing relief will give you resources to tackle new opportunities. You’ll never have to—or want to—upgrade your NAS again.”
—Nilay Patel, VP of Sales, Backblaze

How It Works

CTERA’s Enterprise File Services Platform, through their core global file system technology, extends the capabilities of traditional NAS and file servers to the cloud. The joint solution provides fast local file access via cached files in CTERA’s Edge Filers, while storing the primary copy of your data in hot storage with Backblaze B2 for just $5/TB per month with a 99.9% uptime SLA.

Users across your enterprise have access to shared files via their Windows, Mac, or mobile devices, and data generated at the edge is automatically backed up to Backblaze B2 and accessible across your organization. The flexibility and extensibility of the Backblaze B2 + CTERA partnership consolidates your remote IT infrastructure and reduces the burden on your IT team to manage remote devices.

For customers that have distributed sites across the U.S. and EU, regions can be defined within CTERA and Backblaze B2 to ensure data is GDPR compliant. CTERA also offers an SDK for developers, enabling them to automatically configure Edge Filers and synchronize to Backblaze B2 Cloud Storage using infrastructure as code.

“We’re seeing a massive shift from traditional NAS to cloud NAS, from edge to core access, as organizations evolve and expand. CTERA is committed to providing the widest choice of cloud to our customers. The Backblaze-CTERA partnership establishes a compelling, new, cost-effective storage option for companies that wish to tier their data to the cloud for redundancy and collaboration.”
—Oded Nagel, Chief Strategy Officer, CTERA

About CTERA

CTERA is the edge-to-cloud file services leader, powering more than 50,000 connected sites and millions of corporate users. CTERA offers the industry’s most feature-rich global file system, enabling enterprises to centralize file access from any edge location or device without compromising performance or security. The CTERA Enterprise File Services Platform makes it easy for organizations to consolidate legacy NAS, backup and disaster recovery systems, and collaboration platforms while reducing costs by up to 80% versus legacy solutions. CTERA is trusted by the world’s largest companies, including McDonald’s, GE, Unilever, and Live Nation, as well as the U.S. Department of Defense and other government organizations worldwide.

Interested in Learning More?

Join us for a webinar on April 27, 2022 at 8 a.m. PT/11 a.m. ET to discover how to tier your file structure intelligently into Backblaze B2 with CTERA—register here. Anyone interested in exploring the solution today can check out the CTERA Solution Brief.

The post Bring Your File System to the Cloud With Backblaze + CTERA appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Scale a Storage-heavy Startup

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/how-to-scale-a-storage-heavy-startup/

No developer likes to feel stuck with a cloud vendor, especially when you’re paying for the trap. Being locked in because the expense of moving is prohibitive or the risk of service disruption is too great can be singularly frustrating. Unfortunately, Gavin Wade, Founder and CEO of CloudSpot, found himself in exactly that position.

Gavin shared how he restructured his cloud architecture (containing more than 700TB of images under management), enabling him to:

  • Cut storage costs in half.
  • Slow the rate at which storage costs compound by half.
  • Cut data transfer costs by 90%.
  • Lower breakeven customer volume.
  • Increase margins.

What Is CloudSpot?

CloudSpot is a software as a service (SaaS) business platform based in Irvine, CA that makes professional photographers’ lives easier. The CloudSpot app allows photographers to deliver images digitally to clients in beautiful galleries through a seamless system.

Amazon Web Services gave CloudSpot free credits for storage. When those credits ran out, CloudSpot would have happily left to avoid escalating storage costs but felt trapped by high egress fees.

“We had a few internal conversations where we concluded that we were stuck with Amazon. That’s never a good feeling in business.”
—Gavin Wade, Founder & CEO, CloudSpot

How CloudSpot Solved for Escalating Storage Costs

In the short term, CloudSpot’s development team, led by their vice president of engineering, took a few key steps to manage costs:

  1. They untangled their monolithic system into a cluster of microservices.
  2. They moved to a Kubernetes environment where images upload directly to storage, then CloudSpot’s microservices query the data they need after the fact (a sketch of this direct-to-storage pattern follows below).
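
One common way to implement that direct-to-storage upload pattern is with presigned URLs. The sketch below is purely illustrative—it is not CloudSpot’s actual code—and the endpoint, credentials, bucket, and key are placeholders:

# Illustrative only — not CloudSpot's implementation. A backend service hands the
# browser a short-lived presigned URL so the image uploads straight to object storage
# instead of passing through the application servers.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # placeholder region endpoint
    aws_access_key_id="APP_KEY_ID",
    aws_secret_access_key="APP_APPLICATION_KEY",
)

upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "gallery-uploads", "Key": "shoots/1234/img_0001.jpg"},
    ExpiresIn=900,  # valid for 15 minutes
)
# The client PUTs the file to upload_url; services query the object later as needed.

Because the browser talks to storage directly, the application servers never have to proxy image bytes, which keeps the upload path lightweight.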

The transition to microservices made their infrastructure more nimble, but Gavin still had to reluctantly cut key promotional offers like free migration for prospective customers in order to maintain margins.

When Cost-cutting Measures Still Don’t Suffice

Even after optimizing workflows, storage costs continued to snowball. Namely:

  • The service grew—customers were uploading five times the previous year’s volume.
  • Gavin wanted to position the company for triple-digit growth in the upcoming year.

They decided to move their production data to Backblaze B2 Cloud Storage. The potential ROI of switching to Backblaze B2 was too substantial to ignore for a data-heavy startup, and Backblaze’s Cloud to Cloud Migration service allowed them to move 700TB of data in one day with zero transfer fees.

Migrating Storage Clouds Without Service Disruption

CloudSpot’s data is accessed frequently, and the CloudSpot development team had to make sure customers saw no disruptions. To do so, they supported both environments—on Amazon S3 and Backblaze B2—simultaneously for one week to ensure everything was working. Then, they disabled uploads to Amazon S3 and redirected new uploads to Backblaze B2.

“It was like changing the tires on a car while it’s flying down the road at 100 mph,” but a change that resulted in no loss of operational efficiency, speed, or reliability.
—Gavin Wade, Founder & CEO, CloudSpot

Cloud to Cloud Migration Is Not Out of Reach

Like many developers, Gavin thought he was trapped in a walled garden with Amazon S3. Improving CloudSpot’s workflows unlocked the switch to Backblaze B2, enabling CloudSpot to:

  • Structure workflows using best-of-breed providers.
  • Reintroduce free migration.
  • Grow margins.
  • Demonstrate savvy decision-making to future investors.

“Software margins are expected to be high. If you can take a big cut of that, it allows you to scale more rapidly. It just makes our story so much better, especially as a SaaS business looking to scale, grow, and raise capital.”
—Gavin Wade, Founder & CEO, CloudSpot

Unlocking Capacity to Scale With Backblaze B2

Read more about how CloudSpot overcame vendor lock-in to realize exponential growth, and check out our Cloud to Cloud Migration offer and partners—we’ll pay for your data transfer if you need to move more than 10TB out of Amazon.

The post How to Scale a Storage-heavy Startup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Multi-cloud Backups Outage-proof Data

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/how-multi-cloud-backups-outage-proof-data/

Bob Graw, Director of IT for CallTrackingMetrics, a leading marketing automation platform, put words to a concern that’s increasingly troubling for businesses of all sizes: “If the data center gets wiped out, what happens to our business?”

Lately, high-profile cloud outages have been happening at a regular clip, and the only thing you can count on is that systems will fail at some point. Savvy IT leaders work from that assumption regardless of their primary cloud provider, because they know redundant fail-safes should always be part of the plan.

Bob shared how CallTrackingMetrics outage-proofed their data by establishing a vendor-independent, multi-cloud system. Read on to learn how they did it.

About CallTrackingMetrics

CallTrackingMetrics is a conversation analytics platform that enables marketers to drive data-backed advertising strategies, track every conversion, and optimize ad spend. Customers can discover which marketing campaigns are generating leads and conversions, and use that data to automate lead flows and create better buyer experiences—across all communication channels. More than 100,000 users around the globe trust CallTrackingMetrics to align their sales and marketing teams around every customer touchpoint.

CallTrackingMetrics has been recognized in Inc. Magazine’s 5000™ list of fastest-growing private companies and best places to work, and as a leader on G2 and Gartner for call tracking and marketing attribution software.

Multi-cloud Storage Protects Data From Disasters

As CallTrackingMetrics’ data grew over the years, so did their data backups. They stored more than a petabyte of backups with one vendor. It was a strategy they grew less comfortable with over time. Their concerns included:

  • Operating with a single point of failure.
  • Becoming locked in with one vendor.
  • Maintaining compliance with data regulations.
  • Diversifying their storage infrastructure within budget.

CallTrackingMetrics generates 3TB of backups daily from the volume of data gathered through their platform.

Multi-cloud Solves for a Single Point of Failure

Bob had thought about diversifying CallTrackingMetrics’ storage infrastructure for years, and as outages continued apace, he decided to pull the trigger. He sought out an additional storage vendor where CallTrackingMetrics could store a redundant copy of their backups for disaster recovery and business continuity. “You should always have at least two viable, ready to go backups. It costs real money, but if you can’t come back to life in a disaster scenario, it is basically game over,” Bob explained.

They planned to mirror data from their diversified cloud provider in Backblaze B2, creating a robust multi-cloud strategy. With data backups in two places, they would be better protected from outages and disasters.

“We trust the Backblaze technology. I’d be very surprised if I ever lost data with Backblaze.”
—Bob Graw, Senior Software Engineer, CallTrackingMetrics

Multi-cloud Solves for Vendor Lock-in

Diversifying storage providers came with the added benefit of solving for vendor lock-in. “We did not want to be stuck with one cloud provider forever,” Bob said. Addressing storage was only one part of that strategy, though. They also intentionally avoided using their diversified cloud vendor’s proprietary services, like managed Elasticsearch and databases, to keep their data portable. That way, “We could really take our system to anybody that provides compute or storage, and we would be fine,” Bob said.

Solving for Compliance

Some of CallTrackingMetrics’ clients work in highly regulated industries, so compliance with regulations like HIPAA and GDPR was important for them to maintain when searching for a new storage provider. Those regulations stipulate how data is stored, the security protocols in place to protect data, and data retention requirements. “We feel like Backblaze is very secure, and we rely on Backblaze being secure so that our backups are safe and we stay compliant,” Bob said.

CallTrackingMetrics’ analytics console.

Solving for Budget Concerns

Bob and CallTrackingMetrics Founder, Todd Fisher, had both used Backblaze Personal Backup for years, so that familiarity opened the door. But the Backblaze Cloud to Cloud Migration service sealed the deal. Due to high data transfer fees, “Getting out of our previous provider isn’t cheap,” Bob said. But with data transfer fees covered through the Cloud to Cloud Migration service, they addressed one of their primary concerns—staying within budget. On top of an easy migration, after making the switch, Bob saw an immediate 50% savings on his storage bill thanks to Backblaze’s affordable, single-tier pricing.

“With Backblaze, the benefits are threefold: We like the cost savings, we like that our eggs aren’t all in one basket, and Backblaze is super simple to use.”
—Bob Graw, Senior Software Engineer, CallTrackingMetrics

Cloud Storage Helps Leading Marketing Platform Level Up

Now that CallTrackingMetrics has a multi-cloud system to protect their data from outages, they can focus on solving for the next challenge—getting better visibility into their overall cloud usage. With the savings they recouped from moving backups to Backblaze B2, Bob was able to invest in Lacework, software that helps with automation to protect their multi-cloud environment.

Thinking about going multi-cloud but worried about the cost of transferring your data? Check out our Cloud to Cloud Migration service and get started today—the first 10GB are free.

“Backblaze is our ultimate security blanket. We know our big pile of data is safe and sound.”
—Bob Graw, Senior Software Engineer, CallTrackingMetrics

The post How Multi-cloud Backups Outage-proof Data appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Media Transcoding With Backblaze B2 and Vultr Optimized Cloud Compute

Post Syndicated from Pat Patterson original https://www.backblaze.com/blog/media-transcoding-with-backblaze-b2-and-vultr-optimized-cloud-compute/

Since announcing the Backblaze + Vultr partnership last year, we’ve seen our mutual customers build a wide variety of applications combining Vultr’s Infrastructure Cloud with Backblaze B2 Cloud Storage, taking advantage of zero-cost data transfer between Vultr and Backblaze. This week, Vultr announced Optimized Cloud Compute instances, virtual machines pairing dedicated best-in-class AMD CPUs with just the right amount of RAM and NVMe SSDs.

To mark the occasion, I built a demonstration that both showcases this new capability and gives you an example application to adapt to your own use cases.

Imagine you’re creating the next big video sharing site—CatTube—a spin-off of Catblaze, your feline-friendly backup service. You’re planning all sorts of amazing features, but the core of the user experience is very familiar:

  • A user uploads a video from their mobile or desktop device.
  • The user’s video is available for viewing on a wide variety of devices, from anywhere in the world.

Let’s take a high-level look at how this might work…

Transcoding Explained: How Video Sharing Sites Make Videos Shareable

The user will upload their video to a web application from their browser or a mobile app. The web application must store the uploaded user videos in a highly scalable, highly available service—enter Backblaze B2 Cloud Storage. Our customers store, in the aggregate, petabytes of media data including video, audio, and still images.

But, those videos may be too large for efficient sharing and streaming. Today’s mobile devices can record video with stunning quality at 4K resolution, typically 3840 × 2160 pixels. While 4K video looks great, the issue is that even with compression, it’s a lot of data—about 1MB per second. Not all of your viewers will have that kind of bandwidth available, particularly if they’re on the move.

So, CatTube, in common with other popular video sharing sites, will need to convert raw uploaded video to one or more standard, lower-resolution formats, a process known as transcoding.

Transcoding is a very different workload from running a web application’s backend. Where an application server requires high I/O capability but relatively little CPU power, transcoding is extremely CPU-intensive. You decide that you’ll need two sets of machines for CatTube—application servers and workers. The worker machines can be optimized for the transcoding task, taking advantage of the fastest available CPUs.

For these tasks, you need appropriate cloud compute instances. I’ll walk you through how I implemented CatTube as a very simple video sharing site with Backblaze B2 and Vultr’s Infrastructure Cloud using Vultr’s Cloud Compute instances for the application servers and their new Optimized Cloud Compute instances for the transcoding workers.

Building a Video Sharing Site With Backblaze B2 + Vultr

The video sharing example comprises a web application, written in Python using the Django web framework, and a worker application, also written in Python, but using the Flask framework.

Here’s how the pieces fit together:

  1. The user uploads a video from their browser to the web app.
  2. The web app uploads the raw video to a private bucket on Backblaze B2.
  3. The web app sends a message to the worker instructing it to transcode the video.
  4. The worker downloads the raw video to local storage and transcodes it, also creating a thumbnail image.
  5. The worker uploads the transcoded video and thumbnail to Backblaze B2.
  6. The worker sends a message to the web app with the addresses of the input and output files in Backblaze B2.
  7. Viewers around the world can enjoy the video.

These steps are illustrated in the diagram below.


There’s a more detailed description in the Backblaze B2 Video Sharing Example GitHub repository, as well as all of the code for the web application and the worker. Feel free to fork the repository and use the code as a starting point for your own projects.

Here’s a short video of the system in action:

Some Caveats:

Note that this is very much a sample implementation. The web app and the worker communicate via HTTP—this works just fine for a demo, but doesn’t account for the worker being too busy to receive the message. Nor does it scale to multiple workers. In a production implementation, these issues would be addressed by the components communicating via an asynchronous messaging system such as Kafka. Similarly, this sample transcodes to a single target format: 720p. A real video sharing site would transcode the raw video to a range of formats and resolutions.
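
For illustration, handing a job to a message broker rather than POSTing it to a single worker might look roughly like the following. This assumes a Kafka cluster and the kafka-python client; the broker address and topic name are placeholders:

# Illustrative only: queueing a transcode job instead of calling one worker over HTTP.
# Assumes a Kafka cluster and the kafka-python package; broker and topic are placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Any available worker in the consumer group can pick this up, so the pool scales out.
producer.send("transcode-jobs", {"bucket": "cattube-raw-videos", "key": "uploads/cat.mp4"})
producer.flush()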

Want to Try It for Yourself?

Vultr’s new Cloud Compute Optimized instances are a perfect match for CPU-intensive tasks such as media transcoding. Zero-cost ingress and egress between Backblaze B2 and Vultr’s Infrastructure Cloud allow you to build high performance, scalable applications to satisfy a global audience. Sign up for Backblaze B2 and Vultr’s Infrastructure Cloud today, and get to work!

The post Media Transcoding With Backblaze B2 and Vultr Optimized Cloud Compute appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Run VFX Workflows in the Cloud

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/how-to-run-vfx-workflows-in-the-cloud/

An hour from Queens. An hour from Jersey. Two hours from Staten Island. That’s how long it would take Molecule VFX IT staff to travel from their homes to the closet in Manhattan that housed the team’s LTO device. All those hours, just to spend five minutes switching out one tape.

It was a huge waste of time, not to mention subway fares. The hassle of tape wasn’t the only reason Molecule decided to make their production workflows fully cloud-based, but the IT team certainly doesn’t mind skipping that trip these days.

Moving production entirely to the cloud allowed Molecule to unlock the value of their artists’ time as well as the IT staff to support them, and save money in the process. If your media team has been contemplating a fully cloud-based workflow, read on to learn how Molecule did it—including how they managed to maintain the ability to move data from the cloud back to tape on demand without maintaining on-premises tape infrastructure.

About Molecule VFX

Molecule VFX is a visual effects studio based in New York and Los Angeles that provides the elemental building blocks to tell a customer’s story. They have been servicing episodic television and feature films, like the Apple TV series, “Dickinson,” and the Hulu series, “Only Murders in the Building,” since 2005.

Molecule’s Case for the Cloud

Visual effects artists want to be able to hop into a new script, work on it, render it, review it, QC it, and call it done. Their work is the most valuable element of the business. Anything that gets in the way of that or slows down the workflow directly impacts the company’s success, and an on-premises system was doing exactly that.

  • With IT staff working from home, LTO maintenance tied them up for hours—time that could have been spent helping Molecule’s visual effects artists create.
  • Beyond tape, the team managed a whole system of machines, networks, and switches. Day-to-day issues could knock out the company’s ability to get work done for entire days.

They knew moving to the cloud would optimize staff time and mitigate those outages, but it didn’t happen overnight. Because much of their business already happens in the digital workspace, Molecule had been slowly moving to the cloud over the past few years. The shift to remote work due to the COVID-19 pandemic accelerated their transition.

Work from the Amazon Original Movie, “Bliss,” featuring Owen Wilson.

Strategies for Moving VFX Workflows to the Cloud

Molecule’s Full Stack Software Architect, Ben Zenker, explained their approach. Through the process, he identified a few key strategies that made the transition a success, including:

  • Taking a phased approach while deciding between hybrid and fully cloud-based workflows.
  • Reading the fine print when comparing providers.
  • Rolling their own solutions where possible.
  • Thoroughly testing workflows.
  • Repurposing on-premises infrastructure.

1. Take a Phased Approach

Early in the transition, the Molecule team was still using the tape system and an on-premises Isilon server for some workloads. Because they were still deciding if they were going to have a hybrid system or go fully cloud, they took an ad hoc approach to identifying what data was going to be in Backblaze B2 Cloud Storage and what production infrastructure was going to be in CoreWeave, a cloud compute partner that specializes in VFX workloads. Ben explained, “Once we decided definitively we wanted to be fully in the cloud, connecting CoreWeave and Backblaze was simple—if it was on CoreWeave, it was getting backed up in Backblaze B2 nightly.”

2. Read the Fine Print

The team planned to sync incremental backups to the cloud every night. That meant their data would change every day as staff deleted or updated files. They figured out early on that retention minimums were a non-starter. Some cloud providers charge for deleted data for 30, 60, or even 90 days, meaning Molecule would be forced to pay for storage on data they had deleted months ago. But not all cloud providers are transparent about their retention policies. Molecule took the time to track down these policies and compare costs.

“Backblaze was the only service that met our business requirements without a retention minimum.”
—Ben Zenker, Full Stack Software Architect, Molecule VFX

3. Roll Your Own Solutions Where Possible

The team creates a lot of their own web tools to interact with other technology, so it was a relatively easy lift to set up rclone commands to run nightly syncs of their production data to Backblaze B2. Using rclone, they also built a variable price reporting tool so that higher-ups could easily price out different projects and catch potential problems like a runaway render.

“There are hundreds of options that you can pass into rclone, so configuring it involved some trial and error. Thankfully it’s open-source, and Backblaze has documentation. I made some small tweaks and additions to the tool myself to make it work better for us.”
—Ben Zenker, Full Stack Software Architect, Molecule VFX
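
As a rough illustration of that kind of glue code (not Molecule’s actual tooling), a nightly sync wrapped in a short Python script might look like this—the source path, remote name, bucket, and flag values are placeholders to adapt to your own rclone configuration:

# Rough illustration only — not Molecule's tooling. Wraps a nightly rclone sync of
# production data into a B2 bucket; the path, remote, and bucket names are placeholders.
import subprocess
from datetime import date

result = subprocess.run(
    [
        "rclone", "sync",
        "/mnt/production",                    # placeholder source path
        "b2:nightly-backups",                 # placeholder rclone remote and bucket
        "--transfers", "16",                  # parallel transfers over existing bandwidth
        "--fast-list",                        # fewer listing calls on large directory trees
        "--log-file", f"/var/log/rclone-{date.today()}.log",
    ],
)
if result.returncode != 0:
    raise SystemExit(f"Nightly sync failed with exit code {result.returncode}")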

4. Test and Test Again

In reflecting on the testing phase they went through, Ben acknowledges he could have been more liberal. He noted, “I went into it a little cautious because I didn’t want to end up incurring big charges for a test, but Backblaze has all sorts of safeguards in place. You can set price limits and caps, which was great for the testing period.”

5. Repurpose On-premises Infrastructure

The on-premises Isilon server and the physical tape system are no longer part of the active project workflow. They still utilized those devices to host some core services for a time—a firewall, authentication, and a VPN that some members used. In the end, they decided to fully retire all on-premises infrastructure, but repurposing the on-premises infrastructure allowed them to maximize its useful life.

But What If Clients Demand Tape?

While Molecule is more than happy to have modernized their workflows in the cloud, there are still some clients—and major clients at that—who require that contractors save final projects on tape for long-term storage. It no longer made sense to have staff trained on how to use the LTO system, so when a customer asked for a tape copy, they reached out to Backblaze for advice.

They needed a turnkey solution that they didn’t have to manage, and they definitely didn’t want to have to resort to reinvesting and managing tape hardware. Backblaze partner, TapeArk, fit the bill. TapeArk typically helps clients get data off of tape and into the cloud, but in this case they reversed the process. Molecule sent them a secure token to the exact piece of data they needed. TapeArk managed the download, put it on tape, and shipped it to the client.

If Molecule needs to send tape copies to clients in the future, they have an easy, hands-off solution and they don’t have to maintain an LTO system for infrequent use. Ben was grateful for the partnership and easy solution.

Work from the Apple TV series, “Dickinson,” featuring Hailee Steinfeld.

Cloud Workflows Free Up a Month of Time

Now that the staff no longer has to manage an LTO tape system, the team has recouped at least 30 payroll days a year that can be dedicated to supporting artists. Ben noted that with the workflows in the cloud, the nature of the IT workload has changed, and the team definitely appreciates having that time back to respond to changing demands.

Ready to move your VFX workflows to the cloud? Start testing today with 10GB of data storage free from Backblaze B2.

The post How to Run VFX Workflows in the Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The SSD Edition: 2021 Drive Stats Review

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/ssd-edition-2021-drive-stats-review/

Welcome to the first SSD edition of the Backblaze Drive Stats report. This edition will focus exclusively on our SSDs as opposed to our quarterly and annual Drive Stats reports which, until last year, focused exclusively on HDDs. Initially we expect to publish the SSD edition twice a year, although that could change depending on its value to our readers. We will continue to publish the HDD Drive Stats reports quarterly.

Background

The SSDs in this report are all boot drives in our storage servers. In our early storage servers, we used HDDs exclusively for boot drives. We began using SSDs in this capacity in Q4 of 2018. Since that time, all new storage servers and any with failed HDD boot drives have had SSDs installed. Boot drives in our environment do much more than boot the storage servers, they also store log files and temporary files produced by the storage server. Each day a boot drive will read, write, and delete files depending on the activity of the storage server itself.

Overview

As of December 31, 2021, we were using 2,200 SSDs. As we share various tables and charts below, some of the numbers, particularly the annualized failure rate (AFR) will be very surprising to informed readers. For example, an AFR of 43.22% might catch your attention. We will explain these outliers as we go along. Most are due to the newness of a drive, but we’ll let you know.

As with the HDD reports, we have published the data we used to develop our SSD report. In fact, we have always published this data as it resides in the same files as the HDD data. Now for the bad news: The data does not currently include a drive type field to distinguish SSD from HDD, so you’ll have to do your research by model number. Sorry. You’ll find the links to download the data files on our Drive Stats Test Data webpage. If you are just looking for SSD data, start with Q4 2018 and go forward.

If you are new to our Drive Stats reports, you might wonder why we collect and share this information. It starts with the fact that we have lots of data storage available, over two exabytes to date, for customers using the Backblaze B2 Cloud Storage and Backblaze Computer Backup services. In doing that, we need to have a deep understanding of our environment, one aspect of which is how often drives, both HDDs and SSDs, fail. Starting about seven years ago we decided to share what we learned and shed some light on the previously opaque world of hard drive failure rates. It is only natural that we would be as transparent with SSDs. Read on.

Annual SSD Failure Rates for 2019, 2020, and 2021

At the end of 2021, there were 2,200 SSDs in our storage servers, having grown from zero in Q3 2018. We’ll start by looking at the AFR for the last three years, then dig into 2021 failure rates, and finally, take a look at the quarterly AFR rates since 2019. We’ll explain each as we go.

The chart below shows the failure rates for 2019, 2020, and 2021.

Observations and Comments

  • The data for each year (2019, 2020, and 2021) is inclusive of the activity which occurred in that year.
  • There is an upward direction in the failure rate for 2021. We saw this when we compared our HDD and SSD boot drives in a previous post. When we get to the quarter-by-quarter chart later in this blog post, this trend, as such, will be much clearer.
  • Two drives have eye-popping failure rates—the Crucial model: CT250MX500SSD1 and the Seagate model: ZA2000CM10002. In both cases, the drive days and drive count (not shown) are very low. For the Crucial, there are only 20 drives which were installed in December 2021. For the Seagate, there were only four drives and one failed in early 2021. In both cases, the AFR is based on very little data, which leads to a very wide confidence interval, which we’ll see in the next section. We include these drives for completeness.
  • A drive day denotes one drive in operation for one day. Therefore, one drive in operation for 2021 would have 365 drive days. If a drive fails after 200 days, it will have 200 drive days and be marked as failed. For a given cohort of drives over a specified period of time, we compute the AFR as follows:
     
    AFR = (drive failures / (drive days / 365)) * 100
     
    This provides the annualized failure rate (AFR) over any period of time; a short worked example follows this list.
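
To make the arithmetic concrete, here is a small worked example of that formula in Python; the drive counts are invented for illustration and are not taken from our fleet:

# Worked example of the AFR formula above; the numbers are illustrative, not fleet data.
def afr(drive_failures, drive_days):
    # Annualized failure rate, as a percentage.
    return (drive_failures / (drive_days / 365)) * 100

# 500 drives that ran all year plus 100 drives added mid-year, with 6 failures:
drive_days = 500 * 365 + 100 * 182
print(f"{afr(6, drive_days):.2f}%")  # prints 1.09%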

2021 Annual SSD Failure Rates

Let’s dig into 2021 and add a few more details. The table below is an expanded version of the annual 2021 section from the previous chart.

From the table, it should be clear that the Crucial and Seagate drives with the double-digit AFRs require a lot more data before passing any judgment on their reliability in our environment. This is evidenced by the extremely wide confidence interval for each drive. A respectable confidence interval is less than 1.0%, with 0.6% or less being optimal for us. Only the Seagate model: ZA250CM10002 meets the 1.0% criterion, although the Seagate model: ZA250CM10003 is very close.
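
For readers who want to put an interval around an AFR when exploring the published data themselves, one common approach is an exact Poisson interval on the failure count. This is a general statistical sketch, not necessarily the exact method used to produce the table above:

# General-purpose sketch of an exact Poisson confidence interval around an AFR.
# Not necessarily the method behind the table above; requires scipy.
from scipy.stats import chi2

def afr_confidence_interval(failures, drive_days, conf=0.95):
    # Returns (low, high) AFR bounds, in percent, for the observed failure count.
    alpha = 1 - conf
    drive_years = drive_days / 365
    low = chi2.ppf(alpha / 2, 2 * failures) / 2 if failures > 0 else 0.0
    high = chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / 2
    return (low / drive_years * 100, high / drive_years * 100)

print(afr_confidence_interval(1, 1200))      # one failure over few drive days: very wide interval
print(afr_confidence_interval(30, 500_000))  # many drive days narrow the interval considerably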

Obviously, it takes time to build up enough data to be confident that the drive in question is performing at the expected level. In our case, we expect a 1% to 2% AFR. Anything less is great and anything more bears watching. One of the ways we “watch” is by tracking quarterly results, which we’ll explore next.

Quarterly SSD Failure Rates Over Time

There are two different ways we can look at the quarterly data: over discrete periods of time, e.g., a quarter or year; or cumulative over a period of time, e.g., all data since 2018. Data scoped to quarter by quarter can be volatile or spikey, but reacts quickly to change. Cumulative data shows longer term trends, but is less reactive to quick changes.

Below are graphs of both the quarter-by-quarter and cumulative-by-quarter data for our SSDs beginning in Q1 2019. First we’ll compare all SSDs, then we’ll dig into a few individual drives of interest.

The cumulative curve flows comfortably below our 2% AFR threshold of concern. If we had just followed the quarterly numbers, we might have considered the use of SSDs as boot drives to be problematic, as in multiple quarters the AFR was at or near 3%. That said, the more data the better, and as the SSDs age we’ll want to be even more on alert to see how long they last. We have plenty of data on that topic for HDDs, but we are still learning about SSDs.

With that in mind, let’s take a look at three of the older SSDs to see if there is anything interesting at this point.

Observations and Comments

  • For all of 2021, all three drives have had cumulative AFR rates below 1%.
  • This compares to the cumulative AFR for all SSD drives as of Q4 2021 which was 1.07% (from the previous chart).
  • Extending the comparison, the cumulative (lifetime) AFR for our hard drives was 1.40% as noted in our 2021 Drive Stats report. But, as we have noted in our comparison of HDDs and SSDs, the two groups (SSDs and HDDs) are not at the same point in their life cycles. As promised, we’ll continue to examine that dichotomy over the coming months.
  • The model (ZA250CM10002) represented by the red line seems to be following the classic bathtub failure curve, experiencing early failures before settling down to an AFR below 1%. On the other hand, the other two drives showed no signs of early drive failure and have only recently started failing. This type of failure pattern is similar to that demonstrated by our HDDs which no longer fit the bathtub curve model.

Experiments and Test Drives

If you decide to download the data and poke around, you’ll see a few anomalies related to the SSD models. We’d like to shed some light on these outliers before you start poking around. We’ve already covered the Crucial and Seagate drives that had higher than expected AFR numbers, but there are two other SSD models that don’t show up in this report, but do show up in the data. These are the Samsung 850 EVO 1TB and the HP SSD S700 250GB.

Why don’t they show up in this report? As with our drive stats review for our HDDs, we remove those drives we are using for testing purposes. Here are the details:

  • The Samsung SSDs were the first SSDs to be installed as boot drives. There were 10 drives that were installed to test out how SSDs would work as boot drives. Thumbs up! We had prior plans for these 10 drives in other servers and after about two weeks, the Samsung drives were swapped out with other SSDs and deployed for their original purpose. Their pioneering work was captured in the Drive Stats data for posterity.
  • The HP SSDs were part of testing our internal data migration platform, i.e., moving data from smaller drives to larger drives. These drives showed up in the data in Q3 and Q4 of 2021. Any data related to these drives in Q3 or Q4 is not based on using these drives in our production environment.

What’s Next

We acknowledge that 2,200 SSDs is a relatively small number of drives on which to perform our analysis, and while this number does lead to wider than desired confidence intervals, we had to start somewhere. Of course, we will continue to add SSD boot drives to the study group, which will improve the fidelity of the data presented. In addition, we expect our readers will apply their usual skeptical lens to the data presented and help guide us towards making this report increasingly educational and useful.

We do have SSDs in other types of servers in our environment. For example, restore servers, utility servers, API servers, and so on. We are considering instrumenting the drives in some of those servers so that they can report their stats in a similar fashion as our boot drives. There are multiple considerations before we do that:

  1. We don’t impact the performance of the other servers.
  2. We recognize the workload of the drives in each of the other servers is most likely different. This means we could end up with multiple cohorts of SSD drives, each with different workloads, that may or may not be appropriate to group together for our analysis.
  3. We don’t want to impact the performance of our data center techs to do their job by adding additional or conflicting steps to the processes they use when maintaining those other servers.

The SSD Stats Data

The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. As noted earlier, you’ll find SSD and HDD data in the same files and you’ll have to use the model number to distinguish one record from another. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.

Good luck and let us know if you find anything interesting.

The post The SSD Edition: 2021 Drive Stats Review appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Building a Multiregion Origin Store With Backblaze B2 + Fastly Compute@Edge

Post Syndicated from Pat Patterson original https://www.backblaze.com/blog/building-a-multiregion-origin-store-with-backblaze-b2-fastly-computeedge/

Backblaze B2 Cloud Storage customers have long leveraged our partner Fastly’s Deliver@Edge CDN as an essential component of a modern, scalable web architecture. Complementing Deliver@Edge, Compute@Edge is a serverless computing environment built on the same caching platform to provide a general-purpose compute layer between the cloud and end users. Today, we’re excited to celebrate Fastly’s announcement of its Compute@Edge partner ecosystem.

Serverless computing is quickly gaining popularity among developers for its simplicity, agility, and functionality. In the serverless model, cloud providers allocate resources to applications on demand, managing the compute infrastructure on behalf of their customers. The term, “serverless,” is a little misleading: The servers are actually still there, but customers don’t have to get involved in their provisioning, configuration, maintenance, or scaling.

Fastly’s [email protected] represents the next generation of serverless computing—purpose-built for better performance, reduced latency, and enhanced visibility and security. Using Fastly’s tools, a developer can create an edge application, test it locally, then with one command, deploy it to the [email protected] platform. When a request for that application reaches any of Fastly’s global network of edge servers, the application is launched and running in microseconds and can instantly scale to tens of thousands of requests per second.

It’s difficult to overstate the power and flexibility this puts in your hands as a developer—your application can be running on every edge server, with access to every attribute of its incoming requests, assembling responses in any way you choose. For an idea of the possibilities, check out the Compute@Edge demos, in particular, the implementation of the video game classic, “Doom.”

We don’t have space in a single blog post to explore an edge application of that magnitude, but read on for a simple example of how you can combine Fastly’s Compute@Edge with Backblaze B2 to improve your website’s user experience, directing requests to the optimal origin store end point based on the user’s location.

The Case for a Multiregion Origin Store

Although the CDN caches resources to improve performance, if a requested resource is not present in the edge server cache, it must be fetched from the origin store. When the edge server is close to the origin store, the increase in latency is minimal. If, on the other hand, the edge server is on a different continent from the origin store, it can take significantly longer to retrieve uncached content. In most cases, this additional delay is hardly noticeable, but for websites with many resources that are frequently updated, it can add up to a sluggish experience for users. A solution is for the origin store to maintain multiple copies of a website’s content, each at an end point in a different region. This approach can dramatically reduce the penalty for cache misses, improving the user experience.

There is a problem here, though: How do we ensure that a given CDN edge server directs requests to the “best” end point? The answer: build an application that uses the edge server’s location to select the end point. I’ll explain how I did just that, creating a Fastly Compute@Edge application to proxy requests to Backblaze B2 buckets.

Creating an Application on Fastly Compute@Edge

The Fastly Compute@Edge developer documentation did a great job of walking me through creating a Compute@Edge application. As part of the process, I had to choose a starter kit—a simple working application targeting a specific use case. The Static Content starter kit was the ideal basis for my application—it demonstrates many useful techniques, such as generating an AWS V4 Signature and manipulating the request’s Host HTTP header to match the origin store.

The core of the application is just a few lines written in the Rust programming language:

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // 1. Where is the application running?
    let pop = get_pop(&req);

    // 2. Choose the origin based on the edge server (pop) -
    //    default to US if there is no match on the pop
    let origin = POP_ORIGIN.get(pop.as_str()).unwrap_or(&US_ORIGIN);

    // 3. Remove the query string to improve cache hit ratio
    req.remove_query();

    // 4. Set the `Host` header to the bucket name + host rather than
    //    our Compute@Edge endpoint
    let host = format!("{}.{}", origin.bucket_name, origin.endpoint);
    req.set_header(header::HOST, &host);

    // 5. Copy the modified client request to form the backend request
    let mut bereq = req.clone_without_body();

    // 6. Set the AWS V4 authentication headers
    set_authentication_headers(&mut bereq, &origin);

    // 7. Send the request to the backend and assign its response to `beresp`
    let mut beresp = bereq.send(origin.backend_name)?;

    // 8. Set a response header indicating the origin that we used
    beresp.set_header("X-B2-Host", &host);

    // 9. Return the response to the client
    Ok(beresp)
}

In step one, the get_pop function returns the three-letter abbreviation for the edge server, or point of presence (POP). For the purposes of testing, you can specify a POP as a query parameter in your HTTP request. For example, https://three.interesting.words.edgecompute.app/image.png?pop=AMS will simulate the application running on the Amsterdam POP. Next, in step two, the application looks up the POP in a mapping of POPs to Backblaze B2 end points. There are about a hundred Fastly POPs spread around the world; I simply took the list generated by running the Fastly command-line tool with the POPs argument, and assigned POPs to Backblaze B2 end points based on their location (a sketch of the resulting configuration follows the list below):

  • POPs in North America, South America, and Asia/Pacific map to the U.S. end point.
  • POPs in Europe and Africa map to the EU end point.
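
For illustration, the origin definitions and the POP-to-origin mapping might look something like the sketch below. This is our simplified version using the once_cell crate; the bucket names and endpoints are placeholders, and the real src/config.rs in the repository is organized somewhat differently.

use once_cell::sync::Lazy;
use std::collections::HashMap;

#[derive(Clone, Copy)]
pub struct Origin {
    pub bucket_name: &'static str,
    pub endpoint: &'static str,
    pub backend_name: &'static str,
}

// Placeholder bucket names and endpoints; substitute your own buckets.
pub static US_ORIGIN: Origin = Origin {
    bucket_name: "my-us-bucket",
    endpoint: "s3.us-west-001.backblazeb2.com",
    backend_name: "us_origin",
};

pub static EU_ORIGIN: Origin = Origin {
    bucket_name: "my-eu-bucket",
    endpoint: "s3.eu-central-003.backblazeb2.com",
    backend_name: "eu_origin",
};

// Map each Fastly POP code to the closest origin; any POP not listed here
// falls back to US_ORIGIN in the main request handler.
pub static POP_ORIGIN: Lazy<HashMap<&'static str, Origin>> = Lazy::new(|| {
    HashMap::from([
        ("AMS", EU_ORIGIN), // Amsterdam
        ("LHR", EU_ORIGIN), // London
        ("SJC", US_ORIGIN), // San Jose
        ("NRT", US_ORIGIN), // Tokyo
        // ...and so on for the rest of the POP list.
    ])
});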

I won’t step through the rest of the logic in detail here—the comments in the code sample above cover the basics; feel free to examine the code in detail on GitHub if you’d like a closer look.

Serve Your Own Data From Multiple Backblaze B2 Regions

Fastly provides a Deploy to Fastly button that you can use to create your own copy of the Backblaze B2 Compute@Edge demo application in just a couple of minutes. You’ll need to gather a few prerequisites before you start:

  • You must create Backblaze B2 accounts in both the U.S. and EU regions. If you have an existing account and you’re not sure which region it’s in, just take a look at the end point for one of your buckets; an end point containing "us-west," for example, indicates the bucket is in the U.S. West region.

    To create your second account, go to the Sign Up page, and click the Region drop-down on the right under the big, red Sign Up button.

    Pick the region in which you don’t already have an account, and enter an email and password. Remember, your new account comes with 10GB of storage, free of charge, so there’s no need to enter your credit card details.

    Note: You’ll need to use a different email address from your existing account. If you don’t have a second email address, you can use the plus trick (officially known as sub-addressing) and reuse an existing email address. For example, if you used yourname@gmail.com for your existing B2 Cloud Storage account in the U.S. region, you can use yourname+eu@gmail.com for your new EU account. Mail will be routed to the same inbox, and Backblaze B2 will be satisfied that it’s a different email address. This technique isn’t limited to Gmail, by the way; it works with many email providers.

  • Create a private bucket in each account, and use your tool of choice to copy the same data into each of them. Make a note of the end point for each bucket.
  • Create an application key with read access to each bucket.
  • Sign up for a free Fastly account if you don’t already have one. Right now, this includes free credits for Compute@Edge.
  • Sign up for a free GitHub account.
  • Go to the Backblaze B2/Fastly Compute@Edge Demo GitHub repository, click the Deploy to Fastly button, and follow the prompts. The repository will be forked to your GitHub account and then deployed to Fastly.
  • Important: There is one post-deploy step you must complete before your application will work! In your new GitHub repository, navigate to src/config.rs and hit the pencil icon near the top right to edit the file. Change the origin configuration in lines 18-31 to match your buckets and their end points. Alternatively, you can, of course, clone the repository to your local machine, edit it there, and push the changes back to GitHub.

Once you have your accounts and buckets created, it takes just a few minutes to deploy the application. You can watch me walk through the process in the video embedded in the original post.

What Can You Do With Fastly’s Compute@Edge and Backblaze B2?

My simple demo application only scratches the surface of Compute@Edge. How could you combine Fastly’s edge computing platform with Backblaze B2 to create a new capability for your website? Check out Fastly’s collection of over 100 Compute@Edge code samples for inspiration. If you come up with something neat and share it on GitHub, let me know in the comments and I’ll round up a bundle of Backblaze-branded goodies, just for you!

The post Building a Multiregion Origin Store With Backblaze B2 + Fastly Compute@Edge appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Explore the Backblaze S3 Compatible API With Our New Postman Collection

Post Syndicated from Pat Patterson original https://www.backblaze.com/blog/explore-the-backblaze-s3-compatible-api-with-our-new-postman-collection/

Postman is a platform for building and using APIs. API providers such as Backblaze can use Postman to build API documentation and provide a live environment for developers to experiment with those APIs. Today, you can interact with Backblaze B2 Cloud Storage via our new Postman Collection for the Backblaze S3 Compatible API.

Using the Backblaze S3 Compatible API

The Backblaze S3 Compatible API implements the most commonly used S3 operations, allowing applications to integrate with Backblaze B2 in exactly the same way they do with Amazon S3. Many of our Alliance Partners have used the S3 Compatible API in integrating their products and services with Backblaze B2. Often, integration is as simple as allowing the user to specify a custom endpoint, for example, https://s3.us-west-001.backblazeb2.com, alongside their API credentials in the S3 settings, and verifying that the application works as expected with Backblaze B2.

The Backblaze B2 Native API, introduced alongside Backblaze B2 back in 2015, provides a low-level interface to B2 Cloud Storage. We generally recommend that developers use the S3 Compatible API when writing new applications and integrations, as it is supported by a wider range of SDKs and libraries, and many developers already have experience with Amazon S3. You can use the Backblaze B2 web console or the B2 Native API to access functionality, such as application key management and lifecycle rules, that is not covered by the S3 Compatible API.
 
Our post on the B2 Native and S3 Compatible APIs provides a more detailed comparison.

Most applications and scripts use one of the AWS SDKs or the S3 commands in the AWS CLI to access Backblaze B2. All of the SDKs, and the CLI, allow you to override the default Amazon S3 endpoint in favor of Backblaze B2. Sometimes, though, you might want to interact directly with Backblaze B2 via the S3 Compatible API, perhaps in debugging an issue, or just to better understand how the service works.
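
As a concrete sketch of what that override looks like in code (our own example, assuming recent versions of the aws-config, aws-sdk-s3, and tokio crates, with your application key ID and application key in the standard AWS credential environment variables), here is a minimal Rust program that points the SDK at a Backblaze B2 endpoint and lists your buckets:

use aws_config::{BehaviorVersion, Region};
use aws_sdk_s3::Client;

#[tokio::main]
async fn main() -> Result<(), aws_sdk_s3::Error> {
    // Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY,
    // but requests go to Backblaze B2 rather than Amazon S3.
    let config = aws_config::defaults(BehaviorVersion::latest())
        .endpoint_url("https://s3.us-west-001.backblazeb2.com")
        .region(Region::new("us-west-001"))
        .load()
        .await;
    let client = Client::new(&config);

    // ListBuckets behaves exactly as it would against Amazon S3.
    let resp = client.list_buckets().send().await?;
    for bucket in resp.buckets() {
        println!("{}", bucket.name().unwrap_or_default());
    }
    Ok(())
}

The only Backblaze-specific pieces are the endpoint URL and the matching region string; everything else is ordinary S3 client code.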

Exploring the Backblaze S3 Compatible API in Postman

Our new Backblaze S3 Compatible API Documentation page is the definitive reference for developers wishing to access Backblaze B2 directly via the S3 Compatible API.

In addition to reading the documentation, you can click the Run in Postman button on the top right of the page, log in to the Postman website or desktop app (creating a Postman account is free), and interact with the API.

Integrate With Backblaze B2

Whether you are backing up, archiving data, or serving content via the web, Backblaze B2 is an easy-to-use, cost-effective cloud object storage solution at a quarter of the cost of Amazon S3. If you’re not already using Backblaze B2, sign up now and try it out—your first 10GB of storage is free!

The post Explore the Backblaze S3 Compatible API With Our New Postman Collection appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.