All posts by Amrit Singh

Navigating Cloud Storage: What is Latency and Why Does It Matter?

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/navigating-cloud-storage-what-is-latency-and-why-does-it-matter/

A decorative image showing a computer and a server with arrows moving between them, and a stopwatch indicating time.

In today’s bandwidth-intensive world, latency is an important factor that can impact performance and the end-user experience for modern cloud-based applications. For many CTOs, architects, and decision-makers at growing small and medium-sized businesses (SMBs), understanding and reducing latency is not just a technical need but also a strategic play.

Latency, or the time it takes for data to travel from one point to another, affects everything from how responsive your application feels to content delivery and media streaming speeds. As infrastructure increasingly relies on cloud object storage to manage terabytes or even petabytes of data, optimizing latency can be the difference between success and failure.

Let’s get into the nuances of latency and its impact on cloud storage performance.

Upload vs. Download Latency: What’s the Difference?

In the world of cloud storage, you’ll typically encounter two forms of latency: upload latency and download latency. Each can impact the responsiveness and efficiency of your cloud-based application.

Upload Latency

Upload latency refers to the delay when data is sent from a client or user’s device to the cloud. Live streaming applications, backup solutions, or any application that relies heavily on real-time data uploading will experience hiccups if upload latency is high, leading to buffering delays or momentary stream interruptions.

Download Latency

Download latency, on the other hand, is the delay when retrieving data from the cloud to the client or end user’s device. Download latency is particularly relevant for content delivery applications, such as on-demand video streaming platforms, e-commerce, or other web-based applications. Reducing download latency ensures content is swiftly delivered to the end user, making for a snappier, more favorable user experience.

Ideally, you’ll want to optimize for latency in both directions, but, depending on your use case and the type of application you are building, it’s important to understand the nuances of upload and download latency and their impact on your end users.

Decoding Cloud Latency: Key Factors and Their Impact

When it comes to cloud storage, latency is influenced by a number of factors, each having an impact on the overall performance of your application. Let’s explore a few of these key factors.

Network Congestion

Like traffic on a freeway, packets of data can experience congestion on the internet. This can lead to slower data transmission speeds, especially during peak hours, leading to a laggy experience. Internet connection quality and the capacity of networks can also contribute to this congestion.

Geographical Distance

Often overlooked, the physical distance from the client or end user’s device to the cloud origin store can have an impact on latency. The farther the distance from the client to the server, the farther the data has to traverse and the longer it takes for transmission to complete, leading to higher latency.
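To put a rough number on the distance factor: even before congestion or hardware enter the picture, physics sets a floor on round-trip time. The sketch below assumes signals travel through fiber at roughly 200,000 km/s (about two-thirds of the speed of light in a vacuum) and ignores routing detours and processing delays, so real-world round trips are always slower than this lower bound.

```python
# Rough lower bound on round-trip time (RTT) imposed by distance alone.
# Assumption: signals propagate through fiber at ~200,000 km/s; real RTTs
# are higher due to routing detours, queuing, and processing delays.

FIBER_KM_PER_SEC = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a given distance."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000

print(min_rtt_ms(100))    # ~1 ms best case for a nearby data center
print(min_rtt_ms(8_000))  # ~80 ms best case for a cross-ocean round trip
```

The takeaway: a user an ocean away from your origin store pays tens of milliseconds per round trip before any other factor is considered, which is why data center placement matters.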

Infrastructure Components

The quality of infrastructure, including routers, switches, and cables, may affect network performance and latency numbers. Modern hardware, such as fiber-optic cables, can reduce latency, while outdated systems may not keep up with current demands. Often, you don’t have full control over all of these infrastructure elements, but awareness of potential bottlenecks may be helpful, guiding upgrades wherever possible.

Technical Processes

  • TCP/IP Handshake: Connecting a client and a server involves a handshake process, which may introduce a delay, especially if it’s a new connection.
  • DNS Resolution: The time it takes to resolve a domain name to its IP address adds to total latency, so faster DNS resolution times yield a small but real reduction.
  • Data routing: Data does not necessarily travel a straight line from its source to its destination. Latency can be influenced by the effectiveness of routing algorithms and the number of hops that data must make.
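To see how these components stack up, you can time them individually. The sketch below is a minimal illustration using only the Python standard library: it measures DNS resolution and the TCP handshake separately for a given host. TLS negotiation and the request itself would add further delay on top of these numbers.

```python
import socket
import time

def measure_latency(host: str, port: int = 443) -> dict:
    """Time DNS resolution and the TCP handshake separately (milliseconds)."""
    # DNS resolution: domain name -> IP address.
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    dns_ms = (time.perf_counter() - t0) * 1000

    # TCP handshake: open a connection to the resolved address.
    t1 = time.perf_counter()
    with socket.create_connection((addr, port), timeout=5):
        tcp_ms = (time.perf_counter() - t1) * 1000

    return {"dns_ms": dns_ms, "tcp_ms": tcp_ms}

# Example usage (requires network access):
# print(measure_latency("example.com"))
```

Running this against hosts at different distances makes the geography and handshake costs discussed above directly visible.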

For businesses that rely on frequently accessing data stored in cloud storage, reducing latency and improving application performance is worth deliberate effort. That effort may include selecting providers with strategically positioned data centers, fine-tuning network configurations, and understanding how internet infrastructure affects the latency of their applications.

Minimizing Latency With Content Delivery Networks (CDNs)

Further reducing latency in your application may be achieved by layering a content delivery network (CDN) in front of your origin storage. CDNs help reduce the time it takes for content to reach the end user by caching data in distributed servers that store content across multiple geographic locations. When your end-user requests or downloads content, the CDN delivers it from the nearest server, minimizing the distance the data has to travel, which significantly reduces latency.

Backblaze B2 Cloud Storage integrates with multiple CDN solutions, including Fastly, bunny.net, and Cloudflare, providing a performance advantage. And, Backblaze offers the additional benefit of free egress between where the data is stored and the CDN’s edge servers. This not only reduces latency, but also optimizes bandwidth usage, making it cost-effective for businesses building bandwidth-intensive applications such as on-demand media streaming.

To get slightly into the technical weeds, CDNs essentially cache content at the edge of the network, meaning that once content is stored on a CDN server, subsequent requests do not need to go back to the origin server to request data. 

This reduces the load on the origin server and reduces the time needed to deliver the content to the user. For companies using cloud storage, integrating CDNs into their infrastructure is an effective configuration to improve the global availability of content, making it an important aspect of cloud storage and application performance optimization.
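A quick way to confirm that a CDN is actually serving content from cache is to inspect the response headers it adds. The header names below are common conventions, not a standard, and vary by provider (Cloudflare uses `CF-Cache-Status`, several others use `X-Cache`), so treat this as a heuristic sketch:

```python
def is_cache_hit(headers: dict) -> bool:
    """Heuristic: check common (CDN-specific, non-standard) cache headers."""
    normalized = {k.lower(): str(v).lower() for k, v in headers.items()}
    # "x-cache" is used by several CDNs; "cf-cache-status" by Cloudflare.
    for name in ("x-cache", "cf-cache-status", "x-cache-status"):
        if "hit" in normalized.get(name, ""):
            return True
    return False

print(is_cache_hit({"X-Cache": "HIT"}))           # True  -- served from edge
print(is_cache_hit({"CF-Cache-Status": "MISS"}))  # False -- went to origin
```

Sampling these headers across requests gives you a rough cache hit ratio, which is the figure that determines how much load your origin is actually spared.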

Case Study: Musify Improves Latency and Reduces Cloud Bill by 70%

To illustrate the impact of reduced latency on performance, consider the example of music streaming platform Musify. By moving from Amazon S3 to Backblaze B2 and leveraging the partnership with Cloudflare, Musify significantly improved its service offering. Musify egresses about 1PB of data per month, which, under traditional cloud storage pricing models, can lead to significant costs. Because Backblaze and Cloudflare are both members of the Bandwidth Alliance, Musify now has no data transfer costs, contributing to an estimated 70% reduction in cloud spend. And, thanks to the high cache hit ratio, 90% of the transfer takes place in the CDN layer, which helps maintain high performance, regardless of the location of the file or the user.

Latency Wrap Up

As we wrap up our look at the role latency plays in cloud-based applications, it’s clear that understanding and strategically reducing latency is a necessity for CTOs, architects, and decision-makers building many of the modern applications we all use today. There are several factors that impact upload and download latency, and it’s important to understand the nuances to effectively improve performance.

Additionally, Backblaze B2’s integrations with CDNs like Fastly, bunny.net, and Cloudflare offer a cost-effective way to improve performance and reduce latency. The strategic decisions Musify made demonstrate how reducing latency with a CDN can significantly improve content delivery while saving on egress costs, and reducing overall business OpEx.

For additional information and guidance on reducing latency, improving time to first byte (TTFB) numbers, and overall performance, the insights shared in “Cloud Performance and When It Matters” offer a deeper, technical look.

If you’re keen to explore further into how an object storage platform may support your needs and help scale your bandwidth-intensive applications, read more about Backblaze B2 Cloud Storage.

The post Navigating Cloud Storage: What is Latency and Why Does It Matter? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Power of Specialized Cloud Providers: A Game Changer for SaaS Companies

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/the-power-of-specialized-cloud-providers-a-game-changer-for-saas-companies/

A decorative image showing a cloud with the Backblaze logo, with logos hanging off it for Vultr, Fastly, Equinix Metal, Terraform, and rclone.

“Nobody ever got fired for buying AWS.” It’s true: AWS’s one-size-fits-all solution worked great for most businesses, and those businesses made the shift away from the traditional model of on-prem and self-hosted servers—what we think of as Cloud 1.0—to an era where AWS was the cloud, the one and only, which is what we call Cloud 2.0. However, as the cloud landscape evolves, it’s time to question the old ways. Maybe nobody ever got fired for buying AWS, but these days, you can certainly get a lot of value (and kudos) for exploring other options. 

Developers and IT teams might hesitate when it comes to moving away from AWS, but AWS comes with risks, too. If you don’t have the resources to manage and maintain your infrastructure, costs can get out of control, for one. As we enter Cloud 3.0 where the landscape is defined by the open, multi-cloud internet, there is an emerging trend that is worth considering: the rise of specialized cloud providers.

Today, I’m sharing how software as a service (SaaS) startups and modern businesses can take advantage of these highly focused, tailored services, each specializing and excelling in specific areas like cloud storage, content delivery, cloud compute, and more. Building on a specialized stack offers more control, return on investment, and flexibility, while achieving the same performance you expect from hyperscaler infrastructure.

From a cost of goods sold perspective, AWS pricing wasn’t a great fit. From an engineering perspective, we didn’t want a net-new platform. So the fact that we got both with Backblaze—a drop-in API replacement with a much better cost structure—it was just a no-brainer.

—Rory Petty, Co-Founder & CTO, Tribute

The Rise of Specialized Cloud Providers

Specialized providers—including content delivery networks (CDNs) like Fastly, bunny.net, and Cloudflare, as well as cloud compute providers like Vultr—offer services that focus on a particular area of the infrastructure stack. Rather than trying to be everything to everyone, like the hyperscalers of Cloud 2.0, they do one thing and do it really well. Customers get best-of-breed services that allow them to build a tech stack tailored to their needs. 

Use Cases for Specialized Cloud Providers

There are a number of businesses that might benefit from switching from hyperscalers to specialized cloud providers.

In order for businesses to take advantage of the benefits (since most applications rely on more than just one service), these services must work together seamlessly. 

Let’s Take a Closer Look at How Specialized Stacks Can Work For You

If you’re wondering how exactly specialized clouds can “play well with each other,” we ran a whole series of application storage webinars that talk through specific examples and use cases. I’ll share what’s in it for you below.

1. Low Latency Multi-Region Content Delivery with Fastly and Backblaze

Did you know a 100-millisecond delay in website load time can hurt conversion rates by 7%? In this session, Pat Patterson from Backblaze and Jim Bartos from Fastly discuss the importance of speed and latency in user experience. They highlight how Backblaze’s B2 Cloud Storage and Fastly’s content delivery network work together to deliver content quickly and efficiently across multiple regions. Businesses can ensure that their content is delivered with low latency, reducing delays and optimizing user experience regardless of the user’s location.

2. Scaling Media Delivery Workflows with bunny.net and Backblaze

Delivering content to your end users at scale can be challenging and costly. Users expect exceptional web and mobile experiences with snappy load times and zero buffering. Anything less than an instantaneous response may cause them to bounce. 

In this webinar, Pat Patterson demonstrates how to efficiently scale your content delivery workflows from content ingestion, transcoding, and storage to last-mile acceleration via bunny.net CDN. He builds a video hosting platform called “Cat Tube” and shows how to upload a video and play it using an HTML5 video element with controls. Watch below and download the demo code to try it yourself.

3. Balancing Cloud Cost and Performance with Fastly and Backblaze

With a global economic slowdown, IT and development teams are looking for ways to slash cloud budgets without compromising performance. E-commerce, SaaS platforms, and streaming applications all rely on high-performance infrastructure, but balancing bandwidth and storage costs can be challenging. In this 45-minute session, we explored how to recession-proof your growing business with key cloud optimization strategies, including ways to leverage Fastly’s CDN to balance bandwidth costs while avoiding performance tradeoffs.

4. Reducing Cloud OpEx Without Sacrificing Performance and Speed

Greg Hamer from Backblaze and DJ Johnson from Vultr explore the benefits of building on best-of-breed, specialized cloud stacks tailored to your business model, rather than being locked into traditional hyperscaler infrastructure. They cover real-world use cases, including:

  • How Can Stock Photo broke free from AWS and reduced their cloud bill by 55% while achieving 4x faster generation.
  • How Monument Labs launched a new cloud-based photo management service to 25,000+ users.
  • How Black.ai processes 1000s of files simultaneously, with a significant reduction of infrastructure costs.

5. Leveling Up a Global Gaming Platform while Slashing Cloud Spend by 85%

James Ross of Nodecraft, an online gaming platform that aims to make gaming online easy, shares how he moved his global game server platform from Amazon S3 to Backblaze B2 for greater flexibility and 85% savings on storage and egress. He discusses the challenges of managing large files over the public internet, which can result in expensive bandwidth costs. By storing game titles on Backblaze B2 and delivering them through Cloudflare’s CDN, they achieve reduced latency since games are cached at the edge, and pay zero egress fees thanks to the Bandwidth Alliance. Nodecraft also benefited from Universal Data Migration, which allows customers to move large amounts of data from any cloud services or on-premises storage to Backblaze’s B2 Cloud Storage, managed by Backblaze and free of charge.

Migrating From a Hyperscaler

Though it may seem daunting to transition from a hyperscaler to a specialized cloud provider, it doesn’t have to be. Many specialized providers offer tools and services to make the transition as smooth as possible. 

  • S3-compatible APIs, SDKs, CLI: Interface with storage as you would with Amazon S3—switching can be as easy as dropping in a new storage target.
  • Universal Data Migration: Free and fully managed migrations to make switching as seamless as possible.
  • Free egress: Move data freely with the Bandwidth Alliance and other partnerships between specialized cloud storage providers.
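To make the “drop-in” point concrete: with an S3-compatible API, the only things that change in your client code are the endpoint and credentials. The sketch below uses Backblaze B2’s S3-compatible endpoint format; the region string and keys are placeholders you’d replace with values from your own account, and `boto3` is the standard AWS SDK for Python.

```python
def backblaze_s3_endpoint(region: str) -> str:
    """Build the B2 S3-compatible endpoint URL for a given bucket region."""
    return f"https://s3.{region}.backblazeb2.com"

def s3_client(endpoint_url: str, key_id: str, app_key: str):
    """Point any S3 SDK at a different provider by swapping the endpoint."""
    import boto3  # the standard AWS SDK works against S3-compatible APIs
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=key_id,
        aws_secret_access_key=app_key,
    )

# Usage (placeholder region and credentials):
# client = s3_client(backblaze_s3_endpoint("us-west-004"), "keyID", "applicationKey")
# client.list_buckets()
```

Existing code that already constructs an S3 client only needs the `endpoint_url` and credentials swapped; bucket and object operations stay the same.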

As the decision maker at your growing SaaS company, it’s worth considering whether a specialized cloud stack could be a better fit for your business. By doing so, you could potentially unlock cost savings, improve performance, and gain flexibility to adapt your services to your unique needs. The one-size-fits-all approach is no longer the only option out there.

Want to Test It Out Yourself?

Take a proactive approach to cloud cost management: Get 10GB free to test and validate your proof of concept (POC) with Backblaze B2. All it takes is an email to get started.



The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/the-free-credit-trap-building-saas-infrastructure-for-long-term-sustainability/

In today’s economic climate, cost cutting is on everyone’s mind, and businesses are doing everything they can to save money. But they can’t afford to compromise the integrity of their infrastructure or the quality of the customer experience. For a startup, taking advantage of free cloud credits from cloud providers like Amazon AWS, especially at a time like this, seems enticing.

Using those credits can make sense, but it takes more planning than you might think to use them in a way that allows you to continue managing cloud costs once the credits run out. 

In this blog post, I’ll walk through common use cases for credit programs, the risks of using credits, and alternatives that help you balance growth and cloud costs.

The True Cost of “Free”

This post is part of a series exploring free cloud credits and the hidden complexities and limitations that come with these offers. Check out our previous installments.

The Shift to Cloud 3.0

As we see it, there have been three stages of “The Cloud” in its history:

Phase 1: What is the Cloud?

Starting around when Backblaze was founded in 2007, the public cloud was in its infancy. Most people weren’t clear on what cloud computing was or if it was going to take root. Businesses were asking themselves, “What is the cloud and how will it work with my business?”

Phase 2: Cloud = Amazon Web Services

Fast forward to 10 years later, and AWS and “The Cloud” started to become synonymous. Amazon had nearly 50% of market share of public cloud services, more than Microsoft, Google, and IBM combined. “The Cloud” was well-established, and for most folks, the cloud was AWS.

Phase 3: Multi-Cloud

Today, we’re in Phase 3 of the cloud. “The Cloud” of today is defined by the open, multi-cloud internet. Traditional cloud vendors are expensive, complicated, and seek to lock customers into their walled gardens. Customers have come to realize that (see below) and to value the benefits they can get from moving away from a model that demands exclusivity in cloud infrastructure.

An image displaying a Tweet from user Philo Hermans @Philo01 that says 

I migrated most infrastructure away from AWS. Now that I think about it, those AWS credits are a well-designed trap to create a vendor lock in, and once your credits expire and you notice the actual cost, chances are you are in shock and stuck at the same time (laughing emoji).
Source.

In Cloud Phase 3.0, companies are looking to rein in spending, and are increasingly seeking specialized cloud providers offering affordable, best-of-breed services without sacrificing speed and performance. How do you balance that with the draw of free credits? I’ll get into that next, and the two are far from mutually exclusive.

Getting Hooked on Credits: Common Use Cases

So, you have $100k in free cloud credits from AWS. What do you do with them? Well, in our experience, there are a wide range of use cases for credits, including:

  • App development and testing: Teams may leverage credits to run an app development proof of concept (PoC) utilizing Amazon EC2, RDS, and S3 for compute, database, and storage needs. But without understanding how these services will scale in the longer term, there are risks: spinning up EC2 instances can quickly burn through your credits and leave you with an unexpected bill.
  • Machine learning (ML): Machine learning models require huge amounts of computing power and storage. Free cloud credits might be a good way to start, but you can expect them to quickly run out if you’re using them for this use case. 
  • Data analytics: While free cloud credits may cover storage and computing resources, data transfer costs might still apply. Analyzing large volumes of data or frequently transferring data in and out of the cloud can lead to unexpected expenses.
  • Website hosting: Hosting your website with free cloud credits can eliminate the up front infrastructure spend and provide an entry point into the cloud, but remember that when the credits expire, traffic spikes you should be celebrating can crater your bottom line.
  • Backup and disaster recovery: Free cloud credits may have restrictions on data retention, limiting the duration for which backups can be stored. This can pose challenges for organizations requiring long-term data retention for compliance or disaster recovery purposes.

All of this is to say: Proper configuration, long-term management and upkeep, and cost optimization all play a role in how you scale on monolith platforms. It is important to note that the risks and benefits mentioned above are general considerations, and specific terms and conditions may vary depending on the cloud service provider and the details of their free credit offerings. It’s crucial to thoroughly review the terms and plan accordingly to maximize the benefits and mitigate the risks associated with free cloud credits for each specific use case. (And, given the complicated pricing structures we mentioned before, that might take some effort.)

Monument Uses Free Credits Wisely

Monument, a photo management service with a strong focus on security and privacy, utilized free startup credits from AWS. But, they knew free credits wouldn’t last forever. Monument’s co-founder, Ercan Erciyes, realized they’d ultimately lose money if they built the infrastructure for Monument Cloud on AWS.

He also didn’t want to accumulate tech debt and become locked into AWS. Rather than using the credits to build a minimum viable product as fast as humanly possible, he used the credits to develop the AI model, but not to build their infrastructure. Read more about how they put AWS credits to use while building infrastructure that could scale as they grew.

➔ Read More

The Risks of AWS Credits: Lessons from Founders

If you’re handed $100,000 in credits, it’s crucial to be aware of the risks and implications that come along with it. While it may seem like an exciting opportunity to explore the capabilities of the cloud without immediate financial constraints, there are several factors to consider:

  1. The temptation to overspend: With a credit balance at your disposal just waiting to be spent, there is a possibility of underestimating the actual costs of your cloud usage. This can lead to a scenario where you inadvertently exhaust the credits sooner than anticipated, leaving you with unexpected expenses that may strain your budget.
  2. The shock of high bills once credits expire: Without proper planning and monitoring of your cloud usage, the transition from “free” to paying for services can result in high bills that catch you off guard. It is essential to closely track your cloud usage throughout the credit period and have a clear understanding of the costs associated with the services you’re utilizing. Or better yet, use those credits for a discrete project to test your PoC or develop your minimum viable product, and plan to build your long-term infrastructure elsewhere.
  3. The risk of vendor lock-in: As you build and deploy your infrastructure within a specific cloud provider’s ecosystem, the process of migrating to an alternative provider can seem complex and can definitely be costly (shameless plug: at Backblaze, we’ll cover your migration over 50TB). Vendor lock-in can limit your flexibility, making it challenging to adapt to changing business needs or take advantage of cost-saving opportunities in the future.

The problems are nothing new for founders, as the online conversation bears out.

First, there’s the old surprise bill:

A Tweet from user Ajul Sahul @anjuls that says 

Similar story, AWS provided us free credits so we though we will use it for some data processing tasks. The credit expired after one year and team forgot about the abandoned resources to give a surprise bill. Cloud governance is super importance right from the start.
Source.

Even with some optimization, AWS cloud spend can still be pretty “obscene” as this user vividly shows:

A Tweet from user DHH @dhh that says 

We spent $3,201,564.24 on cloud in 2022 at @37signals, mostly AWS. $907,837.83 on S3. $473,196.30 on RDS. $519,959.60 on OpenSearch. $123,852.30 on Elasticache. This is with long commits (S3 for 4 years!!), reserved instances, etc. Just obscene. Will publish full accounting soon.
Source.

There’s the founder raising rounds just to pay AWS bills:

A Tweet from user Guille Ojeda @itsguilleojeda that says 

Tech first startups raise their first rounds to pay AWS bills. By the way, there's free credits, in case you didn't know. Up to $100k. And you'll still need funding.
Source.

Some use the surprise bill as motivation to get paying customers.

Lastly, there’s the comic relief:

A tweet from user Mrinal Wahal @MrinalWahal that reads 

Yeah high credit card bills are scary but have you forgotten turning off your AWS instances?
Source.

Strategies for Balancing Growth and Cloud Costs

Where does that leave you today? Here are some best practices startups and early founders can implement to balance growth and cloud costs:

  1. Establishing a cloud cost management plan early on.
  2. Monitoring and optimizing cloud usage to avoid wasted resources.
  3. Leveraging multiple cloud providers.
  4. Moving to a new cloud provider altogether.
  5. Setting aside some of your credits for the migration.

1. Establishing a Cloud Cost Management Plan

Put some time into creating a well-thought-out cloud cost management strategy from the beginning. This includes closely monitoring your usage, optimizing resource allocation, and planning for the expiration of credits to ensure a smooth transition. By understanding the risks involved and proactively managing your cloud usage, you can maximize the benefits of the credits while minimizing potential financial setbacks and vendor lock-in concerns.

2. Monitoring and Optimizing Cloud Usage

Monitoring and optimizing cloud usage plays a vital role in avoiding wasted resources and controlling costs. By regularly analyzing usage patterns, organizations can identify opportunities to right-size resources, adopt automation to reduce idle time, and leverage cost-effective pricing options. Effective monitoring and optimization ensure that businesses are only paying for the resources they truly need, maximizing cost efficiency while maintaining the necessary levels of performance and scalability.
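As a simple illustration of right-sizing, suppose you export average utilization per resource from your provider’s monitoring dashboard (the data shape and names below are hypothetical); flagging chronically idle resources then becomes a trivial filter:

```python
def underutilized(resources, cpu_threshold=10.0):
    """Flag resources whose average CPU sits below a threshold percentage."""
    return [r["name"] for r in resources if r["avg_cpu_pct"] < cpu_threshold]

# Hypothetical export from a monitoring dashboard:
usage = [
    {"name": "web-1",     "avg_cpu_pct": 62.0},
    {"name": "batch-2",   "avg_cpu_pct": 3.5},  # idle most of the month
    {"name": "staging-1", "avg_cpu_pct": 1.2},  # forgotten test box
]

print(underutilized(usage))  # ['batch-2', 'staging-1']
```

Anything this filter flags is a candidate for downsizing, scheduling, or shutdown, which is exactly the kind of waste that turns into a surprise bill once credits expire.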

3. Leveraging Multiple Cloud Providers

By adopting a multi-cloud strategy, businesses can diversify their cloud infrastructure and services across different providers. This allows them to benefit from each provider’s unique offerings, such as specialized services, geographical coverage, or pricing models. Additionally, it provides a layer of protection against potential service disruptions or price increases from a single provider. Adopting a multi-cloud approach requires careful planning and management to ensure compatibility, data integration, and consistent security measures across multiple platforms. However, it offers the flexibility to choose the best-fit cloud services from different providers, reducing dependency on a single vendor and enabling businesses to optimize costs while harnessing the capabilities of various cloud platforms.

4. Moving to a New Cloud Provider Altogether

If you’re already deeply invested in a major cloud platform, shifting away can seem cumbersome, but there may be long-term benefits that outweigh the short term “pains” (this leads into the shift to Cloud 3.0). The process could involve re-architecting applications, migrating data, and retraining personnel on the new platform. However, factors such as pricing models, performance, scalability, or access to specialized services may win out in the end. It’s worth noting that many specialized providers have taken measures to “ease the pain” and make the transition away from AWS more seamless without overhauling code. For example, at Backblaze, we developed an S3 compatible API so switching providers is as simple as dropping in a new storage target.

5. Setting Aside Credits for the Migration

By setting aside credits for future migration, businesses can ensure they have the necessary resources to transition to a different provider without incurring significant up front expenses like egress fees to transfer large data sets. This strategic allocation of credits allows organizations to explore alternative cloud platforms, evaluate their pricing models, and assess the cost-effectiveness of migrating their infrastructure and services without worrying about being able to afford the migration.

Welcome to Cloud 3.0: Alternatives to AWS

In 2022, David Heinemeier Hansson, the creator of Basecamp and Hey, announced that he was moving Hey’s infrastructure from AWS to on-premises. Hansson cited the high cost of AWS as one of the reasons for the move. His estimate? “We stand to save $7m over five years from our cloud exit,” he said.  

Going back to on-premises solutions is certainly one answer to the problem of AWS bills. In fact, when we started designing Backblaze’s Personal Backup solution, we were faced with the same problem. Hosting data storage for our computer backup product on AWS was a non-starter—it was going to be too expensive, and our business wouldn’t be able to deliver a reasonable consumer price point and be solvent. So, we didn’t just invest in on-premises resources: We built our own Storage Pods, the first evolution of the Backblaze Storage Cloud. 

But, moving back to on-premises solutions isn’t the only answer—it’s just the only answer if it’s 2007 and your two options are AWS and on-premises solutions. The cloud environment as it exists today has better choices. We’ve now grown that collection of Storage Pods into the Backblaze B2 Storage Cloud, which delivers performant, interoperable storage at one-fifth the cost of AWS. And, we offer free egress to our content delivery network (CDN) and compute partners. Backblaze may provide an even more cost-effective solution for mid-sized SaaS startups looking to save on cloud costs while maintaining speed and performance.

As we transition to Cloud 3.0 in 2023 and beyond, companies are expected to undergo a shift, reevaluating their cloud spending to ensure long-term sustainability and directing saved funds into other critical areas of their businesses. The age of limited choices is over. The age of customizable cloud integration is here. 

So, shout out to David Heinemeier Hansson: We’d love to chat about your storage bills some time.

Want to Test It Yourself?

Take a proactive approach to cloud cost management: If you’ve got more than 50TB of data storage or want to check out our capacity-based pricing model, B2 Reserve, contact our Sales Team to test a PoC for free with Backblaze B2.

And, for the streamlined, self-serve option, all you need is an email to get started today.

FAQs About Cloud Spend

If you’re thinking about moving to Backblaze B2 after taking AWS credits, but you’re not sure if it’s right for you, we’ve put together some frequently asked questions that folks have shared with us before their migrations:

My cloud credits are running out. What should I do?

Backblaze’s Universal Data Migration service can help you off-load some of your data to Backblaze B2 for free. Speak with a migration expert today.

AWS has all of the services I need, and Backblaze only offers storage. What about the other services I need?

Shifting away from AWS doesn’t mean ditching the workflows you have already set up. You can migrate some of your data storage while keeping some on AWS or continuing to use other AWS services. Moreover, AWS may be overkill for small to midsize SaaS businesses with limited resources.

How should I approach a migration?

Identify the specific services and functionalities that your applications and systems require, such as CDN for content delivery or compute resources for processing tasks. Check out our partner ecosystem to identify other independent cloud providers that offer the services you need at a lower cost than AWS.

What CDN partners does Backblaze have?

With ease of use, predictable pricing, and zero egress fees, our joint solutions are perfect for businesses looking to reduce their IT costs, improve their operational efficiency, and increase their competitive advantage in the market. Our CDN partners include Fastly, bunny.net, and Cloudflare. And, we extend free egress to joint customers.

What compute partners does Backblaze have?

Our compute partners include Vultr and Equinix Metal. You can connect Backblaze B2 Cloud Storage with Vultr’s global compute network to access, store, and scale application data on-demand, at a fraction of the cost of the hyperscalers.

The post The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Announcing Tech Day ‘22: Live Tech Talks, Demos, and Dialogues

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/announcing-tech-day-22-live-tech-talks-demos-and-dialogues/

For those looking to build and grow blazing applications and do more with their data, we’d like to welcome you to this year’s Tech Day ‘22. We have a great community that works with Backblaze B2 Cloud Storage, including our internal team, IT professionals, developers, tech decision makers, cloud partners, and more—and we felt it was high time to bring you all together again to share ideas, discuss upcoming changes, win some swag, and network.

Join our Technical Evangelists in live interactive sessions, demos, and tech talks that help you unlock your cloud potential and put B2 Cloud Storage to work for you. Whatever your role in the tech world—or if you’re simply curious about leveraging the Backblaze B2 platform—we invite you to join us!

➔ Register Now

Here’s What to Expect at Tech Day ’22

Tech Day ’22 is happening October 31, 10 a.m. PT. Can’t make it? Sign up anyway and we’ll share the event recording straight to your inbox.

IaaS Unboxed

A live chat about leveraging the independent cloud ecosystem for storage, compute, delivery, and backup, along with a customer showcase.

Sneak Peek

An early look at the Q3 2022 Drive Stats data with Andy Klein as he walks through the latest learnings to inform your thinking and purchase decisions.

Hands-On Demos

Pat Patterson (Chief Technical Evangelist) and Greg Hamer (Senior Developer Evangelist) team up to facilitate an action-packed set of interactive sessions aimed at helping you do more in the cloud. If you don’t have an account already, you’ll definitely want to create a free Backblaze B2 account so you can follow along. All you need to do is sign up with your email and create a password—it’s really that easy.

  • Scaling a Social App with Appwrite: Appwrite is a self-hosted backend-as-a-service platform that provides developers with all the core APIs required to build any application. Appwrite’s storage abstraction allows developers to store project files in a range of devices, including Backblaze B2. In this session, you’ll learn how to get started with Appwrite, and quickly build a social app that stores user-generated content in a Backblaze B2 Bucket.
  • Go Serverless with Fastly Compute@Edge: Fastly has long been a Backblaze partner—mutual customers are able to serve assets stored in Backblaze B2 Buckets via Fastly’s global content delivery network with zero download charges from Backblaze B2. Compute@Edge leverages Fastly’s network to enable developers to create high-scale, globally-distributed applications, and execute code at the edge. Discover how to build a simple serverless application in JavaScript and deploy it globally with a single command.
  • Provisioning Resources with the Backblaze B2 Terraform Provider: Hashicorp Terraform is an open-source infrastructure-as-code software tool that enables you to safely and predictably create, change, and improve infrastructure. Learn how our Terraform Provider unlocks Backblaze B2’s capabilities for DevOps engineers, allowing you to create, list, and delete Buckets and application keys, as well as upload and download objects.
  • Storing and Querying Analytical Data with Trino: Trino is a SQL-compliant query engine that supports a wide range of business intelligence and analytical tools, allowing you to write queries against structured and semi-structured data in a variety of formats and storage locations. We’ll share how we optimized Backblaze’s Drive Stats data for queries and used Trino to gain new insights into nine years of real-world data.

And So Much More

Join the live Q&A and our user community of tech leaders, IT pros, and developers like you. Register for free to grab your spot (and swag) and we’ll see you on October 31.

➔ Register Now

The post Announcing Tech Day ‘22: Live Tech Talks, Demos, and Dialogues appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

“An Ideal Solution”: Daltix’s Automated Data Lake Archive Saves $100K

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/an-ideal-solution-daltixs-automated-data-lake-archive-saves-100k/

In the fast-moving consumer goods space, Daltix is a pioneer in providing complete, transparent, and high-quality retail data. With global industry leaders like GFK and Unilever depending on their pricing, product, promotion, and location data to build go-to market strategies and make critical decisions, maintaining a reliable data ecosystem is an imperative for Daltix.

As the company has grown since its founding in 2016, the amount of data Daltix is processing has increased exponentially. They’re currently managing around 250TB, but that amount is spread across billions of files, which soon created a massive drag on time and resources. With an infrastructure built almost entirely around AWS and billions of minuscule files to manage, Daltix started to outgrow AWS’ storage options in both scalability and cost efficiency.

The Daltix team in Belgium.

We got to chat with Charlie Orford, Principal Software Engineer for Daltix, about how Daltix switched to Backblaze B2 Cloud Storage and their takeaways from that process. Here are some highlights:

  • They used a custom engine to migrate billions of files from AWS S3 to Backblaze B2.
  • Monthly costs dropped by $2,500 while data portability and reliability increased.
  • Daltix established the infrastructure to automatically back up 8.4 million data objects every day.

Read on to learn how they did it.

A Complex Data Pipeline Built Around AWS

Most of the S3-based infrastructure Daltix built in the company’s early days is still intact. Historically, the data pipeline started with web-scraped resources written directly to Amazon S3, which were then standardized by Lambda-based extractors before being sent back to S3. Then AWS Batch picked up the resources to be augmented and enriched using other data sources.

All those steps took place before the data was ready for Daltix’s team of analysts. In order to optimize the pipeline and increase efficiency, Orford started absorbing pieces of that process into Kubernetes. But there was still a data storage problem; Daltix generates about 300GB of compressed data per day, and that figure was growing rapidly. “As we’d scaled up our data collection, we’d had to sharpen our focus on cost control, data portability, and reliability,” said Orford. “They’re obvious, but at scale, they’re extremely important.”

Cost Concerns Inspire the Search for Warm Archival Storage

By 2020, Daltix had started to realize the limitations of building so much of their infrastructure in AWS. For example, heavy customization around S3 metadata made the ability to move objects entirely dependent on the target system’s compatibility with S3. Orford was also concerned about the costs of permanently storing such a huge data lake in S3. As he puts it, “It was clear that there was no need to have everything in S3 forever. If we didn’t do anything about it, our S3 costs were going to continue to rise and eventually dwarf virtually all of our other AWS costs.”

Side-by-side comparison of server costs.

Because Daltix works with billions of tiny files, using Glacier was out of the question as its pricing model is based around retrieval fees. Even using Glacier Instant Retrieval, the sheer number of files Daltix works with would have forced them to rack up an additional $200,000 in fees per year. So Daltix’s data collection team—which produces more than 85% of the company’s overall data—pushed for an alternative solution that could address a number of competing concerns:

  • The sheer size of the data lake.
  • The need to store raw resources as discrete files (which means that batching is not an option).
  • Limitations on the team’s ability to invest time and effort.
  • A desire for simplicity to guarantee the solution’s reliability.

Daltix settled on using Amazon S3 for hot storage and moving warm storage into a new archival solution, which would reduce costs while keeping priority data accessible—even if the intention is to keep files stored away. “It was important to find something that would be very easy to integrate, have a low development risk, and start meaningfully eating into our costs,” said Orford. “For us, Backblaze really ticked all the boxes.”

Initial Migration Unlocks Immediate Savings of $2,000 Per Month

Before launching into a full migration, Orford and his team tested a proof of concept (POC) to make sure the solution addressed his key priorities:

  • Making sure the huge volume of data was migrated successfully.
  • Avoiding data corruption and checking for errors with audit logs.
  • Preserving custom metadata on each individual object.

“Early on, Backblaze worked with us hand-in-hand to come up with a custom migration tool that fit all our requirements,” said Orford. “That’s what gave us the confidence to proceed.” In partnership with Flexify, Backblaze delivered a tailor-made engine to ensure that the migration process would transfer the entire data lake reliably and with object-level metadata intact. After the initial POC bucket was migrated successfully, Daltix had everything they needed to start modeling and forecasting future costs. “As soon as we started interacting with Backblaze, we stopped looking at other options,” Orford said.

In August 2021, Daltix moved a 120TB bucket of 2.2 billion objects from standard storage in S3 to Backblaze B2 Cloud Storage. That initial migration alone unlocked an immediate cost savings of $2,000 per month, or $24,000 per year.

A peaceful data lake.

Quadruple the Data, Direct S3 Compatibility, and $100,000 Cumulative Savings

Today, Daltix is migrating about 3.2 million data objects (approximately 70GB of data) from Amazon S3 into Backblaze B2 every day. They keep 18 months of hot data in S3, and as soon as an object reaches 18 months and one day, it becomes eligible for archiving in B2. On the rare occasions that Daltix receives requests for data outside that 18-month window, they can pull data directly from Backblaze B2 into Amazon S3 thanks to Backblaze’s S3-compatible API and ever-available data.

Daily audit logs summarize how much data has been transferred, and the entire migration process happens automatically every day. “It runs in the background, there’s nothing to manage, we have full visibility, and it’s cost effective,” Orford said. “Backblaze B2 is an ideal solution for us.”

As daily data collection increases and more data ages out of the hot storage window, Orford expects further cost reductions. He estimates that in about a year and a half, daily migrations will nearly triple: Daltix will be backing up 9 million objects (about 450GB of data) to Backblaze B2 every day. Taking that long-term view, we see incredible cost savings for Daltix by switching from Amazon S3 to Backblaze B2. “By 2023, we forecast we will have realized a cumulative saving in the region of $75,000-$100,000 on our storage spend thanks to leveraging Backblaze B2, with expected ongoing savings of at least $30,000 per year,” said Orford.

“It runs in the background, there’s nothing to manage, we have full visibility, and it’s cost effective. B2 is an ideal solution for us.” —Charlie Orford, Principal Software Engineer, Daltix

Crunch the Numbers and See for Yourself

Want to find out what your business could do with an extra $30,000 a year? Check out our Cloud Storage Pricing Calculator to see what you could save switching to Backblaze B2.

The post “An Ideal Solution”: Daltix’s Automated Data Lake Archive Saves $100K appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Free Isn’t Always Free: A Guide to Free Cloud Tiers

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/free-isnt-always-free-a-guide-to-free-cloud-tiers/

Free Isn’t Always Free

They say “the best things in life are free.” But when most cloud storage companies offer a free tier, what they really want is money. While free tiers do offer some early-stage technical founders the opportunity to test out a proof of concept or allow students to experiment without breaking the bank, their ultimate goal is to turn you into a paying customer. This isn’t always nefarious (we offer 10GB for free, so we’re playing the same game!), but some cloud vendors’ free tiers come with hidden surprises that can lead to scary bills with little warning.

The truth is that free isn’t always free. Today, we’re digging into a cautionary tale for developers and technical founders exploring cloud services to support their applications or SaaS products. Naturally, you want to know if a cloud vendor’s free tier will work for you. Understanding what to expect and how to navigate free tiers accordingly can help you avoid huge surprise bills later.

Free Tiers: A Quick Reference

Most large, diversified cloud providers offer a free tier—AWS, Google Cloud Platform, and Azure, to name a few—and each one structures theirs a bit differently:

  • AWS: AWS has 100+ products and services with free options ranging from “always free” to 12 months free, and each has different use limitations. For example, you get 5GB of object storage free with AWS S3 for the first 12 months, then you are billed at the respective rate.
  • Google Cloud Platform: Google offers a $300 credit good for 90 days so you can explore services “for free.” They also offer an “always free tier” for specific services like Cloud Storage, Compute Engine, and several others that are free up to a certain limit. For example, you get 5GB of storage and 1GB of network egress free with their Cloud Storage service.
  • Azure: Azure offers a free trial similar to Google’s but with a shorter time frame (30 days) and lower credit amount ($200). It gives you the option to move up to paid when you’ve used up your credits or your time expires. Azure also offers a range of services that are free for 12 months and have varying limits and thresholds as well as an “always free tier” option.

After even a quick review of the free tier offers from major cloud providers, you can glean some immediate takeaways:

  1. You can’t rely on free tiers or promotional credits as a long-term solution. They work well for testing a proof of concept or a minimum viable product without making a big commitment, but they’re not going to serve you past the time or usage limits.
  2. “Free” has different mileage depending on the platform and service. Keep that in mind before you spin up servers and resources, and read the fine print as it relates to limitations.
  3. The end goal is to move you to paid. Obviously, the cloud providers want to move you from testing a proof of concept to paid, with your full production hosted and running on their platforms.

With Google Cloud Platform and Azure, you’re at least somewhat protected from being billed beyond the credits you receive since they require you to upgrade to the paid tier to continue. Thus, most of the horror stories you’ll see involve AWS. With AWS, once your trial expires or you exceed your allotted limits, you are billed the standard rate. For the purposes of this guide, we’ll look specifically at AWS.

The Problem With the AWS Free Tier

The internet is littered with cautionary tales of AWS bills run amok. A quick search for “AWS free tier bill” on Twitter or Reddit shows that it’s possible and pretty common to run up a bill on AWS’s so-called free tier…

The problem with the AWS free tier is threefold:

  1. There are a number of ways a “free tier” instance can turn into a bill.
  2. Safeguards against surprise bills are mediocre at best.
  3. Surprise bills are scary, and next steps aren’t the most comforting.

Problem 1: It’s Really Easy to Go From Free to Not Free

There are a number of ways an unattended “free tier” instance turns into a bill, sometimes a catastrophically huge bill. Here are just a few:

  1. You spin up Elastic Compute Cloud (EC2) instances for a project and forget about them until they exceed the free tier limits.
  2. You sign up for several AWS accounts, and you can’t figure out which one is running up charges.
  3. Your account gets hacked and used for mining crypto (yes, this definitely happens, and it results in some of the biggest surprise bills of them all).

Problem 2: Safeguards Against Surprise Bills Are Mediocre at Best

Confounding the problem is the fact that AWS keeps safeguards against surprise billing to a minimum. The free tier has limits and defined constraints, and the only way to keep your account in the free zone is to keep usage below those limits (and this is key) for each service you use.

AWS has hundreds of services, and each service comes with its own pricing structure and limits. While one AWS service might be free, it can be paired with another AWS service that’s not free or doesn’t have the same free threshold, for example, egress between services. Thus, managing your usage to keep it within the free tier can be somewhat straightforward or prohibitively complex depending on which services you use.

Wait, Shouldn’t I Get Alerts?

Yes, you can get alerted if you’re approaching the free limit, but that’s not foolproof either. First, billing alarms are not instantaneous. The notification might come after you’ve already exceeded the limit. And second, not every service has alerts or alerts that work in the same way.

You can also configure services so that they automatically shut down when they exceed a certain billing threshold, but this may pose more problems than it solves. First, navigating the AWS UI to set this up is complex. Your average free tier user may not be aware of or even interested in how to set that up. Second, you may not want to shut down services depending on how you’re using AWS.

Problem 3: Knowing What to Do Next

If it’s not your first rodeo, you might not default to panic mode when you get that surprise bill. You tracked your usage. You know you’re in the right. All you have to do is contact AWS support and dispute the charge. But imagine how a college student might react to a bill the size of their yearly tuition. While large five- to six-figure bills might be negotiable and completely waived, there are untold numbers of two- to three-figure bills that just end up getting paid because people weren’t aware of how to dispute the charges.

Even experienced developers can fall victim to unexpected charges in the thousands.

Avoiding Unexpected AWS Bills in the First Place

The first thing to recognize is that free isn’t always free. If you’re new to the platform, there are a few steps you can take to put yourself in a better position to avoid unexpected charges:

  1. Read the fine print before spinning up servers or uploading test data.
  2. Look for sandboxed environments that don’t let you exceed charges beyond a certain amount or that allow you to set limits that shut off services once limits are exceeded.
  3. Proceed with caution and understand how alerts work before spinning up services.
  4. Steer clear of free tiers completely, because the short-term savings aren’t huge and aren’t worth the added risk.

Final Thought: It Ain’t Free If They Have Your CC

AWS requires credit card information before you can do anything on the free tier—all the more reason to be extremely cautious.

Shameless plug here: Backblaze B2 Cloud Storage offers the first 10GB of storage free, and you don’t need to give us a credit card to create an account. You can also set billing alerts and caps easily in your dashboard. So, you’re unlikely to run up a surprise bill.

Ready to get started with Backblaze B2 Cloud Storage? Sign up here today to get started with 10GB and no CC.

The post Free Isn’t Always Free: A Guide to Free Cloud Tiers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloud Performance and When It Matters

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/cloud-performance-and-when-it-matters/

If you run an application that’s bandwidth intensive like media streaming, game hosting, or an e-commerce platform, performance is probably top of mind. You need to be able to deliver content to your users fast and without errors in order to keep them happy. But, what specific performance metrics matter for your use case?

As it turns out, you might think you need a Porsche when what you really need to transport your data is a trusty, reliable (still speedy!) Volvo.

In this post, we’re taking a closer look at performance metrics and when they matter as well as some strategies that can impact performance, including range requests, prefetching, and others. When you’re assessing a cloud solution for application development, taking these factors into consideration can help you make the best decision for your business.

Performance Metrics: Time to First Byte

Time to first byte (TTFB) is the time between a page request and when the page receives the first byte of information from the server. In other words, TTFB is measured by how long it takes between the start of the request and the start of the response, including DNS lookup and establishing the connection using a TCP handshake and, if you’ve made the request over HTTPS, a TLS (SSL) handshake.
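You can measure TTFB yourself. The sketch below is purely illustrative: it stands up a throwaway local server with an artificial processing delay, then times the gap between sending a request and receiving the first bytes of the response (the delay value and endpoint are made up for the example):

```python
import http.client
import http.server
import threading
import time

DELAY = 0.1  # artificial server-side processing delay, in seconds

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(DELAY)          # simulate slow server-side work
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
start = time.perf_counter()
conn.request("GET", "/")           # request goes out
resp = conn.getresponse()          # blocks until the first bytes (headers) arrive
ttfb = time.perf_counter() - start
body = resp.read()
conn.close()
server.shutdown()

print(f"TTFB: {ttfb * 1000:.0f} ms")  # includes DELAY plus connection overhead
```

For a real URL, `curl -o /dev/null -s -w '%{time_starttransfer}\n' <url>` reports the same number, including DNS and TLS time.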

TTFB helps identify pages that load slowly due to server-side calculations that might instead benefit from client-side scripting. Search engines also factor it into rankings, placing websites that respond faster, and thus feel more usable, above slower ones.

TTFB is a useful metric, but it doesn’t tell the whole story and shouldn’t be the only metric used to make decisions when it comes to choosing a cloud storage solution. For example, when David Liu, Founder and CEO of Musify, a music streaming app, began his search for a new cloud storage provider, he had a specific TTFB benchmark in mind. He thought he absolutely needed to meet this benchmark in order for his new storage solution to work for his use case. Upon further testing, however, he found that his initial benchmark was more aggressive than he actually needed. The performance he got by putting Cloudflare in front of his origin store in Backblaze B2 Cloud Storage more than met his needs and served his users well.

Optimizing Cloud Storage Performance

TTFB is the dominant method of measuring performance, but it can be affected by any number of factors—your location, your connection, the data being sent, etc. There are several ways to improve it, including using a content delivery network (CDN) on top of origin storage, range requests, and prefetching.

Performance and Content Delivery Networks

A CDN helps speed content delivery by storing content at the edge, meaning faster load times and reduced latency. For high-bandwidth use cases, a CDN can optimize media delivery.

Companies like Kanopy, a media streaming service; Big Cartel, an e-commerce platform; and CloudSpot, a professional photo gallery platform, use a CDN between their origin storage in Backblaze B2 and their end users to great effect. Kanopy offers a library of 25,000+ titles to 45 million patrons worldwide. Latency and poor performance are not an option. “Video needs to have a quick startup time,” Kanopy’s Lead Video Software Engineer, Pierre-Antoine Tible, said. “With Backblaze over [our CDN] Cloudflare, we didn’t have any issues.”

For Big Cartel, hosting one million customer sites likewise demands high-speed performance. Big Cartel’s Technical Director, Lee Jensen, noted, “We had no problems with the content served from Backblaze B2. The time to serve files in our 99th percentile, including fully rendering content, was under one second, and that’s our worst case scenario.” The time to serve files in their 75th percentile was just 200 to 300 milliseconds, even when content had to be pulled from origin storage in Backblaze B2 because it wasn’t already cached on the edge servers of their CDN, Fastly.

“We had no problems with the content served from Backblaze B2. The time to serve files in our 99th percentile, including fully rendering content, was under one second, and that’s our worst case scenario.”
—Lee Jensen, Technical Director, Big Cartel

Range Requests and Performance

HTTP range requests allow sending only a portion of an HTTP message from a server to a client. Partial requests are useful for streaming large media or for downloads with pause and resume functions, and they’re common among developers who like to concatenate files and store them as big files. For example, if a user wants to skip to a clip of a full video or a specific frame in a video, range requests mean the application doesn’t have to serve the whole file.

Because the Backblaze B2 vault architecture separates files into shards, you get the same performance whether you call the whole file or just part of the file in a range request. Rather than wasting time learning how to optimize performance on a new platform or adjusting your code to comply with frustrating limitations, developers moving over to Backblaze B2 can utilize existing code they’re already invested in.
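The mechanics are simple: the client sends a `Range` header, and the server answers with status `206 Partial Content` and a `Content-Range` header describing the slice it returned. Here’s a minimal Python sketch of those semantics—a toy server-side function for illustration, not Backblaze’s implementation:

```python
def serve_range(data: bytes, range_header: str):
    """Return (status, headers, body) for a 'bytes=start-end' Range request."""
    units, _, spec = range_header.partition("=")
    if units != "bytes":
        return 416, {}, b""                       # unsupported range unit
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    end = int(end_s) if end_s else len(data) - 1  # open-ended: bytes=100-
    if start >= len(data):
        return 416, {"Content-Range": f"bytes */{len(data)}"}, b""
    end = min(end, len(data) - 1)
    body = data[start:end + 1]
    headers = {
        "Content-Range": f"bytes {start}-{end}/{len(data)}",
        "Content-Length": str(len(body)),
    }
    return 206, headers, body                     # 206 Partial Content

video = bytes(range(256)) * 4                     # stand-in for a 1KB media file
status, headers, clip = serve_range(video, "bytes=256-511")
print(status, headers["Content-Range"], len(clip))  # → 206 bytes 256-511/1024 256
```

On the client side, the equivalent is simply adding a `Range: bytes=256-511` header to the GET request.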

Prefetching and Performance

Prefetching is a way to “queue up” data before it’s actually required, which improves latency if that data is subsequently requested. When you’re using a CDN in front of your origin storage, this means queuing up data/files/content in the CDN before anyone asks for it.

The video streaming service Kanopy uses prefetching with popular videos they expect will see high demand in certain regions. This would violate some cloud storage providers’ terms of service, because Kanopy egresses more data than it stores. Because Kanopy gets free egress between their origin store in Backblaze B2 and their CDN, Cloudflare, the initial download cost for prefetching is $0. (Backblaze also has partnerships with other CDN providers like Fastly and bunny.net to offer zero egress.) The partnership means Kanopy doesn’t have to worry about running up egress charges, and they’re empowered to use prefetching to optimize their infrastructure.
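Conceptually, prefetching just means populating a cache before the first real request arrives. A simplified sketch, where the cache, titles, and fetch function are all made up for illustration:

```python
cache = {}

def fetch_from_origin(key: str) -> bytes:
    # Stand-in for a download from origin storage; in a CDN setup, this
    # is the transfer that free egress makes cheap to issue ahead of time.
    return f"content of {key}".encode()

def get(key: str) -> bytes:
    # Normal request path: a cache hit is fast, a miss falls back to origin.
    if key not in cache:
        cache[key] = fetch_from_origin(key)
    return cache[key]

def prefetch(keys):
    # Warm the cache ahead of expected demand (e.g., regionally popular titles).
    for key in keys:
        if key not in cache:
            cache[key] = fetch_from_origin(key)

prefetch(["popular-title-1", "popular-title-2"])
print(get("popular-title-1"))  # served from cache, no origin round trip
```

The trade-off is extra origin egress for content that may never be requested, which is why free egress between origin and CDN makes the technique practical.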

Other Metrics to Consider When Assessing Cloud Performance

In addition to TTFB, there are a number of other metrics to consider when it comes to assessing cloud performance, including availability, the provider’s service level agreements (SLAs), and durability.

Availability measures the percentage of time the data is available to be accessed. All data occasionally becomes unavailable due to regular operating procedures like system maintenance. But, obviously data availability is very important when you’re serving content around the globe 24/7. Backblaze B2, for example, commits to a 99.9% uptime with no cold delays. Commitments like uptime are usually outlined in a cloud provider’s SLA—an agreement that lists the performance metrics the cloud provider agrees to provide.

Durability measures how healthy your data is. Object storage providers express data durability as an annual percentage in nines, as in two nines before the decimal point and as many nines as warranted after the decimal point. For example, 11 nines of durability is expressed as 99.999999999%. What this means is that the storage vendor is promising that your data will remain intact while it is under their care without losing any more than 0.000000001% of your data in a year (in the case of 11 nines annual durability).
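Those nines translate into concrete expectations. A quick back-of-the-envelope calculation for both figures above (the one-billion-object data set is an illustrative assumption, not a Backblaze figure):

```python
# 99.9% availability: how much downtime does that allow per year?
uptime = 0.999
hours_per_year = 365.25 * 24
max_downtime_hours = (1 - uptime) * hours_per_year
print(f"99.9% uptime allows about {max_downtime_hours:.1f} hours of downtime/year")

# 11 nines of durability: expected objects lost per year from a large data set
durability = 0.99999999999          # 99.999999999%
objects_stored = 1_000_000_000      # one billion objects (illustrative)
expected_annual_loss = (1 - durability) * objects_stored
print(f"Expected loss: about {expected_annual_loss:.2f} objects/year")
```

In other words, 99.9% uptime allows under nine hours of downtime per year, and at 11 nines of durability, a billion stored objects works out to roughly one lost object per century on average.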

Ready to Get Started?

Understanding the different performance metrics that might impact your data can help when you’re evaluating cloud storage providers. Ready to get started with Backblaze B2? We offer the first 10GB free.

The post Cloud Performance and When It Matters appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Developers Get EC2 Alternative With Vultr Cloud Compute and Bare Metal

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/developers-get-ec2-alternative-with-vultr-cloud-compute-and-bare-metal/

The old saying, “birds of a feather flock together,” couldn’t be more true of the latest addition to the Backblaze partner network. Today, we announce a new partnership with Vultr—the largest privately-owned, global hyperscale cloud—to serve developers and businesses with infrastructure as a service that’s easier to use and lower cost than perhaps better known alternatives.

With the Backblaze + Vultr combination, developers now have the ability to connect data stored in Backblaze B2 with virtualized cloud compute and bare metal resources in Vultr—providing a compelling alternative to Amazon S3 and EC2. Each Vultr compute instance includes a fixed amount of bandwidth, meaning that developers can easily transfer data between Vultr’s 17 global locations and Backblaze at no additional cost.

In addition to replacing AWS EC2, Vultr’s complete product line also offers load balancers and block storage, which can seamlessly replace Amazon Elastic Load Balancing (ELB) and Elastic Block Store (EBS).

With this partnership, developers of any size can avoid vendor lock-in, access best of breed services, and do more with the data they have stored in the cloud with ease, including:

  • Running analysis on stored data.
  • Deploying applications and storing application data.
  • Transcoding media and provisioning origin storage for streaming and video on-demand applications.

Backblaze + Vultr: Better Together

Vultr’s ease of use and comparatively low costs have motivated more than 1.3 million developers around the world to use its service. We recognized a shared culture in Vultr, which is why we’re looking forward to seeing what our joint customers can do with this partnership. Like Backblaze, Vultr was founded with minimal outside investment. Both services are transparent, affordable, simple to start without having to talk to sales (although sales support is only a call or email away), and, above all, easy. Vultr is on a mission to simplify deployment of cloud infrastructure, and Backblaze is on a mission to simplify cloud storage.

Rather than trying to be everything for everyone, both businesses play to their strengths, and customers get the benefit of working with unconflicted partners.

Vultr’s pricing often comes in at half the cost of the big three—Amazon, Google, and Microsoft—and with Vultr’s bundled egress, we’re working together to alleviate the burden of bandwidth costs, which can be disproportionately huge for small and medium-sized businesses.

“The Backblaze-Vultr partnership means more developers can build the flexible tech stacks they want to build, without having to make needlessly tough choices between access and affordability,” said Shane Zide, VP of Global Partnerships at Vultr. “When two companies who focus on ease of use and price performance work together, the whole is greater than the sum of the parts.”

Fly Higher With Backblaze B2 + Vultr

Existing Backblaze B2 customers now have unfettered access to compute resources with Vultr, and Vultr customers can connect to astonishingly easy cloud storage with Backblaze B2. If you’re not yet a B2 Cloud Storage customer, create an account to get started in minutes. If you’re already a B2 Cloud Storage customer, click here to activate an account with Vultr.

For developers looking to do more with their data, we welcome you to join the flock. Get started with Backblaze B2 and Vultr today.

The post Developers Get EC2 Alternative With Vultr Cloud Compute and Bare Metal appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Finding a 1Up When Free Cloud Credits Run Out

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/finding-a-1up-when-free-cloud-credits-run-out/

For people in the early stages of development, a cloud storage provider that offers free credits might seem like a great deal. And diversified cloud providers do offer these kinds of promotions to help people get started with storing data: Google Cloud Free Tier and AWS Free Tier offer credits and services for a limited time, and both providers also have incentive funds for startups which can be unlocked through incubators that grant additional credits of up to tens of thousands of dollars.

Before you run off to give them a try though, it’s important to consider the long-term realities that await you on the far side of these promotions.

The reality is that once they’re used up, budget items that were zeros yesterday can become massive problems tomorrow. Twitter is littered with countless experiences of developers finding themselves surprised with an unexpected bill and the realization that they need to figure out how to navigate the complexities of their cloud provider—fast.

What to Do When You Run Out of Free Cloud Storage Credits

So, what do you do once you’re out of credits? You could try signing up with different emails to game the system, or look into getting into a different incubator for more free credits. If you plan on your app being around for a few years and succeeding, the solution of finding more credits isn’t scalable, and the process of applying to another incubator would take too long. You can always switch from Google Cloud Platform to AWS to get free credits elsewhere, but transferring data between providers almost always incurs painful egress charges.

If you’re already sure about taking your data out of your current provider, read ahead to the section titled “Cloud to Cloud Migration” to learn how transferring your data can be easier and faster than you think.

Because chasing free credits won’t work forever, this post offers three paths for navigating your cloud bills after free tiers expire. It covers:

  • Staying with the same provider. Once you run out of free credits, you can optimize your storage instances and continue using (and paying for) the same provider.
  • Exploring multi-cloud options. You can port some of your data to another solution and take advantage of the freedom of a multi-cloud strategy.
  • Choosing another provider. You can transfer all of your data to a different cloud that better suits your needs.

Path 1: Stick With Your Current Cloud Provider

If you’re running out of promotional credits with your current provider, your first path is to just continue using their storage services. Many people see this as your only option because of the frighteningly high egress fees you’d face if you try to leave. If you choose to stay with the same provider, be sure to review and account for all of the instances you’ve spun up.

Here’s an example of a bill that one developer faced after their credits expired: an unexpected $2,700 charge, almost entirely from egress costs. Looking closer at their experience, the spike in charges was due to transferring 30TB of data out of the cloud. The first 1GB of data transferred out is free; egress then costs $0.09 per gigabyte for the first 10TB and $0.085 per gigabyte for the next 40TB. Doing the math, that’s:

$0.090/GB x 10,239 GB = $921
$0.085/GB x 20,414 GB = $1,735

for a total of roughly $2,656 in egress charges alone.
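The tiered arithmetic above can be captured in a small calculator. This is a hedged sketch: the tier boundaries and rates below mirror the figures quoted in this example, not a current or complete price list.

```python
def egress_cost(gb_transferred):
    """Estimate tiered egress charges for a single transfer, using the
    rates quoted above (an assumption, not a live price list): the first
    1 GB free, the next 10 TB at $0.09/GB, the next 40 TB at $0.085/GB.
    """
    tiers = [
        (1, 0.0),         # first 1 GB is free
        (10_239, 0.09),   # remainder of the first 10 TB
        (40_960, 0.085),  # next 40 TB
    ]
    cost, remaining = 0.0, gb_transferred
    for tier_size, rate in tiers:
        billable = min(remaining, tier_size)
        cost += billable * rate
        remaining -= billable
        if remaining <= 0:
            break
    return cost

# A 30 TB transfer (30,720 GB) comes out to roughly $2,660.
print(f"${egress_cost(30_720):,.2f}")
```

Running the numbers this way before a large transfer turns the egress line item into a known quantity instead of a surprise.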

Choosing to stay with your current cloud provider is a straightforward path, but it’s not necessarily the easiest or least expensive option, which is why it’s important to conduct a thorough audit of the current cloud services you have in use to optimize your cloud spend.

Optimizing Your Current Cloud Storage Solution

Over time, cloud infrastructure tends to become more complex and varied, and your cloud storage bills follow the same pattern. Cloud pricing transparency in general is an issue with most diversified providers—in short: It’s hard to understand what you’re paying for, and when. If you haven’t seen a comparison yet, a breakdown contrasting storage providers is shared in this post.

Many users find that AWS and Google Cloud are so complex that they turn to services that help them monitor and optimize their cloud spend. These cost management services charge based on a percentage of your AWS spend. For a startup with limited resources, paying for these professional services can be challenging, but manually predicting cloud costs and optimizing spending is also difficult and time-consuming.

The takeaway for sticking with your current provider: Be a budget hawk for every fee you may be at risk of incurring, and ensure your development keeps you from unwittingly racking up heavy fees.

Path 2: Take a Multi-cloud Approach

For some developers, switching to a different cloud after your free credits expire isn’t straightforward because your code can’t be easily separated from your cloud provider. In this case, a multi-cloud approach can achieve the necessary price point while maintaining the required level of service.

Short term, you can mitigate your cloud bill by immediately beginning to port any data you generate going forward to a more affordable solution. Even if the process of migrating your existing data is challenging, this move will stop your current bill from ballooning.

Beyond mitigation, a multi-cloud strategy gives companies the freedom to use the best possible cloud service for each workload. It brings several other benefits as well:

  • Redundancy: Some major providers have faced outages recently. A multi-cloud strategy allows you to have a backup of your data to continue serving your customers even if your primary cloud provider goes down.
  • Functionality: With so many providers introducing new features and services, it’s unlikely that a single cloud provider will meet all of your needs. With a multi-cloud approach, you can pick and choose the best services from each provider. Multinational companies can also optimize for their particular geographical regions.
  • Flexibility: A diverse cloud infrastructure helps you avoid vendor lock-in if you outgrow a single cloud provider.
  • Cost: You may find that one cloud provider offers a lower price for compute and another for storage. A multi-cloud strategy allows you to pick and choose which works best for your budget.

The takeaway for pursuing multi-cloud: It might not solve your existing bill, but it will mitigate your exposure to additional fees going forward. And it offers the side benefit of providing a best-of-breed approach to your development tech stack.

Path 3: Find a New Cloud Provider

Finally, you can choose to move all of your data to a different cloud storage provider. We recommend taking a long-term approach: Look for cloud storage that allows you to scale with the least amount of friction while continuing to support everything you need for a good customer experience in your app. You’ll want to consider cost, usability, and solutions when looking for a new provider.

Cost

Many cloud providers use a multi-tier approach, which can become complex as your business starts to scale its cloud infrastructure. Switching to a provider that has single-tier pricing helps businesses planning for growth predict their cloud storage costs and optimize their spend, saving time and money for future opportunities. You can use this pricing calculator to check the storage costs of Backblaze B2 Cloud Storage against AWS, Azure, and Google Cloud.

One example of a startup that saved money and was able to grow their business by switching to another storage provider is CloudSpot, a SaaS photography platform. They had initially gotten their business off the ground with the help of a startup incubator. Then in 2019, their AWS storage costs skyrocketed, but their team felt locked in to using Amazon.

When they looked at other cloud providers and eventually transferred their data out of AWS, they were able to save on storage costs that allowed them to reintroduce services they had previously been forced to shut down due to their AWS bill. Reviving these services made an immediate impact on customer acquisition and recurring revenue.

Usability

Time spent trying to navigate a complicated platform is a significant cost to business. Aiden Korotkin of AK Productions, a full-service video production company based in Washington, D.C., experienced this first hand. Korotkin initially stored his client data in Google Cloud because the platform had offered him a promotional credit. When the credits ran out in about a year, he found himself frustrated with the inefficiency, privacy concerns, and overall complexity of Google Cloud.

Korotkin chose to switch to Backblaze B2 Cloud Storage with the help of solution engineers that helped him figure out the best storage solution for his business. After quickly and seamlessly transferring his first 12TB in less than a day, he noticed a significant difference from using Google Cloud. “If I had to estimate, I was spending between 30 minutes to an hour trying to figure out simple tasks on Google (e.g. setting up a new application key, or syncing to a third-party source). On Backblaze it literally takes me five minutes,” he emphasized.

Integrations

Workflow integrations can make cloud storage easier to use and provide additional features. By selecting multiple best-of-breed providers, you can achieve better functionality with significantly reduced price and complexity.

Content delivery network (CDN) partnerships with Cloudflare and Fastly allow developers using services like Backblaze B2 to take advantage of free egress between the two services. Game developers can serve their games to users without paying egress between their origin storage and their CDN, and media management solutions can integrate directly with cloud storage to make media assets easy to find, sort, and pull into a new project or editing tool. Take a look at other solutions integrated with cloud storage that can support your workflows.

Cloud to Cloud Migration

After choosing a new cloud provider, you can plan your data migration. Your data may be spread out across multiple buckets, service providers, or different storage tiers—so your first task is discovering where your data is and what can and can’t move. Once you’re ready, there is a range of solutions for moving your data, but when it comes to moving between cloud services, a data migration tool like Flexify.IO can help make things a lot easier and faster.

Instead of manually offloading static and production data from your current cloud storage provider and reuploading it into your new provider, Flexify.IO reads the data from the source storage and writes it to the destination storage via inter-cloud bandwidth. Flexify.IO achieves fast and secure data migration at cloud-native speeds because the data transfer happens within the cloud environment.
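The read-from-source, write-to-destination pattern that such tools automate can be sketched in a few lines. This is a simplified illustration with an in-memory stand-in for a bucket; the `list_keys`/`get`/`put` methods are hypothetical names for this sketch, not the API of Flexify.IO or any particular SDK.

```python
class MemoryBucket:
    """In-memory stand-in for a cloud bucket. A real migration would wrap
    two S3-compatible SDK clients behind the same three hypothetical
    methods instead of a dict."""

    def __init__(self, objects=None):
        self.objects = dict(objects or {})

    def list_keys(self):
        return list(self.objects)

    def get(self, key):
        return self.objects[key]

    def put(self, key, data):
        self.objects[key] = data


def migrate(source, destination):
    """Read every object from the source and write it to the destination,
    the same source-to-destination copy a managed migration service
    performs over inter-cloud bandwidth."""
    for key in source.list_keys():
        destination.put(key, source.get(key))


source = MemoryBucket({"photos/a.jpg": b"...", "photos/b.jpg": b"..."})
destination = MemoryBucket()
migrate(source, destination)
```

A managed service adds the pieces this sketch omits: parallel transfers, retries, verification, and keeping the source live for customer traffic during the copy.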

Supercharged Data Migration with Flexify.IO

For developers with customer-facing applications, it’s especially important that customers still retain access to data during the migration from one cloud provider to another. When CloudSpot moved about 700TB of data from AWS to Backblaze B2 in just six days with help from Flexify.IO, customers were actually still uploading images to their Amazon S3 buckets. The migration process was able to support both environments and allowed them to ensure everything worked properly. It was also necessary because downtime was out of the question—customers access their data so frequently that one of CloudSpot’s galleries is accessed every one or two seconds.

What’s Next?

If you’re interested in exploring a different cloud storage service for your solution, you can easily sign up today, or contact us for more information on how to run a free POC or just to begin transferring your data out of your current cloud provider.

The post Finding a 1Up When Free Cloud Credits Run Out appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Development Simplified: CORS Support for Backblaze S3 Compatible APIs

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/development-simplified-cors-support-for-backblaze-s3-compatible-apis/

Since its inception in 2009, Cross-Origin Resource Sharing (CORS) has offered developers a convenient way of bypassing an inherently secure default setting—namely the same-origin policy (SOP). Allowing selective cross-origin requests via CORS has saved developers countless hours and money by reducing maintenance costs and code complexity. And now with CORS support for Backblaze’s recently launched S3 Compatible APIs, developers can continue to scale their experience without needing a complete code overhaul.

If you haven’t been able to adopt Backblaze B2 Cloud Storage in your development environment because of issues related to CORS, we hope this latest release gives you an excuse to try it out. Whether you are using our B2 Native APIs or S3 Compatible APIs, CORS support allows you to build rich client-side web applications with Backblaze B2. With the simplicity and affordability this service offers, you can put your time and money back to work on what’s really important: serving end users.

Top Three Reasons to Enable CORS

B2 Cloud Storage is popular among agile teams and developers who want to take advantage of easy to use and affordable cloud storage while continuing to seamlessly support their applications and workflows with minimal to no code changes. With Backblaze S3 Compatible APIs, pointing to Backblaze B2 for storage is dead simple. But if CORS is key to your workflow, there are three additional compelling reasons for you to test it out today:

  • Compatible storage with no re-coding. By enabling CORS rules for your custom web application or SaaS service that uses our S3 Compatible APIs, your development team can serve and upload data via B2 Cloud Storage without any additional coding or reconfiguring required. This will save you valuable development time as you continue to deliver a robust experience for your end users.
  • Seamless integration with your plugins. Even if you don’t choose B2 Cloud Storage as the primary backend for your business but you do use it for discrete plugins or content-serving sites, enabling CORS rules for those applications will come in handy. Developers who configure PHP, NodeJS, and WordPress plugins via the S3 Compatible APIs to upload or download files from web applications can do so easily by enabling CORS rules in their Backblaze B2 Buckets. With CORS support enabled, these plugins work seamlessly.
  • Serving your web assets with ease. Consider an even simpler scenario in which you want to serve a custom web font from your B2 Cloud Storage Bucket. Most modern browsers will require a preflight check for loading the font. By configuring the CORS rules in that bucket to allow the font to be served in the origin(s) of your choice, you will be able to use your custom font seamlessly across your domains from a single source.

Whether you are relying on B2 Cloud Storage as your primary cloud infrastructure for your web application or simply using it to serve cross-origin assets such as fonts or images, enabling CORS rules in your buckets will allow for proper and secure resource sharing.

Enabling CORS Made Simple and Fast

If your web page or application is hosted in a different origin from the images, fonts, videos, or stylesheets stored in B2 Cloud Storage, you need to add CORS rules to your bucket to achieve proper functionality. Thankfully, CORS rules are easy to enable and can be found in your B2 Cloud Storage settings:

You will have the option of sharing everything in your bucket with every origin, select origins, or defining custom rules with the Backblaze B2 CLI.
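As a concrete illustration, here is what a minimal CORS configuration might look like when applied through the S3 Compatible APIs. The document shape is the standard S3 `CORSConfiguration`; the origin, methods, and max age below are placeholder assumptions for a read-only web app, not recommended values.

```python
# Placeholder CORS configuration for a bucket serving read-only assets
# (fonts, images) to a single web origin.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://www.example.com"],  # your app's origin
            "AllowedMethods": ["GET", "HEAD"],              # read-only access
            "AllowedHeaders": ["*"],
            "ExposeHeaders": ["ETag"],
            "MaxAgeSeconds": 3600,  # let browsers cache preflight results
        }
    ]
}

# With an S3-compatible client (e.g., boto3 pointed at your bucket's S3
# endpoint), the rules would be applied roughly like this:
#   s3.put_bucket_cors(Bucket="my-bucket",
#                      CORSConfiguration=cors_configuration)
```

Uploads from the browser would additionally need `PUT` (and typically `POST`) in `AllowedMethods`.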

Learning More and Getting Started

If you’re dying to learn more about the fundamentals of CORS as well as additional specifics about how it works with B2 Cloud Storage, you can dig into this informative Knowledge Base article. If you’re just pumped that CORS is now easily available in our S3 Compatible APIs suite, well then, you’re probably already on your way to a smoother, more reasonably priced development experience. If you’ve got a question or a response, we always love to hear from you in the comments or you can contact us for assistance.

The post Development Simplified: CORS Support for Backblaze S3 Compatible APIs appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Free Cloud Storage: What’s the Catch?

Post Syndicated from Amrit Singh original https://www.backblaze.com/blog/free-cloud-storage-whats-the-catch/

Major cloud storage providers including Amazon and Google offer free tiers of storage or promotional credits to incentivize developers and startups to begin storing data on their clouds. While this may seem attractive to early-stage businesses looking for an affordable jumpstart, it’s important to consider the long-term realities of scaling your business once you’re beyond the threshold of these freebie offers.

As your business evolves, so does how you utilize, store, and manage your data. Aiden Korotkin, the founder of AK Productions—a full-service video production company based in Washington, D.C.—learned this the hard way. He was quick to choose Google Cloud Storage when he was getting started, only to realize it wasn’t the right fit as his business grew.

At the time, Google offered a promotional credit. “That lasted about a year,” Korotkin said, and when the credit expired, it forced him to take a closer look at the cost efficiency and security of the Google Cloud Platform. “I realized that it’s super confusing,” he explained, “you have a lot of options, but it seems like chaos unless you know exactly what you’re doing.”

Making the Cloud Storage Decision: How Much Will “Free” Cost You?

The overhead of managing your cloud platform is only one of the factors to consider when planning your cloud infrastructure rollout. Complexity, predictability, and retrieval are all things you should keep in mind when picking the right solution for your business case. Evaluating all of these factors helps you understand the true cost of ownership and the value of the platform over time.

We hope this guide to three key factors for cloud storage selection will help you decide the right, next best step for you and your growing business.

Factor 1: Complexity

The promotional offers that many cloud storage providers boast are fairly straightforward and clear: Google Cloud Free Tier offers a three-month free trial with a $300 credit to use on any Google Cloud services. AWS Free Tier offers various free services, including 5GB of S3 storage for 12 months. Both providers also have incentive funds for startups, which can be unlocked through incubators or VCs that grant additional credits of up to tens of thousands of dollars. For next to nothing, your data is on its way and you can move on.

But while it may be tempting to jump on one of these offers, it’s worth spending some time learning the breakdown of data storage and utilization costs for each platform, as well as understanding any longer-term or service-related fees you may be on the hook for. Decoding your monthly bill or dealing with surprising service charges when you’re managing a few gigabytes or terabytes is doable, but as you grow, navigating the tiered pricing structures that many of the legacy cloud providers operate under becomes quite complicated.

That’s a bit of an understatement. The reality of the situation is that a whole industry of consultants and businesses has sprung up around this issue. These third-party vendors specialize in helping businesses understand and optimize their cloud invoices from Amazon AWS and Google Cloud Storage. When you need to hire another business just to understand what you’re paying for, it’s time to ask some questions about whether they’re right for you.

Tiered pricing may make sense for businesses who are capable of optimizing their infrastructure to a T. But how many startups does that describe? Most early-stage entrepreneurs do not have the resources to undertake this feat of planning and engineering and find themselves struggling to keep their cloud costs down when they graduate from free plans.

As if the complexity of pricing tables wasn’t bad enough, understanding what you can do with your data once it is stored can be highly confusing, too. Consider egress fees: These are the fees you’ll pay to download your data from the cloud. Most major cloud providers including Amazon and Google charge high egress fees ranging anywhere from $90 to $120+ per terabyte. Attempting to gauge just how expensive egress will be for you can feel impossible. As a result, businesses often begin storing data on these cloud platforms with ease, but as their data sets grow, they find themselves unable to leave due to the high egress costs.

With Amazon AWS, many businesses find that the complexity transcends pricing and stretches into the functionalities provided by the platform. Without the right tools and resources, you may spend hours or days configuring your environment. Tristan Pelligrino, co-founder of Motion, a B2B content marketing agency, spent significant amounts of time simply setting up and onboarding new users. “The interface is very complex. It felt like I was recreating the wheel every time I set up a new user experience,” Pelligrino said. “It was frustrating, but someone had to do it.” The problem being, every moment he spent on AWS was time he wasn’t investing in his creative work.

Not having 360-degree awareness of your platform’s functionality and how to properly configure it may seem like a minor issue, but sometimes the complexity can obscure vital information, like when you discover that your backup solution isn’t actually backing up your data. The team at Crisp Video Group, a creative agency serving law firms, scrambled to recover what they could when they realized that their Amazon S3 configuration had failed to back up 109 days’ worth of data.

At the end of the day, understanding, and oftentimes, avoiding complexity is key in the cloud storage selection process. If it’s not crystal clear what you need and what you’re going to be paying, you may want to consider how much an initial dose of “free” will cost you in the long run. Which leads to factor two.

Factor 2: Predictability

As an early-stage startup, it’s hard to predict how quickly you’re going to grow, which makes it even harder to predict your cloud costs on a monthly or annual basis. Most cloud providers will make it extremely easy to spin up new servers and store data, but as you scale, it’s important to have a clear idea of the costs involved so that you can attain predictable growth. Without this control, cloud storage costs could spiral, significantly hurting your OpEx margins and potentially making useful data inaccessible or unusable in make or break situations.

Predicting your cloud storage costs should be simple, in theory. You have three main dimensions: storage (how much data you store), download (the fee to get your data out of the cloud), and transactions (“stuff” you might do to your data inside the cloud). Yet most vendors continue to make it extremely difficult to understand your monthly bill which adds unnecessary strain when budgeting and forecasting your cloud spend. According to ZDNet, “37% of IT executives found their cloud storage costs to be unpredictable.”
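Those three dimensions lend themselves to a back-of-the-envelope model. The sketch below is illustrative only: the rates are placeholders you would fill in from your provider’s price list, and real bills layer tiering, minimum-retention fees, and per-class surcharges on top of this simple sum.

```python
def monthly_cost(stored_gb, downloaded_gb, transactions,
                 storage_rate_gb, egress_rate_gb, txn_rate):
    """Simple monthly estimate from the three dimensions described above:
    storage, download, and transactions. All rates are placeholder
    assumptions, not any provider's actual pricing."""
    return (stored_gb * storage_rate_gb
            + downloaded_gb * egress_rate_gb
            + transactions * txn_rate)

# Example: 1 TB stored, 100 GB downloaded, 10,000 transactions,
# at sample (assumed) rates per GB and per transaction.
estimate = monthly_cost(1_024, 100, 10_000,
                        storage_rate_gb=0.005,
                        egress_rate_gb=0.01,
                        txn_rate=0.0000004)
```

If a provider’s pricing can’t be reduced to something close to this, that opacity is itself a signal worth weighing.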

This was the case for Gavin Wade, the founder & CEO of the SaaS photography platform CloudSpot, who realized that his business’s 700TB+ of data stored on Amazon S3 was eating into their OpEx margins. Even more discouraging was the realization that it would be an even bigger financial undertaking to move the data to another service.

“We had a few internal conversations where we concluded that we were stuck with Amazon. That’s never a good feeling in business,” Wade mentioned, as his team looked for other places to affect changes and cut costs.

Being able to accurately project your cash flow is essential to being able to take advantage of other opportunities as they arise. If you make the choice early on to lock into a provider that doesn’t offer predictability on a budget line, you won’t have the clarity you need when it’s most important.

Factor 3: Retrieval

Along with tiered pricing, most cloud providers offer several storage classes meant for different use cases. Again, this may be useful if you have optimized accordingly, but more often than not, these storage classes add unnecessary complexity. This is especially true when it comes to the timing and expense of getting your data back, or, retrieval.

Amazon S3 offers the following storage classes:

  • S3 Standard: Active data that needs to be accessed frequently and quickly.
  • S3 Intelligent-Tiering: Moves data automatically across tiers depending on usage.
  • S3 Standard-Infrequent Access: Data that is accessed less frequently but requires rapid access when needed.
  • S3 Glacier: Long-term archive with longer retrieval times, ranging from minutes to hours.
  • S3 Glacier Deep Archive: Long-term archive comparable to magnetic tape libraries, with the slowest retrieval times.
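One way to reason about the classes above is as a trade-off between access frequency and retrieval deadline. The heuristic below is a hedged sketch: the thresholds are illustrative assumptions rather than AWS guidance, though the return values are the storage-class identifiers the S3 API actually accepts.

```python
def pick_storage_class(days_between_accesses, retrieval_deadline_hours):
    """Rough heuristic mapping access patterns to S3 storage classes.
    Thresholds are illustrative assumptions, not AWS recommendations."""
    if days_between_accesses < 30:
        return "STANDARD"          # hot data, accessed routinely
    if retrieval_deadline_hours < 1:
        return "STANDARD_IA"       # infrequent access, but rapid retrieval
    if retrieval_deadline_hours < 12:
        return "GLACIER"           # archive; retrieval in minutes to hours
    return "DEEP_ARCHIVE"          # cheapest to store, slowest to restore
```

The catch described in this section is the second argument: if you can’t honestly predict your retrieval deadline, the cheaper classes can cost you more than they save.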

You’ll typically see the terms “hot” and “cold” storage used to describe the nature in which the data is stored and accessed. Hot storage, in this case, S3 Standard, stores data that needs to be accessed right away. Most cloud providers including Amazon and Google charge a premium for hot data because it is resource-intensive.

Cold storage, such as Amazon’s S3 Glacier and S3 Glacier Deep Archive classes, stores data that is accessed less frequently and doesn’t require the fast access of warmer or hot data. This tier is commonly used for archival purposes. Though prices for cold storage systems are lower than those for hot storage, they often incur high retrieval costs, and access to the data in cold storage typically requires patience and planning. If you are unable to predict when data will be needed and have time-sensitive retrieval requirements, cold storage may not be suitable for your needs.

As you’re starting out, it can be easy to convince yourself that some data will be less important to you in the long run, especially when making that decision locks you into a cheaper storage tier. But it’s easy to underestimate just how valuable readily accessible data can be in the long run. And whatever decision you make in this regard will only compound over time as your data expands.

Even larger organizations with years of experience in innovation, like Complex Networks, found Amazon S3’s tiered structure problematic as they scaled their production efforts. “S3 has multiple storage classes, each with its associated costs, fees, and wait times,” said Jermaine Harrell, Manager of Media Infrastructure & Technology at Complex Networks. Working with Amazon Glacier to archive content, they found that the long retrieval times and ballooning retrieval costs made the solution untenable for their specific use case.

A realistic approach to your retrieval needs is an essential day one decision if you’re in a data intensive business.

Finding the Right Platform

It helps to read the fine print to make sure there are no hidden costs or minimum duration fees associated with the cloud platform you are considering. Find a platform that is simple when it comes to pricing and billing—this will be helpful in the long run as you scale your cloud infrastructure.

As you grow, budgeting for your cloud spend should be simple, so it’s best to avoid dealing with tiered, complicated pricing structures if you can. A cloud service with a flat pricing structure will allow you to forecast your OpEx spend, without needing to scan pages of pricing tables and charts.

Lastly, if you need to tap into your data at any given moment, make sure it’s readily available without having to pay premiums for cold storage retrieval and wait days just to access your data. Cold storage options are becoming less and less useful for most modern organizations that need to tap into their data at any given moment.

Though it may be tempting to take up Amazon or Google Cloud for their incentive programs and promotional credits, the perceived price and true value of their platforms may not be apparent upon first glance. Most early-stage startups do not have the time, resources, and money to continually upkeep and reevaluate their cloud services. So even before they begin to scale, it’s important to choose a service that is transparent, predictable, and allows you to access all of your data when you need it.

The post Free Cloud Storage: What’s the Catch? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.