Tag Archives: Egress

Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/cache-reserve-open-beta/

Earlier this year, we introduced Cache Reserve. Cache Reserve helps users serve content from Cloudflare’s cache for longer by using R2’s persistent data storage. Serving content from Cloudflare’s cache benefits website operators by reducing their bills for egress fees from origins, while also benefiting website visitors by having content load faster.

Cache Reserve has been in closed beta for a few months while we’ve collected feedback from our initial users and continued to develop the product. After several rounds of iterating on this feedback, today we’re extremely excited to announce that Cache Reserve is graduating to open beta – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting.

If you want to see the benefits of Cache Reserve for yourself and give us some feedback, you can go to the Cloudflare dashboard, navigate to the Caching section, and enable Cache Reserve with a single button.

How does Cache Reserve fit into the larger picture?

Content served from Cloudflare’s cache begins its journey at an origin server, where the content is hosted. When a request reaches the origin, the origin compiles the content needed for the response and sends it back to the visitor.

The distance between the visitor and the origin can affect performance, since the response may have to travel a long way. This journey is also where the user is charged a fee to move the content from where it's stored on the origin to the visitor requesting it. These fees, known as "bandwidth" or "egress" fees, are familiar monthly line items on the invoices of users who host their content on cloud providers.

Cloudflare’s CDN sits between the origin and visitor and evaluates the origin’s response to see if it can be cached. If it can be added to Cloudflare’s cache, then the next time a request comes in for that content, Cloudflare can respond with the cached asset, which means there’s no need to send the request to the origin, reducing egress fees for our customers. We also cache content in data centers close to the visitor to improve the performance and cut down on the transit time for a response.

To help assets remain cached for longer, a few years ago we introduced Tiered Cache, which organizes all of our 250+ global data centers into a hierarchy of lower tiers (generally closer to visitors) and upper tiers (generally closer to origins). When a request for content cannot be served from a lower tier's cache, the upper tier is checked before going to the origin for a fresh copy of the content. Organizing our data centers into tiers helps us cache content in the right places for longer by putting multiple caches between the visitor's request and the origin.
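
To make that lookup order concrete, here is a minimal sketch of the tiered check, with the topology simplified to an ordered list of tiers (the real hierarchy is more elaborate than this):

```ts
// A tier is anything that can answer a cache lookup. Tiers are ordered
// lower (closest to the visitor) to upper (closest to the origin).
type Tier = {
  name: string;
  lookup(key: string): Promise<Response | null>;
};

async function tieredFetch(
  key: string,
  tiers: Tier[], // e.g. [lowerTier, upperTier]
  origin: (key: string) => Promise<Response>,
): Promise<Response> {
  for (const tier of tiers) {
    const hit = await tier.lookup(key);
    // Served from cache: the origin never sees the request.
    if (hit) return hit;
  }
  // Miss in every tier: fetch a fresh copy from the origin.
  // This is the path where egress fees accrue.
  return origin(key);
}
```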

Why do cache misses occur?
Misses occur when Cloudflare cannot serve the content from cache and must go back to the origin to retrieve a fresh copy. This can happen when a customer sets the cache-control time to signify when the content is out of date (stale) and needs to be revalidated. The other element at play – how long the network wants content to remain cached – is a bit more complicated and can fluctuate depending on eviction criteria.

CDNs must consider whether they need to evict content early to optimize storage of other assets when cache space is full. At Cloudflare, we prioritize eviction based on how recently a piece of cached content was requested by using an algorithm called “least recently used” or LRU. This means that even if cache-control signifies that a piece of content should be cached for many days, we may still need to evict it earlier (if it is least-requested in that cache) to cache more popular content.
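
As a rough illustration of the eviction policy (not Cloudflare's production code), here is a minimal LRU cache in TypeScript. A Map preserves insertion order, so the first key is always the least recently used:

```ts
// Minimal LRU sketch: re-inserting a key on access keeps the Map ordered
// from least to most recently used.
class LRUCache<K, V> {
  private entries = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.entries.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark this entry as most recently used.
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.capacity) {
      // Evict the least recently used entry, even if its
      // cache-control TTL has not expired yet.
      const lru = this.entries.keys().next().value as K;
      this.entries.delete(lru);
    }
  }
}
```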

This works well for most customers and website visitors, but it is often a point of confusion for people wondering why content unexpectedly returns a miss. If eviction did not happen, content would have to be cached in data centers further away from the visitors requesting it, harming the performance of the asset and injecting inefficiencies into how Cloudflare’s network operates.

Some customers, however, have large libraries of content that may not be requested for long periods of time. Using the traditional cache, these assets would likely be evicted and, if requested again, served from the origin. Keeping assets in cache requires that they remain popular on the Internet, which is hard given that what’s popular or current is constantly changing. Evicting content that becomes cold means additional origin egress for the customer if that content needs to be pulled repeatedly from the origin.

Enter Cache Reserve
This is where Cache Reserve shines. Cache Reserve serves as the ultimate upper-tier data center for content that might otherwise be evicted from cache. Once admitted to Cache Reserve, content can be stored for a much longer period of time: 30 days by default. If another request comes in during that period, it can be extended for another 30 days (and so on) or until cache-control signifies that we should no longer serve that content from cache. Cache Reserve serves as a safety net to backstop all cacheable content, so customers don’t have to worry about unwanted cache eviction and origin egress fees.

How does Cache Reserve save egress?

The promise of Cache Reserve is that hit ratios will increase and egress fees from origins will decrease for long-tail content that is rarely requested and may be evicted from cache.

However, there are additional egress savings built into the product. For example, objects are written to Cache Reserve on misses: when we fetch content from the origin on a cache miss, we use it both to respond to the request and to write the asset to Cache Reserve, so customers won’t incur egress for serving that asset again for a long time.

Cache Reserve is designed to be used with tiered cache enabled for maximum origin shielding. When there is a cache miss in both the lower and upper tiers, Cache Reserve is checked and if there is a hit, the response will be cached in both the lower and upper tier on its way back to the visitor without the origin needing to see the request or serve any additional data.
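
To make the flow concrete, here is a sketch of that read-through pattern expressed with public Workers APIs (types from @cloudflare/workers-types). The RESERVE R2 binding is hypothetical, and this illustrates the behavior rather than Cache Reserve's actual implementation:

```ts
// Read-through sketch: edge cache first, then the reserve layer, then
// the origin. Writes back to both layers happen off the critical path.
interface Env {
  RESERVE: R2Bucket; // hypothetical R2 bucket binding
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // 1. Edge cache (the lower/upper tiers in the real system).
    const cached = await cache.match(request);
    if (cached) return cached;

    // 2. Reserve layer: persistent storage checked before the origin.
    const key = new URL(request.url).pathname;
    const reserved = await env.RESERVE.get(key);
    if (reserved !== null) {
      const response = new Response(reserved.body, {
        headers: { 'Cache-Control': 'public, max-age=86400' },
      });
      // Re-populate the edge cache on the way back to the visitor.
      ctx.waitUntil(cache.put(request, response.clone()));
      return response;
    }

    // 3. Miss everywhere: fetch from the origin, respond immediately,
    //    and persist to the reserve in the background.
    const origin = await fetch(request);
    ctx.waitUntil(
      origin.clone().arrayBuffer().then((body) => env.RESERVE.put(key, body)),
    );
    ctx.waitUntil(cache.put(request, origin.clone()));
    return origin;
  },
};
```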

Cache Reserve accomplishes these origin egress savings for a low price, based on R2 costs. For more information on Cache Reserve prices and operations, please see the documentation here.

Scaling Cache Reserve on Cloudflare’s developer platform

When we first announced Cache Reserve, the response was overwhelming. Over 20,000 users wanted access to the beta, and we quickly made several interesting discoveries about how people wanted to use Cache Reserve.

The first big challenge we found was that users hated egress fees as much as we do and wanted to make sure that as much content as possible was in Cache Reserve. During the closed beta we saw sustained usage above 8,000 PUT operations per second and objects served at a rate of over 3,000 GETs per second. We were also caching around 600 TB for some of our large customers. We knew we wanted to open the product up to anyone who wanted to use it, and in order to scale to meet this demand we needed to make several changes quickly. So we turned to Cloudflare’s developer platform.

Cache Reserve stores data on R2 using its S3-compatible API. Under the hood, R2 handles all the complexity of an object storage system using our performant and scalable developer primitives: Workers and Durable Objects. We decided to use developer platform tools because they would allow us to implement different scaling strategies quickly. The advantage of building on the Cloudflare developer platform is that the Cache Reserve team could easily experiment with how best to distribute the high load we were seeing, all while shielding users from the complexity of how Cache Reserve works.

With the single press of a button, Cache Reserve performs these functions:

  • On a cache miss, Pingora (our new L7 proxy) reaches out to the origin for the content and writes the response to R2. This happens while the content continues its trip back to the visitor (thereby avoiding needless latency).
  • Inside R2, a Worker writes the content to R2’s persistent data storage while also keeping track of the important metadata that Pingora sends about the object (like origin headers, freshness values, and retention information) using Durable Objects storage.
  • When the content is next requested, Pingora looks up where the data is stored in R2 by computing the cache key. The cache key’s hash determines both the object name in R2 and which bucket it was written to, as each zone’s assets are sharded across multiple buckets to distribute load (a sketch of this lookup follows the list).
  • Once found, Pingora attaches the relevant metadata and sends the content from R2 to the nearest upper-tier to be cached, then to the lower-tier and finally back to the visitor.
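
As a toy illustration of that cache key lookup (the real sharding scheme is internal to Cache Reserve; the bucket count and naming below are invented):

```ts
// Illustrative only: a hash of the cache key picks both the object name
// and the bucket shard. Runs in Workers or modern Node (global crypto).
async function locateInReserve(zone: string, url: string, shards = 8) {
  const keyBytes = new TextEncoder().encode(`${zone}:${url}`);
  const digest = await crypto.subtle.digest('SHA-256', keyBytes);
  const hash = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
  return {
    objectName: hash, // name of the object within the bucket
    // Low bits of the hash choose one of N buckets, spreading a zone's
    // assets across shards to distribute load.
    bucket: `reserve-${zone}-${parseInt(hash.slice(0, 8), 16) % shards}`,
  };
}
```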

This is magic! None of the above needs to be managed by the user. By bringing together R2, Workers, Durable Objects, Pingora, and Tiered Cache, we were able to quickly build and make changes to Cache Reserve to scale as needed.

What’s next for Cache Reserve

In addition to the work we’ve done to scale Cache Reserve, opening the product up also opens the door to more features and integrations across Cloudflare. We plan on putting additional analytics and metrics in the hands of Cache Reserve users, so they know precisely what’s in Cache Reserve and how much egress it’s saving them. We also plan on building out more complex integrations with R2 so if customers want to begin managing their storage, they are able to easily make that transition. Finally, we’re going to be looking into providing more options for customers to control precisely what is eligible for Cache Reserve. These features represent just the beginning for how customers will control and customize their cache on Cloudflare.

What’s some of the feedback been so far?

As a long time Cloudflare customer, we were eager to deploy Cache Reserve to provide cost savings and improved performance for our end users. Ensuring our application always performs optimally for our global partners and delivery riders is a primary focus of Delivery Hero. With Cache Reserve our cache hit ratio improved by 5% enabling us to scale back our infrastructure and simplify what is needed to operate our global site and provide additional cost savings.
Wai Hang Tang, Director of Engineering at Delivery Hero

Anthology uses Cloudflare’s global cache to drastically improve the performance of content for our end users at schools and universities. By pushing a single button to enable Cache Reserve, we were able to provide a great experience for teachers and students and reduce two-thirds of our daily egress traffic.
Paul Pearcy, Senior Staff Engineer at Anthology

At Enjoei we’re always looking for ways to help make our end-user sites faster and more efficient. By using Cloudflare Cache Reserve, we were able to drastically improve our cache hit ratio by more than 10% which reduced our origin egress costs. Cache Reserve also improved the performance for many of our merchants’ sites in South America, which improved their SEO and discoverability across the Internet (Google, Criteo, Facebook, Tiktok)– and it took no time to set it up.
Elomar Correia, Head of DevOps SRE | Enterprise Solutions Architect at Enjoei

In the live events industry, the size and demand for our cacheable content can be extremely volatile, which causes unpredictable swings in our egress fees. Additionally, keeping data as close to our users as possible is critical for customer experience in the high traffic and low bandwidth scenarios our products are used in, such as conventions and music festivals. Cache Reserve helps us mitigate both of these problems with minimal impact on our engineering teams, giving us more predictable costs and lower latency than existing solutions.
Jarrett Hawrylak, VP of Engineering | Enterprise Ticketing at Patron Technology

How can I use it today?

As of today, Cache Reserve is in open beta, meaning that it’s available to anyone who wants to use it.

To use Cache Reserve:

  • Simply go to the Caching tile in the dashboard.
  • Navigate to the Cache Reserve page and push the enable data sync button (or purchase button).

Enterprise Customers can work with their Cloudflare Account team to access Cache Reserve.

Customers can ensure Cache Reserve is working by looking at the baseline metrics regarding how much data is cached and how many operations we’ve seen in the Cache Reserve section of the dashboard. Specific requests served by Cache Reserve are available by using Logpush v2 and finding HTTP requests with the field “CacheReserveUsed.”
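
For example, a small script along these lines could tally Cache Reserve hits in a Logpush export, assuming the job is configured to emit newline-delimited JSON including the CacheReserveUsed field (the file name below is hypothetical):

```ts
// Count requests served by Cache Reserve in an NDJSON Logpush export.
import { readFileSync } from 'node:fs';

const lines = readFileSync('http_requests.ndjson', 'utf8')
  .split('\n')
  .filter((line) => line.trim().length > 0);

let served = 0;
for (const line of lines) {
  const record = JSON.parse(line) as { CacheReserveUsed?: boolean };
  if (record.CacheReserveUsed) served++;
}
console.log(`${served} of ${lines.length} requests served from Cache Reserve`);
```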

We will continue to make sure that we are quickly triaging the feedback you give us and making improvements to help ensure Cache Reserve is easy to use, massively beneficial, and your choice for reducing egress fees for cached content.

Try it out

We’ve been so excited to get Cache Reserve in more people’s hands. There will be more exciting developments to Cache Reserve as we continue to invest in giving you all the tools you need to build your perfect cache.

Try Cache Reserve today and let us know what you think.

Cloudflare Gateway dedicated egress and egress policies

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gateway-dedicated-egress-policies/

Today, we are highlighting how Cloudflare enables administrators to create security policies while using dedicated source IPs. With on-premise appliances like legacy VPNs, firewalls, and secure web gateways (SWGs), it has been convenient for organizations to rely on allowlist policies based on static source IPs. But these hardware appliances are hard to manage and scale, come with inherent vulnerabilities, and struggle to support globally distributed traffic from remote workers.

Throughout this week, we’ve written about how to transition away from these legacy tools towards Internet-native Zero Trust security offered by services like Cloudflare Gateway, our SWG. As a critical service natively integrated with the rest of our broader Zero Trust platform, Cloudflare Gateway also enables traffic filtering and routing for recursive DNS, Zero Trust network access, remote browser isolation, and inline CASB, among other functions.

Nevertheless, we recognize that administrators want to maintain the convenience of source IPs as organizations transition to cloud-based proxy services. In this blog, we describe our approach to offering dedicated IPs for egressing traffic and share some upcoming functionality to empower administrators with even greater control.

Cloudflare’s dedicated egress IPs

Source IPs are still a popular method of verifying that traffic originates from a known organization/user when accessing applications and third party destinations on the Internet. When organizations use Cloudflare as a secure web gateway, user traffic is proxied through our global network, where we apply filtering and routing policies at the closest data center to the user. This is especially powerful for globally distributed workforces or roaming users. Administrators do not have to make updates to static IP lists as users travel, and no single location becomes a bottleneck for user traffic.

Today the source IP for proxied traffic is one of two options:

  • Device client (WARP) Proxy IP – Cloudflare forward proxies traffic from the user using an IP from the default IP range shared across all Zero Trust accounts
  • Dedicated egress IP – Cloudflare provides customers with a dedicated IP (IPv4 and IPv6) or range of IPs geolocated to one or more Cloudflare network locations

The WARP Proxy IP range is the default egress method for all Cloudflare Zero Trust customers. It is a great way to preserve the privacy of your organization, as user traffic is sent to the nearest Cloudflare network location, which ensures the most performant Internet experience. But setting source IP security policies based on this default IP range does not provide the granularity that admins often require to filter their user traffic.

Dedicated egress IPs are useful in situations where administrators want to allowlist traffic based on a persistent identifier. As their name suggests, these dedicated egress IPs are exclusively available to the assigned customer—and not used by any other customers routing traffic through Cloudflare’s network.

Additionally, leasing these dedicated egress IPs from Cloudflare helps avoid the privacy concerns that arise when carving them out of an organization’s own IP ranges. It also alleviates the need to protect the IP ranges assigned to your on-premise VPN appliance from DDoS attacks and other threats.

Dedicated egress IPs are available as an add-on for any Cloudflare Zero Trust enterprise-contracted customer. Contract customers can select the specific Cloudflare data centers used for their dedicated egress, and all subscribing customers receive at least two IPs to start, so user traffic is always routed to the closest dedicated egress data center for performance and resiliency. Finally, organizations can egress their traffic through Cloudflare’s dedicated IPs via their preferred on-ramps. These include Cloudflare’s device client (WARP), proxy endpoints, GRE and IPsec on-ramps, or any of our 1600+ peering network locations, including major ISPs, cloud providers, and enterprises.

Customer use cases today

Cloudflare customers around the world are taking advantage of Gateway dedicated egress IPs to streamline application access. Below are the three most common use cases we’ve seen deployed by customers of varying sizes and across industries:

  • Allowlisting access to apps from third parties: Users often need to access tools controlled by suppliers, partners, and other third party organizations. Many of those external organizations still rely on source IP to authenticate traffic. Dedicated egress IPs make it easy for those third parties to fit within these existing constraints.
  • Allowlisting access to SaaS apps: Source IPs are still commonly used as a defense-in-depth layer for how users access SaaS apps, alongside other more advanced measures like multi-factor authentication and identity provider checks.
  • Deprecating VPN usage: Often hosted VPNs will be allocated IPs within the customer’s advertised IP range. The security flaws, performance limitations, and administrative complexities of VPNs are well-documented in our recent Cloudflare blog. To ease migration, customers will often choose to maintain any IP allowlist processes in place today.

Through this, administrators are able to maintain the convenience of building policies with fixed, known IPs, while accelerating performance for end users by routing through Cloudflare’s global network.

Cloudflare Zero Trust egress policies

Today, we are excited to announce an upcoming way to build more granular policies using Cloudflare’s dedicated egress IPs. With a forthcoming egress IP policy builder in the Cloudflare Zero Trust dashboard, administrators can specify which IP is used for egress traffic based on identity, application, network and geolocation attributes.

Administrators often want to route only certain traffic through dedicated egress IPs—whether for certain applications, Internet destinations, or user groups. Soon, administrators will be able to set their preferred egress method based on a wide variety of selectors such as application, content category, domain, user group, destination IP, and more. This flexibility helps organizations take a layered approach to security, while also maintaining high performance (often via dedicated IPs) to the most critical destinations.

Furthermore, administrators will be able to use the egress IP policy builder to geolocate traffic to any country or region where Cloudflare has a presence. This geolocation capability is particularly useful for globally distributed teams which require geo-specific experiences.

For example, a large media conglomerate has marketing teams that would verify the layouts of digital advertisements running across multiple regions. Prior to partnering with Cloudflare, these teams had clunky, manual processes to verify their ads were displaying as expected in local markets: either they had to ask colleagues in those local markets to check, or they had to spin up a VPN service to proxy traffic to the region. With an egress policy these teams would simply be able to match a custom test domain for each region and egress using their dedicated IP deployed there.
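
Since the policy builder is still forthcoming, the exact configuration is not public yet, but a policy for that example might take a shape like the sketch below. Every field name here is illustrative, not Cloudflare's API:

```ts
// Hypothetical shape for an egress policy, based on the selectors the
// post describes. Field names are invented for illustration.
interface EgressPolicy {
  name: string;
  selectors: {
    userGroup?: string[];
    domain?: string[];
    contentCategory?: string[];
    destinationCountry?: string[];
  };
  egress: {
    method: 'default' | 'dedicated';
    dedicatedIpLocation?: string; // a region where a dedicated IP is deployed
  };
}

// The media-conglomerate example: ad-verification traffic for a regional
// test domain egresses via the dedicated IP deployed in that region.
const adVerification: EgressPolicy = {
  name: 'Verify ads in Germany',
  selectors: { domain: ['ads-test.example.de'] },
  egress: { method: 'dedicated', dedicatedIpLocation: 'Frankfurt' },
};
```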

What’s Next

You can take advantage of Cloudflare’s dedicated egress IPs by adding them onto a Cloudflare Zero Trust Enterprise plan or contacting your account team. If you would like to be contacted when we release the Gateway egress policy builder, join the waitlist here.

Workers, Now Even More Unbound: 15 Minutes, 100 Scripts, and No Egress

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/workers-now-even-more-unbound/

Our mission is to enable developers to build their applications, end to end, on our platform, and ruthlessly eliminate limitations that may get in the way. Today, we’re excited to announce that you can build large, data-intensive applications on our network without breaking the bank: starting today, we’re dropping egress fees to zero.

More Affordable: No Egress Fees

Building more on any platform historically comes with a caveat — high data transfer cost. These costs often come in the form of egress fees. Especially in the case of data-intensive workloads, egress data transfer costs can come at a high premium, depending on the provider.

What exactly are data egress fees? They are the costs of retrieving data from a cloud provider. Cloud infrastructure providers generally pay for bandwidth based on capacity, but often bill customers based on the amount of data transferred. Curious to learn more about what this means for end users? We recently wrote an analysis of AWS’ Egregious Egress — a good read if you would like to learn more about the ‘Hotel California’ model AWS has spun up. Effectively, data egress fees lock you into their platform, making you choose your provider based not on which provider has the best infrastructure for your use case, but instead choosing the provider where your data resides.

At Cloudflare, we’re working to flip the script for our customers. Our recently announced R2 Storage waives the data egress fees other providers implement for similar products. Cloudflare is a founding member of the Bandwidth Alliance, aiming to help our mutual customers overcome these data transfer fees.

We’re keeping true to our mission and, effective immediately, dropping all Egress Data Transfer fees associated with Workers Unbound and Durable Objects. If you’re using Workers Unbound today, your next bill will no longer include Egress Data Transfer fees. If you’re not using Unbound yet, now is a great time to experiment. With Workers Unbound, you get access to longer CPU time limits, pay only for what you use, and don’t have to worry about data transfer costs. When paired with Bandwidth Alliance partners, this is a cost-effective solution for any data-intensive workload.

More Unbound: 15 Minutes

This week has been about defining what the future of computing is going to look like. Workers are great for your latency sensitive workloads, with zero-milliseconds cold start times, fast global deployment, and the power of Cloudflare’s network. But Workers are not limited to lightweight tasks — we want you to run your heavy workloads on our platform as well. That’s why we’re announcing you can now use up to 15 minutes of CPU time on your Workers! You can run your most compute-intensive tasks on Workers using Cron Triggers. To get started, head to the Settings tab in your Worker and select the ‘Unbound’ usage model.

Once you’ve confirmed your Usage Model is Unbound, switch to the Triggers tab and click Add Cron Trigger. You’ll see a ‘Maximum Duration’ is listed, indicating whether your schedule is eligible for 15 Minute workloads.
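
As a minimal sketch of what such a scheduled workload can look like under the Unbound usage model (handler shape per the Workers module syntax; the cron expression and the task itself are placeholders):

```ts
// A cron-triggered Worker for a long-running batch job. With the Unbound
// usage model, the scheduled handler can use up to 15 minutes of CPU time.
// Paired wrangler.toml (sketch, keys as documented at the time):
//   usage_model = "unbound"
//   [triggers]
//   crons = ["0 3 * * *"]   # run daily at 03:00 UTC
export default {
  async scheduled(
    event: { cron: string; scheduledTime: number },
    env: unknown,
    ctx: ExecutionContext, // type from @cloudflare/workers-types
  ): Promise<void> {
    // Placeholder for a compute-intensive task, e.g. reprocessing a dataset.
    let checksum = 0;
    for (let i = 0; i < 1e8; i++) checksum = (checksum + i) % 1_000_003;
    console.log(`batch run ${event.cron} finished, checksum ${checksum}`);
  },
};
```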

Wait, there’s more (literally!)

That’s not all. As a platform, it is validating to see our customers want to grow even more with us, and we’ve been working to address these restrictions. That’s why, starting today, all customers will be allowed to deploy up to 100 Worker scripts. With the introduction of Services, that represents up to 100 environments per account. This higher limit will allow our customers to migrate more use cases to the Workers platform.

We’re also delighted to announce that, alongside this increase, the Workers platform plans to support larger script sizes. This increase will allow developers to build Workers with more libraries and unlock new possibilities, like running Golang with WASM. Check out an example of esbuild running on a Worker, in a script that’s just over 2MB compressed. If you’re interested in larger script sizes, sign up here.

The future of cloud computing is here, and it’s on Cloudflare. Workers has always been the secure, fast serverless offering, and has recently been named a leader in the space. Now, it is even more affordable and flexible too.
We can’t wait to see what ambitious projects our customers build. Developers are now better positioned than ever to deploy large and complex applications on Cloudflare. Excited to build using Workers, or get engaged with the community? Join our Discord server to keep up with the latest on Cloudflare Workers.

AWS’s Egregious Egress

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/aws-egregious-egress/

When web hosting services first emerged in the mid-1990s, you paid for everything on a separate meter: bandwidth, storage, CPU, and memory. Over time, customers grew to hate the nickel-and-dime nature of these fees. The market evolved to a fixed-fee model. Then came Amazon Web Services.

AWS was a huge step forward in terms of flexibility and scalability, but a massive step backward in terms of pricing. Nowhere is that more apparent than with their data transfer (bandwidth) pricing. If you look at the (ironically named) AWS Simple Monthly Calculator you can calculate the price they charge for bandwidth for their typical customer. The price varies by region, which shouldn’t surprise you because the cost of transit is dramatically different in different parts of the world.

Charging for Stocks, Paying for Flows

AWS charges customers based on the amount of data delivered — 1 terabyte (TB) per month, for example. To visualize that, imagine data is water. AWS fills a bucket full of water and then charges you based on how much water is in the bucket. This is known as charging based on “stocks.”

On the other hand, AWS pays for bandwidth based on the capacity of their network. The base unit of wholesale bandwidth is priced as one Megabit per second per month (1 Mbps). Typically, a provider like AWS will pay for bandwidth on a monthly fee based on the number of Mbps that their network uses at its peak capacity. So, extending the analogy, AWS doesn’t pay for the amount of water that ends up in their customers’ buckets, but rather for capacity based on the diameter of the “hose” that is used to fill them. This is known as paying for “flows.”

Translating Flows to Stocks

You can translate between flow and stock pricing by knowing that a 1 Mbps connection (think of it as the "hose") can transfer 0.3285 TB (328 GB) if utilized to its fullest capacity over the course of a month (think of it as running the "hose" at full capacity to fill the "bucket" for a month).¹ AWS obviously has more than 1 Mbps of capacity — they can certainly transfer more than 0.3285 TB per month — but you can use this as the base unit of their bandwidth costs, and compare it against what they charge a customer to deliver 1 Terabyte (1TB), in order to figure out the AWS bandwidth markup.

One more subtlety to be as accurate as possible: wholesale bandwidth is also billed at the 95th percentile. That effectively cuts off the peak hour or so of use every day. That means a 1 Mbps connection running at 100% can in practice transfer closer to 0.3458 TB (346 GB) per month.

Two more factors are important: utilization and regional costs. AWS can’t run all their connections at 100% utilization 24×7 for a month. Instead, they’ll have some average utilization per transit connection in any month. It’s reasonable to estimate that they likely run at between 20% and 40% average utilization. That would be a typical average utilization range for the industry. The higher their utilization, the more efficient they are, the lower their costs, and the higher their effective customer markup will be.

To be conservative, we’ve assumed that AWS’s average utilization is the bottom of that range (20%), but you can download the raw data and adjust the assumptions however you think makes sense.

We have a good sense of the wholesale prices of bandwidth in different regions around the world based on what Cloudflare sees in the market when we buy bandwidth ourselves. We’d imagine AWS gets at least as good of pricing as we do. We’ve included a rough estimate of these prices in the calculation, rounding up on the wholesale price wherever there was a question (which makes AWS look better).
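
Putting the pieces above together, the markup estimate reduces to a few lines of arithmetic. In the sketch below, the wholesale price per Mbps is a placeholder assumption, not a quoted market rate, and the retail figure uses AWS's standard North America rate of $0.09/GB:

```ts
// Worked version of the flows-to-stocks math above.
const HOURS_PER_MONTH = 730;
const SECONDS_PER_MONTH = HOURS_PER_MONTH * 3600;

// 1 Mbps at 100% utilization for a month, in terabytes:
const tbPerMbpsMonth = (1e6 * SECONDS_PER_MONTH) / 8 / 1e12; // ≈ 0.3285 TB

// 95th-percentile billing ignores the peak ~5% of samples, so the same
// billed Mbps can move a bit more data:
const tbAt95th = tbPerMbpsMonth / 0.95; // ≈ 0.3458 TB

// Effective wholesale cost per TB delivered, at 20% average utilization:
const wholesalePerMbps = 0.5; // $/Mbps/month, placeholder assumption
const utilization = 0.2;
const costPerTB = wholesalePerMbps / (tbAt95th * utilization); // ≈ $7.23/TB

// Markup versus a retail egress rate of $0.09/GB, i.e. $90/TB:
const retailPerTB = 90;
const markupPct = (retailPerTB / costPerTB - 1) * 100;
console.log(`cost ≈ $${costPerTB.toFixed(2)}/TB, markup ≈ ${markupPct.toFixed(0)}%`);
```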

Massive Markups

Based on these assumptions, here’s our best estimate of AWS’s effective markup for egress bandwidth on a per-region basis.

Don’t rest easy, South Korea, with your merely 357% markup. The general rule of thumb appears to be that the older a market is, the more Amazon wrings from its customers in egregious egress markups — and the Seoul availability zone is only a bit over four years old. Winter, unfortunately, inevitably seems to come to AWS customers.

AWS Stands Alone In Not Passing On Savings to Customers

Remember, this is for the transit bandwidth that AWS is paying for. For the bandwidth that they exchange with a network like Cloudflare, where they are directly connected (settlement-free peered) over a private network interface (PNI), there are no meaningful incremental costs and their effective margins are nearly infinite. Add in the effect of rebates Amazon collects from colocation providers who charge cross connect fees to customers, and the effective markup is likely even higher.

Some other cloud providers take into account that their costs are lower when passing over peering connections. Both Microsoft Azure and Google Cloud will substantially discount egress charges for their mutual Cloudflare customers. Members of the Bandwidth Alliance — Alibaba, Automattic, Backblaze, Cherry Servers, Dataspace, DNS Networks, DreamHost, HEFICED, Kingsoft Cloud, Liquid Web, Scaleway, Tencent, Vapor, Vultr, Wasabi, and Zenlayer — waive bandwidth charges for mutual Cloudflare customers.

At this point, the majority of hosting providers in the industry either substantially discount or entirely waive egress fees when sending traffic from their network to a peer like Cloudflare. AWS is the notable exception in the industry. It’s worth noting that we invited AWS to be a part of the Bandwidth Alliance, and they politely declined.

It seems like a no-brainer that if we’re not paying for the bandwidth costs, and the hosting provider isn’t paying for the bandwidth costs, customers shouldn’t be charged for the bandwidth costs at the same rate as if the traffic was being sent over the public Internet. Unfortunately, Amazon’s supposed obsession over doing the right thing for customers doesn’t extend to egress charges.

Artificially Held High

Amazon’s mission statement is: “We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience.” And yet, when it comes to egress, their prices are far from the lowest possible.

During the last ten years, industry wholesale transit prices have fallen an average of 23% annually. Compounded over that time, wholesale bandwidth is 93% less expensive than 10 years ago. However, AWS’s egress fees over that same period have fallen by only 25%.
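
The compounding is easy to verify:

```ts
// A 23% annual decline compounded over ten years leaves roughly 7% of
// the original price, i.e. about a 93% drop.
const remaining = Math.pow(1 - 0.23, 10); // ≈ 0.073
console.log(`wholesale transit is ${(remaining * 100).toFixed(1)}% of its price 10 years ago`);
console.log(`that is a ${((1 - remaining) * 100).toFixed(0)}% decline`); // ≈ 93%
```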

And, since 2018, the egress fees AWS charges in North America and Europe have not dropped a penny even as wholesale prices in those markets over the same time period have fallen by more than half.

AWS’s Hotel California Pricing

Another oddity of AWS’s pricing is that they charge for data transferred out of their network but not for data transferred into their network. If the only time you’ve paid for bandwidth is with your residential Internet connection, then this may make some sense. Because of some technical limitations of the cable network, download bandwidth is typically higher than upload bandwidth on cable modem connections. But that’s not how wholesale bandwidth is bought or sold.

Wholesale bandwidth isn’t like your home cable connection. Instead, it’s symmetrical. That means that if you purchase a 1 Mbps (1 Megabit per second) connection, then you have the capacity to send 1 Megabit out and receive another 1 Megabit in every second. If you receive 1 Mbps in and simultaneously 1 Mbps out, you pay the same price as if you receive 1 Mbps in and 0 Mbps out or 0 Mbps in and 1 Mbps out. In other words, ingress (data sent to AWS) doesn’t cost them any more or less than egress (data sent from AWS). And yet, they charge customers more to take data out than put it in. It’s a head scratcher.

We’ve tried to be charitable in trying to understand why AWS would charge this way. Disappointingly, there just doesn’t seem to be an innocent explanation. As we dug in, even things like writes versus reads and the wear they put on storage media, as well as the challenges of storage capacity planning, suggest that AWS should charge less for egress than ingress.

But they don’t.

The only rationale we can reasonably come up with for AWS’s egress pricing: locking customers into their cloud, and making it prohibitively expensive to get customer data back out. So much for being customer-first.

But… But… But…

AWS may object that this doesn’t take into account the cost of things like metro dark fiber between data centers, amortized optical and other networking equipment, and cross connects. In our experience, those costs amount to a rounding error of less than one cent per Mbps when operating at AWS-like scale. And these prices have been falling at a similar rate to the decline in the price of bandwidth over the past 10 years. Yet AWS’s egress prices have barely budged.

All the data above is derived from what’s published on AWS’s simple pricing calculator. There’s no doubt that some large customers are able to negotiate lower prices. But these are the prices charged to small businesses and startups by default. And, when we’ve reviewed pricing even with large AWS customers, the egress fees remain egregious.

It’s Not Too Late!

We have a lot of mutual customers who use Cloudflare and AWS. They’re a great service, and we want to support our mutual customers and provide services in a way that meets their needs and is always as secure, fast, reliable, and efficient as possible. We remain hopeful that AWS will do the right thing, lower their egress fees, join the Bandwidth Alliance — following the lead of the majority of the rest of the hosting industry — and pass along savings from peering with Cloudflare and other networks to all their customers.

…….
¹ Here’s the calculation to convert a 1 Mbps flow into TB stocks: 1 Mbps @ 100% for 1 month = (1 million bits per second) × (60 seconds / minute) × (60 minutes / hour) × (730 hours on average / month) ÷ (8 bits / byte) ÷ 10^12 (to convert bytes to Terabytes) = 0.3285 TB/month.

Empowering customers with the Bandwidth Alliance

Post Syndicated from Arjunan Rajeswaran original https://blog.cloudflare.com/empowering-customers-with-the-bandwidth-alliance/

High Egress Fees

Debates over the benefits and drawbacks of walled gardens versus open ecosystems have carried on since the beginnings of the tech industry. As applied to the Internet, we don’t think there’s much to debate. There’s a reason why it’s easier today than ever before to start a company online: open standards. They’ve encouraged a flourishing of technical innovation, made the Internet faster and safer, and made it easier and less expensive for anyone to have an Internet presence.

Of course, not everyone likes competition. Replacing open standards with proprietary ones is a common way to stop competition. In the cloud industry, a more subtle way to gain power over customers and lock them in has emerged, one that isn’t obvious at the start: high egress fees.

You probably won’t notice them when you embark on your cloud journey. And if you need to bring data into your environment, there’s no data charge. But say you want to get that data out? Or go multi-cloud, and work with another cloud provider who is best-in-class? That’s when the charges start rolling in.

To make matters worse, as the number and diversity of applications in your IT stack increases, the lock-in power of egress fees increases as well. As more data needs to travel between more applications across clouds and there is more data to move to a newer, better cloud service, the egress tax increases, further locking you in. You lose the ability to choose best-of-breed services or to negotiate prices with your provider.

Why We Launched The Bandwidth Alliance

This is not a better Internet. So wherever we can, we’re on the lookout for ways to prevent this from happening — in this case, with our Bandwidth Alliance partners. We launched the Bandwidth Alliance in late 2018 with over fifteen cloud providers who also believe in an open Internet where data can flow freely. In short, partners in the Bandwidth Alliance have agreed to reduce data transfer egress fees, either waiving them entirely or offering a steep discount.

How did we do this — the power of Cloudflare’s network

Say you’re hosted in a facility in Ashburn, Virginia and a user visits your service from Sydney, Australia. There is a cost to moving the data between the two places. In this example, a cloud provider would use their own global backbone to carry the traffic across the United States and then across the Pacific, eventually handing it off to the users’ ISP. Someone has to maintain the infrastructure that hauls that traffic more than 9,000 miles from Ashburn to Sydney.

Cloudflare has more than 206 data centers globally in almost every major city. Our network automatically receives traffic at whatever data center is nearest to the user and then carries it to the data center closest to where the origin is hosted.

As part of the Bandwidth Alliance, this traffic is then delivered to the partner data center via private network interconnects (PNI) or peered connections. These PNIs typically occur within the same facility through a fiber optic cable between routers, or via a dedicated connection between two facilities at a very low cost. Unlike when there’s a transit provider in between, there’s no middleman, so neither Cloudflare nor our partners bear incremental costs for transferring the data over this PNI.

Cloudflare is one of the most interconnected networks in the world, peering with over 9,500 networks globally, including major ISPs, cloud providers, and enterprises. Cloudflare is connected with partners in many global regions via private network interconnections, private peering at Internet exchanges, and public peering.

Customer benefit

Since its inception, the Bandwidth Alliance program has provided many customers with significant benefits, both in egress cost savings and, more importantly, in choice across their compute, storage, and other service needs. Providing this choice and preventing vendor lock-in has allowed our customers to choose the right product for their use case while benefiting from significant savings.

We looked at a sample set of our customers benefiting from the Bandwidth Alliance and estimated their egress savings based on the amount of data (GB) flowing from the origin to us. We estimated the potential savings using the $0.08/GB retail price vs. the discounted $0.04/GB for large amounts of data transferred. Of course, customers could save more by using one of our partners with whom the cost is $0/GB. We compared the savings to the amount of money these customers spend on us. These savings are in the range of 7.5% to 27%; in other words, for every $1 spent on Cloudflare, customers are saving up to $0.27 — a no-brainer offer to take advantage of.
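
Spelled out with a hypothetical customer (all numbers below are invented for illustration), the arithmetic looks like this:

```ts
// Egress-savings arithmetic from the paragraph above, hypothetical figures.
const gbFromOrigin = 50_000;    // GB/month pulled from the origin
const retailEgress = 0.08;      // $/GB retail rate
const discountedEgress = 0.04;  // $/GB discounted Bandwidth Alliance rate
const cloudflareSpend = 10_000; // $/month spent on Cloudflare

const savings = gbFromOrigin * (retailEgress - discountedEgress); // $2,000
console.log(
  `saving $${savings} per month, ` +
  `$${(savings / cloudflareSpend).toFixed(2)} per $1 spent on Cloudflare`,
);
```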

The Bandwidth Alliance also offers customers the option to choose a cloud that meets their price and feature requirements. For a media delivery use case, choosing the best storage provider and Cloudflare has allowed one customer to save up to 85% in storage costs. Another customer who went with a best-of-breed solution across storage and the Cloudflare global network reduced their overall cloud bill by 50%. Customers appreciate these benefits of choice:

“We were looking at moving our data from one storage provider to another, and it was a total no-brainer to use a service that was part of the Bandwidth Alliance. It really makes sense for anyone looking at any cloud service, especially one that’s pushing a lot of traffic.” — James Ross, Co-founder/CTO, Nodecraft

Earlier this month we made it even easier for our joint customers with Microsoft Azure to realize the benefits of discounted egress to Cloudflare with Microsoft Azure’s Data Transfer Routing Preference. With a few clicks on the Azure dashboard, Cloudflare customer egress bills will automatically be discounted. It’s been exciting to hear positive customer feedback:

“Before taking advantage of the Routing Preference by Azure via Cloudflare, Egress fees were one of the key reasons that restricted us from having more multi-cloud solutions since it can be high and unpredictable at times as the traffic scales. Enabling Routing Preference on the Azure dashboard was quick and easy. It was a one-and-done effort, and we get discounted Egress rates on every Azure bill.”  — Darin MacRae, Chief Architect / Cloud Computing, MyRadar.com

If you’re looking to find the right set of cloud storage, networking and security solutions to meet your needs, consider the Bandwidth Alliance as an alternative to being locked-in to a single platform. We hope it helps.

Cloudflare customers can now use Microsoft Azure Data Transfer Routing Preference to enjoy lower data transfer costs

Post Syndicated from Deeksha Lamba original https://blog.cloudflare.com/discounted-egress-for-cloudflare-customers-from-microsoft-azure-is-now-available/

Today, we are excited to announce that Cloudflare customers can choose Microsoft Azure with a lower-cost data transfer solution via the Microsoft Routing Preference service. Mutual customers can benefit from lower cost and predictable performance across our interconnected networks. Microsoft Azure has developed a seamless process to allow customers to choose this cost-optimized routing solution. We have customers using this new integration today and are excited to make it generally available to all our customers and prospects.

The power of interconnected networks

So how are we able to enable this great solution for our customers? The answer lies in our globally interconnected network.

Cloudflare is one of the most interconnected networks in the world, peering with over 9,500 networks globally, including major ISPs, cloud providers, and enterprises. We currently interconnect with Azure through private or public peering across all major regions — including private interconnections at key locations (see below).

Private Network Interconnects typically occur within the same facility through a fiber optic cable between routers for the two networks; peered connections occur at Internet exchanges offering high performance and availability. We are actively working on expanding on this interconnectivity between Azure and Cloudflare for our customers.

In addition to the private interconnections, we also have five Internet exchanges with private peering, and over 108 public peering links with Azure.

Wondering what this really means? Let’s look at an example. Say an Internet visitor is in Sydney and requests content from an origin that’s hosted in an Azure location in Chicago. When the visitor makes a request, Cloudflare automatically carries it to the Cloudflare data center in Sydney. The traffic is then routed over Cloudflare’s network all the way to Chicago where the origin is hosted on Azure. The request is then handed over to an Azure data center over our private interconnections.

On the way back (the egress path), the response is handed over from the Azure network to Cloudflare in Chicago via our private interconnection (without involving any ISP). Then it’s carried entirely over the Cloudflare network to Sydney and back to the visitor.

Why does the Internet need this?

Customer choice. That’s an important ingredient to help build a better Internet for our customers — free of vendor lock-in, and with open Internet standards. We’ve worked with the Azure team to enable this interconnectivity, giving customers the flexibility to choose multiple best-of-breed products without having to worry about high data transfer costs.

What is even more exciting is working with Microsoft, a company that shares our philosophy of promoting customer flexibility and helping customers resist vendor lock-in:

“Microsoft Azure is committed to offering services that make it easy to use offerings from industry leaders like Cloudflare – enabling choice to address customer’s business need.”
Jeff Cohen, Partner Group Program Manager for Azure Networking.

Easy for customers to get started

Cloudflare customers now have the option to leverage Azure’s Routing Preference and, as a result, use both platforms for their respective features and services, which offers the most secure and performant solution.

Most importantly, customers can take advantage of this lower-cost solution in just three simple steps.

Step 1: Choose Internet routing on your Azure dashboard for your origin in Azure Storage.

Step 2: Enable Internet routing on your Firewall and virtual network tab.

Step 3: Enter your updated endpoint URLs from Azure into your Cloudflare dashboard.

Once enabled, the discounting is automatic and ongoing from the next monthly bill. Further details on the discounted rates can be found in Azure’s Bandwidth pricing.

A number of customers are already enjoying these benefits:

“Enabling cost-optimized egress by Cloudflare and Azure via Routing Preference from the Azure dashboard has been very smooth for us with minimal effort. Cloudflare was proactive in reaching out with its customer-centric approach.”
Joakim Jamte, Engineering Manager, Bannerflow

“Before taking advantage of the Routing Preference by Azure via Cloudflare, Egress fees were one of the key reasons that restricted us from having more multi-cloud solutions since it can be high and unpredictable at times as the traffic scales. Enabling Routing Preference on the Azure dashboard was quick and easy. It was a one-and-done effort and we get discounted Egress rates on every Azure bill.”
Darin MacRae, Chief Architect / Cloud Computing, MyRadar.com

“Along with Cloudflare’s excellent security features and high performing CDN, the data transfer rates from Azure’s Routing Preference enabled by Cloudflare make the offer very compelling. Enabling and receiving the discount was very easy and helped us optimize our investment without any effort.”
Arthur Roodenburg, CIO, Act-3D B.V.

We’re pleased today to offer this benefit to all Cloudflare customers. If you are interested in taking advantage of Routing Preference, please reach out.