Tag Archives: Impact Week

Closing out 2022 with our latest Impact Report

Post Syndicated from Andie Goodwin original https://blog.cloudflare.com/impact-report-2022/

To conclude Impact Week, which has been filled with announcements about new initiatives and features that we are thrilled about, today we are publishing our 2022 Impact Report.

In short, the Impact Report is an annual summary highlighting how we are helping build a better Internet and the progress we are making on our environmental, social, and governance priorities. It is where we showcase successes from Cloudflare Impact programs, celebrate awards and recognitions, and explain our approach to fundamental values like transparency and privacy.

We believe that a better Internet is principled, for everyone, and sustainable; these are the three themes around which we constructed the report. The Impact Report also serves as our repository for disclosures aligned with our commitments under the Global Reporting Initiative (GRI), the Sustainability Accounting Standards Board (SASB), and the UN Global Compact (UNGC).

Check out the full report to:

  • Explore how we are expanding the value and scope of our Cloudflare Impact programs
  • Review our latest diversity statistics — and our newest employee resource group
  • Understand how we are supporting humanitarian and human rights causes
  • Read quick summaries of Impact Week announcements
  • Examine how we calculate and validate emissions data

As fantastic as 2022 has been for scaling up Cloudflare Impact and making strides toward a better Internet, we are aiming even higher in 2023. To keep up with developments throughout the year, follow us on Twitter and LinkedIn, and keep an eye out for updates on our Cloudflare Impact page.

Everything you might have missed during Cloudflare’s Impact Week 2022

Post Syndicated from Lorraine Bellon original https://blog.cloudflare.com/everything-you-might-have-missed-during-cloudflares-impact-week-2022/

And that’s a wrap! Impact Week 2022 has come to a close. Over the last week, Cloudflare announced new commitments in our mission to help build a better Internet, including delivering Zero Trust services for the most vulnerable voices and for critical infrastructure providers. We also announced new products and services, and shared technical deep dives.

Were you able to keep up with everything that was announced? Watch the Impact Week 2022 wrap-up video on Cloudflare TV, or read our recap below for anything you may have missed.

Product announcements

  • Cloudflare Zero Trust for Project Galileo and the Athenian Project: We are making the Cloudflare One Zero Trust suite available at no cost to teams that qualify for Project Galileo or the Athenian Project. Cloudflare One includes the same Zero Trust security and connectivity solutions used by over 10,000 customers today to connect their users and safeguard their data.
  • Project Safekeeping – protecting the world’s most vulnerable infrastructure with Zero Trust: Under-resourced organizations that are vital to the basic functioning of our global communities (such as community hospitals, water treatment facilities, and local energy providers) face relentless cyber attacks, threatening basic needs for health, safety, and security. Cloudflare’s mission is to help build a better Internet. We will help support this vulnerable infrastructure by providing our enterprise-level Zero Trust cybersecurity solution to these organizations at no cost, with no time limit.
  • Cloudflare achieves FedRAMP authorization to secure more of the public sector: We are excited to announce that our public sector suite of services, Cloudflare for Government, has achieved FedRAMP Moderate Authorization. The Federal Risk and Authorization Management Program (“FedRAMP”) is a US-government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.
  • A new, configurable and scalable version of Geo Key Manager, now available in Closed Beta: At Cloudflare, we want to give our customers tools that allow them to maintain compliance in this ever-changing environment. That’s why we’re excited to announce a new version of Geo Key Manager — one that allows customers to define boundaries by country, by region, or by standard.

Technical deep dives

  • Cloudflare is joining the AS112 project to help the Internet deal with misdirected DNS queries: Cloudflare is participating in the AS112 project, becoming an operator of the loosely coordinated, distributed sink of reverse lookup (PTR) queries for RFC 1918 addresses, dynamic DNS updates, and other ambiguous addresses.
  • Measuring BGP RPKI Route Origin Validation: The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?

Customer stories

  • Democratizing access to Zero Trust with Project Galileo: Learn how organizations under Project Galileo use Cloudflare Zero Trust to protect their organizations from cyberattacks.
  • Securing the inboxes of democracy: Cloudflare email security worked hard during the 2022 U.S. midterm elections to ensure that the email inboxes of those seeking office were secure.
  • Expanding Area 1 email security to the Athenian Project: We are excited to share that we have grown our offering under the Athenian Project to include Cloudflare’s Area 1 email security suite, helping state and local governments protect against a broad spectrum of phishing attacks and keep voter data safe and secure.
  • How Cloudflare helps protect small businesses: Large-scale cyber attacks on enterprises and governments make the headlines, but the impacts of cyber conflicts can be felt more profoundly and acutely by small businesses that struggle to keep the lights on during normal times. In this blog, we share new research on how small businesses, including those using our free services, have leveraged Cloudflare services to make their businesses more secure and resistant to disruption.

Internet access

  • Cloudflare expands Project Pangea to connect and protect (even) more community networks: A year and a half ago, Cloudflare launched Project Pangea to help provide Internet services to underserved communities. Today, we’re sharing what we’ve learned by partnering with community networks, and announcing an expansion of the project.
  • The US government is working on an “Internet for all” plan. We’re on board: The US government has a $65 billion program to get all Americans on the Internet. It’s a great initiative, and we’re on board.
  • The Montgomery, Alabama Internet Exchange is making the Internet faster. We’re happy to be there: Internet Exchanges are a critical part of a strong Internet. Here’s the story of one of them.
  • Partnering with civil society to track Internet shutdowns with Radar Alerts and API: We want to tell you more about how we work with civil society organizations to provide tools to track and document the scope of Internet disruptions, supporting their critical work so they can demand accountability and condemn the use of shutdowns to silence dissent.
  • How Cloudflare helps next-generation markets: At Cloudflare, part of our role is to make sure every person on the planet with an Internet connection has a good experience, whether they’re in a next-generation market or a current-gen market. In this blog, we talk about how we define next-generation markets, how we help people in these markets get faster access to the websites and applications they use daily, and how we make it easy for developers to deploy services geographically close to users in these markets.

Sustainability

  • Independent report shows moving to Cloudflare can cut your carbon footprint: We didn’t start out with the goal of reducing the Internet’s environmental impact. But as the Internet has become an ever larger part of our lives, that has changed. Our mission is to help build a better Internet — and a better Internet needs to be a sustainable one.
  • A more sustainable end-of-life for your legacy hardware appliances with Cloudflare and Iron Mountain: We’re excited to announce an opportunity for Cloudflare customers to make it easier to decommission and dispose of their used hardware appliances in a sustainable way. We’re partnering with Iron Mountain to offer preferred pricing and value-back for Cloudflare customers that recycle or remarket legacy hardware through their service.
  • How we’re making Cloudflare’s infrastructure more sustainable: With the incredible growth of the Internet, and the increased usage of Cloudflare’s network, even linear improvements to the sustainability of our hardware today will result in exponential gains in the future. We use this post to outline how we think about the sustainability impact of the hardware in our network, and what we’re doing to continually mitigate that impact.
  • Historical emissions offsets (and Scope 3 sneak preview): Last year, Cloudflare committed to removing or offsetting the historical emissions associated with powering our network by 2025. We are excited to announce our first step toward offsetting our historical emissions by investing in 6,060 metric tons’ worth of reforestation carbon offsets as part of the Pacajai Reduction of Emissions from Deforestation and forest Degradation (REDD+) Project in the State of Pará, Brazil.
  • How we redesigned our offices to be more sustainable: Cloudflare is working hard to make a positive impact on the environment around us, with the goal of building the most sustainable network. At the same time, we want the positive changes we are making to be something our local Cloudflare team members can touch and feel, knowing that each of our actions benefits the environment around us. This is why we make sustainability one of the underlying goals of the design, construction, and operations of our global office spaces.
  • More bots, more trees: Once a year, we pull data from Bot Fight Mode to determine the number of trees we can donate to our partners at One Tree Planted. It’s part of the commitment we made in 2019 to deter malicious bots online by redirecting them to a challenge page that requires them to perform computationally intensive, but meaningless, tasks. While we use these tasks to drive up the bill for bot operators, we account for the carbon cost by planting trees.

Policy

  • The Challenges of Sanctioning the Internet: As governments continue to use sanctions as a foreign policy tool, we think it’s important that policymakers continue to hear from Internet infrastructure companies about how the legal framework is impacting their ability to support a global Internet. Here are some of the key issues we’ve identified, and ways that regulators can help balance the policy goals of sanctions with the need to support the free flow of communications for ordinary citizens around the world.
  • An Update on Cloudflare’s Assistance to Ukraine: On February 24, 2022, when Russia invaded Ukraine, Cloudflare jumped into action to provide services that could help prevent potentially destructive cyber attacks and keep the global Internet flowing. During Impact Week, we want to provide an update on where things currently stand, the role of security companies like Cloudflare, and some of our takeaways from the conflict so far.
  • Two months later: Internet use in Iran during the Mahsa Amini Protests: A series of protests began in Iran on September 16, following the death in custody of Mahsa Amini, a 22-year-old who had been arrested for violating Iran’s mandatory hijab law. The protests and civil unrest have continued to this day. But the impact hasn’t just been on the ground in Iran — the impact of the civil unrest can be seen in Internet usage inside the country as well.
  • How Cloudflare advocates for a better Internet: We thought this week would be a great opportunity to share Cloudflare’s principles and our theories behind policy engagement. Because at its core, a public policy approach needs to reflect who the company is through its actions and rhetoric. And as a company, we believe there is real value in helping governments understand how companies work, and helping our employees understand how governments and lawmakers work.
  • Applying Human Rights Frameworks to our approach to abuse: What does it mean to apply human rights frameworks to our response to abuse? As we’ll talk about in more detail, we use human rights concepts like access to fair process, proportionality (the idea that actions should be carefully calibrated to minimize any effect on rights), and transparency.
  • The Unintended Consequences of blocking IP addresses: This blog dives into a discussion of IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online.

Impact

  • Closing out 2022 with our latest Impact Report: Our Impact Report is an annual summary highlighting how we are trying to build a better Internet and the progress we are making on our environmental, social, and governance priorities.
  • Working to help the HBCU Smart Cities Challenge: The HBCU Smart Cities Challenge invites all HBCUs across the United States to build technological solutions to solve real-world problems.
  • Introducing Cloudflare’s Third Party Code of Conduct: Cloudflare is on a mission to help build a better Internet, and we are committed to doing this with ethics and integrity in everything that we do. This commitment extends beyond our own actions, to third parties acting on our behalf. We are excited to share our Third Party Code of Conduct, specifically formulated with our suppliers, resellers, and other partners in mind.
  • The latest from Cloudflare’s seventeen Employee Resource Groups: In this blog post, we highlight a few stories from some of our 17 Employee Resource Groups (ERGs), including the most recent, Persianflare.

What’s next?

That’s it for Impact Week 2022. But let’s keep the conversation going. We want to hear from you!

Visit the Cloudflare Community to share your thoughts about Impact Week 2022, or engage with our team on Facebook, Twitter, LinkedIn, and YouTube.

Or if you’d like to rewatch any Cloudflare TV segments associated with the above stories, visit the Impact Week hub on our website.

How Cloudflare advocates for a better Internet

Post Syndicated from Christiaan Smits original https://blog.cloudflare.com/how-cloudflare-advocates-for-a-better-internet/

We mean a lot of things when we talk about helping to build a better Internet. Sometimes it’s about democratizing technologies that were previously only available to the wealthiest and most technologically savvy companies; sometimes it’s about protecting the most vulnerable groups from cyber attacks and online persecution. And the Internet does not exist in a vacuum.

As a global company, we see the way that the future of the Internet is affected by governments, regulations, and people. If we want to help build a better Internet, we have to make sure that we are in the room, sharing Cloudflare’s perspective in the many places where important conversations about the Internet are happening. And that is why we believe strongly in the value of public policy.

We thought this week would be a great opportunity to share Cloudflare’s principles and our theories behind policy engagement. Because at its core, a public policy approach needs to reflect who the company is through its actions and rhetoric. And as a company, we believe there is real value in helping governments understand how companies work, and helping our employees understand how governments and lawmakers work. Especially now, during a time in which many jurisdictions are passing far-reaching laws that shape the future of the Internet, from laws on content moderation to new and more demanding regulations on cybersecurity.

Principled, Curious, Transparent

At Cloudflare, we have three core company values: we are Principled, Curious, and Transparent. By principled, we mean thoughtful, consistent, and long-term oriented about what the right course of action is. By curious, we mean taking on big challenges and understanding the why and how behind things. Finally, by transparent, we mean being clear on why and how we decide to do things both internally and externally.

Our approach to public policy aims to integrate these three values into our engagement with stakeholders. We are thoughtful when choosing the right issues to prioritize, and consistent once we have chosen to take a position on a particular topic. We are curious about the important policy conversations that governments and institutions around the world are having about the future of the Internet, and want to understand the different points of view in that debate. And we aim to be as transparent as possible when talking about our policy stances, by, for example, writing blogs, submitting comments to public consultations, or participating in conversations with policymakers and our peers in the industry. With this blog, we also aim to be transparent about our advocacy efforts themselves.

What makes Cloudflare different?

With approximately 20 percent of websites using our service, including those on our free tier, Cloudflare protects a wide variety of customers from cyberattacks. Our business model relies on economies of scale, and on customers choosing to add products and services to our entry-level cybersecurity protections. This means our policy perspective can be broad: we are advocating for a better Internet for our customers who are Fortune 1000 companies, as well as for individual developers with hobby blogs or small business websites. It also means that our perspective is distinct: we have a business model that is unique, and therefore a perspective that often isn’t represented by others.

Strategy

We are not naive: we do not believe that a growing company can command the same attention as some of the Internet giants, or has the capacity to engage on as many issues as those bigger companies. So how do we prioritize? What’s our rule of thumb on how and when we engage?

Our starting point is to think about the policy developments that have the largest impact on our own activities. Which issues could force us to change our model? Cause significant (financial) impact? Skew incentives for stronger cybersecurity? Then we do the exercise again, this time, thinking about whether our perspective on that policy issue is dramatically different from those of other companies in the industry. Is it important to us, but we share the same perspective as other cybersecurity, infrastructure, or cloud companies? We pass. For example, while changing corporate tax rates could have a significant financial impact on our business, we don’t exactly have a unique perspective on that. So that’s off the list. But privacy? There we think we have a distinct perspective, as a company that practices privacy by design, and supports and develops standards that help ensure privacy on the Internet. And crucially: we think privacy will be critical to the future of the Internet. So on public policy ideas related to privacy we engage. And then there is our unique vantage point, derived from our global network. This often gives us important insight and data, which we can use to educate policymakers on relevant issues.

Our engagement channels

Our Public Policy team includes people who have worked in government, law firms and the tech industry before they joined Cloudflare. The informal networks, professional relationships, and expertise that they have built over the course of their careers are instrumental in ensuring that Cloudflare is involved in important policy conversations about the Internet. We do not have a Political Action Committee, and we do not make political contributions.

As mentioned, we try to focus on the issues where we can make a difference, where we have a unique interest, perspective and expertise. Nonetheless, there are many policies and regulations that could affect not only us at Cloudflare, but the entire Internet ecosystem. In order to track policy developments worldwide, and ensure that we are able to share information, we are members of a number of associations and coalitions.

Some of these advocacy groups represent a particular industry, such as software companies, or US based technology firms, and engage with lawmakers on a wide variety of relevant policy issues for their particular sector. Other groups, in contrast, focus their advocacy on a more specific policy issue.

In addition to formal trade association memberships, we will occasionally join coalitions of companies or civil society organizations assembled for particular advocacy purposes. For example, we periodically engage with the Stronger Internet coalition, to share information about policies around encryption, privacy, and free expression around the world.

It almost goes without saying that, given our commitment to transparency as a company and entirely in line with our own ethics code and legal compliance, we fully comply with all relevant rules around advocacy in jurisdictions across the world. You can also find us in transparency registers of governmental entities, where these exist. Because we want to be transparent about how we advocate for a better Internet, today we have published an overview of the organizations we work with on our website.

Working to help the HBCU Smart Cities Challenge

Post Syndicated from Nikole Phillips original https://blog.cloudflare.com/working-to-help-the-hbcu-smart-cities-challenge/

Anyone who knows me knows that I am a proud HBCU (Historically Black College or University) alum. The HBCU Smart Cities Challenge invites all HBCUs across the United States to build technological solutions to solve real-world problems. When I learned that Cloudflare would be supporting the HBCU Smart Cities Challenge, I was on board immediately for so many personal reasons.

In addition to volunteering mentors as part of this partnership, Cloudflare offered HBCU Smart Cities the opportunity to apply for Project Galileo to protect and accelerate their online presence. Project Galileo provides free cybersecurity protection to free speech, public interest, and civil society organizations that are vulnerable to cyber attacks. After more than three years working at Cloudflare, I know that we can make a difference in bridging the gap in access to the digital landscape by directly securing the Internet against today’s threats as well as optimizing performance, which plays a bigger role than most would think.

What is an HBCU?

A Historically Black College or University is defined as “any historically black college or university that was established prior to 1964, whose principal mission was, and is, the education of black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary of Education.” (Source: What is an HBCU? HBCU Lifestyle).  I had the honor of graduating from the nation’s first degree-granting HBCU, Lincoln University of Pennsylvania.

One of the main reasons that I decided to attend an HBCU is that the available data suggests that HBCUs close the socioeconomic gap for Black students more than other higher-education institutions (Source: HBCUs Close Socioeconomic Gap, Here’s How, 2021). This is exemplified by my own experience — I was a student who came from a low-income background and became the first-generation college graduate in my family. I believe this is due to HBCUs providing a united, supportive, and safe space for people from the African diaspora, which equips us to be our best.

The HBCU Smart Cities Challenge

There are a wide range of problems the HBCU Smart Cities Challenge invites students to tackle. These problems include water management in Tuskegee, AL; broadband and security access in Raleigh, NC; public health for the City of Columbia, SC; and affordable housing in Winston-Salem, NC—just to name a few. Applying skills with smart technology to real-life problems helps improve upon the existing infrastructure in these cities.

To solve these problems, the challenge brings together students at HBCUs to build smart city applications. Over several months, developers, entrepreneurs, designers, and engineers will develop tech solutions using Internet of Things technology. In October, Cloudflare presented as part of a town hall in the HBCU Smart Cities series. We encouraged local leaders to think about using historic investments in broadband buildout to also lay the foundation for Smart Cities infrastructure at the same time. We described how, with solid infrastructure in place, the Smart Cities applications built on top of that infrastructure would be fast, reliable, and secure — which is a necessity for infrastructure that residents rely on.

Here are some quotes from Norma McGowan Jackson, District 1 Councilwoman of the City of Tuskegee, and from HBCU Smart City Fellow Arnold Bhebhe:

As the council person for District 1 in the City of Tuskegee, which represents Tuskegee University, as the Council liaison for the HBCU Smart Cities Challenge, as a Tuskegee native, and as a Tuskegee Institute, (now University) alumnae, I am delighted to be a part of this collaboration. Since the days of Dr. Booker T. Washington, the symbiotic relationship between the Institute (now University) and the surrounding community has been acknowledged as critical for both entities and this opportunity to further enhance that relationship is a sure win-win!
– Norma McGowan Jackson, District 1 Councilwoman of City of Tuskegee

The HBCU Smart Cities Challenge has helped me to better understand that even though we live in an unpredictable world, our ability to learn and adapt to change can make us better innovators. I’m super grateful to have the opportunity to reinforce my problem-solving, creativity, and communication skills alongside like-minded HBCU students who are passionate about making a positive impact in our community.
– Arnold Bhebhe, Junior at Alabama State University majoring in computer science

How Cloudflare helps

Attending an HBCU was one of the best decisions I have made in my life, and my motivation was seeing the accomplishments of HBCU graduates — noting that the first woman Vice President of the United States, Kamala Harris, is an HBCU graduate from Howard University.

The biggest honor for me is having the opportunity to build on the brilliance of these college students in this partnership, because I was in their shoes almost 25 years ago.

Further, to help protect websites associated with HBCU Smart Cities projects, Cloudflare has invited students in the program to apply for Project Galileo.

Finally, the HBCU Smart Cities Challenge is continually looking for mentors, sponsors, and partnerships, as well as support and resources for the students. If you’re interested, please go here to learn more.

The latest from Cloudflare’s seventeen Employee Resource Groups

Post Syndicated from Sofia Good original https://blog.cloudflare.com/the-latest-from-cloudflares-seventeen-employee-resource-groups/

In this blog post, we’ll highlight a few stories from some of our 17 Employee Resource Groups (ERGs), including the most recent, Persianflare. But first, let me start with a personal story.

Do you remember being in elementary school and sitting in a classroom with about 30 other students when the teacher was calling on your classmates to read out loud from a book? The opportunity to read out loud was an exciting moment for many of my peers; one that made them feel proud of themselves. I, on the other hand, was frozen, in a state of panic, worried that I wouldn’t be able to sound out a word or completely embarrass myself by stuttering. I would practice reading the next paragraph in hopes that I wouldn’t mess up when I was called on. What I didn’t know at the time was that I was dyslexic, and I could barely read, especially out loud to a large group of people.

That is where I began to know the feeling of isolation. This feeling compounded year after year, when I wasn’t able to perform the way my peers did. My isolation prevailed from elementary school to middle school, through high school and even into college.

In college, I found a community that changed everything called Eye to Eye – a national non-profit organization that provides mentorship programs for students with learning disabilities. I attended one of their conferences with 200 other students. It was a profound moment when I realized that everyone in the room shared the experience of anxiety and fear around learning. Joining this community made me feel that I was not alone. Community, for me, is a group of people who have a shared experience. Community allowed me to see my learning disability as an asset, not a deficit. This is what I think the author Nilofer Merchant meant in 11 Rules for Creating Value in the Social Era when she said, “The social object that unites people isn’t a company or a product; the social object that most unites people is a shared value or purpose.”

When I came to work at Cloudflare, I decided to become an ERG leader for Flarability, which provides a space for discussing disability and accessibility at Cloudflare. The same deep sense of community that I felt at Eye to Eye was available to me when I joined Flarability.

Globally, 85% of the company participates in ERGs, and this year alone they hosted over 54 initiatives and events. As the pandemic persisted over the last year, Cloudflare remained a hybrid workforce, which posed many challenges for our company culture. ERGs had to rethink the way they foster connection. Today, we are highlighting Persianflare, Afroflare, Greencloud, and Desiflare because they have taken different approaches to building community.

Persianflare is our newest ERG, and in a very short amount of time, ERG Leader Shadnaz Nia has already brought together an entire community of folks who have created lasting impact. Here’s how…

Persianflare: Amplify the voice of Iranian people
Shadnaz Nia, Solutions Engineer, Toronto

“Persianflare is the newest ERG, which strives to nurture a community for all Persians and allies, where we champion freedom of speech, proudly celebrate our rich Persian heritage, and support growth and Persian traditions. We have assembled with a desire to amplify the voice of the Iranian people and bring awareness to human rights violations there, during unprecedented and unbearable times in history.

Cloudflare’s mission is to help build a better Internet, and one of our initiatives this year was to support a more private Internet for the people of Iran. At Cloudflare, we are fortunate to have executive leadership that takes action and works tirelessly to provide more uncensored and accessible Internet in countries such as Iran. We’ve done this by offering WARP in native Farsi for all Persian-speaking users, empowering them to access uncensored information and news and, in turn, strengthening the voice of Persian people living in Iran.

This project was accidentally routed to my team by the Product Team when they needed a translator. At the time, myself and other Persian employees were feeling powerless as a result of Mahsa Amini being murdered in the custody of the Iranian morality police. This was the birth of Persianflare. Through this project, I started collaborating with other employees and discovered that many of my colleagues really cared about this cause and wanted to help. Throughout the course of this initiative, we found more Persian employees in other regions and let them know about our progress. Words cannot describe how I felt when the app was released. It was one of the purest moments in my life. By bringing together Persian employees, allies of Persianflare, and the Product Team, this community was able to create real change for the people of Iran. To me, that is the power of community.

We plan to circle together to celebrate the global Human Rights Day on December 10, 2022, to continue discussing and growing our community. As the newest ERG, we are just getting started.”

Afroflare: Sharing experiences
Sieh Johnson, Engineering Manager, Austin
Trudi Bawah-Atalia, Recruiting Coordinator, London

“Afroflare is a community for People of African Diaspora to build connections while learning from each others’ lives, perspectives and experiences. Our initiatives in 2022 centered on creating and cultivating community by supporting and echoing Black voices and achievements within and outside of Cloudflare. Earlier in the year, given that the pandemic was slowing down but still active, we provided curated content that would allow members and allies to create, contribute and learn about cultures of the African diaspora at their own pace.

For our celebration of US Black History Month, we aggregated lists of Black-owned businesses and non-profit communities in various cities to support. We also hosted internal chats called “The Plug,” which highlight the immense talent of our members at Cloudflare. Lastly, Afroflare worked with allies to present an allyship workshop called “Celebrating Black Joy: An Upskilling Session for Allies.”

As 2022 progressed, we shifted to a hybrid of in-person and virtual events in order to foster more interaction and strengthen bonds. Our celebration of UK Black History Month included a cross-culture party in our Lisbon office. We collaborated with Latinflare & Wineflare on an international wine tasting event featuring South African wines, and an African + Caribbean food tasting event called “Taste from Home.” We wrapped up the festivities with a Black Women in Tech panel to discuss bias and how to navigate various obstacles faced by the BIPOC community in tech.

Virtually or in-person, this community has become a family – we laugh together, cry together, teach each other, and continue to grow together year after year. Each member’s experiences and culture are valued as we forge spaces for everyone to feel free to be authentic. Afroflare looks forward to continuing its goals of creating safe spaces, as well as educating, championing, and supporting our members and allies in 2023.”

Greencloud: A more sustainable Internet
Annika Garbers, Product Manager, Georgia

“Greencloud is a coalition of Cloudflare employees who are passionate about the environment. Our vision is to address the climate crisis through an intersectional lens and help Cloudflare become a clear leader in sustainable practices among tech companies. Greencloud was initially founded in 2019 but experienced most of its growth in membership, engagement, and activities after the pandemic started. The group became an outlet for current and new employees to connect on shared passions and channel our COVID-fueled anxieties for the world into productive climate-focused action.

Greencloud’s organizing since 2020 has primarily centered around “two weeks of action.” The first is Impact Week (happening now!), which includes projects driven by Greencloud members to help our customers build a more sustainable Internet using Cloudflare products. The second is Earth Week, scheduled around the global Earth Day celebration in April, which focuses on awareness and education. We’ve leveraged the tools available to Cloudflare employees and our community, like our blog and Cloudflare TV, to share information with a broader audience and on a wider range of topics than would be possible with in-person events. This year, publicly available Earth Week programming included sessions about Cloudflare’s sustainability focus for our hardware lifecycle, an interview with a sustainability-focused Project Galileo participant, a Policy team overview of our sustainability reporting practices, and a conversation about sustainable transportation with our People team. Covering a wide range of topics throughout our events and content created by our members not only helps everyone learn something new, it also reminds us of the importance of embracing and encouraging diverse perspectives in every community. The diversity of the Greencloud collective is a small demonstration of the reality that climate change is only successfully addressed through holistic action by people with many outlooks and skills working together.

As we embrace more flexible modes of work, the Greencloud crew is looking forward to maintaining our virtual events as well as introducing more in-person opportunities to engage with each other and our local communities. Building and maintaining deep connections with each other is key to the momentum and sustainability(!) of this work in the long term.”

Desiflare: South Asian delights
Arwa Ginwala, Solutions Engineering Manager, San Francisco

“Our goal for Desiflare is to build a sense of community among Cloudflare employees using the rich South Asian culture as a platform to bring people together. The Desiflare initiatives that had the most impact were in-person events after two long years of the pandemic. People were longing for a sense of community and belonging after months of Zoom fatigue. It was a breath of fresh air to see fellow Desis in person across multiple Cloudflare offices. Folks hired during the pandemic got an opportunity to come visit the newly-renovated office in San Francisco. Desis enjoyed South Asian food at the Austin office for the first time since Desiflare’s inception. Diwali was celebrated in the Singapore and Sydney offices, and the community in London played cricket, a sport very popular and well-loved in the South Asian community. Regardless of country of origin, gender, age, or cultural beliefs, and in the presence of a competitive atmosphere, everyone shared and rejoiced in memories from their childhoods.

We realize that Desiflare members have different levels of comfort regarding meeting people in person or traveling for in-person events. But everyone wants to feel a sense of community and connection with people who share the same interests. Keeping this in mind, it was important to organize events where everyone felt included and had a chance to be part of the community based on their preferences. We met virtually for weekly chai time, monthly lunches, and are now organizing virtual jam sessions for many Desis to showcase their talent and enjoy South Asian music regardless of where they are located. The community has been most engaged on the Desiflare chat room. It has provided a platform for discussing common topics that help people feel supported. Desiflare gives a unique opportunity to employees to connect with their culture and roots regardless of their job title and team. It’s a way to network cross-functionally, and allows you to bring your whole self to work, which is one of the best things about working at Cloudflare.”

Conclusion

The ERGs at Cloudflare have helped us realize the power of community and how critical it is for hybrid work. What I have learned alongside our ERG leaders is that if we as individuals want to feel connected, understood and seen, our ERG communities are essential. You can check out all the incredible ERGs on the Life at Cloudflare page, and I encourage you to consider starting an ERG at your company.

Helping build a safer Internet by measuring BGP RPKI Route Origin Validation

Post Syndicated from Carlos Rodrigues original https://blog.cloudflare.com/rpki-updates-data/

The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?

If the question needs asking, you might suspect we can’t. There is a shortage of reliable data on how much of the Internet is protected from preventable routing problems. Today, we’re releasing a new method to measure exactly that: what percentage of Internet users are protected by their Internet Service Provider from these issues. We find that there is a long way to go before the Internet is protected from routing problems, though it varies dramatically by country.

Why RPKI is necessary to secure Internet routing

The Internet is a network of independently-managed networks, called Autonomous Systems (ASes). To achieve global reachability, ASes interconnect with each other and determine the feasible paths to a given destination IP address by exchanging routing information using BGP. BGP enables routers with only local network visibility to construct end-to-end paths based on the arbitrary preferences of each administrative entity that operates that equipment. Typically, Internet traffic between a user and a destination traverses multiple AS networks using paths constructed by BGP routers.

BGP, however, lacks built-in security mechanisms to protect the integrity of the exchanged routing information and to provide authentication and authorization of the advertised IP address space. Because of this, AS operators must implicitly trust that the routing information exchanged through BGP is accurate. As a result, the Internet is vulnerable to the injection of bogus routing information, which cannot be mitigated by security measures at the client or server level of the network.

An adversary with access to a BGP router can inject fraudulent routes into the routing system, which can be used to execute an array of attacks, including:

  • Denial-of-Service (DoS) through traffic blackholing or redirection,
  • Impersonation attacks to eavesdrop on communications,
  • Machine-in-the-Middle exploits to modify the exchanged data and subvert reputation-based filtering systems.

Additionally, local misconfigurations and fat-finger errors can be propagated well beyond the source of the error and cause major disruption across the Internet.

Such an incident happened on June 24, 2019. Millions of users were unable to access Cloudflare address space when a regional ISP in Pennsylvania accidentally advertised routes to Cloudflare through their capacity-limited network. This was effectively the Internet equivalent of routing an entire freeway through a neighborhood street.

Traffic misdirections like these, either unintentional or intentional, are not uncommon. The Internet Society’s MANRS (Mutually Agreed Norms for Routing Security) initiative estimated that in 2020 alone there were over 3,000 route leaks and hijacks, and new occurrences can be observed every day through Cloudflare Radar.

The most prominent proposals to secure BGP routing, standardized by the IETF, focus on validating the origin of the advertised routes using the Resource Public Key Infrastructure (RPKI) and verifying the integrity of the paths with BGPsec. Specifically, RPKI-based origin validation (RFC 7115) relies on a public key infrastructure to validate that an AS advertising a route to a destination (an IP address space) is the legitimate owner of those IP addresses.

RPKI has been defined for a long time but lacks adoption. It requires network operators to cryptographically sign their prefixes, and routing networks to perform an RPKI Route Origin Validation (ROV) on their routers. This is a two-step operation that requires coordination and participation from many actors to be effective.

The two phases of RPKI adoption: signing origins and validating origins

RPKI has two phases of deployment: first, an AS that wants to protect its own IP prefixes can cryptographically sign Route Origin Authorization (ROA) records, thereby attesting to be the legitimate origin of that signed IP space. Second, an AS can avoid selecting invalid routes by performing Route Origin Validation (ROV, defined in RFC 6483).

With ROV, a BGP route received by a neighbor is validated against the available RPKI records. A route that is valid or missing from RPKI is selected, while a route with RPKI records found to be invalid is typically rejected, thus preventing the use and propagation of hijacked and misconfigured routes.
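
To make this decision rule concrete, here is a minimal sketch of RFC 6811-style origin validation in Python. The ROA record shape here is an illustrative assumption for this post, not the API of any particular router or validator:

```python
import ipaddress

def rov_state(prefix: str, origin_asn: int, roas) -> str:
    """Classify a BGP announcement per RFC 6811 origin validation.

    Each ROA is assumed (for illustration) to look like:
      {"prefix": "192.0.2.0/23", "asn": 64496, "maxLength": 24}
    """
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa in roas:
        roa_net = ipaddress.ip_network(roa["prefix"])
        # A ROA "covers" the route if the announced prefix falls inside it.
        if announced.version != roa_net.version or not announced.subnet_of(roa_net):
            continue
        covered = True
        # The route "matches" when the origin AS agrees and the prefix is
        # no more specific than the ROA's maxLength allows.
        if roa["asn"] == origin_asn and announced.prefixlen <= roa["maxLength"]:
            return "valid"
    return "invalid" if covered else "notfound"

roas = [{"prefix": "192.0.2.0/23", "asn": 64496, "maxLength": 24}]
print(rov_state("192.0.2.0/24", 64511, roas))    # "invalid": covered, wrong origin
print(rov_state("192.0.2.0/24", 64496, roas))    # "valid"
print(rov_state("198.51.100.0/24", 64496, roas)) # "notfound": no covering ROA
```

A router applying ROV would typically reject the “invalid” outcome and accept the other two.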

One issue with RPKI is the fact that implementing ROA is meaningful only if other ASes implement ROV, and vice versa. Therefore, securing BGP routing requires a united effort, and a lack of broader adoption disincentivizes ASes from committing the resources to validate their own routes. Conversely, increasing RPKI adoption can lead to network effects and accelerate RPKI deployment. Projects like MANRS and Cloudflare’s isbgpsafeyet.com promote good Internet citizenship among network operators and make the benefits of RPKI deployment known to the Internet. You can check whether your own ISP is being a good Internet citizen by testing it on isbgpsafeyet.com.

Measuring the extent to which both ROA (signing of addresses by the network that controls them) and ROV (filtering of invalid routes by ISPs) have been implemented is important to evaluating the impact of these initiatives, developing situational awareness, and predicting the impact of future misconfigurations or attacks.

Measuring ROAs is straightforward since ROA data is readily available from RPKI repositories. Querying RPKI repositories for publicly routed IP prefixes (e.g. prefixes visible in the RouteViews and RIPE RIS routing tables) allows us to estimate the percentage of addresses covered by ROA objects. Currently, there are 393,344 IPv4 and 86,306 IPv6 ROAs in the global RPKI system, covering about 40% of the globally routed prefix-AS origin pairs[1].
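
As a rough illustration of that first measurement, the sketch below loads a validated ROA export and computes what fraction of routed prefix-origin pairs are covered. We assume a JSON payload shaped like {"roas": [{"prefix": ..., "asn": ..., "maxLength": ...}]}, similar to the export published at rpki.cloudflare.com; treat the URL and field names as assumptions rather than a stable API:

```python
import ipaddress
import json
import urllib.request

# Assumed location and shape of a validated ROA export; any RPKI cache dump
# exposing prefix/asn/maxLength fields would work the same way.
ROA_EXPORT = "https://rpki.cloudflare.com/rpki.json"

def load_roas(url=ROA_EXPORT):
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    roas = []
    for r in payload["roas"]:
        asn = r["asn"]
        asn = int(asn[2:]) if isinstance(asn, str) else int(asn)  # "AS13335" -> 13335
        roas.append((ipaddress.ip_network(r["prefix"]), asn, int(r["maxLength"])))
    return roas

def roa_coverage(routed_pairs, roas):
    """Fraction of routed (prefix, origin_asn) pairs covered by at least one ROA."""
    covered = 0
    for prefix, _origin in routed_pairs:
        net = ipaddress.ip_network(prefix)
        if any(net.version == roa_net.version and net.subnet_of(roa_net)
               for roa_net, _, _ in roas):
            covered += 1
    return covered / len(routed_pairs)

# routed_pairs would come from RouteViews or RIPE RIS table dumps, e.g.:
#   roa_coverage([("1.1.1.0/24", 13335)], load_roas())
```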

Measuring ROV, however, is significantly more challenging given it is configured inside the BGP routers of each AS, not accessible by anyone other than each router’s administrator.

Measuring ROV deployment

Although we do not have direct access to the configuration of everyone’s BGP routers, it is possible to infer the use of ROV by comparing the reachability of RPKI-valid and RPKI-invalid prefixes from measurement points within an AS[2].

Consider the following toy topology as an example, where an RPKI-invalid origin is advertised through AS0 to AS1 and AS2. If AS1 filters and rejects RPKI-invalid routes, a user behind AS1 would not be able to connect to that origin. By contrast, if AS2 does not reject RPKI invalids, a user behind AS2 would be able to connect to that origin.

While occasionally a user may be unable to access an origin due to transient network issues, if multiple users act as vantage points for a measurement system, we would be able to collect a large number of data points to infer which ASes deploy ROV.
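
In practice that inference is an aggregation problem. A hedged sketch of the bookkeeping, using our own illustrative session format rather than the real (anonymized) request logs, might look like this:

```python
from collections import defaultdict

def per_as_drop_rate(sessions):
    """Aggregate vantage-point results into a per-AS RPKI-invalid drop rate.

    sessions is an iterable of (asn, valid_ok, invalid_ok) tuples, one per
    measurement session. Sessions whose control request failed tell us
    nothing about ROV, so they are discarded as transient network issues.
    """
    dropped = defaultdict(int)
    total = defaultdict(int)
    for asn, valid_ok, invalid_ok in sessions:
        if not valid_ok:
            continue
        total[asn] += 1
        if not invalid_ok:
            dropped[asn] += 1  # the RPKI-invalid origin was unreachable
    return {asn: dropped[asn] / total[asn] for asn in total}

# Example: two sessions behind AS1 drop invalids, one behind AS2 does not.
rates = per_as_drop_rate([(1, True, False), (1, True, False), (2, True, True)])
print(rates)  # {1: 1.0, 2: 0.0}
```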

[Figure: toy topology with an RPKI-invalid origin advertised through AS0 to AS1 and AS2]

If, in the figure above, AS0 filters invalid RPKI routes, then vantage points in both AS1 and AS2 would be unable to connect to the RPKI-invalid origin, making it hard to distinguish whether ROV is deployed at the ASes of our vantage points or in an AS along the path. One way to mitigate this limitation is to announce the RPKI-invalid origin from multiple locations of an anycast network, taking advantage of its direct interconnections to the measurement vantage points as shown in the figure below. As a result, an AS that does not itself deploy ROV is less likely to observe the benefits of upstream ASes using ROV, and we would be able to accurately infer ROV deployment per AS[3].

Note that it’s also important that the IP address of the RPKI-invalid origin should not be covered by a less specific prefix for which there is a valid or unknown RPKI route, otherwise even if an AS filters invalid RPKI routes its users would still be able to find a route to that IP.

[Figure: the RPKI-invalid origin announced from multiple locations of an anycast network]

The measurement technique described here is the one implemented by Cloudflare’s isbgpsafeyet.com website, allowing end users to assess whether or not their ISPs have deployed BGP ROV.

The isbgpsafeyet.com website itself doesn’t submit any data back to Cloudflare, but recently we started measuring whether end users’ browsers can successfully connect to RPKI-invalid origins. We use the same mechanism as is used for global performance data[4]. In particular, every measurement session (an individual end user at some point in time) attempts a request to both valid.rpki.cloudflare.com, which should always succeed as it’s RPKI-valid, and invalid.rpki.cloudflare.com, which is RPKI-invalid and should fail when the user’s ISP uses ROV.

This allows us to have continuous and up-to-date measurements from hundreds of thousands of browsers on a daily basis, and develop a greater understanding of the state of ROV deployment.
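
You can approximate a single measurement session yourself with the two test hostnames named above. The real probes run inside the browser and report results anonymously; this standalone sketch only mimics the logic, treating any connection failure to the invalid origin (while the control succeeds) as evidence of ROV on your path:

```python
import urllib.error
import urllib.request

def reachable(url, timeout=5.0):
    """Return True if the origin answered at all (any HTTP status counts)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # we got an HTTP response, so a route to the origin exists
    except (urllib.error.URLError, OSError):
        return False  # connection failed: no usable route to the origin

valid_ok = reachable("https://valid.rpki.cloudflare.com/")
invalid_ok = reachable("https://invalid.rpki.cloudflare.com/")

if valid_ok and not invalid_ok:
    print("RPKI-invalid route dropped: your ISP (or its upstream) deploys ROV.")
elif valid_ok and invalid_ok:
    print("RPKI-invalid origin reachable: no complete ROV on your path.")
else:
    print("Control request failed: measurement inconclusive.")
```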

The state of global ROV deployment

The figure below shows the raw number of ROV probe requests per hour during October 2022 to valid.rpki.cloudflare.com and invalid.rpki.cloudflare.com. In total, we observed 69.7 million successful probes from 41,531 ASNs.

[Figure: hourly probe requests to valid.rpki.cloudflare.com and invalid.rpki.cloudflare.com, October 2022]

Based on APNIC’s estimates of the number of end users per ASN, our weighted[5] analysis covers 96.5% of the world’s Internet population. As expected, the number of requests follows a diurnal pattern which reflects established user behavior in daily and weekly Internet activity[6].

We can also see that the number of successful requests to valid.rpki.cloudflare.com (gray line) closely follows the number of sessions that issued at least one request (blue line), which works as a smoke test for the correctness of our measurements.

As we don’t store the IP addresses that contribute measurements, we don’t have any way to count individual clients and large spikes in the data may introduce unwanted bias. We account for that by capturing those instants and excluding them.
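
A minimal version of that cleaning step, assuming hourly request counts in a pandas series (the robust z-score threshold here is an illustrative choice, not our production pipeline), could be:

```python
import pandas as pd

def drop_spikes(hourly_counts: pd.Series, z_max: float = 5.0) -> pd.Series:
    """Exclude hours whose request counts are extreme outliers.

    Uses a median/MAD-based robust z-score so that a handful of very large
    spikes cannot drag the exclusion threshold up with them.
    """
    median = hourly_counts.median()
    mad = (hourly_counts - median).abs().median()
    if mad == 0:
        return hourly_counts  # flat series: nothing to exclude
    z = 0.6745 * (hourly_counts - median) / mad
    return hourly_counts[z.abs() <= z_max]
```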

Overall, we estimate that out of the four billion Internet users, only 261 million (6.5%) are protected by BGP Route Origin Validation, but the true state of global ROV deployment is more subtle than this.

The following map shows the fraction of dropped RPKI-invalid requests from ASes with over 200 probes over the month of October. It depicts how far along each country is in adopting ROV but doesn’t necessarily represent the fraction of protected users in each country, as we will discover.

[Map: fraction of dropped RPKI-invalid requests by country, October 2022]

Sweden and Bolivia appear to be the countries with the highest level of adoption (over 80%), while only a few other countries have crossed the 50% mark (e.g. Finland, Denmark, Chad, Greece, the United States).

ROV adoption may be driven by a few ASes hosting large user populations, or by many ASes hosting small user populations. To understand such disparities, the map below plots the contrast between overall adoption in a country (as in the previous map) and median adoption over the individual ASes within that country. Countries with stronger reds have relatively few ASes deploying ROV with high impact, while countries with stronger blues have more ASes deploying ROV but with lower impact per AS.

[Map: overall country-level ROV adoption contrasted with median per-AS adoption]

In the Netherlands, Denmark, Switzerland, or the United States, adoption appears mostly driven by their larger ASes, while in Greece or Yemen it’s the smaller ones that are adopting ROV.

The following histogram summarizes the worldwide level of adoption for the 6,765 ASes covered by the previous two maps.

[Figure: histogram of ROV adoption levels across the 6,765 measured ASes]

Most ASes either don’t validate at all, or have close to 100% adoption, which is what we’d intuitively expect. However, it’s interesting to observe that there are small numbers of ASes all across the scale. ASes that exhibit a partial RPKI-invalid drop rate may either implement ROV partially (on some, but not all, of their BGP routers), or appear to drop RPKI invalids due to ROV deployment by other ASes in their upstream path.

To estimate the number of users protected by ROV we only considered ASes with an observed adoption above 95%, as an AS with an incomplete deployment still leaves its users vulnerable to route leaks from its BGP peers.

If we take the previous histogram and summarize by the number of users behind each AS, the green bar on the right corresponds to the 261 million users currently protected by ROV according to the above criteria (686 ASes).
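
Putting that threshold to work, the final aggregation reduces to a weighted sum. The sketch below assumes per-AS probe tallies like those produced earlier and a hypothetical users_per_asn mapping derived from APNIC’s published estimates:

```python
COMPREHENSIVE = 0.95   # an AS only counts as protected above this drop rate
MIN_PROBES = 200       # minimum samples before classifying an AS at all

def protected_users(probe_tallies, users_per_asn):
    """Sum estimated users behind ASes with comprehensive ROV deployment.

    probe_tallies maps ASN -> (invalid_probes_dropped, invalid_probes_total);
    users_per_asn maps ASN -> estimated user population (e.g. from APNIC).
    """
    total = 0
    for asn, (dropped, probes) in probe_tallies.items():
        if probes < MIN_PROBES:
            continue  # too few samples to classify this AS
        if dropped / probes >= COMPREHENSIVE:
            total += users_per_asn.get(asn, 0)
    return total
```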

[Figure: adoption histogram weighted by users per AS; the green bar corresponds to the 261 million protected users]

Looking back at the country adoption map, one would perhaps expect the number of protected users to be larger. But worldwide ROV deployment is still mostly partial, is missing the larger ASes, or both. This becomes even clearer when compared with the next map, plotting just the fraction of fully protected users.

[Map: fraction of fully protected users by country]

To wrap up our analysis, we look at two world economies chosen for their contrasting, almost symmetrical, stages of deployment: the United States and the European Union.

[Figures: comprehensive ROV deployment and protected users in the United States and the European Union]

112 million Internet users are protected by 111 ASes from the United States with comprehensive ROV deployments. Conversely, more than twice as many ASes from countries making up the European Union have fully deployed ROV, but end up covering only half as many users. This can be reasonably explained by end user ASes being more likely to operate within a single country rather than span multiple countries.

Conclusion

Probe requests were performed from end user browsers and very few measurements were collected from transit providers (which have few end users, if any). Also, paths between end user ASes and Cloudflare are often very short (a nice outcome of our extensive peering) and don’t traverse upper-tier networks that they would otherwise use to reach the rest of the Internet.

In other words, the methodology used focuses on ROV adoption by end user networks (e.g. ISPs) and isn’t meant to reflect the eventual effect of indirect validation from (perhaps validating) upper-tier transit networks. While indirect validation may limit the “blast radius” of (malicious or accidental) route leaks, it still leaves non-validating ASes vulnerable to leaks coming from their peers.

As with indirect validation, an AS remains vulnerable until its ROV deployment reaches a sufficient level of completion. We chose to only consider AS deployments above 95% as truly comprehensive, and Cloudflare Radar will soon begin using this threshold to track ROV adoption worldwide, as part of our mission to help build a better Internet.

When considering only comprehensive ROV deployments, some countries such as Denmark, Greece, Switzerland, Sweden, or Australia, already show an effective coverage above 50% of their respective Internet populations, with others like the Netherlands or the United States slightly above 40%, mostly driven by few large ASes rather than many smaller ones.

Worldwide we observe a very low effective coverage of just 6.5% over the measured ASes, corresponding to 261 million end users currently safe from (malicious and accidental) route leaks, which means there’s still a long way to go before we can declare BGP to be safe.

Footnotes

1. https://rpki.cloudflare.com/
2. Gilad, Yossi, Avichai Cohen, Amir Herzberg, Michael Schapira, and Haya Shulman. "Are we there yet? On RPKI’s deployment and security." Cryptology ePrint Archive (2016).
3. Geoff Huston. “Measuring ROAs and ROV”. https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/
4. Measurements are issued stochastically when users encounter 1xxx error pages from default (non-customer) configurations.
5. Probe requests are weighted by AS size as calculated from Cloudflare’s worldwide HTTP traffic.
6. Quan, Lin, John Heidemann, and Yuri Pradkin. "When the Internet sleeps: Correlating diurnal networks with external factors." In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 87-100. 2014.

Introducing Cloudflare’s Third Party Code of Conduct

Post Syndicated from Andria Jones original https://blog.cloudflare.com/introducing-cloudflares-third-party-code-of-conduct/

Cloudflare is on a mission to help build a better Internet, and we are committed to doing this with ethics and integrity in everything that we do. This commitment extends beyond our own actions, to third parties acting on our behalf. Cloudflare has the same expectations of ethics and integrity of our suppliers, resellers, and other partners as we do of ourselves.

Our new code of conduct for third parties

We first shared publicly our Code of Business Conduct and Ethics during Cloudflare’s initial public offering in September 2019. All Cloudflare employees take legal training as part of their onboarding process, as well as an annual refresher course, both of which cover the topics in our Code, and they sign an acknowledgement of our Code and related policies.

While our Code of Business Conduct and Ethics applies to all directors, officers, and employees of Cloudflare, it does not extend to third parties. Today, we are excited to share our Third Party Code of Conduct, formulated specifically with our suppliers, resellers, and other partners in mind. It covers topics such as:

  • Human Rights
  • Fair Labor
  • Environmental Sustainability
  • Anti-Bribery and Anti-Corruption
  • Trade Compliance
  • Anti-Competition
  • Conflicts of Interest
  • Data Privacy and Security
  • Government Contracting

But why have another Code?

We work with a wide range of third parties in all parts of the world, including countries with a high risk of corruption, the potential for political unrest, or legal regimes that differ from what we may consider standard in the United States. We wanted a Third Party Code of Conduct that serves both as a statement of Cloudflare’s core values and commitments, and as a call to third parties who share them.

The following are just a few examples of how we want to ensure our third parties act with ethics and integrity on our behalf, even when we aren’t watching:

We want to ensure that the servers and other equipment in our supply chain are sourced responsibly, from manufacturers who respect human rights — free of forced or child labor, with environmental sustainability at the forefront.

We want to provide our products and services to customers based on the quality of Cloudflare, not because a third party reseller may bribe a customer to enter into an agreement.

We want to ensure there are no conflicts of interest with our third parties that might give someone an unfair advantage.

As a government contractor, we want to ensure that we do not have telecommunications or video surveillance equipment, systems, or services from prohibited parties in our supply chain to protect national security interests.

Having a Third Party Code of Conduct is also industry standard. As Cloudflare garners an increasing number of Fortune 500 and other enterprise customers, we find ourselves reviewing and committing to their Third Party Codes of Conduct as well.

How it works

Our Third Party Code of Conduct is not meant to replace our terms of service or other contractual agreements. Rather, it is meant to supplement them, highlighting Cloudflare’s ethical commitments and encouraging our suppliers, resellers, and other partners to commit to the same. We will be cascading this new Code to all existing third parties, and will include it in onboarding for all new third parties going forward. A violation of the Code, or of any contractual agreements between Cloudflare and our third parties, may result in termination of the relationship.

This Third Party Code of Conduct is only one facet of Cloudflare’s third party due diligence program, and it complements the other work that Cloudflare does in this area. Cloudflare rigorously screens and vets our suppliers and partners at onboarding, and we continue to routinely monitor and audit them over time. We are always looking for ways to communicate with, educate, and learn from our third parties as well.

Join our mission

Are you a supplier, reseller or other partner who shares these values of ethics and integrity? Come work with us and join Cloudflare on its mission to help build a better, more ethical Internet.

The unintended consequences of blocking IP addresses

Post Syndicated from Alissa Starzak original https://blog.cloudflare.com/consequences-of-ip-blocking/

In late August 2022, Cloudflare’s customer support team began to receive complaints about sites on our network being down in Austria. Our team immediately went into action to try to identify the source of what looked from the outside like a partial Internet outage in Austria. We quickly realized that it was an issue with local Austrian Internet Service Providers.

But the service disruption wasn’t the result of a technical problem. As we later learned from media reports, what we were seeing was the result of a court order. Without any notice to Cloudflare, an Austrian court had ordered Austrian Internet Service Providers (ISPs) to block 11 of Cloudflare’s IP addresses.

In an attempt to block 14 websites that copyright holders argued were violating copyright, the court-ordered IP block rendered thousands of websites inaccessible to ordinary Internet users in Austria over a two-day period. What did the thousands of other sites do wrong? Nothing. They were a temporary casualty of the failure to build legal remedies and systems that reflect the Internet’s actual architecture.

Today, we are going to dive into a discussion of IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online.

Collateral effects, large and small

The craziest thing is that this type of blocking happens on a regular basis, all around the world. But unless that blocking happens at the scale of what happened in Austria, or someone decides to highlight it, it is typically invisible to the outside world. Even Cloudflare, with deep technical expertise and understanding about how blocking works, can’t routinely see when an IP address is blocked.

For Internet users, it’s even more opaque. They generally don’t know why they can’t connect to a particular website, where the connection problem is coming from, or how to address it. They simply know they cannot access the site they were trying to visit. And that can make it challenging to document when sites have become inaccessible because of IP address blocking.

Blocking practices are also widespread. In its Freedom on the Net report, Freedom House recently reported that 40 of the 70 countries it examined – ranging from countries like Russia, Iran, and Egypt to Western democracies like the United Kingdom and Germany – engaged in some form of website blocking. Although the report doesn’t delve into exactly how those countries block, many of them use forms of IP blocking, with the same kind of potential effects for a partial Internet shutdown that we saw in Austria.

Although it can be challenging to assess the amount of collateral damage from IP blocking, we do have examples where organizations have attempted to quantify it. In conjunction with a case before the European Court of Human Rights, the European Information Society Institute, a Slovakia-based nonprofit, reviewed Russia’s regime for website blocking in 2017. Russia exclusively used IP addresses to block content. The European Information Society Institute concluded that IP blocking led to “collateral website blocking on a massive scale” and noted that as of June 28, 2017, “6,522,629 Internet resources had been blocked in Russia, of which 6,335,850 – or 97% – had been blocked collaterally, that is to say, without legal justification.”

In the UK, overbroad blocking prompted the non-profit Open Rights Group to create the website Blocked.org.uk. The website has a tool enabling users and site owners to report on overblocking and request that ISPs remove blocks. The group also has hundreds of individual stories about the effect of blocking on those whose websites were inappropriately blocked, from charities to small business owners. Although it’s not always clear what blocking methods are being used, the fact that the site is necessary at all conveys the amount of overblocking. Imagine a dressmaker, watchmaker or car dealer looking to advertise their services and potentially gain new customers with their website. That doesn’t work if local users can’t access the site.

One reaction might be, “Well, just make sure there are no restricted sites sharing an address with unrestricted sites.” But as we’ll discuss in more detail, this ignores the large difference between the number of possible domain names and the number of available IP addresses, and runs counter to the very technical specifications that empower the Internet. Moreover, the definitions of restricted and unrestricted differ across nations, communities, and organizations. Even if it were possible to know all the restrictions, the designs of the protocols — of the Internet, itself — mean that it is simply infeasible, if not impossible, to satisfy every agency’s constraints.

Overblocking websites is not only a problem for users; it has legal implications. Because of the effect it can have on ordinary citizens looking to exercise their rights online, government entities (both courts and regulatory bodies) have a legal obligation to make sure that their orders are necessary and proportionate, and don’t unnecessarily affect those who are not contributing to the harm.

It would be hard to imagine, for example, that a court in response to alleged wrongdoing would blindly issue a search warrant or an order based solely on a street address without caring if that address was for a single family home, a six-unit condo building, or a high rise with hundreds of separate units. But those sorts of practices with IP addresses appear to be rampant.

In 2020, the European Court of Human Rights (ECHR) – the court overseeing the implementation of the Council of Europe’s European Convention on Human Rights – considered a case involving a website that was blocked in Russia not because it had been targeted by the Russian government, but because it shared an IP address with a blocked website. The website owner brought suit over the block. The ECHR concluded that the indiscriminate blocking was impermissible, ruling that the block on the lawful content of the site “amounts to arbitrary interference with the rights of owners of such websites.” In other words, the ECHR ruled that it was improper for a government to issue orders that resulted in the blocking of sites that were not targeted.

Using Internet infrastructure to address content challenges

Ordinary Internet users don’t think a lot about how the content they are trying to access online is delivered to them. They assume that when they type a domain name into their browser, the content will automatically pop up. And if it doesn’t, they tend to assume the website itself is having problems unless their entire Internet connection seems to be broken. But those basic assumptions ignore the reality that connections to a website are often used to limit access to content online.

Why do countries block connections to websites? Maybe they want to limit their own citizens from accessing what they believe to be illegal content – like online gambling or explicit material – that is permissible elsewhere in the world. Maybe they want to prevent the viewing of a foreign news source that they believe to be primarily disinformation. Or maybe they want to support copyright holders seeking to block access to a website to limit viewing of content that they believe infringes their intellectual property.

To be clear, blocking access is not the same thing as removing content from the Internet. There are a variety of legal obligations and authorities designed to permit actual removal of illegal content. Indeed, the legal expectation in many countries is that blocking is a matter of last resort, after attempts have been made to remove content at the source.

Blocking just prevents certain viewers – those whose Internet access depends on the ISP that is doing the blocking – from being able to access websites. The site itself continues to exist online and is accessible by everyone else. But when the content originates from a different place and can’t be easily removed, a country may see blocking as their best or only approach.

We recognize the concerns that sometimes drive countries to implement blocking. But fundamentally, we believe it’s important for users to know when the websites they are trying to access have been blocked, and, to the extent possible, who has blocked them from view and why. And it’s critical that any restrictions on content should be as limited as possible to address the harm, to avoid infringing on the rights of others.

Brute force IP address blocking doesn’t allow for those things. It’s fully opaque to Internet users. The practice has unintended, unavoidable consequences on other content. And the very fabric of the Internet means that there is no good way to identify what other websites might be affected either before or during an IP block.

To understand what happened in Austria and what happens in many other countries around the world that seek to block content with the bluntness of IP addresses, we have to understand what is going on behind the scenes. That means diving into some technical details.

Identity is attached to names, never addresses

Before we even get started describing the technical realities of blocking, it’s important to stress that the first and best option to deal with content is at the source. A website owner or hosting provider has the option of removing content at a granular level, without having to take down an entire website. On the more technical side, a domain name registrar or registry can potentially withdraw a domain name, and therefore a website, from the Internet altogether.

But how do you block access to a website, if for whatever reason the content owner or content source is unable or unwilling to remove it from the Internet?  There are only three possible control points.

The first is via the Domain Name System (DNS), which translates domain names into IP addresses so that the site can be found. Instead of returning a valid IP address for a domain name, the DNS resolver could lie and respond with the code NXDOMAIN, meaning “there is no such name.” A better approach would be to use one of the honest error numbers standardized in 2020 as Extended DNS Errors (RFC 8914), including error 15 for blocked, 16 for censored, 17 for filtered, and 18 for prohibited, although these are not widely used currently.
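
As a sketch of what such an honest answer could look like on the wire, the snippet below (in Python, assuming the dnspython library, version 2.1 or later) builds an NXDOMAIN response carrying Extended DNS Error 15 (“Blocked”); the domain name and explanatory text are placeholders:

    # A resolver that blocks a name could answer with Extended DNS Error 15
    # ("Blocked", RFC 8914) instead of a bare, dishonest NXDOMAIN.
    import dns.edns
    import dns.message
    import dns.rcode

    query = dns.message.make_query("blocked.example.com", "A")  # placeholder

    response = dns.message.make_response(query)
    response.set_rcode(dns.rcode.NXDOMAIN)
    response.use_edns(0, options=[
        dns.edns.EDEOption(dns.edns.EDECode.BLOCKED, "blocked by local policy"),
    ])

    print(response)  # the EDE option appears in the OPT pseudo-record

Clients and debugging tools that understand RFC 8914 can then surface the reason for the failure instead of a misleading “no such name.”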

Interestingly, the precision and effectiveness of DNS as a control point depend on whether the DNS resolver is private or public. Private or ‘internal’ DNS resolvers are operated by ISPs and enterprise environments for their own known clients, which means that operators can be precise in applying content restrictions. By contrast, that level of precision is unavailable to open or public resolvers, not least because routing and addressing on the Internet are global and ever-changing, in stark contrast to addresses and routes on a fixed postal or street map. For example, private DNS resolvers may be able to block access to websites within specified geographic regions with at least some level of accuracy in a way that public DNS resolvers cannot, which becomes profoundly important given the disparate (and inconsistent) blocking regimes around the world.

The second approach is to block individual connection requests to a restricted domain name. When a user or client wants to visit a website, a connection is initiated from the client to a server name, i.e. the domain name. If a network or on-path device is able to observe the server name, then the connection can be terminated. Unlike DNS, there is no mechanism to communicate to the user that access to the server name was blocked, or why.
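
To make that concrete, here is a simplified sketch (in Python) of how an on-path device could pull the server name out of the first packet of a TLS connection. It is a toy parser for an unencrypted ClientHello; real middleboxes also handle record fragmentation, QUIC, and Encrypted ClientHello, all of which this ignores:

    def extract_sni(record):
        """Best-effort extraction of the SNI hostname from one raw TLS record
        carrying a ClientHello; returns None when no server_name is present."""
        # TLS record header: type (1) + version (2) + length (2); 22 = handshake
        if len(record) < 5 or record[0] != 22:
            return None
        pos = 5
        if record[pos] != 1:               # handshake type 1 = ClientHello
            return None
        pos += 4                           # handshake type (1) + length (3)
        pos += 2 + 32                      # client_version + random
        pos += 1 + record[pos]             # session_id
        pos += 2 + int.from_bytes(record[pos:pos + 2], "big")  # cipher_suites
        pos += 1 + record[pos]             # compression_methods
        end = pos + 2 + int.from_bytes(record[pos:pos + 2], "big")
        pos += 2
        while pos + 4 <= end:              # walk the extensions
            ext_type = int.from_bytes(record[pos:pos + 2], "big")
            ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
            pos += 4
            if ext_type == 0:              # extension 0 = server_name
                name_len = int.from_bytes(record[pos + 3:pos + 5], "big")
                return record[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
            pos += ext_len
        return None

A filtering device would compare the extracted name against its blocklist and, on a match, drop or reset the connection, with no way to tell the client why.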

The third approach is to block access to an IP address where the domain name can be found. This is a bit like blocking the delivery of all mail to a physical address. Consider, for example, if that address is a skyscraper with its many unrelated and independent occupants. Halting delivery of mail to the address of the skyscraper causes collateral damage by invariably affecting all parties at that address. IP addresses work the same way.

Notably, the IP address is the only one of the three options that has no attachment to the domain name. The website’s domain name is not required for routing and delivery of data packets; in fact, it is fully ignored. A website can be available on any IP address, or even on many IP addresses simultaneously. And the set of IP addresses that a website is on can change at any time. That set cannot definitively be known by querying DNS, which has been able to return any valid address at any time, for any reason, since 1995.

The idea that an address is representative of an identity is anathema to the Internet’s design, because the decoupling of address from name is deeply embedded in the Internet standards and protocols, as is explained next.

The Internet is a set of protocols, not a policy or perspective

Many people still incorrectly assume that an IP address represents a single website. We’ve previously stated that the association between names and addresses is understandable given that the earliest connected components of the Internet appeared as one computer, one interface, one address, and one name. This one-to-one association was an artifact of the ecosystem in which the Internet Protocol was deployed, and satisfied the needs of the time.

Despite the one-to-one naming practice of the early Internet, it has always been possible to assign more than one name to a server (or ‘host’). For example, a server was (and is still) often configured with names to reflect its service offerings such as mail.example.com and www.example.com, but these shared a base domain name.  There were few reasons to have completely different domain names until the need to colocate completely different websites onto a single server. That practice was made easier in 1997 by the Host header in HTTP/1.1, a feature preserved by the SNI field in a TLS extension in 2003.
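
As a sketch of that mechanism using Python’s standard library, these two requests go to the same placeholder IP address and differ only in the Host header, which is what lets a single server distinguish colocated websites:

    # Two HTTP/1.1 requests to one placeholder address (203.0.113.7); the
    # Host header alone tells the server which colocated site is wanted.
    import http.client

    for hostname in ("www.example.com", "www.example.net"):
        conn = http.client.HTTPConnection("203.0.113.7", timeout=5)
        conn.request("GET", "/", headers={"Host": hostname})
        print(hostname, conn.getresponse().status)
        conn.close()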

Throughout these changes, the Internet Protocol and, separately, the DNS protocol, have not only kept pace, but have remained fundamentally unchanged. They are the very reason that the Internet has been able to scale and evolve, because they are about addresses, reachability, and arbitrary name to IP address relationships.

The designs of IP and DNS are also entirely independent, which only reinforces that names are separate from addresses. A closer inspection of the protocols’ design elements illuminates the misperceptions of policies that lead to today’s common practice of controlling access to content by blocking IP addresses.

By design, IP is for reachability and nothing else

Much like large public civil engineering projects rely on building codes and best practice, the Internet is built using a set of open standards and specifications informed by experience and agreed by international consensus. The Internet standards that connect hardware and applications are published by the Internet Engineering Task Force (IETF) in the form of “Requests for Comment” or RFCs — so named not to suggest incompleteness, but to reflect that standards must be able to evolve with knowledge and experience. The IETF and its RFCs are cemented in the very fabric of communications; RFC 1, for example, was published in 1969. The Internet Protocol (IP) specification reached RFC status in 1981.

Alongside the standards organizations, the Internet’s success has been helped by a core idea known as the end-to-end (e2e) principle, codified also in 1981, based on years of trial and error experience. The end-to-end principle is a powerful abstraction that, despite taking many forms, manifests a core notion of the Internet Protocol specification: the network’s only responsibility is to establish reachability, and every other possible feature has a cost or a risk.

The idea of “reachability” in the Internet Protocol is also enshrined in the design of IP addresses themselves. Looking at the Internet Protocol specification, RFC 791, the following excerpt from Section 2.3 is explicit about IP addresses having no association with names, interfaces, or anything else.

Addressing

    A distinction is made between names, addresses, and routes [4].   A
    name indicates what we seek.  An address indicates where it is.  A
    route indicates how to get there.  The internet protocol deals
    primarily with addresses.  It is the task of higher level (i.e.,
    host-to-host or application) protocols to make the mapping from
    names to addresses.   The internet module maps internet addresses to
    local net addresses.  It is the task of lower level (i.e., local net
    or gateways) procedures to make the mapping from local net addresses
    to routes.
                            [ RFC 791, 1981 ]

Just like postal addresses for skyscrapers in the physical world, IP addresses are no more than street addresses written on a piece of paper. And just like a street address on paper, one can never be confident about the entities or organizations that exist behind an IP address. In a network like Cloudflare’s, any single IP address represents thousands of servers, and can have even more websites and services — in some cases numbering into the millions — expressly because the Internet Protocol is designed to enable it.

Here’s an interesting question: could we, or any content service provider, ensure that every IP address matches to one and only one name? The answer is an unequivocal no, and here too, because of a protocol design — in this case, DNS.

The number of names in DNS always exceeds the available addresses

A one-to-one relationship between names and addresses is impossible given the Internet specifications for the same reasons that it is infeasible in the physical world. Ignore for a moment that people and organizations can change addresses. Fundamentally, the number of people and organizations on the planet exceeds the number of postal addresses. We not only want, but need for the Internet to accommodate more names than addresses.

The difference in magnitude between names and addresses is also codified in the specifications. IPv4 addresses are 32 bits, and IPv6 addresses are 128 bits. The size of a domain name that can be queried by DNS is as many as 253 octets, or 2,024 bits (from Section 2.3.4 in RFC 1035, published 1987). The table below helps to put those differences into perspective:

[Table: sizes of the IPv4, IPv6, and DNS name spaces]

On November 15, 2022, the United Nations announced the population of the Earth surpassed eight billion people. Intuitively, we know that there cannot be anywhere near as many postal addresses. The number of possible names, on the planet and similarly on the Internet, does and must exceed the number of available addresses.
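
A quick back-of-the-envelope comparison in Python, using the field sizes quoted above, shows just how lopsided the two spaces are (the 2,024-bit figure is a loose upper bound, since name syntax is further constrained):

    ipv4_addresses = 2 ** 32    # 32-bit addresses: about 4.3 billion
    ipv6_addresses = 2 ** 128   # 128-bit addresses: about 3.4e38
    dns_names = 2 ** 2024       # loose upper bound for 253-octet names

    print(f"IPv4 addresses: {ipv4_addresses:,}")
    print(f"IPv6 addresses: {float(ipv6_addresses):.2e}")
    print(f"Possible DNS names: a number with {len(str(dns_names))} digits")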

The proof is in the names!

Now that those two relevant principles in the international standards are understood – that IP addresses and domain names serve distinct purposes, and that there is no one-to-one relationship between the two – an examination of a recent case of content blocking using IP addresses helps to show why the practice is problematic. Take, for example, the IP blocking incident in Austria in late August 2022. The goal was to restrict access to 14 target domains by blocking 11 IP addresses (source: RTR.Telekom.Post via the Internet Archive) — the mismatch between those two numbers should have been a warning flag that IP blocking might not have the desired effect.

Analogies and international standards may explain the reasons that IP blocking should be avoided, but we can see the scale of the problem by looking at Internet-scale data. To better understand and explain the severity of IP blocking, we decided to generate a global view of domain names and IP addresses (thanks are due to a PhD research intern, Sudheesh Singanamalla, for the effort). In September 2022, we used the authoritative zone files for the top-level domains (TLDs) .com, .net, .info, and .org, together with top-1M website lists, to find a total of 255,315,270 unique names. We then queried DNS from each of five regions and recorded the set of IP addresses returned. The table below summarizes our findings:

[Table: unique names and the IP address sets observed from DNS queries in five regions]

The table above makes clear that it takes no more than 10.7 million addresses to reach 255,315,270 names from any region on the planet, and the total set of IP addresses for those names from everywhere is about 16 million — the ratio of names to IP addresses is nearly 24x in Europe and 16x globally.

There is one more worthwhile detail about the numbers above: The IP addresses are the combined totals of both IPv4 and IPv6 addresses, meaning that far fewer addresses are needed to reach all 255M websites.
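
The core of the measurement can be sketched in a few lines of Python using the standard resolver; the domain list here is a placeholder stand-in for the 255 million names taken from zone files, and the real study repeated the queries from vantage points in five regions:

    # Map each resolved IP address to the set of domains it serves, to
    # approximate the name-to-address ratios described above.
    import socket
    from collections import defaultdict

    domains = ["example.com", "example.net", "example.org"]  # placeholder list

    ip_to_domains = defaultdict(set)
    for domain in domains:
        try:
            for *_, sockaddr in socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP):
                ip_to_domains[sockaddr[0]].add(domain)  # collects IPv4 and IPv6
        except socket.gaierror:
            pass  # skip names that fail to resolve

    print(f"{len(domains)} names map to {len(ip_to_domains)} unique addresses")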

We’ve also inspected the data a few different ways to find some interesting observations. For example, the figure below shows the cumulative distribution (CDF) of the proportion of websites that can be visited with each additional IP address; a sketch of this computation follows the observations below. On the y-axis is the proportion of websites that can be reached given some number of IP addresses. On the x-axis, the 16M IP addresses are ranked from the most domains on the left to the least domains on the right. Note that any IP address in this set is a response from DNS, so it must have at least one domain name, but the highest numbers of domains on a single IP address in the set run into the tens of millions.

[Figure: cumulative distribution of domains reachable per IP address]

By looking at the CDF there are a few eye-watering observations:

  • Fewer than 10 IP addresses are needed to reach 20% of, or approximately 51 million, domains in the set;
  • 100 IPs are enough to reach almost 50% of domains;
  • 1000 IPs are enough to reach 60% of domains;
  • 10,000 IPs are enough to reach 80%, or about 204 million, domains.
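
A sketch of that computation, reusing the hypothetical ip_to_domains mapping from the earlier snippet; greedily ranking addresses by how many domains each serves approximates the curve:

    def reachability_cdf(ip_to_domains):
        """Fraction of all domains covered as addresses are taken in
        descending order of the number of domains they serve."""
        all_domains = set().union(*ip_to_domains.values())
        ranked = sorted(ip_to_domains.values(), key=len, reverse=True)
        reached, cdf = set(), []
        for names in ranked:
            reached |= names  # set union avoids double counting shared domains
            cdf.append(len(reached) / len(all_domains))
        return cdf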

In fact, from the total set of 16 million addresses, fewer than half, 7.1M (43.7%), of the addresses in the dataset had just one name. On this ‘one’ point we must be additionally clear: we are unable to ascertain whether there was only that one name and no others on those addresses, because there are many more domain names than those contained in .com, .org, .info, and .net — there might very well be other names on those addresses.

In addition to having a number of domains on a single IP address, the IP addresses behind any of those domains may change over time. Changing IP addresses periodically can help websites with security and performance, and can improve their reliability. One common example in use by many operators is load balancing. This means DNS queries may return different IP addresses over time, or in different places, for the same websites. This is a further, and separate, reason why blocking based on IP addresses will not serve its intended purpose over time.
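
This behavior is easy to observe directly; here is a small Python sketch that resolves the same placeholder hostname repeatedly and collects the answers (load-balanced services will often rotate or vary them between queries):

    import socket
    import time

    seen = set()
    for _ in range(5):
        _, _, ips = socket.gethostbyname_ex("www.example.com")  # placeholder name
        seen.update(ips)
        time.sleep(1)

    print(f"Observed {len(seen)} distinct address(es): {sorted(seen)}")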

Ultimately, there is no reliable way to know the number of domains on an IP address without inspecting all names in the DNS, from every location on the planet, at every moment in time — an entirely infeasible proposition.

Any action on an IP address must, by the very definitions of the protocols that rule and empower the Internet, be expected to have collateral effects.

Lack of transparency with IP blocking

So if we have to expect that the blocking of an IP address will have collateral effects, and it’s generally agreed that it’s inappropriate or even legally impermissible to overblock by blocking IP addresses that have multiple domains on them, why does it still happen? That’s hard to know for sure, so we can only speculate. Sometimes it reflects a lack of technical understanding about the possible effects, particularly from entities like judges who are not technologists. Sometimes governments just ignore the collateral damage – as they do with Internet shutdowns – because they see the blocking as in their interest. And when there is collateral damage, it’s not usually obvious to the outside world, so there can be very little external pressure to have it addressed.

It’s worth stressing that point. When an IP is blocked, a user just sees a failed connection. They don’t know why the connection failed, or who caused it to fail. On the other side, the server acting on behalf of the website doesn’t even know it’s been blocked until it starts getting complaints about the fact that it is unavailable. There is virtually no transparency or accountability for the overblocking. And it can be challenging, if not impossible, for a website owner to challenge a block or seek redress for being inappropriately blocked.

Some governments, including Austria, do publish active block lists, which is an important step for transparency. But for all the reasons we’ve discussed, publishing an IP address does not reveal all the sites that may have been blocked unintentionally. And it doesn’t give those affected a means to challenge the overblocking. Again, in the physical world example, it’s hard to imagine a court order on a skyscraper that wouldn’t be posted on the door, but we often seem to jump over such due process and notice requirements in virtual space.

We think talking about the problematic consequences of IP blocking is more important than ever as an increasing number of countries push to block content online. Unfortunately, ISPs often use IP blocks to implement those requirements. Sometimes the ISP doing so is newer or less well-resourced than its larger counterparts, but larger ISPs engage in the practice too, understandably so, because IP blocking takes the least effort and is readily available in most equipment.

And as more and more domains are included on the same number of IP addresses, the problem is only going to get worse.

Next steps

So what can we do?

We believe the first step is to improve transparency around the use of IP blocking. Although we’re not aware of any comprehensive way to document the collateral damage caused by IP blocking, we believe there are steps we can take to expand awareness of the practice. We are committed to working on new initiatives that highlight those insights, as we’ve done with the Cloudflare Radar Outage Center.

We also recognize that this is a whole Internet problem, and therefore has to be part of a broader effort. The significant likelihood that blocking by IP address will result in restricting access to a whole series of unrelated (and untargeted) domains should make it a non-starter for everyone. That’s why we’re engaging with civil society partners and like-minded companies to lend their voices to challenge the use of blocking IP addresses as a way of addressing content challenges and to point out collateral damage when they see it.

To be clear, to address the challenges of illegal content online, countries need legal mechanisms that enable the removal or restriction of content in a rights-respecting way. We believe that addressing the content at the source is almost always the best and the required first step. Laws like the EU’s new Digital Services Act or the Digital Millennium Copyright Act provide tools that can be used to address illegal content at the source, while respecting important due process principles. Governments should focus on building and applying legal mechanisms in ways that least affect other people’s rights, consistent with human rights expectations.

Very simply, these needs cannot be met by blocking IP addresses.

We’ll continue to look for new ways to talk about network activity and disruption, particularly when it results in unnecessary limitations on access. Check out Cloudflare Radar for more insights about what we see online.

Applying Human Rights Frameworks to our approach to abuse

Post Syndicated from Alissa Starzak original https://blog.cloudflare.com/applying-human-rights-frameworks-to-our-approach-to-abuse/

Last year, we launched Cloudflare’s first Human Rights Policy, formally stating our commitment to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs) and articulating how we planned to meet the commitment as a business to respect human rights. Our Human Rights Policy describes many of the concrete steps we take to implement these commitments, from protecting the privacy of personal data to respecting the rights of our diverse workforce.

We also look to our human rights commitments in considering how to approach complaints of abuse by those using our services. Cloudflare has long taken positions that reflect our belief that we must consider the implications of our actions for both Internet users and the Internet as a whole. The UNGPs guide that understanding by encouraging us to think systematically about how the decisions Cloudflare makes may affect people, with the goal of building processes to incorporate those considerations.

Human rights frameworks have also been adopted by policymakers seeking to regulate content and behavior online in a rights-respecting way. The Digital Services Act recently passed by the European Union, for example, includes a variety of requirements for intermediaries like Cloudflare that come from human rights principles. So using human rights principles to help guide our actions is not only the right thing to do, it is likely to be required by law at some point down the road.

So what does it mean to apply human rights frameworks to our response to abuse? As we’ll talk about in more detail below, we use human rights concepts like access to fair process, proportionality (the idea that actions should be carefully calibrated to minimize any effect on rights), and transparency.

Human Rights online

The first step is to understand the integral role the Internet plays in human rights. We use the Internet not only to find and share information, but for education, commerce, employment, and social connection. Not only is the Internet essential to our rights of freedom of expression, opinion and association, the UN considers it an enabler of all of our human rights.

The Internet allows activists and human rights defenders to expose abuses across the globe. It allows collective causes to grow into global movements. It provides the foundation for large-scale organizing for political and social change in ways that have never been possible before. But all of that depends on having access to it.

And as we’ve seen, access to a free, open, and interconnected Internet is not guaranteed.  Authoritarian governments take advantage of the critical role it plays by denying access to it altogether and using other tactics to intimidate their populations. As described by a recent UN report, government-mandated Internet “shutdowns complement other digital measures used to suppress dissent, such as intensified censorship, systematic content filtering and mass surveillance, as well as the use of government-sponsored troll armies, cyberattacks and targeted surveillance against journalists and human rights defenders.” Online access is limited by the failure to invest in infrastructure or lack of individual resources. Private interests looking to leverage Internet infrastructure to solve commercial content problems result in overblocking of unrelated websites. Cyberattacks make even critical infrastructure inaccessible. Gatekeepers limit entry for business reasons, risking the silencing of those without financial or political clout.

If we want to maintain an Internet that is for everyone, we need to develop rules within companies that don’t take access to it for granted. Processes that could limit Internet access should be thoughtful and well-grounded in human rights principles.

The impact of free services

Cloudflare is unique among its competitors in offering a variety of services that anyone can sign up for online, for free. Our free services make it possible for everyone – nonprofits, small businesses, developers, and vulnerable voices around the world – to have access to security services they otherwise might be unable to afford.

Cloudflare’s approach of providing free and low cost security services online is consistent with human rights and the push for greater access to the Internet for everyone. Having a free plan removes barriers to the Internet. It means you don’t have to be a big company, a government, or an organization with a popular cause to protect yourself from those who might want to silence you through a cyberattack.

Making access to security services easily available for free also has the potential to relegate DDoS attacks to the dustbin of history. If we can stop DDoS attacks from being an effective means of attack, we may yet be able to divert attackers away from them. Ridding the world of the scourge of DDoS attacks would benefit everyone. In particular, though, it would benefit vulnerable entities doing good for the world who do not otherwise have the means to defend themselves.

But that same free services model that empowers vulnerable groups and has the potential to eliminate DDoS attacks once and for all means that we at Cloudflare are often not picking our customers; they are picking us. And that comes with its own risk. For every dissenting voice challenging an oppressive regime that signs up for our service, there may also be a bad actor doing things online that are inconsistent with our values.

To reflect that reality, we need an abuse framework that satisfies our goals of expanding access to the global Internet and getting rid of cyberattacks, while also finding ways, both as a company and together with the broader Internet community, to address human rights harms.

Applying the UNGP framework to online activity

As we’ve described before, the UNGPs assign businesses and governments different obligations when it comes to human rights. Governments are required to protect human rights within their territories, taking appropriate steps to prevent, investigate, punish and redress harms. Companies, on the other hand, are expected to respect human rights. That means that companies should conduct due diligence to avoid taking actions that would infringe on the rights of others, and remedy any harms that do occur.

It can be challenging to apply that UNGP protect/respect/remedy framework to online activities. Because the Internet serves as an enabler of a variety of human rights, decisions that alter access to the Internet – from serving a particular market to changing access to particular services – can affect the rights of many different people, sometimes in competing ways.

Access to the Internet is also not typically provided by a single company. When you visit a website online, you’re experiencing the services of many different providers. Just for that single website, there’s probably a website owner who created the website, a website host storing the content, a domain name registrar providing the domain name, a domain name registry running the top level domain like .com or .org, a reverse proxy helping keep the website online in case of attack, a content delivery network improving the efficiency of Internet transmissions, a transit provider transmitting the website content across the Internet, the ISPs delivering the content to the end user, and a browser to make the website’s content intelligible to you.

And that description doesn’t even include the captcha provider that helps make sure the site is visited by humans rather than bots, the open source software developer whose code was used to build the site, the various plugins that enable the site to show video or accept payments, or the many other providers online who might play an important role in your user experience. So our ability to exercise our human rights online is dependent on the actions of many providers, acting as part of an ecosystem to bring us the Internet.

Trying to understand the appropriate role for companies is even more complicated when it comes to questions of online abuse. Online abuse is not generally caused by one of the many infrastructure providers who facilitate access to the Internet; the harm is caused by a third party. Because of the variety of providers mentioned above, a company may have limited options at its disposal to do anything that would help address the online harm in a targeted way, consistent with human rights principles. For example, blocking access to parts of the Internet, or stepping aside to allow a site to be subjected to a cyberattack, has the potential to have profound negative impact on others’ access to the Internet and thus human rights.

To help work through those competing human rights concerns, Cloudflare strives to build processes around online abuse that incorporate human rights principles. Our approach focuses on three recognized human rights principles: (1) fair process for both complainants and users, (2) proportionality, and (3) transparency. And we have engaged, and continue to engage, extensively with human rights focused groups like the Global Network Initiative and the UN’s B-Tech Project, as well as our Project Galileo partners and many other stakeholders, to understand the impact of our policies.

Fair abuse processes – Grievance mechanisms for complainants

Human rights law, and the UNGPs in particular, stress that individuals and communities who are harmed should have mechanisms for remediation of the harm. Those mechanisms – which include both legal processes like going to court and more informal private processes – should be applied equitably and fairly, in a predictable and transparent way. A company like Cloudflare can help by establishing grievance mechanisms that give people an opportunity to raise their concerns about harm, or to challenge deprivation of rights.

To address online abuse by entities that might be using Cloudflare services, Cloudflare has an abuse reporting form that is open to anyone online. Our website includes a detailed description of how to report problematic activity. Individuals worried about retaliation, such as those submitting complaints of threatening or harassing behavior, can choose to submit complaints anonymously, although it may limit the ability to follow up on the complaint.

Cloudflare uses the information we receive through that abuse reporting process to respond to complaints about online abuse based on the types of services we may be providing as well as the nature of the complaint.

Because of the way Cloudflare protects entities from cyberattack, a complainant may not know who is hosting the content that is the source of the alleged harm. To make sure that someone who might have been harmed has an opportunity to remediate that harm, Cloudflare has created an abuse process to get complaints to the right place. If the person submitting the complaint is seeking to remove content, something that Cloudflare cannot do if it is providing only performance or security services, Cloudflare will forward the complaint to the website owner and hosting provider for appropriate action.

Fair abuse processes – Notice and Appeal for Cloudflare users

Trying to build a fair policy around abuse requires understanding that complaints are not always submitted in good faith, and that abuse processes can themselves be abused. Cloudflare, for example, has received abuse complaints that appear to be intended to intimidate journalists reporting on government corruption, to silence political opponents, and to disrupt competitors.

A fair abuse process therefore also means being fair to Cloudflare users or website owners who might suffer consequences of a complaint. Cloudflare generally provides notice to our users of potential complaints so that they can respond to allegations of abuse, although individual circumstances and anonymous complaints sometimes make that difficult.

We also strive to provide users with notice of potential actions we might take, as well as an opportunity to provide additional information that might inform our decisions about appropriate action. Users can also seek reconsideration of decisions.

Proportionality – Differentiating our products

Proportionality is a core principle of human rights. In human rights law, proportionality means that any interference with rights should be as limited and narrow as possible in seeking to address the harm. In other words, the goal of proportionality is to minimize the collateral effect of an action on other human rights.

Proportionality is an important principle for Internet infrastructure because of the dependencies among different providers required to access the Internet. A government demand that a single ISP shut off or throttle access to the Internet can have dramatic real-life effects, “depriving thousands or even millions of their only means of reaching their loved ones, continuing their work or participating in political debates or decision-making.” Voluntary action by individual providers can have a similar broad cascading effect, completely eliminating access to certain services or swaths of content.

To avoid these kinds of consequences, we apply the concept of proportionality to address abuse on our network, particularly when a complaint implicates other rights, like freedom of expression. Complaints about content are best addressed by those able to take the most targeted action possible. A complaint about a single image or post, for example, should not result in an entire website being taken down.

The principle of proportionality is the basis for our use of different approaches to address abuse for different types of products. If we’re hosting content with products like Cloudflare Pages, Cloudflare Images, or Cloudflare Stream, we’re able to take more granular, specific action. In those cases, we have an acceptable hosting policy that enables us to take action on particular pieces of content. We give the Cloudflare user an opportunity to take down the content themselves before we follow notice-and-takedown procedures, which allows them to contest the takedown if they believe it is inappropriate.

But when we’re only providing security services that prevent the site being removed from the Internet by a cyberattack, Cloudflare can’t take targeted action on particular pieces of content. Nor do we generally see termination of DDoS protection services as the right or most effective remedy for addressing a website with harmful content. Termination of security services only resolves the concerns if the site is removed from the Internet by DDoS attack, an act which is illegal in most jurisdictions. From a human rights standpoint, making content inaccessible through a vigilante cyber attack is not only inconsistent with the principle of proportionality, but with the principles of notice and due process. It also provides no opportunities for remediation of harm in the event of a mistake.

Likewise, when we’re providing core Internet technology services like DNS, we do not have the ability to take granular action. Our only options are blunt instruments.

In those circumstances, there are actors in the broader Internet ecosystem who can take targeted action, even if we can’t. Typically, that would be a website owner or hosting provider that has the ability to remove individual pieces of content. Proportionality therefore sometimes means recognizing that we can’t and shouldn’t try to solve every problem, particularly when we are not the right party to take action. But we can still play an important role in helping complainants identify the right provider, so they can have their concerns addressed.

The EU recently formally embraced the concept of proportionality in abuse processes in the Digital Services Act. They pointed out that when intermediaries must be involved to address illegal content, requests “should, as a general rule, be directed to the specific provider that has the technical and operational ability to act against specific items of illegal content, to prevent and minimize any possible negative effects on the availability and accessibility of information that is not illegal content.” [DSA, Recital 27]

Transparency – Reporting on abuse

Human rights law emphasizes the importance of transparency – from both governments and companies – on decisions that have an effect on human rights. Transparency allows for public accountability and improves trust in the overall system.

This human rights principle is one that has always made sense to us, because transparency is a core value to Cloudflare as well. And if you believe, as we do, that the way different providers tackle questions of abuse will have long term ripple effects, we need to make sure people understand the trade-offs with decisions we make that could impact human rights. We have never taken the easy option of making a difficult decision quietly. We try to blog about the difficult decisions we have made, and then use those blogs to engage with external stakeholders to further our own learning.

In addition to our blogs, we have worked to build up more systematic reporting of our evaluation process and decision-making. Last year, we published a page on our website describing our approach to abuse. We continue to take steps to expand information in our biannual transparency report about our full range of responses to abuse, from removal of content in our storage products to reports on child sexual abuse material to the National Center for Missing and Exploited Children (NCMEC).

Transparency – Reporting on the circumstances when we terminate services

We’ve also sought to be transparent about the limited number of circumstances where we will terminate even DDoS protection services, consistent with our respect for human rights and our view that opening a site up to DDoS attack is almost never a proportional response to address content. Most of the circumstances in which we terminate all services are tied to legal obligations, reflecting the judgment of policymakers and impartial decision makers about when barring entities from access to the Internet is appropriate.

Even in those circumstances, we try to provide users notice, and where appropriate, an opportunity to address the harm themselves. The legal areas that can result in termination of all services are described in more detail below.

Child Sexual Abuse Material: As described in more detail here, Cloudflare has a policy to report any allegation of child sexual abuse material (CSAM) to the National Center for Missing and Exploited Children (NCMEC) for additional investigation and response. When we have reason to believe, in conjunction with those working in child safety, that a website is solely dedicated to CSAM or that a website owner is deliberately ignoring legal requirements to remove CSAM, we may terminate services. We recently began reporting on those terminations in our biannual transparency report.

Sanctions: The United States has a legal regime that prohibits companies from doing business with any entity or individual on a public list of sanctioned parties, called the Specially Designated Nationals (SDN) list. The US provides entities on the SDN list – which includes designated terrorist organizations, human rights violators, and others – notice of the determination and an opportunity to challenge the designation. Cloudflare will terminate services to entities or individuals that it can identify as having been added to the SDN list.

The US sanctions regime also restricts companies from doing business with certain sanctioned countries and regions – specifically Cuba, North Korea, Syria, Iran, and the Crimea, Luhansk and Donetsk regions of Ukraine. Cloudflare may terminate certain services if it identifies users as coming from those countries or regions.  Those country and regional sanctions, however, generally have a number of legal exceptions (known as general licenses) that allow Cloudflare to offer certain kinds of services even when individuals and entities come from the sanctioned regions.

Court orders: Cloudflare occasionally receives third-party orders in the United States directing Cloudflare and other service providers to terminate services to websites due to copyright or other prohibited content. Because we have no ability to remove content from the Internet that we do not host, we don’t believe that termination of Cloudflare’s security services is an effective means for addressing such content. Our experience has borne that out. Because other service providers are better positioned to address the issues, most of the domains that we have been ordered to terminate are no longer using Cloudflare’s services by the time Cloudflare must take action. Cloudflare nonetheless may terminate services to repeat copyright infringers and others in response to valid orders that are consistent with due process protections and comply with relevant laws.

SESTA/FOSTA: In 2018, the United States passed the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), for the purpose of fighting online sex trafficking. The law’s broad establishment of criminal penalties for the provision of online services that facilitate prostitution or sex trafficking, however, means that companies that provide any online services to sex workers are at risk of breaking the law. To be clear, we think the law is profoundly misguided and poorly drafted. Research has shown that the law has had detrimental effects on the financial stability, safety, access to community and health outcomes of online sex workers, while being largely ineffective for addressing human trafficking. But to avoid the risk of criminal liability, we may take steps to terminate services to domains that appear to fall under the ambit of the law. Since the law’s passage, we have terminated services to a few domains due to SESTA/FOSTA. We intend to incorporate any SESTA/FOSTA terminations in our biannual transparency report.

Technical abuse: Cloudflare sometimes receives reports of websites involved in phishing or malware attacks using our services. As a security company, our preference when we receive those reports is to do what we can to prevent the sites from causing harm. When we confirm the abuse, we will therefore place a warning interstitial page to protect users from accidentally falling victim to the attack or to disrupt the attack. Potential phishing victims also benefit from learning that they nearly fell victim to a phishing attack. In cases when we believe a user to be intentionally phishing or distributing malware and the security interests appear to support additional action, however, we may opt to terminate services to the intentionally malicious domain.

Voluntary terminations: In three well-publicized instances, Cloudflare has taken steps to voluntarily terminate services or block access to sites whose users were intentionally causing harm to others. In 2017, we terminated the neo-Nazi troll site The Daily Stormer. In 2019, we terminated the conspiracy theory forum 8chan. And earlier this year, we blocked access to Kiwi Farms. Each of those circumstances had their own unique set of facts. But part of our consideration for the actions in those cases was that the sites had inspired physical harm to people in the offline world. And notwithstanding the real world threats and harm, neither law enforcement nor other service providers who could take more targeted action had effectively addressed the harm.

We continue to believe that there are more effective, long-term solutions to address online activity that leads to real-world physical threats than seeking to take sites offline by DDoS and cyberattack. And we have been heartened to see jurisdictions like the EU try to grapple with a regulatory response to illegal online activity that preserves human rights online. Looking forward, we hope to see a day when states have developed rights-respecting ways to successfully protect human rights offline based on online activity, and remedy does not depend on vigilante justice through cyberattack.

Continuous learning

Addressing abuse online is a long-term, ever-shifting challenge for the entire Internet ecosystem. We continuously refine our abuse processes based on the reports we receive, the many conversations we have with stakeholders affected by online abuse, and our engagement with policymakers, other industry participants, and civil society. Make no mistake, the process can sometimes be a bumpy one, where perspectives on the right approach collide. But the one thing we can promise is that we will continue to try to engage, learn, and adapt. Because, together, we think we can build abuse frameworks that reflect respect for human rights and help build a better Internet.

How Cloudflare helps next-generation markets

Post Syndicated from David Tuber original https://blog.cloudflare.com/how-cloudflare-helps-next-generation-markets/

One of the many magical things about the Internet is that it doesn’t have a country. The Internet doesn’t go through customs, it doesn’t need a visa, and it doesn’t speak any one language. To reach the world’s greatest information innovation, a user – no matter what country they’re in – only needs a device with a connection. The Internet will take care of the rest. At Cloudflare, part of our role is to make sure every person on the planet with an Internet connection has a good experience, whether they’re in a next-generation market or a current-gen market. In this blog we’re going to talk about how we define next-generation markets, how we help people in these markets get faster access to the websites and applications they use on a daily basis, and how we make it easy for developers to deploy services geographically close to users in next-generation markets.

What are next-generation markets?

Next-generation markets are the future of the Internet. Not only will billions more people come online as affordable access increases, but trends in application development already point toward a mobile-first, sometimes mobile-only, way of providing content and services. The Internet may look different (more desktop-centric) in the so-called Global North or countries the IMF defines as Advanced Economies, but those differences will shrink as application developers build products for all markets, not just current-generation markets. We call these markets next-generation markets as opposed to using the IMF or World Bank definitions because we want to classify markets by how users interact with the Internet, not by how their governments interact with the global economy. Compared to North America and Europe, where users access the Internet through a combination of desktop computers and mobile devices, users in next-generation markets access the Internet via mobile devices 50% of the time or more, sometimes as high as 80%. Some examples of these markets are China, India, Indonesia, Thailand, and countries in Africa and the Middle East.

Most of this traffic also uses HTTP/S, the industry standard for secure, performant, and reliable communication on the Internet; across the Internet as a whole, HTTP/S accounts for about 88% of traffic. Countries and regions that have a higher percentage of mobile users also have a higher percentage of traffic over HTTP/S, as shown in the table below. For example, countries in Africa and APJC have the highest share of HTTP/S traffic of any region. By contrast, in North America, more traffic uses older protocols like SMTP, FTP, or RTMP.

Region                                   % of traffic that is HTTP/S
Africa (AFR)                             92%
Asia Pacific, Japan, and China (APJC)    92%
Western North America (WNAM)             90%
Eastern North America (ENAM)             89%
Oceania (OC)                             89%
Eastern Europe (EEUR)                    88%
Middle East (ME)                         85%
Western Europe (WEUR)                    83%
South America (SAM)                      64%

The prevalence of mobile Internet connections is also reflected in the types of applications developers are building in these regions: local versions of popular applications designed specifically with local users in mind. For example, ecommerce companies like Carousell and ticketing companies like BookMyShow rely on mobile, app-based users for most of their business in the regions where they operate. More broadly, apps like Instagram and TikTok famously prioritize their mobile apps over web or desktop experiences and encourage users to be mobile-only. These markets are next-generation because most of their users rely on mobile devices and applications like Carousell that are designed for a mobile, performant Internet.

In these markets there are two groups who have similar concerns but are different enough that we need to address them separately: users, and the application developers who build the apps for users. They both want one thing: to be fast. But being fast manifests itself in slightly different ways for users versus application developers. Let’s talk about each group and how Cloudflare helps solve their problems.

Next-generation users

Users in these markets care about observed experience: they want real-time interaction with their applications. This is no different from what users in other markets expect from the Internet, but achieving it is much harder over mobile networks, which tend to have higher latency, more packet loss, and lower bandwidth.

Another challenge in next-generation markets is, roughly speaking, how geographically dispersed Internet connectivity is. Imagine you are sending a message to someone on the other side of a park, but you have to play telephone: the only way you can send the message is by telling someone next to you, and they tell it to the person next to them, and so on and so forth until the message reaches the other side of the park. That may look a little something like this:

[Illustration: a message relayed person to person across the park]

If you’ve ever played Telephone, you know that this is optimistic: even when someone is right next to you, it’s unlikely that they’ll catch all of the message you’re trying to send. But let’s say the optimistic case holds: in the scenario above, you’re able to transmit the message person to person all the way across the park. Now take half of those people away, so that everyone passing the message along has to shout twice as far. That’s when things start to get a little more garbled:

[Illustration: the same relay with half as many people, where the message arrives garbled]

In this case, the receiver didn’t hear the message properly the first time and asked the sender to yell it again. This process, called retransmission, reduces the amount of data that can be sent at once over the Internet. Retransmission rates depend on the cell density of wireless networks, the signal quality of fiber optic cables, and, on the broader Internet, the number of hops between the end user and the website or receiver of the connection.

Retransmission rates are impacted by something called packet loss, when some packets don’t make it to the receiver due to things like poor signal transmission or errors on devices along the path between sender and receiver. When packet loss occurs, protocols on the Internet like the Transmission Control Protocol (TCP) will reduce the amount of data that can be transmitted over the connection. The amount of data that can be sent at one time is called the congestion window, and the protocol will shrink the congestion window to help preserve the connection until TCP is sure that the connection won’t drop packets again. This process of shrinking the congestion window is called backoff: the window shrinks sharply when packet loss is detected and then grows back linearly over time. This means that connections and networks with high retransmission rates can seriously impact how users experience websites and applications on the Internet.
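
To make that backoff behavior concrete, here is a toy Python model of the additive-increase/multiplicative-decrease dynamic described above; the window sizes and loss pattern are invented for illustration, not a faithful TCP implementation.

# Toy model of TCP-style congestion control (AIMD), illustrative only
def simulate_cwnd(rounds, loss_rounds, cwnd=10.0):
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd / 2)  # multiplicative backoff on packet loss
        else:
            cwnd += 1.0                # linear (additive) growth otherwise
        history.append(cwnd)
    return history

# A lossy link backs off repeatedly and takes a long time to recover:
print(simulate_cwnd(rounds=12, loss_rounds={3, 4, 5}))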

The Edge Partner Program gets us closer to users

Since most users in next-generation markets are mobile, getting closer to users is paramount for a fast experience. Mobile connections tend to be slower because radio interference adds instability to the Internet connection, which leads to poor performance. In next-generation markets, there can be added challenges from issues like power constraints: if a power grid can’t support large radio towers, smaller ones with a shorter range are required, which further adds instability, increases retransmission, and adds latency.

However, in addition to challenges in the local network, there’s another challenge with interconnecting these networks to the rest of the Internet. Networks in next-generation markets may not be able to reach as many peering points as larger networks and may need to optimize their peering by going into Internet Exchanges that have denser connectivity with more networks, even if they’re farther away. For example, places like Frankfurt, London, and Singapore are especially useful for interconnecting a large number of networks in a few Internet Exchanges for regions like the Middle East, Africa, and Asia, respectively.

The downside for end-users is that in order to connect to the Internet and the sites they care about, networks in these markets have to go a long way to get to the rest of the Internet. For content that is cacheable, meaning it doesn’t change often, sending requests for data (and the response) across oceans and continents is a poor use of Internet capacity. Worse, it leads to problems like congestion, retransmission, and packet loss, which in turn cause poor performance.

One area where we see latency directly impact Internet performance is in TLS, or Transport Layer Security. TLS ensures that an end user’s interaction with an application stays private. Before any application data can be sent, the client and server must complete the TCP three-way handshake (the end user initiates a connection, the server responds, and the end user acknowledges the response), followed by the TLS handshake, which adds at least one more round trip. The farther away an end user is from the website or CDN performing these handshakes, the longer they take, and the worse performance will be:

[Diagram: handshake round trips between client and server before any data flows]
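
Back-of-the-envelope, the handshake cost compounds with distance. The small Python sketch below shows that arithmetic; the single TLS round trip assumes a TLS 1.3-style handshake, and the RTT figures are assumptions chosen purely for illustration.

# Rough time-to-first-byte estimate: one round trip for the TCP handshake,
# at least one more for TLS (assuming TLS 1.3), and one for the request itself
def time_to_first_byte(rtt_ms, tls_round_trips=1):
    return rtt_ms * (1 + tls_round_trips + 1)

for label, rtt in [("nearby data center", 10), ("distant peering point", 130)]:
    print(f"{label}: ~{time_to_first_byte(rtt):.0f} ms before any content arrives")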

Getting close to users often improves not just end-user performance, but the basic stability of an Internet experience on the network. Cloudflare helps solve this through our Edge Partner Program (EPP), which allows ISPs to integrate their networks physically and locally with Cloudflare, bringing us as close as possible to their users. When we embed a Cloudflare node in an ISP, we shorten the physical distance between end-users and Cloudflare, and by extension, the amount of time end-users’ data requests spend on the backbone of the Internet. Over the past four years, 80% of our 107 new cities have been in next-generation markets to help improve our cached and dynamic performance.

Another benefit of having content and services delivered close to end users: we can use our network intelligence to route traffic out of the last-mile network to where it needs to go, improving the user experience out to the rest of the Internet as well. On average, Argo Smart Routing improves performance for dynamic and uncached content by over 30%, which is especially valuable when the content users need to fetch is far away from their devices.

Now that we’ve talked about why the Edge Partner Program is important and how it can theoretically help users, let’s talk about one set of those deployments in Saudi Arabia to show you how it actually helps users.

Edge Partner Program in Saudi Arabia

A great example of a place that can benefit greatly from the Edge Partner Program is Saudi Arabia, a country whose closest peering to Cloudflare was previously in Frankfurt. As we mentioned above, for many countries in the Middle East, Frankfurt is where these networks choose to peer with other networks despite Frankfurt being over 5,300 km away from Riyadh.

But by landing Cloudflare network hardware in the mobile network Mobily, we were able to improve median RTT by over 50% for their users. Before our deployment, end users on Mobily had a median RTT of 131ms via Frankfurt. Once we added three sites in Dammam, Riyadh, and Jeddah on this network, Mobily users saw a huge decrease in latency, to the point where the median RTT (131ms) before these deployments now became around the 85th percentile afterwards. Before, one out of every two requests took longer than 131ms, while afterward almost every request (85% of them) took less than that time. So users in Saudi Arabia get a faster path to the sites and services they care about through their ISP and Cloudflare. Everyone wins.
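
To make the percentile shift concrete, here is a quick Python check using only the standard library; the RTT samples are invented for the example, with just the 131ms median taken from the measurements above.

# Invented RTT samples: the old median (131ms) lands near the new P85
import statistics

def percentile(samples, p):
    s = sorted(samples)
    return s[min(len(s) - 1, round(p / 100 * (len(s) - 1)))]

before = [80, 100, 110, 131, 170, 240, 300]           # median RTT: 131ms
after  = [20, 25, 30, 40, 55, 70, 95, 110, 131, 140]  # after local deployments

print(statistics.median(before))  # 131
print(percentile(after, 85))      # 131: the old median is now roughly P85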

Staying local also helps reduce retransmission and the amount of data that has to be sent over these networks. Consider two data centers: one of our largest data centers in Los Angeles, California, and one of those new data centers in Jeddah, Saudi Arabia. Los Angeles takes traffic from all over the world: from places like China, Indonesia, Australia, as well as locally in the Los Angeles area. Take a look at the average retransmission rate for connections coming into Los Angeles from all over the world:

[Chart: average retransmission rate for connections reaching Los Angeles]

The average rate is quite high for Los Angeles, mostly due to users from places like China, Indonesia, Taiwan, South Korea, and Japan coming all the way to Los Angeles for their websites. But if you take a look at Jeddah, you’ll see a different story:

[Chart: average retransmission rate for connections reaching Jeddah]

Users in Jeddah have a much lower, more consistent retransmission rate because connections on Mobily terminate closer to their devices. By being embedded in Mobily’s network, we decrease the number of hops needed and shorten the hops that travel over less reliable paths. Requests are more likely to succeed the first time and don’t need multiple tries.

WARP in next-generation markets

Cloudflare WARP is a privacy-preserving tool that gives users in any market a private, performant path to the Internet. While users around the world can use WARP, users in next-generation markets are ahead of the curve when it comes to WARP adoption. Here are the total year-to-date WARP downloads from the Apple App Store:

[Chart: year-to-date WARP downloads from the Apple App Store]

We’ve recently added WARP support to more Edge Partner locations, which provides a faster, more private experience in those markets. Now even more WARP users can see better performance in more places.

WARP pairs well with the Cloudflare network to ensure a fast, private Internet experience. In a growing number of networks in next-generation markets, WARP users will connect to Cloudflare in the same location as their ISP before going out to the rest of the Internet. If the websites they are trying to connect to are protected by Cloudflare, then they get a fast path to the websites they care about through Cloudflare. If not, then the users can still get sent out through Cloudflare to the websites they need while preserving their privacy throughout the connection.

Next-generation developers

Let’s say you’re an app developer in Muscat, Oman, trying to make a new shopping app specific to your market. To compete with other existing apps, you not only need a differentiator, but you need an in-app performance experience that is on par with your competitors while also being able to deliver your service and make money. Global shopping apps offer a real-time browsing experience that your regional app also needs to meet, or beat. If outside competitors have a faster shopping app than you, it doesn’t really matter if your app is “the Amazon of Oman” if actual Amazon is faster in the country.

But in next-generation markets, performance is often a differentiator between local applications and incumbent applications, because incumbent apps tend to not perform as well there. Incumbent applications are often hosted with cloud providers that don’t offer services in-region. For example, users in the APJC region may see their traffic sent to Hong Kong, Singapore, or even Los Angeles, because that is the closest cloud data center to them. So when you’re building “the Amazon of Indonesia” and need your app to be faster than Amazon’s in Indonesia, keeping your application as local as possible to your users helps deliver your app’s appeal: a specialized, high-performance experience for Indonesian users.

It’s worth noting that many cloud providers do offer local options for developers: if you’re in Oman, there is a cloud data center local to you where you can host your service. But most startups and smaller businesses in next-generation markets opt to host their app in larger, farther away locations to optimize for cost.

For example, localizing in the Middle East can be very costly compared to farther away options. Developers in the Middle East may be able to save 30% or more on their monthly data transfer costs simply by moving to Frankfurt, a region that is farther away from their users but cheaper to serve out of. Application developers are constantly trying to balance cost with user experience, and may make some tradeoffs for user experience that allow them to optimize costs in the short term. So even though Cloudflare-protected developers are taking advantage of the local peering from the Edge Partner Program, developers in Oman may end up sending their users to Frankfurt anyway because that’s where they chose to host their services to save costs. In many cases, this is a tradeoff developers in these markets have to make: making your service slightly less performant so that it can run more cheaply.

Cloudflare Workers in country

Luckily for these developers, Cloudflare’s developer platform allows application developers to build a distributed application that runs right where their users are, so they don’t have to choose between performance and cost savings. Taking the Saudi Arabia case, users on Mobily now get their traffic terminated locally in Jeddah. This is okay from an end-to-end perspective because Cloudflare gets to find the fastest path through the Internet using technologies like Argo Smart Routing, which can save around 30% on Time to First Byte when requests have to go out of the country. But what if users never had to leave Jeddah at all?

By moving applications to Cloudflare, you can push more and more of your applications to these data centers in next-generation markets, ensuring that users get a better experience in-country. For example, let’s consider the same comparison data we used to evaluate ourselves versus Lambda@Edge during our Developer Week performance tests. The purpose of this comparison is to show how far your users have to travel if you’re hosting application compute on Cloudflare versus on AWS. When you compare us against Lambda@Edge, we have a significant advantage in P95 TCP connection time in next-generation markets. The chart and table below show that in Africa and Asia, Cloudflare Workers is about 3x as fast as Lambda@Edge from AWS:

[Chart: P95 TCP connect time, Cloudflare Workers vs. Lambda@Edge]

P95 Connect (ms)   Africa   Asia
Lambda JS          358      330
Cloudflare JS      104      111

95th percentile TCP connect time (ms)


This means that operations and functions built into Cloudflare execute closer to the user, ensuring better end-to-end performance. The Lambda@Edge numbers are bad enough on their own, but consider that not everything can be done in Lambda@Edge; some requests need to reach AWS instances that may sit even farther away than the AWS edge. Cloudflare’s supercloud looks especially attractive because it lets you build everything an application needs entirely local to end users. This helps ensure next-generation markets see the same performance as the rest of the world for the applications they care about.
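
If you want to reproduce this kind of measurement yourself, a minimal Python sketch looks like the following; the target hostname, port, and sample count are arbitrary choices for illustration, not the methodology behind the chart above.

# Measure TCP connect time and report the 95th percentile
import socket, statistics, time

def connect_ms(host, port=443):
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=5):
        pass                        # connect only; no request is sent
    return (time.monotonic() - start) * 1000

samples = [connect_ms("cloudflare.com") for _ in range(20)]
p95 = statistics.quantiles(samples, n=100)[94]   # 95th percentile cut point
print(f"P95 TCP connect: {p95:.1f} ms")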

Making everyone faster everywhere

Cloudflare helps users in next-generation markets get connected to the Internet faster, get connected to the Internet more privately, and helps their applications get closer to where they are. Through initiatives like our Edge Partner Program, we can help bring applications closer to users in next-generation markets, and through our powerful developer platform, we can ensure that applications built for these markets have world-class performance.

If you’re an application developer, and you haven’t yet tried out our powerful developer platform and all it can do, try it today!

If you’re a network operator, and you want to have Cloudflare in your network to help bring a next-level experience to your users, check out our Edge Partner Program and let’s get connected.

Users in next-generation markets are the future of the Internet: they are how we expect most people on the Internet to act in the future. Cloudflare is uniquely positioned to ensure that all of these users and developers can have the Internet experience they expect.

A new, configurable and scalable version of Geo Key Manager, now available in Closed Beta

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/configurable-and-scalable-geo-key-manager-closed-beta/

Today, traffic on the Internet stays encrypted through the use of public and private keys that encrypt the data as it’s being transmitted. Cloudflare helps secure millions of websites by managing the encryption keys that keep this data protected. To provide lightning-fast services, Cloudflare stores these keys on our fleet of data centers that spans more than 150 countries. However, some compliance regulations require that private keys are only stored in specific geographic locations.

In 2017, we introduced Geo Key Manager, a product that allows customers to store and manage the encryption keys for their domains in different geographic locations so that compliance regulations are met and that data remains secure. We launched the product a few months before General Data Protection Regulation (GDPR) went into effect and built it to support three regions: the US, the European Union (EU), and a set of our top tier data centers that employ the highest security measures. Since then, GDPR-like laws have quickly expanded and now, more than 15 countries have comparable data protection laws or regulations that include restrictions on data transfer across and/or data localization within a certain boundary.

At Cloudflare, we like to be prepared for the future. We want to give our customers tools that allow them to maintain compliance in this ever-changing environment. That’s why we’re excited to announce a new version of Geo Key Manager, one that allows customers to define boundaries by country (“only store my private keys in India”), by region (“only store my private keys in the European Union”), or by standard (“only store my private keys in FIPS compliant data centers”). It is now available in Closed Beta; sign up here!

Learnings from Geo Key Manager v1

Geo Key Manager has been around for a few years now, and we’ve used this time to gather feedback from our customers. As the demand for a more flexible system grew, we decided to go back to the drawing board and create a new version of Geo Key Manager that would better meet our customers’ needs.

We initially launched Geo Key Manager with support for US, EU, and Highest Security Data centers. Those regions were sufficient at the time, but customers wrestling with data localization obligations in other jurisdictions need more flexibility when it comes to selecting countries and regions. Some customers want to be able to set restrictions to maintain their private keys in one country, some want the keys stored everywhere except in certain countries, and some may want to mix and match rules and say “store them in X and Y, but not in Z”. What we learned from our customers is that they need flexibility, something that will allow them to keep up with the ever-changing rules and policies, and that’s what we set out to build.

The next issue we faced was scalability. When we built the initial regions, we included a hard-coded list of data centers that met our criteria for the US, EU, and “high security” data center regions. However, this list was static because the underlying cryptography did not support dynamic changes to our list of data centers. In order to distribute private keys to new data centers that met our criteria, we would have had to completely overhaul the system. In addition to that, our network significantly expands every year, adding more than 100 new data centers since the initial launch. That means any new locations that could be used to store private keys are currently not in use, degrading performance and reliability for customers using this feature.

With our current scale, automation and expansion are must-haves. Our new system needs to scale dynamically every time we onboard or remove a data center from our network, without any human intervention or large overhaul.

Finally, one of our biggest learnings was that customers make mistakes, such as defining a region that’s so small that availability becomes a concern. Our job is to prevent our customers from making changes that we know will negatively impact them.

Define your own geo-restrictions with the new version of Geo Key Manager

Cloudflare has significantly grown in the last few years and so has our international customer base. Customers need to keep their traffic regionalized. This region can be as broad as a continent — Asia, for example. Or, it can be a specific country, like Japan.

From our conversations with our customers, we’ve heard that they want to be able to define these regions themselves. This is why today we’re excited to announce that customers will be able to use Geo Key Manager to create what we call “policies”.

A policy can be a single country, defined by its two-letter (ISO 3166) country code. It can be a region, such as “EU” for the European Union, or Oceania. It can be a mix of the two: “country: US or region: EU”.

Our new policy-based Geo Key Manager allows you to create allowlists or blocklists of countries and supported regions, giving you control over the boundary within which your private key will be stored. If you’d like to store your private keys globally and omit a few countries, you can do that.

If you would like to store your private keys in the EU and US, you would make the following API call:

curl -X POST "https://api.cloudflare.com/client/v4/zones/zone_id/custom_certificates" \
     -H "X-Auth-Email: [email protected]" \
     -H "X-Auth-Key: auth-key" \
     -H "Content-Type: application/json" \
     --data '{"certificate":"certificate","private_key":"private_key","policy":"(country: US) or (region: EU)", "type": "sni_custom"}'

If you would like to store your private keys in the EU, but not in France, here is how you can define that:

curl -X POST "https://api.cloudflare.com/client/v4/zones/zone_id/custom_certificates" \
     -H "X-Auth-Email: [email protected]" \
     -H "X-Auth-Key: auth-key" \
     -H "Content-Type: application/json" \
     --data '{"certificate":"certificate","private_key":"private_key","policy": "region: EU and (not country: FR)", "type": "sni_custom"}'

Geo Key Manager can now support more than 30 countries and regions. But that’s not all! The superpower of our Geo Key Manager technology is that it doesn’t actually have to be “geo” based; instead, it’s attribute based. In the future, we’ll have a policy that will allow our customers to define where their private keys are stored based on a compliance standard like FedRAMP or ISO 27001.
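
As a mental model for how attribute-based placement might work, here is a small illustrative Python sketch; the data center names, attributes, and policy predicates are hypothetical, not Cloudflare's implementation.

# Hypothetical sketch: treat a policy as a predicate over data center attributes
from dataclasses import dataclass, field

@dataclass
class DataCenter:
    name: str
    country: str
    region: str
    certifications: set = field(default_factory=set)

sites = [
    DataCenter("FRA", "DE", "EU", {"ISO 27001"}),
    DataCenter("CDG", "FR", "EU"),
    DataCenter("IAD", "US", "NAM", {"FedRAMP"}),
]

eu_not_fr = lambda dc: dc.region == "EU" and dc.country != "FR"  # "region: EU and (not country: FR)"
fedramp   = lambda dc: "FedRAMP" in dc.certifications            # a standards-based policy

print([dc.name for dc in sites if eu_not_fr(dc)])  # ['FRA']
print([dc.name for dc in sites if fedramp(dc)])    # ['IAD']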

Reliability, resiliency, and redundancy

By giving our customers the remote control for Geo Key Manager, we want to make sure that customers understand the impact of their changes on both redundancy and latency.

On the redundancy side, one of our biggest concerns is allowing customers to choose a region small enough that if a data center is removed for maintenance, for example, then availability is drastically impacted. To protect our customers, we’ve added redundancy restrictions. These prevent our customers from setting regions with too few data centers, ensuring that all the data centers within a policy can offer high availability and redundancy.

Not just that, but in the last few years, we’ve significantly improved the underlying networking that powers Geo Key Manager. For more information on how we did that, keep an eye out for a technical deep dive inside Geo Key Manager.

Performance matters

With the original regions (US, EU, and Highest Security Data Centers), we learned that customers may overlook the latency impact of restricting their keys to a certain region. Imagine your keys are stored in the US. For your Asia-based customers, there’s going to be a latency impact for requests that travel around the world. Now, with customers able to define more granular regions, we want to make sure that before customers make that change, they see the impact of it.

If you’re an e-commerce platform, then performance is always top of mind. One thing that we’re working on right now is performance metrics for Geo Key Manager policies, both from a regional point of view (“what’s the latency impact for Asia-based customers?”) and from a global point of view (“for anyone in the world, what is the average impact of this policy?”).

If you see that the latency impact is unacceptable, you may want to create a separate domain for your service that’s specific to the region it serves.

Closed Beta, now available!

Interested in trying out the latest version of Geo Key Manager? Fill out this form.

Coming soon!

Geo Key Manager is only available via the API at the moment. But we are working on an easy-to-use UI for it, so that customers can easily manage their policies and regions. In addition, we’ll surface performance measurements and warnings when we see any degradation in performance or redundancy, to ensure that customers are mindful when setting policies.

We’re also excited to extend our Geo Key Manager product beyond custom uploaded certificates. In the future, certificates issued through Advanced Certificate Manager or SSL for SaaS will be able to use policy-based restrictions on key storage.

Finally, we’re looking to add more default regions to make the selection process simple for our customers. If you have any regions that you’d like us to support, or just general feedback or feature requests related to Geo Key Manager, make a note of it on the form. We love hearing from our customers!

Partnering with civil society to track Internet shutdowns with Radar Alerts and API

Post Syndicated from Jocelyn Woolbright original https://blog.cloudflare.com/partnering-with-civil-society-to-track-shutdowns/

This post is also available in 简体中文, 繁體中文, 日本語, 한국어, Deutsch, Français and Español.

Internet shutdowns have long been a tool in government toolboxes when it comes to silencing opposition and cutting off access from the outside world. The KeepItOn campaign by Access Now, a group that defends the digital rights of global Internet users, documented at least 182 Internet shutdowns in 34 countries in 2021. Many of these shutdowns occurred during public protests, elections, and wars as an extreme form of censorship in places like Afghanistan, Democratic Republic of the Congo, Ukraine, India, and Iran.

There are a range of ways governments block or slow communications, including throttling, IP blocking, DNS interference, mobile data shutoffs, and deep packet inspection, all with similar goals: exerting control over information.

Although Internet shutdowns are largely public, it is difficult to document and track the ways in which governments implement them. The shutdowns not only impact people’s ability to participate in civil and political life and the economy but also have grave consequences for trust in democratic institutions.

We have reported on these shutdowns in the past, and for Cloudflare Impact Week, we want to tell you more about how we work with civil society organizations to provide tools to track and document the scope of these disruptions. We want to support their critical work and provide the tools they need so they can demand accountability and condemn the use of shutdowns to silence dissent.

Radar Internet shutdown alerts for civil society

We launched Radar in 2020 to shine light on the Internet’s patterns, insights, threats, and trends based on aggregated data from our network. Once we launched Radar, we found that many civil society organizations and those who work in democracy-building use Radar to track trends in countries to better understand the rise and fall of Internet usage.

Internally, we had an alert system for potential Internet disruptions that we use as an early warning regarding shifts in network patterns and incidents. When we engaged with the organizations that use Radar to track Internet trends, we learned more about how our internal tool for identifying traffic disruptions could be useful to organizations that work with human rights defenders on the ground who are impacted by these shutdowns.

To determine the best way to provide a tool to alert organizations when Cloudflare has seen these disruptions, we spoke with organizations such as Access Now, Internews, The Carter Center, National Democratic Institute, Internet Society, and the International Foundation for Electoral Systems. After our conversations, we launched Radar Internet shutdown alerts in 2021 to provide alerts on when Cloudflare has detected significant drops in traffic with the hope that the information is used to document, track, and hold institutions accountable for these human rights violations.

Since 2021, we have been providing these alerts to civil society partners to track these shutdowns. As we have collected feedback to improve the alerts, we have seen many partners looking for more ways to integrate Radar and the alerts into their existing tracking mechanisms. With this, we announced Radar 2.0 with API access for free so academics, data sleuths, civil society, human rights organizations, and other web enthusiasts can analyze, visualize, and investigate Internet usage across the globe, based on data from our global network. In addition, we launched Cloudflare Radar Outage Center to archive Internet outages and make it easier for civil society organizations, journalists/news media, and impacted parties to track past shutdowns.
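
For teams scripting against it, pulling outage annotations might look something like the sketch below; the endpoint path, parameters, and response shape are assumptions based on the public Radar documentation, so check the API reference before relying on them.

# Hypothetical sketch of querying the Radar outages feed; verify the exact
# endpoint and response shape against the Cloudflare API documentation
import requests

resp = requests.get(
    "https://api.cloudflare.com/client/v4/radar/annotations/outages",
    headers={"Authorization": "Bearer <API_TOKEN>"},  # placeholder token
    params={"limit": 10},
)
resp.raise_for_status()
for outage in resp.json().get("result", {}).get("annotations", []):
    print(outage)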

Highlighting the work of our civil society partners to track Internet shutdowns

We believe our job at Cloudflare is to build tools that improve privacy and security for a range of players on the Internet. With this, we want to highlight the work of our civil society partners. These organizations are pushing back against targeted shutdowns that inflict lasting damage to democracies around the world. Here are their stories.

Access Now

Access Now’s #KeepItOn coalition was launched in 2016 to help unite and organize the efforts of activists and organizations across the world to end Internet shutdowns. It now represents more than 280 organizations from 105 countries across the globe. The goal of the STOP Project (Shutdown Tracker Optimization Project) is ultimately to document and report shutdowns accurately, which requires diligent verification. Access Now regularly uses multiple sources to identify and understand each shutdown; the choice and combination of sources depends on where and how the shutdown occurred.

The tracker uses both quantitative and qualitative data to record the number of Internet shutdowns in the world in a given year and to characterize the nature of the shutdowns, including their magnitude, scope, and causes.

Zach Rosson, #KeepItOn Data Analyst at Access Now, explains, “Sometimes, we confirm an Internet shutdown through means such as technical measurement, while at other times we rely upon contextual information, such as news reports or personal accounts. We also work hard to document how a particular shutdown was ordered and how it impacted society, including why and how it happened.”

On how Access Now’s #KeepItOn coalition uses Cloudflare Radar, Rosson says, “We use Radar Internet shutdown alerts in both email and tweet form, as a trusted source to help verify a shutdown occurrence. These alerts and their underlying measurements are used as primary sources in our dataset when compiling shutdowns for our annual report, so they are used in an archival sense as well. Cloudflare Radar is sometimes the first place that we hear about a shutdown, which is quite useful in a rapid response context, since we can quickly mobilize to verify the shutdown and have strong evidence when advocating against it.”

The recorded instances of shutdowns include events reported through local or international news sources that are included in the dataset, from local actors through Access Now’s Digital Security Helpline or the #KeepItOn Coalition email list, or directly from telecommunication and Internet companies.

Rosson notes, “When it comes to Radar 2.0 and the API, we plan to use these resources to speed up our response, verification, and publication of shutdown data as compiled from different sources. Thus, the Cloudflare Radar Outage Center (CROC) and related API endpoint will be very useful for us to access timely information on shutdowns, either through visual inspection of the CROC in the short term or through using the API to pull data into a centralized database in the long term.”

Internet Society: ISOC

On the Internet Society Pulse platform, Susannah Gray, Director of Communications at the Internet Society, explains that they strive to curate meaningful information around a government-mandated Internet shutdown by using data from multiple trusted sources and making it available to everyone, everywhere, in an easy-to-understand manner. ISOC does this by monitoring Internet traffic using various tools, including Radar. When they see something that might indicate that an Internet shutdown is in progress, they check whether the shutdown meets their criteria. For a shutdown to appear on the Pulse Shutdowns Tracker, it needs to meet all of the following requirements. It must:

  • Be artificially induced, as evident from reputable sources, including government statements and orders.
  • Remove Internet access.
  • Affect access to a group of people.

Once ISOC is certain that a shutdown is the result of government action, and isn’t the result of technical errors, routing misconfigurations, or infrastructure failures, they prepare an incident page, collate related measurements from their trusted data partners, and then publish the information on the Pulse shutdowns tracker.

ISOC uses many resources to track shutdowns. Gray explains, “Radar Internet shutdown alerts are incredibly useful for bringing incidents to our attention as they are happening. The easy access to the data provided helps us assess the nature of an outage. If an outage is established as a government-mandated shutdown, we often use screenshots of Radar charts on the Pulse shutdowns tracker incident page to help illustrate how traffic stopped flowing in and out of a country during the shutdown. We provide a link back to the Radar platform so that people interested in getting more in-depth data can find out more.”

ISOC’s aim has never been to be the first to report a government-mandated shutdown: instead, their mission is to report accurate and meaningful information about the shutdown and explore its impact on the economy and society.

Gray adds, “For Radar 2.0 and the API, we plan to use it as part of the data aggregation tool we are developing. This internal tool will combine several outage alert and monitoring tools and sources into one single system so that we are able to track incidents more efficiently.”

Open Observatory of Network Interference: OONI

OONI is a nonprofit that measures Internet censorship, including the blocking of websites, instant messaging apps, and circumvention tools. Cloudflare Radar is one of the main public data sources that they use when examining reported Internet connectivity shutdowns. For example, OONI relied on Radar data when reporting on shutdowns in Iran amid ongoing protests. In 2022, the team launched the Measurement Aggregation Toolkit (MAT), which enables the public to track censorship worldwide and create their own charts based on real-time OONI data. OONI also forms partnerships with multiple digital rights organizations that use OONI tools and data to monitor and respond to censorship events in their regions.

Maria Xynou, OONI Research and Partnerships Director, explains, “Cloudflare Radar is one of the main public data sources that OONI has referred to when examining reported Internet connectivity shutdowns. Specifically, OONI refers to Cloudflare Radar to check whether the platform provides signals of a reported Internet connectivity shutdown, and to compare Cloudflare Radar signals with those visible in other relevant public data sources (such as IODA and Google traffic data).”

Tracking the shutdowns of tomorrow

As we work with more organizations in the human rights space and learn how our global network can be used for good, we are eager to improve and create new tools to protect human rights in the digital age.

If you would like to be added to Radar Internet Shutdown alerts, please contact [email protected] and follow the Cloudflare Radar alert Twitter page and Cloudflare Radar Outage Center (CROC). For access to the Radar API, please visit Cloudflare Radar.

Cloudflare is joining the AS112 project to help the Internet deal with misdirected DNS queries

Post Syndicated from Hunts Chen original https://blog.cloudflare.com/the-as112-project/

Today, we’re excited to announce that Cloudflare is participating in the AS112 project, becoming an operator of this community-operated, loosely-coordinated anycast deployment of DNS servers that primarily answer reverse DNS lookup queries that are misdirected and create significant, unwanted load on the Internet.

With the addition of the Cloudflare global network, we can make huge improvements to the stability, reliability, and performance of this distributed public service.

What is the AS112 project?

The AS112 project is a community effort to run an important network service intended to handle reverse DNS lookup queries for private-only use addresses that should never appear in the public DNS system. In the seven days leading up to publication of this blog post, for example, Cloudflare’s 1.1.1.1 resolver received more than 98 billion of these queries — all of which have no useful answer in the Domain Name System.

Some history is useful for context. Internet Protocol (IP) addresses are essential to network communication. Many networks make use of IPv4 addresses that are reserved for private use, and devices in the network are able to connect to the Internet with the use of network address translation (NAT), a process that maps one or more local private addresses to one or more global IP addresses and vice versa before transferring the information.

Your home Internet router most likely does this for you. You will likely find that, when at home, your computer has an IP address like 192.168.1.42. That’s an example of a private use address that is fine to use at home, but can’t be used on the public Internet. Your home router translates it, through NAT, to an address your ISP assigned to your home and that can be used on the Internet.

Here are the reserved “private use” addresses designated in RFC 1918.

Address block    Address range                  Number of addresses
10.0.0.0/8       10.0.0.0 – 10.255.255.255      16,777,216
172.16.0.0/12    172.16.0.0 – 172.31.255.255    1,048,576
192.168.0.0/16   192.168.0.0 – 192.168.255.255  65,536

(Reserved private IPv4 network ranges)
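
Incidentally, these ranges are well known to standard tooling. For example, Python’s ipaddress module can check whether an address falls into a reserved, non-global range (a minimal sketch, assuming Python 3):

# RFC 1918 addresses report as private; a public address does not
import ipaddress

for addr in ["192.168.1.42", "10.0.0.7", "172.20.0.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)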

Although the reserved addresses themselves are blocked from ever appearing on the public Internet, devices and programs in private environments may occasionally originate DNS queries corresponding to those addresses. These are called “reverse lookups” because they ask the DNS if there is a name associated with an address.

Reverse DNS lookup

A reverse DNS lookup is an opposite process of the more commonly used DNS lookup (which is used every day to translate a name like www.cloudflare.com to its corresponding IP address). It is a query to look up the domain name associated with a given IP address, in particular those addresses associated with routers and switches. For example, network administrators and researchers use reverse lookups to help understand paths being taken by data packets in the network, and it’s much easier to understand meaningful names than a meaningless number.

A reverse lookup is accomplished by querying DNS servers for a pointer record (PTR). PTR records store IP addresses with their segments reversed and with “.in-addr.arpa” appended to the end. For example, the IP address 192.0.2.1 will have its PTR record stored as 1.2.0.192.in-addr.arpa. In IPv6, PTR records are stored within the “.ip6.arpa” domain instead of “.in-addr.arpa”. Below are some query examples using the dig command line tool.

# Look up the domain name associated with IPv4 address 172.64.35.46
# The "+short" option makes dig output only the short form of answers
$ dig @1.1.1.1 PTR 46.35.64.172.in-addr.arpa +short
hunts.ns.cloudflare.com.

# Or use the shortcut "-x" for reverse lookups
$ dig @1.1.1.1 -x 172.64.35.46 +short
hunts.ns.cloudflare.com.

# Look up the domain name associated with IPv6 address 2606:4700:58::a29f:2c2e
$ dig @1.1.1.1 PTR e.2.c.2.f.9.2.a.0.0.0.0.0.0.0.0.0.0.0.0.8.5.0.0.0.0.7.4.6.0.6.2.ip6.arpa. +short
hunts.ns.cloudflare.com.

# Or use the shortcut "-x" for reverse lookups
$ dig @1.1.1.1 -x 2606:4700:58::a29f:2c2e +short
hunts.ns.cloudflare.com.
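
The same reversal can be computed programmatically; for example, Python’s ipaddress module builds the PTR query name for both IPv4 and IPv6 addresses:

# reverse_pointer yields the PTR owner name for an address
import ipaddress

print(ipaddress.ip_address("172.64.35.46").reverse_pointer)
# 46.35.64.172.in-addr.arpa
print(ipaddress.ip_address("2606:4700:58::a29f:2c2e").reverse_pointer)
# e.2.c.2.f.9.2.a.0.0.0.0.0.0.0.0.0.0.0.0.8.5.0.0.0.0.7.4.6.0.6.2.ip6.arpa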

The problem that private use addresses cause for DNS

The private use addresses concerned have only local significance and cannot be resolved by the public DNS. In other words, there is no way for the public DNS to provide a useful answer to a question that has no global meaning. It is therefore a good practice for network administrators to ensure that queries for private use addresses are answered locally. However, it is not uncommon for such queries to follow the normal delegation path in the public DNS instead of being answered within the network. That creates unnecessary load.

Because these addresses are private use by definition, no one owns them in the public sphere, so there are no authoritative DNS servers to answer queries about them. In the early days, the root servers responded to all of these queries, since they serve the IN-ADDR.ARPA zone.

Over time, due to the wide deployment of private use addresses and the continuing growth of the Internet, traffic on the IN-ADDR.ARPA DNS infrastructure grew, and the load from these junk queries started to cause concern. The idea of offloading IN-ADDR.ARPA queries related to private use addresses was proposed, followed by a proposal at a private meeting of root server operators to use anycast for distributing the authoritative DNS service. Eventually the AS112 service was launched to provide an alternative target for the junk.

The AS112 project is born

To deal with this problem, the Internet community set up special DNS servers called “blackhole servers” as the authoritative name servers that respond to the reverse lookup of the private use address blocks 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and the link-local address block 169.254.0.0/16 (which also has only local significance). Since the relevant zones are directly delegated to the blackhole servers, this approach has come to be known as Direct Delegation.

The first two blackhole servers set up by the project are: blackhole-1.iana.org and blackhole-2.iana.org.

Any server, including a DNS name server, needs an IP address to be reachable. The IP address must also be associated with an Autonomous System Number (ASN) so that networks can recognize other networks and route data packets to the IP address destination. To solve this problem, a new authoritative DNS service would be created, but to make it work, the community would have to designate IP addresses for the servers and, to facilitate their availability, an AS number that network operators could use to reach (or provide) the new service.

The selected AS number, provided by the American Registry for Internet Numbers and the namesake of the project, was 112. The project was started by a small subset of root server operators and later grew to a group of volunteer name server operators that includes many other organizations. They run anycasted instances of the blackhole servers that, together, form a distributed sink for the reverse DNS lookups for private network and link-local addresses sent to the public Internet.

A reverse DNS lookup for a private use address would see responses like the example below, where the name server blackhole-1.iana.org is authoritative for it and says the name does not exist, represented in DNS responses by NXDOMAIN.

$ dig @blackhole-1.iana.org -x 192.168.1.1 +nord

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 23870
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;1.1.168.192.in-addr.arpa.	IN	PTR

;; AUTHORITY SECTION:
168.192.in-addr.arpa.	10800	IN	SOA	168.192.in-addr.arpa. nobody.localhost. 42 86400 43200 604800 10800

At the beginning of the project, node operators set up the service in the direct delegation fashion (RFC 7534). However, adding delegations to this service requires all AS112 servers to be updated, which is difficult to ensure in a system that is only loosely-coordinated. An alternative approach using DNAME redirection was subsequently introduced by RFC 7535 to allow new zones to be added to the system without reconfiguring the blackhole servers.

Direct delegation

DNS zones are directly delegated to the blackhole servers in this approach.

RFC 7534 defines the static set of reverse lookup zones for which AS112 name servers should answer authoritatively. They are as follows:

  • 10.in-addr.arpa
  • 16.172.in-addr.arpa
  • 17.172.in-addr.arpa
  • 18.172.in-addr.arpa
  • 19.172.in-addr.arpa
  • 20.172.in-addr.arpa
  • 21.172.in-addr.arpa
  • 22.172.in-addr.arpa
  • 23.172.in-addr.arpa
  • 24.172.in-addr.arpa
  • 25.172.in-addr.arpa
  • 26.172.in-addr.arpa
  • 27.172.in-addr.arpa
  • 28.172.in-addr.arpa
  • 29.172.in-addr.arpa
  • 30.172.in-addr.arpa
  • 31.172.in-addr.arpa
  • 168.192.in-addr.arpa
  • 254.169.in-addr.arpa (corresponding to the IPv4 link-local address block)

Zone files for these zones are quite simple: essentially, they are empty apart from the required SOA and NS records. A template of the zone file is defined as:

  ; db.dd-empty
   ;
   ; Empty zone for direct delegation AS112 service.
   ;
   $TTL    1W
   @  IN  SOA  prisoner.iana.org. hostmaster.root-servers.org. (
                                  1         ; serial number
                                  1W      ; refresh
                                  1M      ; retry
                                  1W      ; expire
                                  1W )    ; negative caching TTL
   ;
          NS     blackhole-1.iana.org.
          NS     blackhole-2.iana.org.

IP addresses of the direct delegation name servers are covered by the single IPv4 prefix 192.175.48.0/24 and the IPv6 prefix 2620:4f:8000::/48.

Name server            IPv4 address    IPv6 address
blackhole-1.iana.org   192.175.48.6    2620:4f:8000::6
blackhole-2.iana.org   192.175.48.42   2620:4f:8000::42

DNAME redirection

Firstly, what is a DNAME? Introduced by RFC 6672, a DNAME record, or Delegation Name record, creates an alias for an entire subtree of the domain name tree. In contrast, the CNAME record creates an alias for a single name and not its subdomains. For a received DNS query, the DNAME record instructs the name server to substitute the owner name (the left-hand side) in the query name with the alias name (the right-hand side). The substituted query name, as with a CNAME, may live within the zone or outside it.

As with a CNAME record, the DNS lookup continues by retrying the lookup with the substituted name. For example, if there are two DNS zones as follows:

# zone: example.com
www.example.com.	A		203.0.113.1
foo.example.com.	DNAME	example.net.

# zone: example.net
example.net.		A		203.0.113.2
bar.example.net.	A		203.0.113.3

The query resolution scenarios would look like this:

Query (type + name)     Substitution                               Final result
A www.example.com       (no DNAME, does not apply)                 203.0.113.1
DNAME foo.example.com   (does not apply to the owner name itself)  example.net
A foo.example.com       (does not apply to the owner name itself)  <NXDOMAIN>
A bar.foo.example.com   bar.example.net                            203.0.113.3
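
The substitution rule itself is mechanical. Here is a minimal Python sketch of the suffix rewrite (illustrative only; real resolvers operate on DNS wire-format labels, not strings):

# DNAME-style substitution: rewrite the owner-name suffix to the target,
# but never apply it to the owner name itself
def apply_dname(qname, owner, target):
    if qname != owner and qname.endswith("." + owner):
        return qname[: -len(owner) - 1] + "." + target
    return None  # DNAME does not apply

print(apply_dname("bar.foo.example.com", "foo.example.com", "example.net"))  # bar.example.net
print(apply_dname("foo.example.com", "foo.example.com", "example.net"))      # None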

RFC 7535 specifies adding another special zone, empty.as112.arpa, to support DNAME redirection for AS112 nodes. When there are new zones to be added, there is no need for AS112 node operators to update their configuration: instead, the zones’ parents will set up DNAME records for the new domains with the target domain empty.as112.arpa. The redirection (which can be cached and reused) causes clients to send future queries to the blackhole server that is authoritative for the target zone.

Note that blackhole servers do not have to support DNAME records themselves, but they do need to be configured with the new zone to which root servers will redirect queries. Since there may be existing node operators that do not update their name server configuration, and in order not to interrupt the service, the zone was delegated to a new blackhole server instead: blackhole.as112.arpa.

This name server uses a new pair of IPv4 and IPv6 addresses, 192.31.196.1 and 2001:4:112::1, so queries involving DNAME redirection will only land on those nodes operated by entities that also set up the new name server. Since it is not necessary for all AS112 participants to reconfigure their servers to serve empty.as112.arpa from this new server for this system to work, it is compatible with the loose coordination of the system as a whole.

The zone file for empty.as112.arpa is defined as:

   ; db.dr-empty
   ;
   ; Empty zone for DNAME redirection AS112 service.
   ;
   $TTL    1W
   @  IN  SOA  blackhole.as112.arpa. noc.dns.icann.org. (
                                  1         ; serial number
                                  1W      ; refresh
                                  1M      ; retry
                                  1W      ; expire
                                  1W )    ; negative caching TTL
   ;
          NS     blackhole.as112.arpa.

The addresses of the new DNAME redirection name server are covered by the single IPv4 prefix 192.31.196.0/24 and the IPv6 prefix 2001:4:112::/48.

Name server            IPv4 address     IPv6 address
blackhole.as112.arpa   192.31.196.1     2001:4:112::1

Node identification

RFC 7534 recommends that every AS112 node also host two metadata zones: hostname.as112.net and hostname.as112.arpa.

These zones only host TXT records and serve as identifiers for querying metadata information about an AS112 node. At Cloudflare nodes, the zone files look like this:

$ORIGIN hostname.as112.net.
;
$TTL    604800
;
@       IN  SOA     ns3.cloudflare.com. dns.cloudflare.com. (
                       1                ; serial number
                       604800           ; refresh
                       60               ; retry
                       604800           ; expire
                       604800 )         ; negative caching TTL
;
            NS      blackhole-1.iana.org.
            NS      blackhole-2.iana.org.
;
            TXT     "Cloudflare DNS, <DATA_CENTER_AIRPORT_CODE>"
            TXT     "See http://www.as112.net/ for more information."
;

$ORIGIN hostname.as112.arpa.
;
$TTL    604800
;
@       IN  SOA     ns3.cloudflare.com. dns.cloudflare.com. (
                       1                ; serial number
                       604800           ; refresh
                       60               ; retry
                       604800           ; expire
                       604800 )         ; negative caching TTL
;
            NS      blackhole.as112.arpa.
;
            TXT     "Cloudflare DNS, <DATA_CENTER_AIRPORT_CODE>"
            TXT     "See http://www.as112.net/ for more information."
;

Helping AS112 helps the Internet

As the AS112 project helps reduce the load on public DNS infrastructure, it plays a vital role in maintaining the stability and efficiency of the Internet. Being a part of this project aligns with Cloudflare’s mission to help build a better Internet.

Cloudflare operates one of the fastest global anycast networks on the planet and one of the largest, most performant, and most reliable DNS services. We run authoritative DNS for millions of Internet properties globally. We also operate 1.1.1.1, the privacy- and performance-focused public DNS resolver. Given our network presence and scale of operations, we believe we can make a meaningful contribution to the AS112 project.

How we built it

We’ve publicly talked about rrDNS, Cloudflare’s in-house authoritative DNS server software, several times in the past, but haven’t said much about the software we built to power the Cloudflare public resolver, 1.1.1.1. This is an opportunity to shed some light on the technology used to build 1.1.1.1, because this AS112 service is built on top of the same platform.

A platform for DNS workloads

We’ve created a platform to run DNS workloads. Today, it powers 1.1.1.1, 1.1.1.1 for Families, Oblivious DNS over HTTPS (ODoH), Cloudflare WARP and Cloudflare Gateway.

The core part of the platform is a non-traditional DNS server, which has a built-in DNS recursive resolver and a forwarder to forward queries to other servers. It consists of four key modules:

  1. A highly efficient listener module that accepts connections for incoming requests.
  2. A query router module that decides how a query should be resolved.
  3. A conductor module that figures out the best way of exchanging DNS messages with upstream servers.
  4. A sandbox environment to host guest applications.

The DNS server itself doesn’t include any business logic; instead, the guest applications running in the sandbox environment implement concrete business logic such as request filtering, query processing, logging, attack mitigation, and cache purging.

The server is written in Rust, and the sandbox environment is built on top of a WebAssembly runtime. The combination of Rust and WebAssembly allows us to implement highly efficient connection handling, request filtering, and query dispatching modules, while retaining the flexibility to implement custom business logic in a safe and efficient manner.

The host exposes a set of APIs, called hostcalls, for the guest applications to accomplish a variety of tasks. You can think of them like syscalls on Linux. Here are a few examples of functions provided by hostcalls:

  • Obtain the current UNIX timestamp
  • Lookup geolocation data of IP addresses
  • Spawn async tasks
  • Create local sockets
  • Forward DNS queries to designated servers
  • Register callback functions for the sandbox hooks
  • Read current request information, and write responses
  • Emit application logs, metric data points and tracing spans/events

The DNS request lifecycle is broken down into phases. A request phase is a point in processing at which sandboxed apps can be invoked to change the course of request resolution, and each guest application can register callbacks for each phase, as the sketch below illustrates.
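
As a rough illustration, here is how a guest application might wire a callback into a request phase through hostcalls. The hostcalls module, the Phase and Action types, and the function names below are assumptions made for this sketch, not the actual Cloudflare API:

// A hypothetical guest application registering a phase callback.
// Everything in the `hostcalls` module stands in for host-provided
// functionality and is an illustrative assumption.
#[allow(dead_code)]
mod hostcalls {
    pub enum Phase { PreCache, PostCache }
    pub struct Request { pub qname: String, pub qtype: u16 }
    pub enum Action { Continue, Respond(Vec<u8>), Forward(String) }

    // In reality this would cross the WebAssembly boundary; here it is a no-op.
    pub fn register_callback(_phase: Phase, _cb: fn(&Request) -> Action) {}
}

use hostcalls::{register_callback, Action, Phase, Request};

// Example business logic: forward PTR queries for RFC 1918 space (qtype 12 = PTR).
fn on_post_cache(req: &Request) -> Action {
    if req.qtype == 12 && req.qname.ends_with("10.in-addr.arpa.") {
        return Action::Forward("192.175.48.6:53".into());
    }
    Action::Continue
}

fn main() {
    // Called once when the guest application is instantiated.
    register_callback(Phase::PostCache, on_post_cache);
}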

AS112 guest application

The AS112 service is built as a guest application written in Rust and compiled to WebAssembly. The zones listed in RFC 7534 and RFC 7535 are loaded as static zones in memory and indexed as a tree data structure. Incoming queries are answered locally by looking up entries in the zone tree.
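
As an illustration of the zone-tree idea, here is a simplified Rust sketch. Reversing a name's labels lets a lookup walk from the root ("arpa", "in-addr", ...) toward the leaf; the data model here is our assumption for this example, not the actual in-memory representation:

use std::collections::HashMap;

// A tree of DNS labels; a flag marks nodes where a hosted zone apex sits.
#[derive(Default)]
struct ZoneNode {
    children: HashMap<String, ZoneNode>,
    is_zone_apex: bool,
}

impl ZoneNode {
    // Insert a zone name, creating one node per label from root to leaf.
    fn insert(&mut self, name: &str) {
        let mut node = self;
        for label in name.trim_end_matches('.').rsplit('.') {
            node = node.children.entry(label.to_string()).or_default();
        }
        node.is_zone_apex = true;
    }

    // Report whether any hosted zone apex encloses the query name.
    fn encloses(&self, qname: &str) -> bool {
        let mut node = self;
        let mut found = self.is_zone_apex;
        for label in qname.trim_end_matches('.').rsplit('.') {
            match node.children.get(label) {
                Some(child) => {
                    node = child;
                    found = found || child.is_zone_apex;
                }
                None => break,
            }
        }
        found
    }
}

fn main() {
    let mut tree = ZoneNode::default();
    tree.insert("10.in-addr.arpa.");
    tree.insert("empty.as112.arpa.");

    // A misdirected PTR query lands inside 10.in-addr.arpa.
    assert!(tree.encloses("1.2.3.10.in-addr.arpa."));
    // A name outside the hosted zones does not match.
    assert!(!tree.encloses("www.example.com."));
}

Here encloses only reports whether a hosted zone covers the query name; the real lookup would return the records (or the negative answer) found at the matched node.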

A router setting in the app manifest is added to tell the host what kind of DNS queries should be processed by the guest application, and a fallback_action setting is added to declare the expected fallback behavior.

# Declare what kind of queries the app handles.
router = [
    # The app is responsible for all the AS112 IP prefixes.
    "dst in { 192.31.196.0/24 192.175.48.0/24 2001:4:112::/48 2620:4f:8000::/48 }",
]

# If the app fails to handle the query, servfail should be returned.
fallback_action = "fail"

The guest application, along with its manifest, is then compiled and deployed through a deployment pipeline that leverages Quicksilver to store and replicate the assets worldwide.

The guest application is now up and running, but how does DNS query traffic destined for the new IP prefixes reach the DNS server? Do we have to restart the DNS server every time we add a new guest application? Of course not. We use software we developed and deployed earlier, called Tubular, which allows us to change the addresses of a service on the fly. With the help of Tubular, incoming packets destined for the AS112 service IP prefixes are dispatched to the right DNS server process without any change to, or release of, the DNS server itself.

Meanwhile, in order to make the misdirected DNS queries land on the Cloudflare network in the first place, we use BYOIP (Bringing Your Own IPs to Cloudflare), a Cloudflare product that announces customers’ own IP prefixes from all our locations. The four AS112 IP prefixes are onboarded to the BYOIP system, which will announce them globally.

Testing

How can we ensure the service we set up does the right thing before we announce it to the public Internet? 1.1.1.1 processes more than 13 billion of these misdirected queries every day, and it has logic in place to directly return NXDOMAIN for them locally, which is a recommended practice per RFC 7534.

However, we are able to use a dynamic rule to change how the misdirected queries are handled in Cloudflare testing locations. For example, a rule like the following:

phase = post-cache and qtype in { PTR } and colo in { test1 test2 } and qname-suffix in { 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa 254.169.in-addr.arpa } forward 192.175.48.6:53

The rule instructs that in data centers test1 and test2, when the DNS query type is PTR and the query name ends with one of the suffixes in the list, the query is forwarded to 192.175.48.6 (one of the AS112 service IPs) on port 53.

Because we’ve provisioned the AS112 IP prefixes in the same node, the new AS112 service will receive the queries and respond to the resolver.

It’s worth mentioning that the dynamic rules themselves are executed by another guest application, named override, which intercepts queries at the declared phase and changes how they get processed. This app loads all dynamic rules, parses the DSL text, and registers callback functions at the phases each rule declares. When an incoming query matches a rule’s expressions, the app executes the designated actions. A simplified sketch of the matching step follows.
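
Here is a rough Rust sketch of how a rule like the one above could be evaluated once parsed. The Rule structure and its field names are assumptions for this illustration, not the override app's actual internals:

// A parsed dynamic rule (illustrative fields only).
struct Rule<'a> {
    phase: &'a str,
    qtypes: &'a [&'a str],
    colos: &'a [&'a str],
    qname_suffixes: &'a [&'a str],
    forward_to: &'a str,
}

// Check a query's attributes against every clause of the rule.
fn matches(rule: &Rule, phase: &str, qtype: &str, colo: &str, qname: &str) -> bool {
    rule.phase == phase
        && rule.qtypes.iter().any(|t| *t == qtype)
        && rule.colos.iter().any(|c| *c == colo)
        && rule.qname_suffixes.iter().any(|s| qname.ends_with(*s))
}

fn main() {
    let rule = Rule {
        phase: "post-cache",
        qtypes: &["PTR"],
        colos: &["test1", "test2"],
        qname_suffixes: &["10.in-addr.arpa", "168.192.in-addr.arpa"],
        forward_to: "192.175.48.6:53",
    };
    // A reverse lookup for 10.3.2.1 arriving at the test1 data center.
    if matches(&rule, "post-cache", "PTR", "test1", "1.2.3.10.in-addr.arpa") {
        println!("forward to {}", rule.forward_to);
    }
}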

Public reports

We collect the following metrics to generate the public statistics that an AS112 operator is expected to share with the operator community; a rough sketch of how such counters might be tallied follows the list:

  • Number of queries by query type
  • Number of queries by response code
  • Number of queries by protocol
  • Number of queries by IP version
  • Number of queries with EDNS support
  • Number of queries with DNSSEC support
  • Number of queries by ASN/Data center
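
For illustration only, the per-dimension counters above might be tallied along these lines (the Metrics type and its fields are assumptions for this sketch, covering a subset of the dimensions):

use std::collections::HashMap;

// Hypothetical per-dimension counters for the public statistics.
#[derive(Default)]
struct Metrics {
    by_qtype: HashMap<String, u64>,
    by_rcode: HashMap<String, u64>,
    by_protocol: HashMap<String, u64>,
    dnssec_ok: u64,
}

impl Metrics {
    fn record(&mut self, qtype: &str, rcode: &str, protocol: &str, dnssec: bool) {
        *self.by_qtype.entry(qtype.to_string()).or_default() += 1;
        *self.by_rcode.entry(rcode.to_string()).or_default() += 1;
        *self.by_protocol.entry(protocol.to_string()).or_default() += 1;
        if dnssec {
            self.dnssec_ok += 1;
        }
    }
}

fn main() {
    let mut m = Metrics::default();
    // A typical misdirected reverse lookup: PTR over UDP, answered NXDOMAIN.
    m.record("PTR", "NXDOMAIN", "udp", false);
    println!("PTR queries so far: {}", m.by_qtype["PTR"]);
}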

We’ll serve the public statistics page on the Cloudflare Radar website. We are still working on implementing the required backend API and frontend of the page – we’ll share the link to this page once it is available.

What’s next?

We are going to announce the AS112 prefixes starting December 15, 2022.

After the service is launched, you can run a dig command to check if you are hitting an AS112 node operated by Cloudflare, like:

$ dig @blackhole-1.iana.org TXT hostname.as112.arpa +short

"Cloudflare DNS, SFO"
"See http://www.as112.net/ for more information."

Cloudflare achieves FedRAMP authorization to secure more of the public sector

Post Syndicated from Aron Nakazato original https://blog.cloudflare.com/cloudflare-achieves-fedramp-authorization/

This post is also available in Deutsch, Français and Español.

We are excited to announce our public sector suite of services, Cloudflare for Government, has achieved FedRAMP Moderate Authorization. The Federal Risk and Authorization Management Program (“FedRAMP”) is a US-government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP Moderate Authorization demonstrates Cloudflare’s continued commitment to customer trust, and Cloudflare for Government’s ability to secure and protect US public sector organizations.

Key differentiators

We believe public sector customers deserve the same experience as any other customer — so rather than building a separate platform, we leveraged our existing platform for Cloudflare for Government. Cloudflare’s platform protects and accelerates any Internet application without adding hardware, installing software, or changing a line of code. It’s also one of the largest and fastest global networks on the planet.

One of the things that distinguishes Cloudflare for Government from other FedRAMP cloud providers is the number of data centers we have in scope, each able to run our full stack of FedRAMP Authorized services locally, with a single control plane on our private backbone. Networking and security services can only improve the user experience if they run as close to the user as possible, even if the user isn’t near an east or west coast hub. While other cloud service providers may only have a handful of data centers within their FedRAMP environment, Cloudflare for Government includes over 30 of our US-based data centers. This provides Cloudflare for Government customers with the same speed, availability, and security that customers outside highly regulated industries have come to expect from us.

Cloudflare for Government services

Cloudflare for Government is a suite of services for U.S. government and public sector agencies, delivered from our global, highly resilient cloud network with built-in security and performance.

Application services

Web Application Firewall with API protection provides an intelligent, integrated and scalable solution to protect your critical web applications. Rate Limiting protects against denial of service attacks, brute force login attempts, and other abusive behavior that targets the application layer. Load Balancing improves application performance and availability by steering traffic from unhealthy origin servers and dynamically distributing it to the most available and responsive server pools.

Bot Management manages good and bad bots in real time, and helps prevent credential stuffing, content scraping, content spam, inventory hoarding, credit card stuffing, and application-layer DDoS. CDN provides ultra-fast static and dynamic content delivery over our global network; it offers precise control over how content is cached, helps reduce bandwidth costs, and takes advantage of built-in unmetered DDoS protection. Enterprise-grade DNS offers the fastest response time, unparalleled redundancy, and advanced security with built-in DDoS mitigation and DNSSEC.

Zero trust

Zero Trust Network Access creates secure boundaries for applications by allowing access to resources after verifying identity, context, and policy adherence for each specific request. Remote Browser Isolation provides a fast and reliable solution for remote browsing by running all browser code in the cloud. Secure Web Gateway protects users and data by inspecting user traffic, filtering and blocking malicious content, and identifying compromised devices.

Network services

Cloudflare for Government can replace your legacy WAN architecture with Cloudflare’s WAN-as-a-service, which provides expansive connectivity, cloud-based security, performance, and control. L3/4 DDoS protection defends your websites, applications, and network; Cloudflare blocks an average of 87 billion threats per day! Network Interconnect enables you to directly connect your on-premise networks and cloud-hosted environments to Cloudflare for Government.

Developer platform

Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure. Workers KV is a global, low-latency, key-value data store. It supports exceptionally high read volumes with low-latency, making it possible to build highly dynamic APIs and websites which respond as quickly as a cached static file would. Durable Objects provides low-latency coordination and consistent storage for the Workers platform through two features: global uniqueness and a transactional storage API.

What’s next for Cloudflare for Government

Our achievement of FedRAMP Moderate for our Cloudflare for Government suite of products is the first step in our journey to help secure government entities. As you may have read earlier this week, our focus hasn’t been only with the US public sector. Our Zero Trust products are being leveraged to protect critical infrastructure in Japan, Australia, Germany, Portugal, and the UK. We’re also securing organizations qualified under Project Galileo and Athenian with our Cloudflare One Zero Trust suite at no cost.  We will expand the Cloudflare for Government suite to allow governments all over the world to have the opportunity to use our services to protect their assets and users.

We aim to help agencies build stronger cybersecurity, without compromising the customer experience of the government services that all US citizens rely on. We invite all our Cloudflare for Government public and private partners to learn more about our capabilities and work with us to develop solutions to the rapidly evolving security demands required in complex environments. Please reach out to us at [email protected] with any questions.

For more information on Cloudflare’s FedRAMP status, please visit the FedRAMP Marketplace.

Independent report shows: moving to Cloudflare can cut your carbon footprint

Post Syndicated from Patrick Day original https://blog.cloudflare.com/independent-report-shows-moving-to-cloudflare-cuts-your-carbon-footprint/

This post is also available in 简体中文, Français and Español.

In July 2021, Cloudflare described that although we did not start out with the goal to reduce the Internet’s environmental impact, that has changed. Our mission is to help build a better Internet, and clearly a better Internet must be sustainable.

As we continue to hunt for efficiencies in every component of our network hardware, every piece of software we write, and every Internet protocol we support, we also want to understand, in terms of Internet architecture, how moving network security, performance, and reliability functions like those offered by Cloudflare from on-premise solutions to the cloud affects sustainability.

To that end, earlier this year we commissioned a study from the consulting firm Analysys Mason to evaluate the relative carbon efficiency of network functions like firewalls, WAF, SD-WAN, DDoS protection, content servers, and others that are provided through Cloudflare against similar on-premise solutions.

Although the full report will not be available until next year, we are pleased to share that according to initial findings:

Cloudflare Web Application Firewall (WAF) “generates up to around 90% less carbon than on-premises appliances at low-medium traffic demand.”

Needless to say, we are excited about these early findings, and we look forward to the full report, which early indications suggest will show more ways in which moving to Cloudflare can help reduce your infrastructure’s carbon footprint. However, like most things at Cloudflare, we see this as only the beginning.

Fixing the Internet’s energy/emissions problem

The Internet has a number of environmental impacts that need to be addressed, including raw material extraction, water consumption by data centers, and recycling and e-waste, among many others. But, none of those are more urgent than energy and emissions.

According to the United Nations, energy generation is the largest contributor to greenhouse gas emissions, responsible for approximately 35% of global emissions. If you think about all the power needed to run servers, routers, switches, data centers, and Internet exchanges around the world, it’s not surprising that the Boston Consulting Group found that 2% of all carbon output, about 1 billion metric tons per year, is attributable to the Internet.

Conceptually, reducing emissions from energy consumption is relatively straightforward: transition to zero-emissions energy sources, and use energy more efficiently in order to speed that transition. Practically, however, applying those concepts to geographically distributed, disparate networks and systems like the global Internet is far more difficult.

To date, much has been written about improving the efficiency of individual pieces of network hardware (like Cloudflare’s deployment of more efficient Arm CPUs) and the power usage effectiveness, or “PUE”, of hyperscale data centers. However, we think there are significant efficiency gains to be made throughout all layers of the network stack, as well as in the basic architecture of the Internet itself. We think this study is the first step in investigating those underexplored areas.

How is the study being conducted?

Because the final report is still being written, we’ll have more information about its methodology upon publication. But, here is what we know so far.

To estimate the relative carbon savings of moving enterprise network functions, like those offered by Cloudflare, to the cloud, the Analysys Mason team is evaluating a wide range of such functions, including firewalls, WAF, SD-WAN, DDoS protection, and content servers. For each function, they are modeling a variety of scenarios, including usage, different sizes and types of organizations, and different operating conditions.

Information relating to the power and capacity of each on-premise appliance is being sourced from public data sheets from relevant vendors. Information on Cloudflare’s energy consumption is being compiled from internal datasets of total power usage of Cloudflare servers, and the allocation of CPU resources and traffic between different products.

Final report — coming soon!

According to the Analysys Mason team, we should expect the final report sometime in early 2023. Until then, we do want to mention again that the initial WAF results described above may be subject to change as the project continues, and assumptions and methodology are refined. Regardless, we think these are exciting developments and look forward to sharing the full report soon!

Sign up for Cloudflare today!

A more sustainable end-of-life for your legacy hardware appliances with Cloudflare and Iron Mountain

Post Syndicated from May Ma original https://blog.cloudflare.com/sustainable-end-of-life-hardware/

Today, as part of Cloudflare’s Impact Week, we’re excited to announce an offering that makes it easier for Cloudflare customers to decommission and dispose of their used hardware appliances sustainably. We’re partnering with Iron Mountain to offer preferred pricing and discounts for Cloudflare customers that recycle or remarket legacy hardware through its service.

Replacing legacy hardware with Cloudflare’s network

Cloudflare’s products enable customers to replace legacy hardware appliances with our global network. Connecting to our network enables access to firewall (including WAF and Network Firewalls, Intrusion Detection Systems, etc), DDoS mitigation, VPN replacement, WAN optimization, and other networking and security functions that were traditionally delivered in physical hardware. These are served from our network and delivered as a service. This creates a myriad of benefits for customers including stronger security, better performance, lower operational overhead, and none of the headaches of traditional hardware like capacity planning, maintenance, or upgrade cycles. It’s also better for the Earth: our multi-tenant SaaS approach means more efficiency and a lower carbon footprint to deliver those functions.

But what happens with all that hardware you no longer need to maintain after switching to Cloudflare?

The life of a hardware box

The life of a hardware box begins on the factory line at the manufacturer. Boxes are then packaged, shipped, and installed at the destination infrastructure, where they provide processing power to run front-end products and services and to route network traffic. Occasionally, if the hardware fails to operate, or its performance declines over time, it will get fixed or will be returned for replacement under the warranty.

When none of these options work, the hardware box is considered end-of-life and it “dies”. This hardware must be decommissioned by being disconnected from the network, and then physically removed from the data center for disposal.

The useful lifespan of hardware depends on the availability of newer generations of processors, which deliver critical efficiency improvements in cost, performance, and power. In general, the industry-standard hardware decommissioning timeline is three to six years after installation. Refreshing these physical assets at the lower end of that lifespan spectrum has additional benefits, keeping your infrastructure at optimal performance.

When hardware still works but is replaced by newer technology, it would be a waste to discard this gear; instead, there can be recoverable value in outdated hardware. Simply tossing unwanted hardware into the trash, where it eventually becomes landfill, has devastating consequences: these electronic devices contain hazardous materials like lithium, palladium, lead, copper, cobalt, and mercury, which can contaminate the environment. Below, we explain sustainable alternatives and cost-beneficial practices for disposing of your infrastructure hardware.

Option 1: Remarket / Reuse

For hardware that still works, the most sustainable route is to sanitize it of data, refurbish it, and resell it in the second-hand market at a depreciated cost. Some IT asset disposition firms also repurpose used hardware to maximize its market value, for example by harvesting components from a device to build part of another product and selling that at a higher price. For working parts that have very little resale value, companies can also consider reusing them to build a spare-parts inventory for replacing failed parts in the data centers later.

The benefits of remarketing and reuse are many. They help maximize hardware’s return on investment by reclaiming value at the end-of-life stage, offering financial benefits to the business. They reduce discarded electronics, or e-waste, and its harmful effects on our environment, helping socially responsible organizations build a more sustainable business. Lastly, they provide alternatives for individuals and organizations that cannot afford to buy new IT equipment.

Option 2: Recycle

For used hardware that cannot be remarketed, it is recommended to engage an asset disposition firm to professionally strip it of any valuable and recyclable materials, such as precious metals and plastics, before physical destruction. Similar to remarketing, recycling reduces environmental impact and cuts down the amount of raw materials needed to manufacture new products.

A key factor in hardware recycling is a secure chain of custody: a supplier with the right certifications and, preferably, its own fleet and secure facilities to process the equipment properly and securely.

Option 3: Destroy

From a sustainability point of view, this route should only be used as a last resort. When hardware does not operate as intended and has no remarketing or recycling value, an asset disposition supplier removes all asset tags and identifying information from it in preparation for physical destruction. Depending on their disposal policies, some companies choose to sanitize and destroy all data-bearing hardware, such as SSDs and HDDs, for security reasons.

To further maximize recycling value and reduce e-waste, it is recommended to keep security policies for discarded IT equipment up to date and, wherever possible, to reuse working devices after professional data wiping.

At Cloudflare, we follow an industry-standard capital depreciation timeline, which culminates in recycling actions through the engagement of IT asset disposition partners, including Iron Mountain. Through these partnerships, aside from data-bearing hardware, which per our security policy is sanitized and destroyed, approximately 99% of Cloudflare’s decommissioned IT equipment is sold or recycled.

Partnering with Iron Mountain to make sustainable goals more accessible

Hardware decommissioning can be a burden on a business: operational strain, complex processes, a lack of streamlined execution, and the risk of a data breach. Our experience shows that partnering with an established firm like Iron Mountain, which specializes in IT asset disposition, helps kick-start a hardware recycling journey.

Iron Mountain has more than two decades of experience working with hyperscale technology and data centers. It is a market leader in decommissioning, data security, and remarketing capabilities, with a wide footprint of facilities to support its customers’ sustainability goals globally.

Today, Iron Mountain has generated more than US$1.5 billion through value recovery and is continually developing new ways to sell mass volumes of technology for their best use. Beyond its end-to-end decommissioning offering, Iron Mountain provides two additional services we find valuable: a quarterly survey report presenting insights into the used-hardware market, and a sustainability report measuring the environmental impact of the total hardware processed with each customer.

Get started today

Get started today with Iron Mountain on your hardware recycling journey by signing up here. After receiving the completed contact form, Iron Mountain will consult with you on the best solution possible. It offers multiple programs, including revenue share, fair market value, and guaranteed destruction with proper recycling. For example, when it comes to reselling used IT equipment, Iron Mountain will propose an appropriate revenue split, namely what percentage of the sale value is shared with the customer, based on business needs. Iron Mountain’s secure chain of custody, with added solutions such as redeployment, equipment retrieval programs, and onsite destruction, ensures it can tailor the solution that works best for your company’s security and environmental needs.

And in collaboration with Cloudflare, Iron Mountain offers an additional two percent on your revenue share for remarketed items and a five percent discount on the standard fees for other IT asset disposition services if you are new to Iron Mountain and use these services via the link in this blog.

Historical emissions offsets (and Scope 3 sneak preview)

Post Syndicated from Patrick Day original https://blog.cloudflare.com/historical-emissions-offsets-and-scope-3-sneak-preview/

In July 2021, Cloudflare committed to removing or offsetting the historical emissions associated with powering our network by 2025. Earlier this year, after a comprehensive analysis of our records, we determined that our network has emitted approximately 31,284 metric tons (MTs) of carbon dioxide equivalent (CO2e) since our founding.

Today, we are excited to announce our first step toward offsetting our historical emissions by investing in 6,060 MTs’ worth of reforestation carbon offsets as part of the Pacajai Reduction of Emissions from Deforestation and forest Degradation (REDD+) Project in the State of Para, Brazil.

Generally, REDD+ projects attempt to create financial value for carbon stored in forests by using market approaches to compensate landowners for not clearing or degrading forests. From 2007 to 2016, approximately 13% of global carbon emissions from anthropogenic sources were the result of land use change, including deforestation and forest degradation. REDD+ projects are considered a low-cost policy mechanism to reduce emissions and promote co-benefits of reducing deforestation, including biodiversity conservation, sustainable management of forests, and conservation of existing carbon stocks. REDD projects were first recognized as part of the 11th Conference of the Parties (COP) of the United Nations Framework Convention on Climate Change in 2005, and REDD+ was further developed into a broad policy initiative and incorporated in Article 5 of the Paris Agreement.

The Pacajai Project is a Verra-verified REDD+ project designed to stop deforestation and preserve local ecosystems. Specifically, it implements sustainable forest management and supports the socioeconomic development of riverine communities in Para, in Northern Brazil near the Amazon River. The goal of the project is to train village families in land-use stewardship to protect the rainforest, as well as in agroforestry techniques that will help farmers transition to crops with smaller footprints, reducing the need to burn and clear large sections of adjacent forest.

If you follow sustainability initiatives at Cloudflare, including on this blog, you may know that we have also committed to purchasing renewable energy to account for our annual energy consumption. So how do all of these commitments and projects fit together? What is the difference between renewable energy (credits) and carbon offsets? Why did we choose offsets for our historical emissions? Great questions; here is a quick recap.

Cloudflare sustainability commitments

Last year, Cloudflare announced two sustainability commitments. First, we committed to powering our operations with 100% renewable energy, meaning each year we will purchase the same amount of zero-emissions energy (wind, solar, etc.) as we consume in all of our data centers and facilities around the world. Matching our energy consumption annually with renewable energy purchases ensures that, under carbon accounting standards like the Greenhouse Gas Protocol (GHG), Cloudflare’s annual net emissions (or “market-based emissions”) from purchased electricity will be zero. This is important because purchased electricity accounts for about 99.9% of Cloudflare’s 2021 emissions.

Renewable energy purchases help make sure Cloudflare accounts for its emissions from purchased electricity moving forward; however, it does not address emissions we generated prior to our first renewable energy purchase in 2018 (what we are calling “historical emissions”).

To that end, our second commitment was to “remove or offset all of our historical emissions resulting from powering our network by 2025.” For this initiative, we purposefully chose to use carbon removals or offsets, like the Pacajai REDD+ Project, rather than more renewable energy purchases (also called renewable energy credits, renewable energy certificates, or RECs).

Renewable energy vs. offsets and removals

Renewable energy certificates (RECs) and carbon offsets are both used by organizations to help mitigate their emissions footprint, but they are fundamentally different instruments.

Renewable energy certificates are created by renewable energy generators, like wind and solar farms, and represent a unit (e.g. 1 megawatt-hour) of low or zero emissions energy delivered to a local power grid. Individuals, organizations, and governments are able to purchase those units of energy, and legally claim their environmental benefits, even if the actual power they consume is from the standard electrical grid.

Source: U.S. Environmental Protection Agency, Offsets and RECs: What’s the Difference?

A carbon offset, according to the World Resources Institute (WRI), is “a unit of carbon dioxide-equivalent (CO2e) that is reduced, avoided, or sequestered.” Offsets can include a wide variety of projects, including reforestation, procurement of more efficient cookstoves in developing nations, avoidance of methane from municipal solid waste sites, and purchasing electric and hybrid vehicles for public transportation.

Carbon removals are a type of carbon offsets that involve actual removal of an amount of carbon from the atmosphere. According to WRI, carbon removal projects include “natural strategies like tree restoration and agricultural soil management; high-tech strategies like direct air capture and enhanced mineralization; and hybrid strategies like enhanced root crops, bioenergy with carbon capture and storage, and ocean-based carbon removal.”

As the climate crisis accelerates, carbon removals are an increasingly important part of global net zero efforts. For example, a recent analysis by the U.S. National Academy of Sciences and the Intergovernmental Panel on Climate Change (IPCC) found that even with rapid investment in emissions reductions (like increasing renewable energy supply), the United States must remove 2 gigatons of CO2 per year by midcentury to reach net zero.

Source: World Resources Institute, Carbon Removal

RECs, offsets, and removals are all important tools for individuals, organizations, and governments to help lower their emissions footprint, and each has a specific purpose. As the U.S. Environmental Protection Agency puts it, “think of offsets and RECs as two tools in your sustainability tool box — like a hammer and a saw.” For example, RECs can only be used to account for emissions from an organization’s purchased electricity (Scope 2 emissions). Whereas offsets can be used to account for emissions from combustion engines and other direct emissions (Scope 1), purchased electricity (Scope 2), or carbon emitted by others, including supply chain and logistics emissions (Scope 3). In addition, some sustainability initiatives, like the Science Based Targets Initiative (SBTi) Net-Zero Standard, require the use of removals rather than other types of offsets.

Why did Cloudflare choose offsets or removals to account for its historical emissions?

We decided on a combination of offsets and removals for two reasons. The first reason is technical and relates to RECs and vintage years. Every REC produced by a renewable generator must include the date and time it was delivered to the local electrical grid. So, for example, RECs associated with renewable energy generation by a wind facility during the 2022 calendar year are considered 2022 vintage. Most green energy or renewable energy standards require organizations to purchase RECs from the same vintage year as the energy they are seeking to offset. Therefore, finding RECs to account for energy used by our network in 2012 or 2013 would be difficult, if not impossible, and purchasing current year RECs would be inconsistent with most standards.

The second reason we chose offsets and removals is that it gives us more flexibility to support different types of projects. As mentioned above, offset projects can be incredibly diverse and can be purchased all over the world. This gives Cloudflare the opportunity to support a variety of carbon reduction, avoidance, and sequestration projects that also contribute to other sustainable development goals like decent work and economic growth, gender equality and reduced inequalities, and life on land and below water.

How did we calculate historical emissions?

Once we decided how we planned to offset our historical emissions, we needed to determine how much to offset. Earlier this year our Infrastructure team led a comprehensive review of all historical asset records to create an annual picture of what hardware we deployed, the number of servers, the energy consumption of each model and configuration, and total energy consumption.

We also cross-checked our hardware deployment records with a review of all of our blog posts and other public statements documenting our network growth over the years. It was actually a pretty interesting exercise. Not only to see the cover art from some of our early blogs (our New Jersey data center announcement is a favorite), but more importantly to relive the amazing growth of our network, step by step, from three data centers in 2010 to more than 275 cities in over 100 countries! Pretty cool.

Finally, we converted those annual energy totals to emissions using a global average emissions factor from the International Energy Agency (IEA).

Energy (kWh) x Emissions Factor (gCO2e/kWh) = Carbon Emissions (gCO2e)
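
For a worked illustration, take a hypothetical 1,000,000 kWh of annual consumption and a factor of roughly 475 gCO2e/kWh (the often-cited IEA global average; both numbers here are assumptions for the example, not Cloudflare's actuals):

1,000,000 kWh x 475 gCO2e/kWh = 475,000,000 gCO2e = 475 MTs of CO2e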

In total, we estimated that based on total power consumption, our network produced 31,284 MTs of CO2e prior to our first renewable energy purchase in 2018. We are proud to invest in offsets to mitigate the first 6,060 MTs this year; only 25,224 MTs to go.

Scope 3 emissions — sneak preview

Now that we have a firm understanding of, and reporting and accounting for, our current and past Scope 1 and Scope 2 emissions, we think it is time to focus on Scope 3.

Cloudflare published its first company-wide emissions inventory in 2020. Since then, we have focused our reporting and mitigation efforts on our Scope 1 and Scope 2 emissions, as required under the GHG Protocol. And although Scope 3 reporting remains optional, we think understanding these emissions is an increasingly important part of every organization’s responsibility to account for its total carbon footprint.

To that end, earlier this year we started a comprehensive internal assessment of all of our potential Scope 3 emissions sources. Like most things at Cloudflare, we are starting with our network: everything from embodied carbon in the hardware we buy, to shipping and logistics for moving our data center and server equipment around the world, to how we decommission and responsibly dispose of our assets.

Developing processes to quantify those emissions is one of our top objectives for 2023, and we plan to have more information to share soon. Stay tuned!

How we redesigned our offices to be more sustainable

Post Syndicated from Caroline Quick original https://blog.cloudflare.com/sustainable-office-design/

At Cloudflare, we are working hard to ensure that we are making a positive impact on the surrounding environment, with the goal of building the most sustainable network. At the same time, we want to make sure that the positive changes that we are making are also something that our local Cloudflare team members can touch and feel, and know that in each of our actions we are having a positive impact on the environment around us. This is why we make sustainability one of the underlying goals of the design, construction, and operations of our global office spaces.

To make this type of pervasive change, we have focused our efforts in three main areas: working with sustainable construction materials, operating efficiently, and purchasing renewable energy (from clean sources like sunlight and wind). We believe that sustainable design goes far beyond purchasing recycled and regenerative products: if we don’t operate our spaces with efficiency and renewables in mind, we haven’t fully accounted for our environmental impact.

Sustainability in office design & construction

“The Retreat” in the San Francisco Cloudflare office, featuring preserved moss and live plants‌‌

Since 2020, we have been redefining how our teams work together, and how work takes place in physical spaces. You may have read last year about how we are thinking about the future of work at Cloudflare – and the experimentation that we are doing within our physical environments. Sustainable and healthy spaces are a major element to this concept.

We are excited to highlight a few of the different products and concepts that are currently being used in the development of our workplaces – both new locations and in the reimagination of our existing spaces. While experimenting with the way that our teams work together in person, we also consider our new and updated spaces a sort of sustainability learning lab. As we get more and more data on these different systems, we plan to expand these concepts to other global locations as we continue to think through the future of the in-office experience at Cloudflare.

An example of sustainable acoustic baffles as seen in our San Francisco office

Baffling baffles, fishing nets and more

It’s our goal to have the products, furniture, and systems that make up our offices be sustainable in a way that is pleasantly (and surprisingly) pervasive. Their materials, construction, and transportation should have either a minimal, or regenerative, impact on the environment or the waste stream while also meeting high performance standards. A great example of this is the acoustic sound baffling used in our recent San Francisco and London redesign and currently being installed at our newest office, which is under construction.

If you’ve ever worked in an open office, you know that effective sound management is critical, regardless of whether the space is for collaborative or focus work. To help with this challenge, we use a substantial number of acoustic baffles to significantly reduce sound transfer. Traditionally, baffles are made out of tightly woven synthetic fibers. Unfortunately, a majority of the baffles on the market today add new plastic to the waste stream.

We chose to move away from traditional baffles by installing FilaSorb acoustic baffles by AcouFelt. The fibers in FilaSorb are made from post-consumer plastic beverage bottles diverted from landfills: every square foot of our FilaSorb felt contains regenerated fibers made from more than ten 20 oz recycled bottles. Each panel has a useful life of over twenty years, and at the end of its life the panel can be recycled again.

The International Institute of Living Futures has certified that this product is acceptable for the Living Building Challenge, which is the most rigorous regenerative building standard in the world.

Similarly to FilaSorb, we also installed BAUX Acoustic Wood Wool paneling to provide additional sound dampening and a vibrant acoustic wall treatment. Designed using a process that focuses on recarbonation, BAUX Wood Wool panels absorb over 6.9 kg per meter squared of carbon dioxide. That’s a little over 70% of the total measured CO2 released during the entire manufacturing life cycle of the panel. Beyond their acoustic benefits, Wood Wool panels resist heat and are ideal insulators. This enables us to use less energy in heating and cooling to maintain a stable temperature in fluctuating weather.

Interface’s Net Effect Carpet Collection uses discarded fishing nets in their construction

Flooring is also a significant focus of our design team. We wanted a hard-wearing material with brilliant color and strong regenerative properties across the full manufacturing lifecycle. We were fortunate to find Interface’s Net Effect Collection; Interface is one of the few fully certified carbon-neutral flooring providers.

Their Net Effect collection is made with 100% recycled content nylon, including postconsumer nylon from discarded fishing nets gathered through their Net-Works® partnership. Net-Works provides a source of income for small fishing villages in the Philippines while cleaning up their beaches and waters. The collected nets are sold to Aquafil, who, in turn, converts them into yarn for Interface carpet tile.

Furniture in landfills? Oh, my!

One shocking stat in particular has stood out to our team over the past two and a half years as we have been rethinking our office spaces: 8.5 million tons of office furniture ends up in landfills per year. That number predates the global pandemic, which completely redefined how companies think about their real estate footprints and shuttered a massive amount of office space in the United States. Major US cities like San Francisco and New York City still have commercial office vacancy rates upwards of 30% at the time of publishing. To do our part to keep furniture out of landfills, we are reusing (and in some cases completely repurposing) our existing furniture portfolio as much as possible in every one of our projects.

We have taken it a step further to include our employees working from home. We commonly lend out office chairs and other unused office furniture to home office workers so that they don’t have to purchase new office furniture.

Sustainability in Office Operations

Rainwater harvesting system at our San Francisco office

We haven’t just been thinking about how our construction materials can have a more positive impact on the environment. We’ve also been incredibly focused on trialing a number of different sustainable operations concepts within our spaces.

For instance, we have installed a 500-gallon rainwater harvesting system above the outdoor bike storage in our San Francisco office, designed to support our internal gray water needs. We understand the importance of natural light and plants in encouraging the health and wellbeing of our teammates, so our San Francisco office has a vast number of plants. While we chose our plants for their low water consumption, they still require water, and our rainwater capture system provides the water for all of them.

Additionally, we are focused on cultural changes amongst our staff to reduce our waste streams (which was no small feat amongst our die-hard LaCroix fans!). We have adopted Bevi sparkling and flavored water dispensing machines alongside traditional soda fountains to fully remove bottled water from our facilities. We also shifted to bulk snacks to further reduce the packaging entering recycling centers and landfills.

Renewable energy purchasing

Our San Francisco office is also giving us direct, on-the-ground exposure to the complexities of renewable power sourcing in a shared grid environment. To guarantee we are using all renewable energy, we purchase our power through Pacific Gas and Electric’s Supergreen service. But we don’t stop there: to ensure our energy usage is fully backed by renewable power, we separately purchase additional renewable energy as if we didn’t already have sustainable power.

Coming soon: bees!

We are just getting started on our sustainability journey at Cloudflare. Over the next few years, we will continue to design, develop, and deploy a variety of solutions to help make our offices as regenerative as possible. To leave you with a taste of where we are headed in 2023, I am excited to introduce a project we are all thrilled about: EntroBees. As you have likely heard, the global bee population has dropped dramatically, and a quarter of bee species are at risk of extinction. We want to do our part to help bees thrive in urban environments.

Slated for installation at one of our global office locations, EntroBees will be fully managed onsite honey bee colonies. These colonies will provide a much-needed habitat for urban bees, produce honey for our local employees, and also serve as an additional source of entropy for our LavaRand system that provides the source of randomness for Cloudflare’s entire encryption system.

How we’re making Cloudflare’s infrastructure more sustainable

Post Syndicated from Rebecca Weekly original https://blog.cloudflare.com/extending-the-life-of-hardware/

Whether you are building a global network or buying groceries, some rules of sustainable living remain the same: be thoughtful about what you get, make the most out of what you have, and try to upcycle your waste rather than throwing it away. These rules are central to Cloudflare — we take helping build a better Internet seriously, and we define this as not just having the most secure, reliable, and performant network — but also the most sustainable one.

With the incredible growth of the Internet, and the increased usage of Cloudflare’s network, even incremental improvements to the sustainability of our hardware today compound into large gains in the future. We want to use this post to outline how we think about the sustainability impact of the hardware in our network, and what we’re doing to continually mitigate that impact.

Sustainability in the realm of servers

The total carbon footprint of a server is approximately 6 tons of Carbon Dioxide equivalent (CO2eq) when used in the US. There are four parts to the carbon footprint of any computing device:

  1. The embodied emissions: source materials and production
  2. Packing and shipping
  3. Use of the product
  4. End of life.

The emissions from the actual operation and use of a server account for the vast majority of the total life-cycle impact. The secondary impact is embodied emissions, the carbon footprint from creating the device in the first place, which accounts for about 10% of the total.

Use of Product Emissions

It’s difficult to reduce the total emissions from operating servers: if there’s a workload that needs computing power, the server will complete the workload and use the energy required to do so. What we can do, however, is consistently improve the amount of computing output per kilogram of CO2 emitted, and the way we do that is to consistently upgrade our hardware to the most power-efficient designs. As we switch from one generation of server to the next, we often see very large increases in computing output at the same level of power consumption. In this regard, given that energy is a large cost for our business, our incentive to reduce our environmental impact is naturally aligned with our business model.

Embodied Emissions

The other large category of emissions, the embodied emissions, is a domain where we actually have much more control than over the use of the product. As a reminder, embodied carbon means the emissions generated outside the equipment’s operation. How can we reduce the embodied emissions involved in running a fleet of servers? It turns out there are a few ways: modular design, relying on open rather than proprietary standards to enable reuse, and recycling.

Modular Design

The first big opportunity is through modular system design. Modular systems are a great way of reducing embodied carbon, as they result in fewer new components and allow for parts that don’t have efficiency upgrades to be leveraged longer. Modular server design is essentially decomposing functions of the motherboard onto sub-boards so that the server owner can selectively upgrade the components that are required for their use cases.

How much of an impact can modular design have? Well, if 30% of the server is delivering meaningful efficiency gains (usually CPU and memory, sometimes I/O), we may really need to upgrade those in order to meet efficiency goals, but creating an additional 70% overhead in embodied carbon (i.e. the rest of the server, which often is made up of components that do not get more efficient) is not logical. Modular design allows us to upgrade the components that will improve the operational efficiency of our data centers, but amortize carbon in the “glue logic” components over the longer time periods for which they can continue to function.
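
To make the trade-off concrete, here is a rough, illustrative calculation using the figures above (a ~6-ton total server footprint, of which ~10% is embodied carbon, with ~70% of components reusable):

Embodied carbon ≈ 0.10 x 6 tons ≈ 0.6 tons CO2e per server
Avoided per refresh ≈ 0.70 x 0.6 tons ≈ 0.42 tons CO2e per server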

Previously, many systems providers drove arbitrary and unnecessary changes in the peripherals (custom I/Os, or outputs not needed for a specific use case, such as VGA for crash carts we might never use given remote operations), which forced a new motherboard design for every new CPU socket design. By standardizing those interfaces across vendors, we can now source only the components we need and reuse a larger percentage of systems ourselves. This trend also helps with reliability (sub-boards are better tested) and supply assurance (standardized subcomponent boards can be sourced from more vendors), something all of us in the industry have had top of mind given the global supply challenges of the past few years.

Standards-based Hardware to Encourage Re-use

But even with modularity, components need to go somewhere after they’ve been deprecated, and historically that place has been a landfill. There is demand for second-hand servers, but many have been parts of closed systems with proprietary firmware and BIOS, so repurposing them has been costly, or integrating them into new systems impossible. The economics of a circular economy are such that service fees for closed firmware and BIOS support, as well as proprietary or non-standardized interconnects, can make reuse prohibitively expensive. How do you solve this? If servers can be supported using open source firmware and BIOS, the cost of reusing the parts drops dramatically, so another provider can support the new customer.

Recycling

Beyond that, though, there are parts failures, or parts that are simply no longer economical to run, even in the second-hand market. Metal recycling can always be done, and some manufacturers are starting to invest in programs there, although the energy required to extract the usable elements sometimes doesn’t make sense. There is innovation in this domain: Zhan et al. (2020) developed an environmentally friendly and efficient hydrothermal-buffering technique for recycling GaAs-based ICs, achieving gallium and arsenic recovery rates of 99.9% and 95.5%, respectively. Adoption is still limited; most manufacturers are discussing water recycling and renewable energy rather than full-fledged recycling of metals, but we’re closely monitoring the space to take advantage of any further innovation.

What Cloudflare is Doing To Reduce Our Server Impact

These aren't just concepts: we are doing this work today, under two main banners. We are taking steps to reduce embodied emissions through modular, open-standards design, and we are using the most power-efficient solutions for our workloads.

Gen 12: Walking the Talk

Our next generation of servers, Gen 12, is coming soon. It emphasizes modular design and open standards to enable reuse of the components inside our servers.

A modular-driven design

Historically, every generation of server here at Cloudflare has required a massive redesign. An upgrade to a new CPU meant a new motherboard, power supply, chassis, memory DIMMs, and BMC. This, in turn, might mean new fans, storage, network cards, and even cables. However, many of these components do not change drastically from generation to generation: they are built on older manufacturing processes and use interconnect protocols that do not require the latest speeds.

To help illustrate this, consider our Gen 11 server today: a single-socket server draws ~450W, with the CPU and associated memory taking about 320W of that (potentially 360W at peak load). All the other components in the system (mentioned above) account for ~100W of operational power (mostly fans, which is why so many companies are exploring alternative cooling designs), so they are not where optimization efforts or newer ICs will meaningfully improve the system's efficiency. So, instead of rebuilding all those pieces from scratch for every new server, and generating that much more embodied carbon, we reuse them as often as possible.
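As a quick back-of-the-envelope check on those figures (a sketch using the approximate numbers above; real draw varies with load and configuration):

```python
# Approximate Gen 11 power budget, using the rough figures from above.
CPU_AND_MEMORY_W = 320    # ~360 W at peak load
OTHER_COMPONENTS_W = 100  # fans, storage, NICs, BMC, cables, etc.

total_w = CPU_AND_MEMORY_W + OTHER_COMPONENTS_W
upgrade_sensitive_share = CPU_AND_MEMORY_W / total_w
print(f"~{upgrade_sensitive_share:.0%} of system power sits in parts worth "
      "upgrading; the rest is better reused than rebuilt")  # ~76%
```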

By disaggregating components that require changes for efficiency reasons from other system-level functions (storage, fans, BMCs, programmable logic devices, etc.), we are able to maximize reuse of electronic components across generations. Building systems modularly like this significantly reduces our embodied carbon footprint over time. Consider how much waste would be eliminated if you were able to upgrade your car’s engine to improve its efficiency without changing the rest of the parts that are working well, like the frame, seats, and windows. That’s what modular design is enabling in data centers like ours across the world.

A Push for Open Standards, Too

We, as an industry, have to work together to accelerate interoperability across interfaces, standards, and vendors if we want to achieve true modularity and our goal of a 70% reduction in e-waste. We have begun this effort by adopting standard add-in-card form factors (OCP 2.0 and 3.0 NICs, Datacenter Secure Control Module for our security and management modules, etc.), and our next server design leverages the Datacenter Modular Hardware System, an open-source design specification that allows modular subcomponents to be connected across common buses, regardless of the system manufacturer. This approach lets us keep these components in service over multiple generations without incurring more carbon debt on parts that don't change as often as CPUs and memory.

To enable a more comprehensive circular economy, Cloudflare has made extensive and increasing use of open-source solutions: OpenBMC, for example, is a requirement for all of our vendors, and we work to ensure fixes are upstreamed to the community. Open system firmware allows for greater security through auditability, but the most important factor for sustainability is that a new party can assume responsibility and support for a server, which allows systems that might otherwise have to be destroyed to be reused. This ensures that (other than data-bearing assets, which are destroyed in accordance with our security policy) 99% of hardware used by Cloudflare is repurposed, reducing the number of new servers that need to be built to meet global capacity demand. Further details about how that happens – and how you can join our vision of reducing e-waste – can be found in this blog post.

Using the most power-efficient solutions for our workloads

The other big way we can push for sustainability in our hardware, while responding to an exponential increase in demand without wastefully throwing more servers at the problem, is simple in concept and difficult in practice: testing and deploying more power-efficient architectures and tuning them for our workloads. This means not only evaluating the efficiency of our next generation of servers and networking gear, but also reducing hardware and energy waste in our existing fleet.

Currently, in production, we see that Gen 11 servers can handle about 25% more requests than Gen 10 servers for the same amount of energy. This is about what we expected when we were testing in mid-2021, and is exciting to see given that we continue to launch new products and services we couldn’t test at that time.

System power efficiency is not as simple a concept as it used to be for us. Historically, the key metric for assessing efficiency has been requests per second per watt. This metric allowed for multi-generational performance comparisons when qualifying new generations of servers, but it was really designed with our historical core product suite in mind.

We want – and, as a matter of scaling, require – our global network to be an increasingly intelligent threat detection mechanism, and also a highly performant development platform for our customers. As anyone who has compared benchmarks when shopping for a new computer knows, fast performance in one domain (traditional benchmarks such as SpecInt_Rate, STREAM, etc.) does not necessarily mean fast performance in another (e.g. AI inference, video processing, bulk object storage). The validation testing process for our next generation of servers needs to take all of these workloads and their relative prevalence into account — not just requests. The deep partnership between hardware and software possible at Cloudflare enables optimization opportunities that companies running third-party code cannot pursue. I often say this is one of our superpowers, and it is the opportunity that makes me most excited about my job every day.
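One way to fold multiple workloads into a single, comparable efficiency score is to weight each workload's perf-per-watt gain by its prevalence in the fleet. Here's a minimal sketch; the workload names, weights, and throughput numbers are invented for illustration and are not Cloudflare data:

```python
# Hypothetical multi-workload efficiency scoring; all numbers are invented.

def relative_efficiency(new: dict[str, float], old: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Weighted average of per-workload perf/W gains vs. the prior generation."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[w] * (new[w] / old[w]) for w in weights)

gen_a = {"http_requests": 40.0, "ml_inference": 5.0, "video": 2.5}  # units/W
gen_b = {"http_requests": 50.0, "ml_inference": 8.0, "video": 3.0}
mix   = {"http_requests": 0.70, "ml_inference": 0.20, "video": 0.10}

print(f"{relative_efficiency(gen_b, gen_a, mix):.2f}x")  # ~1.3x overall
```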

The other way we can be both sustainable and efficient is by leveraging domain-specific accelerators. Accelerators are a wide field, and we've seen incredible opportunities with application-level accelerators (see our recent announcement on AV1 hardware acceleration for Cloudflare Stream) as well as infrastructure accelerators (sometimes referred to as Smart NICs). That said, adding new silicon to our fleet only adds to the problem if it isn't as efficient as the thing it replaces, and node-level performance analysis often misses the complexity of deployment in a fleet as distributed as ours. So we're moving quickly, but cautiously.

Moving Forward: Industry Standard Reporting

We’re pushing by ourselves as hard as we can, but there are certain areas where the industry as a whole needs to step up.

In particular: there is a woeful lack of standards for emissions reporting around server component manufacturing and operation, so we are engaging with standards bodies like the Open Compute Project to help define sustainability metrics for the industry at large. This post explains how we are increasing our efficiency and decreasing our carbon footprint from generation to generation, but there should be a clear, shared methodology that lets you know what kind of businesses you are supporting.

The Greenhouse Gas (GHG) Protocol initiative is doing a great job of developing internationally accepted GHG accounting and reporting standards for business and promoting their broad adoption. It defines scope 1 emissions as "the direct carbon accounting of a reporting company's operations," which are relatively easy to calculate, and scope 3 emissions as "the indirect value chain emissions" (scope 2, in between, covers purchased energy). To have standardized metrics across the entire life cycle of computing equipment, we need the carbon footprint of the subcomponents' manufacturing processes, supply chains, transportation, and even the construction methods used in building our data centers.
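To sketch why consistent embodied (scope 3) numbers matter, here is the shape of the lifecycle accounting we would like to be able to do rigorously. Every figure below is a made-up placeholder, not reported data:

```python
# Illustrative lifecycle accounting; all constants are invented placeholders.

EMBODIED_KG_CO2E = 1200       # scope 3: manufacturing, transport, disposal
SERVER_LIFETIME_YEARS = 6
AVG_POWER_KW = 0.42           # ~420 W average draw
GRID_KG_CO2E_PER_KWH = 0.4    # varies enormously by grid and by hour

operational = (AVG_POWER_KW * 24 * 365 * SERVER_LIFETIME_YEARS
               * GRID_KG_CO2E_PER_KWH)
lifecycle = EMBODIED_KG_CO2E + operational
print(f"embodied share of lifecycle emissions: {EMBODIED_KG_CO2E / lifecycle:.0%}")
# On a cleaner grid the operational term shrinks, and the embodied share dominates.
```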

Ensuring embodied carbon is measured consistently across vendors is a necessity for building industry-standard, defensible metrics.

Helping to build a better, greener, Internet

The cloud's carbon footprint has a meaningful impact on the Earth: by some accounts, ICT will represent 21% of global electricity demand by 2030. We're absolutely committed to keeping Cloudflare's footprint on the planet as small as possible. If you've made it this far, and you're interested in contributing to building the most global, efficient, and sustainable network on the Internet, the Hardware Systems Engineering team is hiring. Come join us.

More bots, more trees

Post Syndicated from Adam Martinetti original https://blog.cloudflare.com/more-bots-more-trees/

Once a year, we pull data from Bot Fight Mode to determine the number of trees we can donate to our partners at One Tree Planted. It's part of the commitment we made in 2019 to deter malicious bots online by redirecting them to a challenge page that requires them to perform computationally intensive but meaningless tasks. While we use these tasks to drive up the bill for bot operators, we account for the carbon cost by planting trees.
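The underlying accounting might look something like this toy model. Every constant below is an invented placeholder; this is a sketch of the idea, not our published methodology:

```python
# Toy model: convert bot CPU time burned on challenges into trees to plant.
# All constants are invented placeholders, not Cloudflare's real figures.

CHALLENGE_CPU_SECONDS = 3.0e9  # hypothetical total bot time burned in a year
WATTS_PER_BUSY_CORE = 15.0     # assumed average draw of one loaded core
KG_CO2E_PER_KWH = 0.4          # assumed grid carbon intensity
KG_CO2E_PER_TREE = 25.0        # assumed sequestration per planted tree

kwh = CHALLENGE_CPU_SECONDS / 3600 * WATTS_PER_BUSY_CORE / 1000
trees_to_plant = kwh * KG_CO2E_PER_KWH / KG_CO2E_PER_TREE
print(f"{kwh:,.0f} kWh burned -> plant at least {trees_to_plant:,.0f} trees")
```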

This year, when we pulled the numbers, we saw something exciting. While the number of bot detections has gone up significantly, the time bots spend on the Bot Fight Mode challenge page has gone way down. We've observed that bot operators are giving up quickly and moving on to other, unprotected targets. Bot Fight Mode is getting smarter at detecting bots and more efficient at deterring bot operators, and that's a win for Cloudflare and the environment.

What’s changed?

We’ve seen two changes this year in the Bot Fight Mode results. First, the time attackers spend in Bot Fight Mode challenges has reduced by 166%. Many bot operators are disconnecting almost immediately now from Cloudflare challenge pages. We expect this is because they’ve noticed the sharp cost increase associated with our CPU intensive challenge and given up. Even though we’re seeing individual bot operators give up quickly, Bot Fight Mode is busier than ever. We’re issuing six times more CPU intensive challenges per day compared to last year, thanks to a new detection system written using Cloudflare’s ruleset engine, detailed below.

How did we do this?

When Bot Fight Mode launched, we highlighted one of our core detection systems:

“Handwritten rules for simple bots that, however simple, get used day in, day out.”

Some of them are still very simple. We regularly introduce new simple rules when we detect new software libraries that begin to source a significant amount of traffic. However, we started to reach the limits of this system. We knew there were sophisticated bots out there that we could identify easily, but they shared enough overlapping traits with good browser traffic that we couldn't safely deploy new rules to block them without potentially impacting our customers' legitimate traffic as well.

To solve this problem, we built a new rules system on the same highly performant Ruleset Engine that powers the new WAF, Transform Rules, and Cache Rules, rather than on the old Gagarin heuristics engine, which was fast but inflexible. The new framework gives us the flexibility we need to write highly complex rules that catch more elusive bots without the risk of interfering with legitimate traffic. The data gathered by these new detections is then labeled and used to train our machine learning engine, ensuring we will continue to catch these bots as their operators attempt to adapt.
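Conceptually, that feedback loop looks something like the sketch below: rule-engine verdicts become training labels for a classifier that generalizes beyond the handwritten rules. The features, data, and model choice are all hypothetical; this is not Cloudflare's actual pipeline:

```python
# Conceptual sketch: rule verdicts as labels for a bot-detection classifier.
# Features, data, and model choice are hypothetical illustrations.
from sklearn.linear_model import LogisticRegression

# Imagined per-request features, each scaled to [0, 1]: header-order oddity,
# fingerprint rarity, and inter-request timing randomness.
features = [
    [0.9, 0.8, 0.1],  # matched a handwritten rule -> label 1 (bot)
    [0.1, 0.2, 0.9],  # passed all rules           -> label 0 (likely human)
    [0.8, 0.7, 0.2],  # matched a rule             -> 1
    [0.2, 0.1, 0.8],  # passed                     -> 0
]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit(features, labels)
new_request = [[0.85, 0.75, 0.15]]  # traffic no handwritten rule covers yet
print(model.predict_proba(new_request)[0][1])  # model's estimated P(bot)
```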

What’s next?

We’ve heard from Bot Fight Mode customers that they need more flexibility. Website operators now expect a significant percentage of their legitimate traffic to come from automated sources, like service to service APIs. These customers are waiting to enable Bot Fight Mode until they can tell us what parts of their website it can run on safely. In 2023, we will give everyone the ability to write their own flexible Bot Fight Mode rules, so that every Cloudflare customer can join the fight against bots!

Update: Mangroves, Climate Change & economic development

Source: One Tree Planted

We’re also pleased to report the second tree planting project from our 2021 bot activity is now complete! Earlier this year, Cloudflare contributed 25,000 trees to a restoration project at Victoria Park in Nova Scotia.

For our second project, we donated 10,000 trees to a much larger restoration project on the eastern shoreline of Kumirmari island, in the Sundarbans of West Bengal, India. In total, the project planted more than 415,000 trees across 7.74 hectares of land in areas that had been degraded or deforested. The species planted included Bain, Avicennia officinalis, Kalo Bain, and eight others.

The Sundarbans are located on the delta of the Ganges, Brahmaputra, and Meghna rivers on the Bay of Bengal, and are home to one of the world's largest mangrove forests. The forest is not only a UNESCO World Heritage site, but also home to 260 bird species as well as a number of threatened species like the Bengal tiger, the estuarine crocodile, and the Indian python. According to One Tree Planted, the Sundarbans are currently under threat from rising sea levels, increasing salinity in the water and soil, cyclonic storms, and flooding.

The Intergovernmental Panel on Climate Change (IPCC) has found that mangroves are critical to mitigating greenhouse gas (GHG) emissions and protecting coastal communities from extreme weather events caused by climate change. The Sundarbans mangrove forest is one of the world’s largest carbon sinks (an area that absorbs more carbon than it emits). One study suggested that coastal mangrove forests sequester carbon at a rate of two to four times that of a mature tropical or subtropical forest region.

One of the most exciting parts of this project was its focus on hiring and empowering local women. According to One Tree Planted, 75 percent of those involved in the project were women, including 85 women employed to monitor and manage the planting site over a five-month period. Participants also received training in the seed collection process with the goal of helping local residents lead mangrove planting from start to finish in the future.

More bots stopped, more trees planted!

Thanks to every Cloudflare customer who’s enabled Bot Fight Mode so far. You’ve helped make the Internet a better place by stopping malicious bots, and you’ve helped make the planet a better place by reforesting the Earth on bot operators’ dime. The more domains that use Bot Fight Mode, the more trees we can plant, so sign up for Cloudflare and activate Bot Fight Mode today!