Tag Archives: Impact Week

Cloudflare’s Athenian Project Expands Internationally

Post Syndicated from Jocelyn Woolbright original https://blog.cloudflare.com/cloudflares-athenian-project-expands-internationally/

Over the course of the past few years, we’ve seen a wide variety of online threats to democratically held elections around the world. These threats range from attempts to restrict the availability of information, to efforts to control the dialogue around elections, to full disruptions of the voting process.

Some countries have shut down the Internet completely during elections. In 2020, Access Now’s #KeepItOn Campaign reported at least 155 Internet shutdowns in 29 countries such as Togo, Republic of the Congo, Niger and Benin. In 2021, Uganda’s government ordered the “Suspension Of The Operation Of Internet Gateways” the day before the country’s general election.

Even outside a full Internet shutdown, election reporting and registration websites can face attacks from other nations and from parties seeking to disrupt the administration of the election or undermine trust in the electoral process. These cyberattacks target not only electronic voting or election technologies, but also access to information and communications tools such as voter registration systems and websites that host election results. In 2014, a series of cyberattacks including DDoS, malware and phishing attacks was launched against Ukraine’s Central Election Commission ahead of the presidential election. These sophisticated attacks attempted to infiltrate the internal voting system and spread malware to deliver fake election results. Similar attacks were seen again in 2019, when Ukraine accused Russia of launching a DDoS attack against the CEC a month before the presidential election. Attacks that target electoral management agencies’ communication tools and public-facing websites have been on the rise in countries including Indonesia, North Macedonia, Georgia, and Estonia.

Three and a half years ago, Cloudflare launched the Athenian Project to provide free Enterprise-level services to state and local election websites in the United States. Through this project we have protected more than 292 websites with information about voter registration, voting and polling places, as well as sites publishing final results, across 30 states, at no cost to the entities administering them. But election security is not a US-specific issue. Given the growing trend of cyberattacks targeting election infrastructure, many people have asked since we launched the Athenian Project in the United States: why don’t you extend these cybersecurity protections to election entities around the world?

Challenges, Solutions and Partnerships

The short answer is that we weren’t entirely sure whether Cloudflare, a US-based company, could provide a free set of upgraded security services to foreign election entities. Cloudflare is a global company, with 16 offices around the world and a network that spans more than 100 countries to provide security and performance tools. We are proud to create new and innovative products to enhance user privacy and security, but understanding the intricacies of local elections, the regulatory environment, and the political players involved is complicated, to say the least.

When we started the Athenian Project in 2017, we understood the environment and the gaps in coverage for state and local governments in the United States. The United States has a decentralized election administration system, which means that local election administrators may conduct elections differently in every state. Because of the funding challenges that come with a decentralized system, state and local governments in all 50 states could benefit from free Enterprise-level services. More than three years after launching the project, we have learned a great deal about the types of threats election entities face, the products they need to secure their web infrastructure, and how to build trust with state and local governments in need of these protections.

As the Athenian Project and Cloudflare for Campaigns grew in the United States, we received inquiries from foreign election bodies, political parties and campaigns on whether they were eligible for protection under one of these projects. We turned to our Project Galileo partners for their advice and guidance.

Under Project Galileo, we partner with more than 40 civil society organizations to protect a range of sensitive players on the Internet, including human rights organizations, journalism and independent media, and organizations that focus on strengthening democracy, in 111 countries. Many of these civil society partners work on election-related matters, from capacity building and strengthening democratic institutions to equipping civil society organizations with the tools they need to be safe and secure online. These partners, many of whom have representatives on the ground in these regions, understand the intricacies of the local election landscape and the delicate nature of building trust among election administrations, political parties and organizations, and can provide direct support and expertise when it comes to safeguarding elections.

After many discussions and years in the making, we are excited to announce our collaboration with the International Foundation for Electoral Systems, the National Democratic Institute, and the International Republican Institute to provide free Enterprise Cloudflare services to election management agencies and groups working on election reporting, along with the tools, resources and expertise they need to stay online in the face of large-scale cyberattacks.

Partnership with International Foundation for Electoral Systems

As we worked with civil society organizations on issues in the election space and considered extending protections outside the United States, we frequently heard them mention IFES, the International Foundation for Electoral Systems, for its expertise in promoting and protecting democracy. The International Foundation for Electoral Systems is a nonpartisan, nonprofit organization that has worked in more than 145 countries, from developing to mature democracies.

Founded in 1987, IFES’ work in promoting democracy and genuine elections has evolved to meet the challenges of today and tomorrow. IFES offers research, innovation and technical assistance to support democratic elections and human rights, combat corruption, promote equal political participation, and ensure that information and technology advance, not undermine, democracy and elections.

One of the many reasons we wanted to work with IFES on expanding our election offering was the organization’s unique position in terms of technical expertise, understanding of the political landscapes in which it operates, and fundamental knowledge of the types of protections election management bodies (EMBs) need in preparing and conducting elections. Building trust in the election space is critical when providing support to EMBs. Given IFES’ years of hard work assisting with the implementation of election operations, its direct assistance in support of democratic development, and the trust it has correspondingly built with EMBs, IFES was a logical partner.

IFES’ Center for Technology & Democracy, in collaboration with IFES program teams worldwide, provides cybersecurity and ICT assistance to EMBs and civil society organizations (CSOs). IFES incorporates leading cybersecurity and ICT practices and standards into its Holistic Exposure and Adaptation Testing (HEAT) methodology, with the aim of advancing EMBs’ and CSOs’ digital transformation while mitigating the associated risks.

“Cloudflare has played an integral role in helping EMBs and CSOs protect their websites, prevent website defacement, and ensure that they are accessible during peak traffic spikes. This has allowed EMBs and CSOs to build internal and external stakeholder confidence while gaining access and building local capacity on cutting-edge cybersecurity solutions and good practices.”
Stephen Boyce, Senior Global Election Technology & Cybersecurity Advisor at IFES.

As part of the partnership with IFES, Cloudflare provides its highest level of services to EMBs working with IFES and equips them with cybersecurity tools for their web infrastructure and internal teams to promote electoral integrity and stronger democracies. Along with these tools, Cloudflare will work closely with IFES on training and direct assistance to these election bodies, so they have the knowledge and expertise to conduct free, fair, and safe elections. In the past, Cloudflare has worked with IFES to provide services in support of elections in Georgia, and we look forward to extending these protections to other EMBs in the future.

Partnership with National Democratic Institute, the International Republican Institute and the Design 4 Democracy Coalition

The National Democratic Institute and the International Republican Institute are two of the many Project Galileo partners we have worked with to provide cybersecurity tools to organizations that work on building and strengthening democratic institutions and increasing civic participation around the world. As we worked together on Project Galileo, our conversations often focused on the best way to extend these types of security tools to groups in the election space.

Cloudflare is excited to announce that we are partnering with the National Democratic Institute (NDI), the International Republican Institute (IRI) and the Design 4 Democracy Coalition (D4D) to expand our election support efforts. Through this initiative, Cloudflare will provide free service to vulnerable groups working on elections, as identified by NDI and IRI. Combining our expertise in cybersecurity with their expertise in election administration will help all of us navigate this space. As part of protecting a new set of election groups, Cloudflare will work with NDI and IRI to understand the global threats faced by democratic election institutions.

“Elections are being undermined by a wide range of malign actors. Through our partnership with Cloudflare, IRI has been able to ensure that the civil society and independent media organizations we support globally are able to defend themselves against cyber attacks and massive increases in web traffic – keeping them safe and online at the most critical moments for democratic integrity. We are excited to be working with Cloudflare, NDI, and the D4D Coalition to expand those offerings to election management bodies, political parties, and political campaigns – a critical step toward ensuring that political competition is fought in the sphere of policy and governance delivery, and not through information and cyber warfare.”
Amy Studdart, Senior Advisor for Digital Democracy, Center for Global Impact at the International Republican Institute.

As part of our new initiative, when Cloudflare tests new products that would be particularly useful for election groups, we will work with NDI, IRI and D4D to encourage these groups to adopt the new services. That might include passing along information and documentation on how to deploy them, offering webinars, and providing other specialized support. Piloting new products with this audience will also give us the opportunity to learn about these groups’ needs and pain points.

“Safe, reliable access to the internet is fundamental to a free, open, and democratic electoral process in the modern era. Cloudflare’s sophisticated protections against various forms of cyberattack have provided invaluable support to at-risk campaigns and civic organizations through NDI and the D4D Coalition. This new initiative will go further to supporting one of the most fundamental of human rights: the vote.”
Chris Doten, Chief Innovation Officer at the National Democratic Institute.

Extending Protection to State Parties in the United States with Defending Digital Campaigns

We didn’t forget our friends in the United States. I am excited to announce that, through our partnership with Defending Digital Campaigns (DDC), we are extending our support to provide a suite of Cloudflare products to eligible state parties in the United States. In January 2020, we announced our partnership with Defending Digital Campaigns, a nonprofit, nonpartisan organization that provides access to cybersecurity products, services, and information to eligible federal campaigns.

We have reported on the regulatory challenges of providing free or discounted services to political campaigns in the past. Due to campaign finance regulations in the United States, private corporations are prohibited from contributing money or services to federal candidates or political party organizations. We partnered with DDC, which was granted special permission by the Federal Election Commission to provide eligible federal campaigns with free or reduced-cost cybersecurity services, due to the enhanced threat of foreign cyberattacks against party and candidate committees.

Since the start of our partnership, we have provided products to protect Presidential, Senate and House campaigns with tools like DDoS protection, web application firewall, SSL encryption, and bot protection. We have also offered campaigns cybersecurity tools to protect their internal networks, offering Cloudflare Access and Gateway to more than 75 campaigns in the 2020 U.S. election.

After the 2020 U.S. election, DDC extended their offering to protect state parties in select states.

“One of DDC’s core recommendations for any campaign or an organization like a State Party is protecting their websites from attacks or defacements,” said Michael Kaiser, President and CEO of Defending Digital Campaigns. “Our partnership with Cloudflare is critical to bringing this core protection to eligible entities and protecting our democracy.”

We are excited to be furthering our partnership with Defending Digital Campaigns to provide our free suite of services to eligible state parties so they can better secure themselves against cyberattacks.

For more information on eligibility for these services under DDC and the next steps, please visit cloudflare.com/campaigns/usa.

To the future…

Recognizing the global nature of cyberthreats targeting election-related technologies, we are excited to be working with these groups to help players in the election space stay secure online. In addition to the goals already laid out, Cloudflare intends to build on these partnerships in the future. Eventually, we hope to assist with each of these partners’ programs as mentors and trainers, perhaps directly participating in assessments and training around critical elections. These groups’ expertise makes them fantastic partners in this space, and we look forward to the opportunity to expand our work with their guidance.

Project Galileo and The Global Cyber Alliance Cybersecurity Toolkit for Journalists

Post Syndicated from Jocelyn Woolbright original https://blog.cloudflare.com/project-galileo-and-the-global-cyber-alliance-cybersecurity-toolkit-for-journalists/

Cloudflare started Project Galileo in 2014 to provide a set of free security products to a range of groups on the Internet that are targeted by cyberattacks because of their critical work. These groups include human rights defenders, independent media and journalists, and organizations that work to strengthen democracy. Seven years later, Project Galileo protects more than 1,500 organizations in 111 countries.

A majority of the organizations protected under Project Galileo work in independent media and journalism, and are targeted both physically and online as a result of reporting on critical events around the world. From July 2020 to March 2021, there were more than seven billion cyberattacks against Project Galileo journalism and media sites, an average of over 30 million attacks per day against this group. We reported many of these findings in the Radar dashboard we published for Project Galileo’s 7th anniversary.

Global Cyber Alliance

We have reported on the cyber threats to independent journalists and media organizations in the past, with the goal of creating best practices on how to protect these groups online. As we shared these insights, we started to collaborate with organizations that provide support and resources to improve journalists’ cybersecurity capabilities and respond to threats. One of these organizations that we were excited to engage with was the Global Cyber Alliance.

The Global Cyber Alliance (GCA) is an international, cross-sector nonprofit dedicated to confronting systemic cyber risks and improving our connected world. GCA develops free, easy-to-use tools for a range of stakeholders on the Internet, including small businesses, journalists and election officials around the world. Each toolkit is curated with tools and guidance on managing passwords, encrypting and backing up data, securing email and browsing, antivirus, DNS security, and more.

“As journalism increasingly, if not exclusively, relies on connected resources to investigate and report news, these capabilities offer tremendous benefit, particularly as newsrooms face budget constraints. At the same time, connected resources if not secured properly can unknowingly risk journalists, their sources, and the developments they cover,” said Megan Stifel, Global Policy Officer and Capacity & Resilience Program Director at the Global Cyber Alliance. “Resources such as Project Galileo play an important role in helping journalists protect themselves and their work, enabling them to report the news on their terms. GCA is pleased to add this resource to our free Cybersecurity Toolkit for Journalists, which is one of three toolkits available through our Capacity & Resilience Program.”

Project Galileo and the GCA Cybersecurity Toolkit for Journalists

Cloudflare is thrilled to have Project Galileo included in the GCA Cybersecurity Toolkit for Journalists, providing journalists with tools and resources to be safer online. The free tools in the toolkit include:

  • DNS Security with WARP: Journalists can install Cloudflare’s VPN app (WARP) on their devices, or point their router at Cloudflare’s malware-blocking DNS resolver, 1.1.1.2. With 1.1.1.2, known malware is automatically blocked before the browser has a chance to load it (see the query sketch after this list).
  • End-to-End Encryption with Cloudflare SSL: Trust is essential for journalists and their public-facing websites, as they are a source of truth for their audience. With Cloudflare SSL, they can ensure that information is private and secure for visitors who engage with these websites. SSL also stops certain kinds of cyberattacks because it authenticates web servers, which is important because attackers will often try to set up fake websites to trick users and steal data.
  • Cloudflare for Teams products Access & Gateway: To assist media organizations, the Cloudflare for Teams products Access and Gateway make remote work safer for teams around the world, protecting internal applications and filtering DNS so that journalists keep their sensitive information secure and do not fall victim to a cyberattack. Read more on how a local news outlet in New Jersey uses Gateway to filter and block malicious attacks and phishing attempts.
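
For the curious, here is a minimal sketch (ours, not part of the GCA toolkit) of sending queries to the 1.1.1.2 resolver with the dnspython library. The `lookup` helper is hypothetical, and the note in the comments about blocked domains answering with 0.0.0.0 is our assumption to verify against Cloudflare’s documentation.

```python
# A minimal sketch: querying Cloudflare's malware-blocking resolver
# directly with dnspython (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.2"]  # Cloudflare's malware-blocking resolver

def lookup(hostname: str) -> list[str]:
    """Resolve a hostname via 1.1.1.2 and return its A records.

    Assumption: domains the resolver classifies as malware are answered
    with 0.0.0.0, so the browser never reaches the malicious host.
    """
    try:
        answer = resolver.resolve(hostname, "A")
        return [record.to_text() for record in answer]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

print(lookup("example.com"))
```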

We are excited to be working with the Global Cyber Alliance and look forward to further collaboration on guidance, tools, and resources to improve security for individuals and organizations.

Cloudflare’s Human Rights Commitments

Post Syndicated from Alissa Starzak original https://blog.cloudflare.com/cloudflare-human-rights-commitments/

Last year, we announced our commitment to the UN Guiding Principles on Business and Human Rights, and our partnership with Global Network Initiative (GNI). As part of that announcement, Cloudflare committed to developing a human rights policy in order to ensure that the responsibility to respect human rights is embedded throughout our business functions. We spent much of the last year talking to those inside and outside the company about what a policy should look like, the company’s expectations for human rights-respecting behavior, and how to identify activities that might affect human rights.

Today, we are releasing our first human rights policy. The policy sets out our commitments and the way we implement them.

Why would Cloudflare develop a human rights policy?

Cloudflare’s mission — to help build a better Internet — reflects a long-standing belief that we can help make the Internet better for everyone. We believe that everyone should have access to an Internet that is faster, more reliable, more private, and more secure. To earn our customers’ trust, we also strive to live up to our core values of being principled, curious, and transparent. The actions that we have taken over the years reflect our mission and values.

From introducing Universal SSL so that every Cloudflare customer would be able to easily secure their sites, to developing protocols to encrypt DNS and SNI in order to protect the privacy of metadata, we’ve taken steps to make the Internet more private. We’ve sought to rid the world of the scourge of DDoS attacks with free, unmetered DDoS mitigation, and consistently strive to make beneficial new technologies available to more people, more quickly and less expensively. We’ve been transparent about our actions and our activities, publicly documenting the requests we get from governments, the difficult choices we face, and the mistakes we sometimes make. We’ve tried to think about the way products can be abused, and provide mechanisms for addressing those concerns. We’ve launched projects like Project Galileo, Cloudflare for Campaigns, and Project Fair Shot to make sure that vulnerable populations who need extra security or resources can get them for free.

Although being thoughtful about the ways the company’s actions affect people and the Internet at large is part of Cloudflare’s DNA, as we grow as a company it is critical to have frameworks that help us more thoroughly and systematically evaluate the risks posed by our activities to people and communities. The United Nations Guiding Principles on Business and Human Rights (UNGPs) were designed to provide businesses with exactly that type of guidance.

UN Guiding Principles on Business and Human Rights

The UNGPs, unanimously endorsed by the UN Human Rights Council in 2011, are based on a framework developed by Harvard Professor John Ruggie, distinguishing the state responsibility to protect human rights from the business responsibility to respect human rights. The responsibility to respect human rights means that businesses should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved. The UNGPs also expect companies to develop grievance mechanisms for individuals or communities adversely impacted by their activities.

So what are human rights? The idea, enshrined in the Universal Declaration of Human Rights that was adopted by the UN General Assembly in 1948, is that we all have certain rights, independent of any state, that are universal and inalienable. As described by the UN Human Rights Office of the High Commissioner, these rights “range from the most fundamental — the right to life — to those that make life worth living, such as the rights to food, education, work, health and liberty.” These interdependent rights must not be taken away except in specific and well-defined situations and according to due process.

Companies comply with their responsibility to respect human rights by stating their commitment to human rights, and by developing policies and processes to identify, prevent and mitigate the risk of causing or contributing to human rights harm. Consistent with the UNGPs, these policies typically require companies to conduct human rights due diligence to consider whether their business activities will cause or contribute to harm, to find ways to reduce the risk of any potential harms that are identified, and to remediate harms that have occurred. Companies are expected to prioritize addressing severe harms — meaning harms of significant scope or scale or harms that cannot be easily remedied — that are most at risk from the company’s activities.

Developing Cloudflare’s Human Rights Policy

To develop our human rights policy, we’ve had conversations both within the company, so that we could better understand the scope of Cloudflare activities that might affect human rights, and with human rights experts outside the company.

From an internal standpoint, we realized that, because of our company culture and values, we had been talking for years about the aspects of the company’s business that could have significant implications for people, although we rarely framed our discussions through a human rights lens. Our goal in developing a policy was therefore to build on the good work that had already been done, and fill in additional gaps as necessary.

On the external expert side, the last few years have brought increasing recognition of the challenges and importance of applying human rights frameworks to digital technologies. In 2017, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression prepared a report looking at the way certain actors in the technology sector, including content delivery networks, implicate freedom of expression. That report emphasized the importance of private actors as a “bulwark against government and private overreach” and specifically described content delivery networks as being “strategically positioned on the Internet infrastructure to counter malicious attacks that disrupt access.” The report provided recommendations on conducting due diligence, incorporating human rights safeguards like reducing the collection of information by design, engaging with stakeholders, and improving transparency, among other things.

Recognizing the significance of technology for human rights, the UN Office of the High Commissioner on Human Rights launched the B-Tech project in 2019 to develop practical guidance and recommendations on the UNGPs for companies operating in the tech sector. Cloudflare has benefited from participating in regular working groups with other companies in the ICT space through both the B-Tech project and through GNI on how to apply and advance the UN guiding principles, including sharing best practices and policies among similar companies. We also engage with our Project Galileo partners to discuss topical human rights issues, and how Cloudflare can apply its human rights policy to specific situations.

Cloudflare’s human rights policy is the first step in turning those discussions into something concrete. The policy formally states our commitment to the UNGPs and provides additional details on how we plan to implement our commitments. We will continue to refine this policy over time, and seek input on how to improve it.

What’s next?

Building a human rights program is a dynamic process, and we anticipate that our policies will continue to grow and change. We look forward to continuing to learn from experts, engage with Cloudflare’s stakeholders, and refine our assessment of our salient human rights issues. A better Internet is one built on respect for human rights.

Certifying our Commitment to Your Right to Information Privacy

Post Syndicated from Emily Hancock original https://blog.cloudflare.com/certifying-our-commitment-to-your-right-to-information-privacy/

Cloudflare recognizes privacy in personal data as a fundamental human right and has taken a number of steps, including certifying to international standards, to demonstrate our commitment to privacy.

Privacy has long been recognized as a fundamental human right. The United Nations included a right to privacy in its 1948 Universal Declaration of Human Rights (Article 12) and in the 1966 International Covenant on Civil and Political Rights (Article 17). A number of other jurisdiction-specific laws and treaties also recognize privacy as a fundamental right.

Cloudflare shares the belief that privacy is a fundamental right. We believe that our mission to help build a better Internet means building a privacy-respecting Internet, so people don’t feel they have to sacrifice their personal information — where they live, their ages and interests, their shopping habits, or their religious or political beliefs — in order to navigate the online world.

But talk is cheap. Anyone can say they value privacy. We show it. We demonstrate our commitment to privacy not only in the products and services we build and the way we run our privacy program, but also in the examinations we perform of our processes and products to ensure they work the way we say they do.

Certifying to International Privacy and Security Standards

Cloudflare has a multi-faceted privacy program that incorporates critical privacy principles such as being transparent about our privacy practices, practicing privacy by design when we build our products and services, using the minimum amount of personal data necessary for our services to work, and only processing personal data for the purposes specified. We were able to demonstrate our holistic approach to privacy when, earlier this year, Cloudflare became one of the first organizations in our industry to certify to a new international privacy standard for protecting and managing the processing of personal data — ISO/IEC 27701:2019.

This standard took the concepts in global data protection laws like the EU’s watershed General Data Protection Regulation (“GDPR”) and adapted them into an international standard for how to manage privacy. This certification provides assurance to our customers that a third party has independently verified that Cloudflare’s privacy program meets GDPR-aligned industry standards. Having this certification helps our customers have confidence in the way we handle and protect our customer information, as both processor and controller of personal information.

The standard contains 31 controls identified for organizations that are personal data controllers, and 18 additional controls identified for organizations that are personal data processors.[1] The controls are essentially a set of best practices that data controllers and processors must meet in terms of data handling practices and transparency about those practices, documenting a legal basis for processing and for transfer of data to third countries (outside the EU), and handling data subject rights, among others.

For example, the standard requires that an organization maintain policies and document specific procedures related to the international transfer of personal data.

Cloudflare has implemented this requirement by maintaining an internal policy restricting the transfer of personal data between jurisdictions unless that transfer meets defined criteria. Customers, whether free or paid, enter into a standard Data Processing Addendum with Cloudflare which is available on the Cloudflare Customer Dashboard and which sets out the restrictions we must adhere to when processing personal data on behalf of customers, including when transferring personal data between jurisdictions. Additionally, Cloudflare publishes a list of sub-processors that we may use when processing personal data, and in which countries or jurisdictions that processing may take place.

The standard also requires organizations to maintain documented personal data minimization objectives, including the mechanisms used to meet those objectives.

Personal data minimization objective

Cloudflare maintains internal policies on how we manage data throughout its full lifecycle, including data minimization objectives. In fact, our commitment to privacy starts with the objective of minimizing personal data. That’s why, if we don’t have to collect certain personal data in order to deliver our service to customers, we’d prefer not to collect it in the first place. Where we do have to, we collect the minimum amount necessary to achieve the identified purpose, process it for the minimum amount of time necessary, and transparently document the processing in our public privacy policy.

We’re also proud to have developed a Privacy by Design policy, which rigorously sets out the high standards and evaluations that must be met if products and services are to collect and process personal data. We use these mechanisms to ensure our collection and use of personal data is limited and transparently documented.

Demonstrating our adherence to laws and policies designed to protect the privacy of personal information is only one way we show how we value people’s right to privacy. Another critical element of our privacy approach is the high level of security we apply to the data on our systems in order to keep that data private. We’ve demonstrated our commitment to data security through a number of certifications:

  • ISO 27001:2013: This is an industry-wide accepted information security certification that focuses on the implementation of an Information Security Management System (ISMS) and security risk management processes. Cloudflare has been ISO 27001 certified since 2019.
  • SOC 2 Type II: Cloudflare has undergone an AICPA SOC 2 Type II examination to attest that Security, Confidentiality, and Availability controls are in place in accordance with the AICPA Trust Services Criteria. Cloudflare’s SOC 2 Type II report covers security, confidentiality, and availability controls to protect customer data.
  • PCI DSS 3.2.1: Cloudflare maintains PCI DSS Level 1 compliance and has been PCI compliant since 2014. Cloudflare’s Web Application Firewall (WAF), Cloudflare Access, Content Delivery Network (CDN), and Time Service are PCI compliant solutions. Cloudflare is audited annually by a third-party Qualified Security Assessor (QSA).
  • BSI Qualification: Cloudflare has been recognized by the German government’s Federal Office for Information Security as a qualified provider of DDoS mitigation services.

More information about these certifications is available on our Certifications and compliance resources page.

In addition, we are continuing to look for other opportunities to demonstrate our compliance with data privacy best practices. For example, we are following the European Union’s approval of the first official GDPR codes of conduct in May 2021, and we are considering other privacy standards, such as the ISO 27018 cloud privacy certification.

Building Tools to Deliver Privacy

We think one of the most impactful ways we can respect people’s privacy is by not collecting or processing unnecessary personal data in the first place. We not only build our own network with this principle in mind, but we also believe in empowering individuals and entities of all sizes with technological tools to easily build privacy-respecting applications and minimize the amount of personal information transiting the Internet.

One such tool is our 1.1.1.1 public DNS resolver — the Internet’s fastest, privacy-first public DNS resolver. When we launched our 1.1.1.1 resolver, we committed that we would not retain any personal data about requests made using our 1.1.1.1 resolver. And because we baked anonymization best practices into the 1.1.1.1 resolver when we built it, we were able to demonstrate that we didn’t have any personal data to sell when we asked independent accountants to conduct a privacy examination of the 1.1.1.1 resolver. While we haven’t made changes to how the product works since then, if we ever do so in the future, we’ll go back and commission another examination to demonstrate that when someone uses our public resolver, we can’t tell who is visiting any given website.

In addition to our 1.1.1.1 resolver, we’ve built a number of other privacy-enhancing technologies, such as:

  • Cloudflare’s Web Analytics, which does not use any client-side state, such as cookies or localStorage, to collect usage metrics, and never ‘fingerprints’ individual users.
  • Supporting Oblivious DoH (ODoH), a proposed DNS standard — co-authored by engineers from Cloudflare, Apple, and Fastly — that separates IP addresses from DNS queries, so that no single entity can see both at the same time. In other words, ODoH means, for example, that no single entity can see that IP address 198.51.100.28 sent an access request to the website example.com.
  • Universal SSL, which we made available to all of our customers, paying and free. Supporting SSL (now known as Transport Layer Security, or TLS) means that we support encrypting the content of web pages, which had previously been sent as plain text over the Internet. It’s like sending your private, personal information in a locked box instead of on a postcard (a short sketch of TLS server authentication follows this list).
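
To make the server-authentication point concrete, here is a minimal sketch using only Python’s standard library; example.com is a placeholder hostname, and this is our illustration rather than anything Cloudflare ships.

```python
# A minimal sketch of what "SSL authenticates web servers" means in
# practice: open a TLS connection with certificate verification enabled.
import socket
import ssl

context = ssl.create_default_context()  # verifies certificates by default

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake fails if the certificate is invalid or does not
        # match the hostname, which is what foils look-alike fake sites.
        print(tls.version())                 # e.g. "TLSv1.3"
        print(tls.getpeercert()["subject"])  # who the server proved to be
```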

Building Trust

Cloudflare’s subscription-based business model has always been about offering an incredible suite of products that help make the Internet faster, more efficient, more secure, and more private for our users. Our business model has never been about selling users’ data or tracking individuals as they go about their digital lives. We don’t think people should have to trade their private information just to get access to Internet applications. We work every day to earn and maintain our users’ trust by respecting their right to privacy in their personal data as it transits our network, and by being transparent about how we handle and secure that data. You can find out more about the policies, privacy-enhancing technologies, and certifications that help us earn that trust by visiting the Cloudflare Trust Hub at www.cloudflare.com/trust-hub.


[1] The GDPR defines a “data controller” as the “natural or legal person (…) or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data”; and a “data processor” as “a natural or legal person (…) which processes personal data on behalf of the controller.”

Cloudflare and COVID-19: Project Fair Shot Update

Post Syndicated from Brian Batraski original https://blog.cloudflare.com/cloudflare-and-covid-19-project-fair-shot-update/

In February 2021, Cloudflare launched Project Fair Shot — a program that provides our Waiting Room product free of charge to any government, municipality, private or public business, or anyone else responsible for scheduling or disseminating the COVID-19 vaccine.

Putting our Waiting Room technology in front of a vaccine scheduling application ensured that:

  • Applications would remain available, reliable, and resilient against massive spikes of traffic for users attempting to get their vaccine appointment scheduled.
  • Visitors could wait for their long-awaited vaccine with confidence, arriving at a branded queuing page that provided accurate estimated wait times.
  • Vaccines would get distributed equitably, and not just to folks with faster reflexes or Internet connections (a toy sketch of this queuing idea follows this list).
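
As a toy sketch of that queuing idea (emphatically not Cloudflare’s actual Waiting Room implementation; the admission rate and helper functions below are hypothetical):

```python
# A toy fairness model: admit visitors strictly in arrival order and
# quote an estimated wait from queue position and admission rate.
from collections import deque

queue: deque[str] = deque()
ADMIT_PER_MINUTE = 500  # hypothetical capacity of the scheduling app

def join(visitor_id: str) -> float:
    """Add a visitor to the queue; return estimated wait in minutes."""
    queue.append(visitor_id)
    return len(queue) / ADMIT_PER_MINUTE

def admit_batch() -> list[str]:
    """Run once a minute: let the next group through, first come first
    served, regardless of connection speed or reflexes."""
    return [queue.popleft() for _ in range(min(ADMIT_PER_MINUTE, len(queue)))]

print(f"estimated wait: {join('visitor-1'):.3f} minutes")
```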

Since February, we’ve seen a good number of participants in Project Fair Shot. To date, we have helped more than 100 customers across more than 10 countries to schedule approximately 100 million vaccinations. Even better, these vaccinations went smoothly, with customers like the County of San Luis Obispo regularly dealing with more than 20,000 appointments in a day. “The bottom line is Cloudflare saved lives today. Our County will forever be grateful for your participation in getting the vaccine to those that need it most in an elegant, efficient and ethical manner” — Web Services Administrator for the County of San Luis Obispo.

We are happy to have helped not just in the US, but worldwide as well. In Canada, we partnered with a number of organizations and the Canadian government to increase access to the vaccine. One partner stated: “Our relationship with Cloudflare went from ‘Let’s try Waiting Room’ to ‘Unless you have this, we’re not going live with that public-facing site.’” — CEO of Verto Health. In another European country, we saw over three million people go through the Waiting Room in less than 24 hours, leading to a significantly smoother and less stressful experience. Cities in Japan, working closely with our partner Classmethod, have been able to vaccinate over 40 million people and are on track to complete their vaccination process across 317 cities. If you want more stories from Project Fair Shot, check out our case studies.

A European customer seeing very high amounts of traffic during a vaccination event

We are continuing to add more customers to Project Fair Shot every day to ensure we are doing all that we can to help distribute more vaccines. With the emergence of the Delta variant and others, distributing vaccines (and soon, booster shots) remains a very real challenge in keeping everyone healthy and resilient. Because of these developments, Cloudflare will be extending Project Fair Shot until at least July 1, 2022. Though we are not excited to see the pandemic continue, we are humbled to provide our services and to play a critical part in helping us collectively move toward a better tomorrow.

Working with those who protect human rights around the world

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/working-with-those-who-protect-human-rights-around-the-world/

Over the past few years, we’ve seen an increasing use of Internet shutdowns and cyberattacks that restrict the availability of information in communities around the world. In 2020, Access Now’s #KeepItOn coalition documented at least 155 Internet shutdowns in 29 countries. During the same period, Cloudflare witnessed a five-fold increase in cyberattacks against the human rights, journalism, and non-profit websites that benefit from the protection of Project Galileo.

These disruptive measures, which put up barriers to those looking to use the Internet to express themselves, earn a livelihood, gather and disseminate information, and participate in public life, affect the lives of millions of people around the world.

As described by the UN Human Rights Council (UNHRC), the Internet is not only a key means by which individuals exercise their rights to freedom of opinion and expression, it “facilitates the realization of a range of other human rights” including “economic, social and cultural rights, such as the right to education and the right to take part in cultural life and to enjoy the benefits of scientific progress and its applications, as well as civil and political rights, such as the rights to freedom of association and assembly.” The effects of Internet disruptions are particularly profound during elections, as they disrupt the dissemination and sharing of information about electoral contests and undermine the integrity of the democratic process.

At Cloudflare, we’ve spent time talking to human rights defenders who push back on governments that shut down the Internet to stifle dissent, and on those who help encourage fair, democratic elections around the world. Although we’ve long protected those defenders from cyberattacks with programs like Project Galileo, we thought we could do more. That is why today, we are announcing new programs to help our civil society partners track and document Internet shutdowns and protect democratic elections around the world from cyberattacks.

Radar Alerts

Internet shutdowns intended to prevent or disrupt access to or dissemination of information online are widely condemned, and have been described as “measures that can never be justified under human rights law.” Nonetheless, the UN Special Rapporteur on the rights to freedom of peaceful assembly and of association recently reported that Internet shutdowns have increased in length, scale, and sophistication, and have become increasingly challenging to detect. From January 2019 through May 2021, the #KeepItOn coalition documented at least 79 incidents of protest-related shutdowns, including in the context of elections.

Cloudflare runs one of the world’s largest networks, with data centers in more than 100 countries and one billion unique IP addresses connecting to Cloudflare’s network. That global network gives us exceptional visibility into Internet traffic patterns, including the variations in traffic that signal network anomalies. To help provide insight into these Internet trends, Cloudflare launched Radar in 2020, a platform that helps anyone see how the Internet is being used around the globe. On Radar, one can visually identify significant drops in traffic, typically associated with an Internet shutdown, but these trend graphs are most helpful when one is already looking for something specific.

Radar chart for Internet Traffic in Uganda, showing a significant change for January 13-15

Internally, Cloudflare has had an alert system for potential Internet disruptions that we use as an early warning of shifts in network patterns and incidents. This internal system allows us to see disruptions in real time, and after many conversations with civil society groups that track and report these shutdowns, such as The Carter Center, the International Foundation for Electoral Systems, the Internet Society, Internews, the National Democratic Institute and Access Now, it was clear that they would benefit from such a system, fine-tuned to report drops in Internet traffic quickly and reliably. We therefore built an additional validation layer and a notification system that sends notifications through various channels, including email and social media.
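
To give a flavor of what such drop detection can look like at its very simplest, here is a toy sketch; the window size and threshold are hypothetical, and this is not Cloudflare’s internal pipeline.

```python
# A toy illustration of flagging a significant traffic drop: compare the
# newest sample against the median of a trailing window and alert when it
# falls below a threshold.
from statistics import median

def detect_drop(samples: list[float], window: int = 24,
                threshold: float = 0.5) -> bool:
    """True if the newest sample is below `threshold` times the median
    of the preceding `window` samples (hypothetical parameters)."""
    if len(samples) <= window:
        return False  # not enough history yet
    baseline = median(samples[-window - 1:-1])
    return samples[-1] < threshold * baseline

# Hourly request counts for a country; the final value is a sharp drop.
traffic = [980.0, 1010.0, 995.0, 1023.0, 1002.0, 987.0, 999.0, 1015.0] * 3 + [180.0]
if detect_drop(traffic):
    print("possible Internet disruption: hand off to the validation layer")
```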

“In the fight to end internet shutdowns, our community needs accurate reports on internet disruptions at a global scale. When leading companies like Cloudflare share their data and insights, we can make more timely interventions. Together with civil society, Cloudflare will help #KeepItOn.”
Peter Micek, General Counsel, Access Now

“Internet shutdowns undermine election integrity by restricting the right of access to information and freedom of expression. When shutdowns are enacted, reports of their occurrence are often anecdotal, piecemeal, and difficult to substantiate. Radar Alerts provide The Carter Center with real-time information about the occurrence, breadth, and impact of shutdowns on an election process. This information enables The Carter Center to issue evidence-backed statements to substantiate harms to election integrity and demand the restoration of fundamental human rights.”
Michael Baldassaro, Senior Advisor, Digital Threats to Democracy at The Carter Center.

“Internet censorship, throttling and shutdowns are threats to an open Internet and to the ability of people to access and produce trustworthy information. Internews is excited to see Cloudflare share its data to help raise the visibility of shutdowns around the world.”
Jon Camfield, Director of Global Technological Strategy, Internews

Now, as we detect these drops in traffic, we may still not have the expertise, backstory or sense of what is happening on the ground when this occurs — at least not in as much detail as our partners. We are excited to be working with these organizations to provide alerts on when Cloudflare has detected significant drops in traffic with the hope that the information is used to document, track and hold institutions accountable for these human rights violations.

If you are an organization that tracks and reports on Internet shutdowns and would like to join the private beta, please contact [email protected] and follow the Cloudflare Radar alert Twitter page.

Crawler Hints: How Cloudflare Is Reducing The Environmental Impact Of Web Searches

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/

Cloudflare is known for innovation, for needle-moving projects that help make the Internet better. For Impact Week, we wanted to take this approach to innovation and apply it to the environmental impact of the Internet. When it comes to tech and the environment, it’s often assumed that the only avenue tech has open to it is harm mitigation: for example, climate credits, carbon offsets, and the like. These are undoubtedly important steps, but we wanted to take it further — to get into harm reduction. So we asked — how can the Internet at large use less energy and be more thoughtful about how we expend computing resources in the first place?

Cloudflare has a global view into the traffic of the Internet. More than 1 in 6 websites use our network, and we observe the traffic flowing to and from them continuously. While most people think of surfing the Internet as a very human activity, nearly half of all traffic on the global network is generated by automated systems.

We’ve analyzed this automated traffic, from so-called “bots,” in order to understand the environmental impact. Most of the bot traffic is malicious. Cloudflare protects our clients from this malicious traffic and, in doing so, mitigates their environmental impact. If these bots were not stopped by Cloudflare, they would generate database requests and force dynamic page generation on services far less efficient than Cloudflare’s network.

We even went a step further, committing to plant trees to offset the carbon cost of our bot mitigation services. While we’d love to be able to tell the bad actors to think of the environment and stop running their bots, we don’t think they’d listen. So, instead, we aim to mitigate them as efficiently as possible.

But there’s another type of bot that we don’t want to go away: good bots that index the web for useful reasons. These good bots represent more than 5% of global Internet traffic. The majority of this good bot traffic comes from what are known as search engine crawlers, and they are critical to making the web navigable.

Large-Scale Problems, Large-Scale Opportunities

Online search remains magical. Enter a query into a box on a search engine like Google, Bing, Yandex, or Baidu and, in a fraction of a second, get a list of web resources with information on whatever you’re looking for. To make this magic happen, search engines need to scour the web and, simplistically, make a copy of its contents that are stored and sorted on their own systems to be quickly retrieved whenever needed.
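
As a toy illustration of that “copy, store and sort” step, the sketch below builds a minimal inverted index; the URLs and page text are made up.

```python
# A toy inverted index: map each word to the pages containing it, so a
# query can be answered quickly no matter how many pages were crawled.
crawled_pages = {
    "https://example.com/a": "impact week cloudflare radar",
    "https://example.com/b": "crawler hints reduce impact",
}

index: dict[str, set[str]] = {}
for url, text in crawled_pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

print(index["impact"])  # both pages, retrieved without rescanning the web
```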

Companies that run search engines have worked hard to make the process as efficient as possible, pushing the state-of-the-art in terms of server and data center efficiency. But there remains one clear area of waste: excessive crawl.

At Cloudflare, we see traffic from all the major search crawlers. We’ve spent the last year studying how often these good bots revisit a page that hasn’t changed since they last saw it. Every one of these visits is a waste. And, unfortunately, our observation suggests that 53% of this good bot traffic is wasted.

The Boston Consulting Group estimates that running the Internet generates 2% of all carbon output, or about 1 billion metric tonnes per year. If 5% of all Internet traffic comes from good bots, and 53% of that traffic is wasted on excessive crawl, then eliminating that waste could save as much as 26 million tonnes of carbon cost per year (1 billion × 5% × 53% ≈ 26.5 million). According to the U.S. Environmental Protection Agency, that’s the equivalent of planting 31 million acres of forest, shutting down 6 coal-fired power plants forever, or taking 5.5 million passenger vehicles off the road.

Obviously, it’s not quite that simple. But suffice it to say there’s a big opportunity to make a meaningful impact on the environmental cost of the Internet if we can ensure that search engines crawl a page only when it has actually changed.

Recognizing this problem, we’ve been talking with the largest operators of good bots for the last several years to see if, together, we could address the issue.

Crawler Hints

Today, we’re excited to announce Crawler Hints. Crawler Hints provide high quality data to search engine crawlers on when content has been changed on sites using Cloudflare, allowing them to precisely time their crawling, avoid wasteful crawls, and generally reduce resource consumption of customer origins, crawler infrastructure, and Cloudflare infrastructure in the process. The cherry on top: because search engine crawlers now receive signals on when content is fresh, the search experiences powered by these “good bots” will improve, delighting Internet users at large with more relevant and useful content. Crawler Hints is a win for the Internet and a win for the Internet’s energy footprint.

With Crawler Hints, we expect to make crawling a bit more tractable by providing an additional heuristic to bot developers that will allow them to know when content has been changed or added to a site instead of relying on preferences or previous changes that might not reflect the true change cadence for a site.

How will this work?

At its simplest, we want a way to proactively tell a search engine when a page has changed, rather than having the search engine wait to discover that a change has happened. Search engines already typically offer a few ways to be told when an individual page or group of pages changes.

For example, you can ask Google to recrawl a website, and they’ll do so in “a few days to a few weeks”.

If you wanted to efficiently tell Google about changes you’d have to keep track of when Google last crawled the page and tell them to recrawl when a change happens. You wouldn’t want to tell Google every time a page changes as there’s a time delay between requesting a recrawl and the spider coming to visit. You could be telling Google to come back during the gap between the request and the spider coming to call.

And there isn’t just one search engine; new search crawlers get created all the time. Trying to efficiently keep every search engine up to date as your site changes would be messy and very difficult. This is, in part, because this model does not contain explicit information about when something changed.

This model just doesn’t work well. And that’s partly why search engine crawlers inevitably waste energy recrawling sites over and over again regardless of whether there is something new to find.

However, there is an existing mechanism used by search engines to discover the structure of websites that’s perfect: the sitemap. The sitemap is a well-defined, open protocol for telling a crawler about the pages on a site, when they last changed and how often they are likely to change.

Sitemaps have some limitations (on the number of URLs and bytes), though they do include a mechanism for large sites with millions of URLs. But building sitemaps can be complex and require special software. Getting a consistent, up-to-date sitemap for a website (especially one that uses different technologies) can be very hard.
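
For readers unfamiliar with the format, here is a minimal sketch of generating a sitemap programmatically; the URLs and dates are hypothetical, and this is not how Cloudflare builds them.

```python
# A minimal sketch of emitting a sitemap with accurate lastmod values,
# assuming we already track when each page last changed.
from xml.sax.saxutils import escape

def build_sitemap(pages: dict[str, str]) -> str:
    """`pages` maps URL -> ISO-8601 date of last modification."""
    entries = "".join(
        f"  <url><loc>{escape(url)}</loc>"
        f"<lastmod>{lastmod}</lastmod></url>\n"
        for url, lastmod in sorted(pages.items())
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}"
        "</urlset>\n"
    )

print(build_sitemap({
    "https://example.com/": "2021-07-26",
    "https://example.com/blog/post-1": "2021-07-20",
}))
```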

That’s where Cloudflare comes in. We see what pages our customers are serving, we know which ones have changed (either by hash value or timestamp) and so can automatically build a complete record of when and which pages have changed.

And we can keep track of when a search crawler visited a particular page and serve up exactly what has changed since its last visit. Since we can track this on a per-search-engine basis, it can be very efficient: each search engine gets its own automagically updated list of URLs, or a sitemap of just what’s changed since its last visit. (A toy model of this bookkeeping follows below.)
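
That toy model might look like the sketch below; the data structures and names are our own invention for illustration, not Cloudflare’s implementation.

```python
# A toy model of per-crawler change tracking: remember a content hash per
# URL and, for each crawler, the hashes it saw on its last visit; then
# serve only the URLs that differ.
import hashlib

page_hashes: dict[str, str] = {}                 # URL -> hash of current body
seen_by_crawler: dict[str, dict[str, str]] = {}  # crawler -> {URL -> hash}

def record_response(url: str, body: bytes) -> None:
    """Called whenever a page is served, so the record stays current."""
    page_hashes[url] = hashlib.sha256(body).hexdigest()

def changed_urls(crawler: str) -> list[str]:
    """URLs that changed (or are new) since this crawler's last visit."""
    last_seen = seen_by_crawler.get(crawler, {})
    fresh = [u for u, h in page_hashes.items() if last_seen.get(u) != h]
    seen_by_crawler[crawler] = dict(page_hashes)  # remember this visit
    return fresh

record_response("https://example.com/", b"hello")
print(changed_urls("examplebot"))  # ['https://example.com/']
print(changed_urls("examplebot"))  # [] -- nothing changed since last visit
```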

And it adds absolutely no load to the origin website. Cloudflare can tell a search engine in almost real-time about a page’s modifications and provide a view of what changed since their last visit.
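This is not Cloudflare's production implementation, but a toy sketch of the bookkeeping described above: hash each response body, record when a URL's hash changes, and hand each crawler only the URLs that changed since its last visit.

```typescript
// Toy change tracker illustrating the idea, not Cloudflare's implementation.
class ChangeTracker {
  private hashes = new Map<string, string>();    // url -> last seen content hash
  private changedAt = new Map<string, number>(); // url -> timestamp of last change
  private lastVisit = new Map<string, number>(); // crawler -> timestamp of last pull

  // Record a response served for a URL; note a change if the body's hash differs.
  recordResponse(url: string, body: string): void {
    const h = fnv1a(body);
    if (this.hashes.get(url) !== h) {
      this.hashes.set(url, h);
      this.changedAt.set(url, Date.now());
    }
  }

  // Give a crawler just the URLs that changed since its last visit.
  changedSince(crawler: string): string[] {
    const since = this.lastVisit.get(crawler) ?? 0;
    this.lastVisit.set(crawler, Date.now());
    return [...this.changedAt.entries()]
      .filter(([, t]) => t > since)
      .map(([url]) => url);
  }
}

// Small non-cryptographic hash (FNV-1a), enough for a sketch.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return (h >>> 0).toString(16);
}
```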

The sitemaps protocol also contains a priority for a page. Since we know how often a page is visited we can also hint to a search engine that a page is seen frequently by visitors and thus may be more important to add to the index than another page.

There are a few details to work out, such as how a search engine should identify itself to get its personalized list of URLs, but the protocol is open and in no way depends on Cloudflare. In fact, we hope that every host and Cloudflare-like service will consider implementing the protocol. We plan to continue to work with the search and hosting communities to refine the protocol in order to make it more efficient. Our goal is to ensure that search engines can have the freshest index, content creators will have their new content optimally indexed, and a big chunk of unnecessary Internet traffic, and the corresponding carbon cost, will disappear.

Conclusion

Crawler Hints doesn’t just benefit search engines. For our customers and origin owners, Crawler Hints will ensure that search engines and other bot-powered experiences always have the freshest version of your content, translating into happier users and ultimately influencing search rankings. Crawler Hints will also mean less traffic hitting your origin, reducing resource consumption and limiting carbon impact. Moreover, your site performance will improve as well: your human visitors will not be competing with bots!

And for Internet users? When you interact with bot-fed experiences — which we all do every day, whether we realize it or not, like search engines or pricing tools — these will now deliver more useful results from crawled data, because Cloudflare has signaled to the owners of the bots the moment they need to update their results.

Finally, and perhaps the one we’re most excited about, for the Internet more generally: it’s going to be greener. Energy usage across the web will be greatly reduced.

Win win win. These are the kinds of outcomes that bring us to work every day, and exactly what we mean when we talk about helping to build a better Internet.

This is an exciting problem to solve, and we look forward to working with others that want to help the Internet be more efficient and performant while reducing needless energy consumption. We plan on having more news to share on this front soon. If you operate a bot that relies on content freshness and are interested in working with us on this project, please email [email protected].

“Yandex prioritizes long-term sustainability over short-lived success, and joins the global community in its pursuit of climate change mitigation. As part of its commitment to quality service and user experience, Yandex focuses on ensuring the relevance and usability of search results. We believe that Cloudflare’s solution will strengthen search performance by improving the accuracy of returned results, and we look forward to partnering with Cloudflare on boosting the efficiency of valuable bots across the Internet.”

“DuckDuckGo is supportive of anything that makes search more environmentally friendly and better for end users without harming privacy. We’re looking forward to working with Cloudflare on this proposal.”
Gabriel Weinberg, CEO and Founder, DuckDuckGo.

“Nearly a year ago, the Internet Archive’s Wayback Machine partnered with Cloudflare to help power their ‘Always Online’ service and, in turn, to have the Internet Archive learn about high-quality Web URLs to archive. That win-win partnership has been a huge success for the Wayback Machine and, in turn, our partners, as it has helped ensure we better fulfill our mission to help make the Web more useful and reliable by backing up, and making available for future generations, much of the public Web. Building on that ongoing relationship with Cloudflare, the Internet Archive is thrilled to start using this new ‘Crawler Hints’ service. With it, we expect to be able to do more with less. To be able to focus our server and bandwidth resources on more of the Web pages that have changed, and less on those that have not. We expect this will have a material impact on our work. The fact the service also promises to reduce the carbon impact of the Web overall makes it especially worthwhile and, as such, we are proud to be part of the effort.”
Mark Graham, Director, the Wayback Machine at the Internet Archive

Introducing Smart Edge Revalidation

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/introducing-smart-edge-revalidation/

Introducing Smart Edge Revalidation

Today we’re excited to announce Smart Edge Revalidation, designed to ensure that content is revalidated efficiently between our edge and a browser. Right now, as many as 30% of objects cached on Cloudflare’s edge lack the HTTP response headers required for revalidation, which can result in unnecessary origin calls. Smart Edge Revalidation fixes this: it does the work to ensure that these headers are present, even when an origin doesn’t send them to us. The advantage? Less wasted bandwidth and compute for objects that do not need to be redownloaded, and faster browser page loads for users.

So What Is Revalidation?

Revalidation is one part of a longer story about efficiently serving objects that live on an origin server from an intermediary cache. Visitors to a website want it to be fast. One foundational way to make sure that a website is fast for visitors is to serve objects from cache. In this way, requests and responses do not need to transit unnecessary parts of the Internet back to an origin and, instead, can be served from a data center that is closer to the visitor. As such, website operators generally only want to serve content from an origin when content has changed. So how do objects stay in cache for as long as necessary?

One way to do that is with HTTP response headers.

When Cloudflare gets a response from an origin, included in that response are a number of headers. You can see these headers by opening any webpage, inspecting the page, going to the network tab, and clicking any file. In the response headers section there will generally be a header known as “Cache-Control.” This header is a way for origins to answer caching intermediaries’ questions like: is this object eligible for cache? How long should this object be in cache? And what should the caching intermediary do after that time expires?

How long something should be in cache can be specified through the max-age or s-maxage directives. These directives specify a TTL, or time-to-live, for the object in seconds. Once the object has been in cache for the requisite TTL, the clock hits zero and the object is marked as expired. A cache can no longer safely serve expired content without first figuring out whether the object has changed on the origin.
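For example, a response carrying `Cache-Control: public, max-age=3600` tells caches that the object may be stored and served for up to an hour; after that, it is expired and must be revalidated before it can be served again.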

If it has changed, it must be redownloaded from the origin. If it hasn’t changed, then it can be marked as fresh and continue to be served. This check, again, is known as revalidation.

We’re excited that Smart Edge Revalidation extends the efficiency of revalidation to everyone, regardless of whether an origin sends the necessary response headers.

How is Revalidation Accomplished?

Two additional headers, Last-Modified and ETag, are set by an origin in order to distinguish different versions of the same URL/object across modifications. After the object expires and the revalidation check occurs, if the ETag value hasn’t changed or a more recent Last-Modified timestamp isn’t present, the object is marked “revalidated” and the expired object can continue to be served from cache. If there has been a change as indicated by the ETag value or Last-Modified timestamp, then the new object is downloaded and the old object is removed from cache.

Revalidation checks occur when a browser sends a request to a cache server using the If-Modified-Since or If-None-Match headers. These request headers are the browser cache’s way of asking when an object last changed, and they are answered against the Last-Modified and ETag values held by the cache server. For example, if the browser sends a request to a cache server with If-Modified-Since: Mon, 08 Nov 2021 07:28:00 GMT, the cache server must look at the object being asked about; if it has not changed since November 8 at 7:28 AM, it will respond with a 304 status code indicating it’s unchanged. If the object has changed, the cache server will respond with the new object.
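As a rough sketch of that exchange (the types and function here are hypothetical, not any particular cache’s code), the decision on the cache server side looks something like this:

```typescript
// Hypothetical cached object and the conditional-request check a cache performs.
interface CachedObject {
  body: string;
  etag?: string;        // opaque version identifier set by the origin
  lastModified?: Date;  // when the origin says the object last changed
}

function revalidate(
  obj: CachedObject,
  ifNoneMatch?: string,
  ifModifiedSince?: Date
): { status: 200 | 304; body?: string } {
  // If the validators still match, answer 304 and send no body.
  if (ifNoneMatch && obj.etag === ifNoneMatch) {
    return { status: 304 };
  }
  if (
    ifModifiedSince &&
    obj.lastModified &&
    obj.lastModified.getTime() <= ifModifiedSince.getTime()
  ) {
    return { status: 304 };
  }
  // Otherwise the object changed (or we can't tell): send the full response.
  return { status: 200, body: obj.body };
}
```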

Sending a 304 status code that indicates an object can be reused is much more efficient than sending the entire object. It’s like if you ran a news website that updated every 24 hours. Once the content is updated for the day, you wouldn’t want to keep redownloading the same unchanged content from the origin and instead, you would prefer to make sure that the day’s content was just reused by sending a lightweight signal to that effect, until the site changes the next day.

The problem with this system of browser questions and revalidation answers is that sometimes origins don’t set ETag or Last-Modified headers, or the website’s admin hasn’t configured them, making revalidation impossible. In that case, every time an object expires it must be redownloaded regardless of whether there has been a change, because we have to assume the asset has been updated, or else risk serving stale content.

This is an incredible waste of resources, costing hundreds of GB/sec of needless bandwidth between the edge and the visitor: browsers downloading hundreds of GB/sec of content they may already have. If revalidations are our baseline of around 10% of all traffic, and initial tests show Smart Edge Revalidation increasing revalidations by just under 50%, then without a user needing to configure anything, we can increase total revalidations by around 5% of all traffic!

Such a large reduction in bandwidth use also comes with potential environmental benefits. Based on Cloudflare’s carbon emissions per byte, the needless bandwidth being used could amount to 2000+ metric tons CO2e/year, the equivalent of the CO2 emissions from more than 400 cars in a year.

Revalidation also comes with a performance improvement: a browser typically downloads less than 1KB of data to check whether an asset has changed, while pulling the full asset can mean hundreds of kilobytes. This improves performance and reduces the bandwidth between the visitor and our edge.

How Smart Edge Revalidation Works

When both Last-Modified and ETag headers are absent from the origin server response, Smart Edge Revalidation will use the time the object was cached on Cloudflare’s edge as the Last-Modified header value. When a browser sends a revalidation request to Cloudflare using If-Modified-Since or If-None-Match, our edge can answer it using the Last-Modified header generated by Smart Edge Revalidation. In this way, our edge can ensure efficient revalidation even if the headers are not sent from the origin.
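In code terms, the backfill step is tiny. This is only an illustrative sketch of the behavior described above, not Cloudflare’s implementation:

```typescript
// Sketch: when inserting an object into cache, backfill a Last-Modified
// validator with the insertion time if the origin supplied no validators.
function stampOnInsert(headers: Map<string, string>): void {
  if (!headers.has("Last-Modified") && !headers.has("ETag")) {
    // e.g. "Mon, 08 Nov 2021 07:28:00 GMT"
    headers.set("Last-Modified", new Date().toUTCString());
  }
}
```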

Smart Edge Revalidation will be enabled automatically for all Cloudflare customers over the coming weeks. If this behavior is undesired, you can ensure that Smart Edge Revalidation is not activated by confirming your origin sends ETag or Last-Modified headers when you want to indicate changed content. Additionally, you can direct the exact revalidation behavior you want by making sure your origin sets appropriate Cache-Control headers.

Smart Edge Revalidation is a win for everyone: visitors will get more content faster from cache, website owners can serve and revalidate additional content from Cloudflare efficiently, and the Internet will get a bit greener and more efficient.

Smart Edge Revalidation is the latest announcement to join the list of ways we’re making our network more sustainable to help build a greener Internet. Check out posts from earlier this week to learn about our climate commitments, Green Compute with Workers, the Carbon Impact Report, the Pages x Green Web Foundation partnership, and Crawler Hints.

Helping build a green Internet

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/helping-build-a-green-internet/

Helping build a green Internet

When we started Cloudflare, we weren’t thinking about minimizing the environmental impact of the Internet. Frankly, I didn’t really think of the Internet as having much of an environmental impact. It was just this magical resource that gave access to information and services from anywhere.

But that was before I started racking servers in hyper-cooled data centers. Before Cloudflare started paying the bills to keep those servers powered up and cooled down. Before we became obsessed with maximizing the number of requests we could process per watt of power. And long before we started buying directly from renewable power suppliers to drive down the cost of electricity across our network.

Today, I have a very good understanding of how much power it takes to run the Internet. It therefore wasn’t surprising to read the Boston Consulting Group study which found that 2% of all carbon output, about 1 billion metric tons per year, is attributable to the Internet. That’s the equivalent of the entire aviation industry.

Cloudflare: Accidentally Environmentally Friendly By Design

While we didn’t set out to reduce the environmental impact of the Internet, Cloudflare has always had efficiency at its core. It comes from our ongoing fight with an old nemesis: the speed of light.

Because we knew we couldn’t beat the speed of light, in order to make our network fast we needed to get close to where Internet users were. In order to do that, we needed to partner directly with ISPs around the world so they’d allow us to install our gear directly inside their networks. In order to do that, we needed to make our gear as low power as possible. And we needed to invent network technology to spread load around our network to deal with spikes of traffic — whether because of a cyber attack or a sale on an exclusive new sneaker line — and to efficiently use all available capacity.

Fighting for Efficiency

Back in December 2012, just two years after we launched, I traveled to Intel’s Oregon Research Center to talk to their senior engineering team about how we needed server chips with more cores per watt. I wasn’t thinking we needed them to save the environment; I was trying to figure out how we could build equipment power-efficient enough that ISPs wouldn’t object to installing it. Unfortunately, Intel told me that I was worrying about the wrong thing. So that’s when we started looking for alternatives, including the very power-efficient Arm.

But, it turns out, our obsession with efficiency has made Cloudflare the environmental choice in cloud computing. A 2015 study by Anders S. G. Andrae and Tomas Edler estimated the average energy cost of processing a byte of information online. Even accounting for efficiency gains across the industry since then, based on the study’s data our best estimate is that Cloudflare’s data processing is more than 19 times more efficient than average.

Serve Local

The imperfect analogy that I like is buying from the local farmers’ market versus the big box retailer. By serving requests locally, and not backhauling them around the world to massive data centers, Cloudflare is able to reduce the environmental impact of our customers on the Internet. In 2020, we estimate that our customers reduced their carbon output by 550,000 metric tons versus if they had not used our services. That’s the equivalent of eliminating 635 million miles driven by passenger cars last year.

We’re proud of that, but it’s still a tiny percentage of the overall impact the Internet still has on the environment. As we thought about Impact Week, we set out to make reducing the environmental impact of the Internet a top priority. Given today more than 1 in 6 websites uses Cloudflare, we’re in a position where changes we make can have a meaningful impact.

We Can Do More

Starting today, we’re announcing four major initiatives to reduce Cloudflare’s environmental impact and help the Internet as a whole be more environmentally friendly.

First, we’re committing to be carbon neutral by 2022. We already extensively use renewable energy to power our global network, but we’re going to expand that usage to cover 100% of our energy use. But we’re going a step further. We’re going to look back over the 11 years since Cloudflare launched and purchase offsets to zero out all of Cloudflare’s historical carbon output from powering our global network. It’s not enough that we have less impact than others, we want to make sure Cloudflare since our beginning has been a net positive for the planet.

Second, we are ramping up our deployment of a new class of hyper-efficient servers. Based on Arm technology, these servers can perform the same amount of work while using half the energy. We are hopeful that by prioritizing energy efficiency in the server market we can help catalyze more chip manufacturers to release more efficient designs.

Third, we’re releasing a new option for Cloudflare Workers and Pages, our computing platform and Jamstack offering, which allows developers to choose to run their workloads in the most energy-efficient data centers. We believe we are the first major cloud computing vendor to offer developers a way to optimize for the environment. The Green Workers option won’t cost any more. The tradeoff is that workloads may incur a bit of additional network latency, but we believe that’s a tradeoff many developers will be willing to make.

New Standards and Partnerships to Eliminate Excessive Emissions

Finally, and maybe most ambitiously, we’re working with a number of the leading search and crawl companies to introduce an open standard that minimizes the load from excessive crawl. Nearly half of all Internet traffic is automated. The majority of that is malicious, and Cloudflare is designed to stop it as efficiently as possible.

But more than 5% of all Internet traffic is generated by legitimate crawlers which index the web in order to power services we all rely on, like search. The problem is, more than half of that legitimate crawl traffic is redundant: reindexing pages that haven’t changed. If we could eliminate redundant crawl, it would be the equivalent of planting 30 million new acres of forest. That’s a goal worth striving for.

When we started Cloudflare we weren’t thinking about how we could reduce the Internet’s environmental impact. But that’s changed. Cloudflare’s mission is to help build a better Internet. And a better Internet is clearly a more environmentally friendly Internet.

Announcing Green Compute on Cloudflare Workers

Post Syndicated from Aly Cabral original https://blog.cloudflare.com/announcing-green-compute/

Announcing Green Compute on Cloudflare Workers

All too often we are confronted with the choice to move quickly or act responsibly. Whether the topic is safety, security, or in this case sustainability, we’re asked to make the trade off of halting innovation to protect ourselves, our users, or the planet. But what if that didn’t always need to be the case? At Cloudflare, our goal is to bring sustainable computing to you without the need for any additional time, work, or complexity.

Enter Green Compute on Cloudflare Workers.

Green Compute can be enabled for any Cron-triggered Worker. The concept is simple: when turned on, we’ll take your compute workload and run it exclusively on parts of our edge network located in facilities powered by renewable energy. Even though all of Cloudflare’s edge network is powered by renewable energy already, some of our data centers are located in third-party facilities that are not 100% powered by renewable energy. Green Compute takes our commitment to sustainability one step further by ensuring that not only our network equipment but also the building facility as a whole is powered by renewable energy. There are absolutely no code changes needed. Now, whether you need to update a leaderboard every five minutes or do DNA sequencing directly on our edge (yes, that’s a real use case!), you can minimize the impact of any scheduled work, regardless of how complex or energy intensive.

How it works

Cron Triggers allow developers to set time-based invocations for their Workers. These Workers run on a recurring schedule, as opposed to being triggered by application users via HTTP requests. Developers specify a job schedule in familiar cron syntax, either through wrangler or within the Workers dashboard. To set up a scheduled job, first create a Worker that performs a periodic task, then navigate to the ‘Triggers’ tab to define a Cron Trigger.
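As an example, a minimal scheduled Worker might look like the sketch below. It uses the service-worker style `scheduled` event that Cron Triggers invoke (types come from `@cloudflare/workers-types`); the cron expression and the `updateLeaderboard` task are placeholders for your own schedule and work:

```typescript
// wrangler.toml (shown as a comment for context):
//   [triggers]
//   crons = ["*/5 * * * *"]   // run every five minutes

// Cron Triggers fire a "scheduled" event rather than a "fetch" event.
addEventListener("scheduled", (event: ScheduledEvent) => {
  event.waitUntil(updateLeaderboard());
});

// Placeholder for whatever periodic work your application needs.
async function updateLeaderboard(): Promise<void> {
  // e.g. recompute scores and write them to KV or an external API
}
```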

The great thing about cron triggered Workers is that there is no human on the other side waiting for a response in real time. There is no end user we need to run the job close to. Instead, these Workers are scheduled to run as (often computationally expensive) background jobs making them a no-brainer candidate to run exclusively on sustainable hardware, even when that hardware isn’t the closest to your user base.

Cloudflare’s massive global network is logically one distributed system with all the parts connected, secured, and trusted. Because our network works as a single system, as opposed to a system with logically isolated regions, we have the flexibility to seamlessly move workloads around the world keeping your impact goals in mind without any additional management complexity for you.

When you set up a Cron Trigger with Green Compute enabled, the Cloudflare network will route all scheduled jobs to green energy hardware automatically, without any application changes needed. To turn on Green Compute today, sign up for our beta.

Real world use

If you haven’t ever had the pleasure of writing a cron job yourself, you might be wondering — what do you use scheduled compute for anyway?

There is a wide range of periodic maintenance tasks necessary to power any application. In my working life, I’ve built a scheduled job that ran every minute to monitor the availability of the system I was responsible for, texting me if any service was unavailable. In another instance, a job ran every five minutes to keep the core database and search feature in sync, pulling all new application data, transforming it, then inserting it into a search database. In yet another example, a periodic job ran every half hour to iterate over all user sessions and clean up those that were no longer active.

Scheduled jobs are the backbone of real-world systems. Now, with Green Compute on Cloudflare Workers, all these real-world systems and their computationally expensive background maintenance tasks can take advantage of running compute exclusively on machines powered by renewable energy.

The Green Network

Our mission at Cloudflare is to help you tackle your sustainability goals. Today, with the launch of the Carbon Impact Report, we gave you visibility into your environmental impact. Our collaboration with the Green Web Foundation brought green hosting certification to Cloudflare Pages. And our launch of Green Compute on Cloudflare Workers allows you to run workloads exclusively on hardware powered by renewable energy. And the best part? No additional system complexity is required for any of the above.

Cloudflare is focused on making it easy to hit your ambitious goals. We are just getting started.

Designing Edge Servers with Arm CPUs to Deliver 57% More Performance Per Watt

Post Syndicated from Nitin Rao original https://blog.cloudflare.com/designing-edge-servers-with-arm-cpus/

Designing Edge Servers with Arm CPUs to Deliver 57% More Performance Per Watt

Cloudflare has millions of free customers. Not only is it something we’re incredibly proud of in the context of helping to build a better Internet — but it’s something that has made the Cloudflare service measurably better. One of the ways we’ve benefited is that it’s created a very strong imperative for Cloudflare to maintain a network that is as efficient as possible. There’s simply no other way to serve so many free customers.

In the spirit of this, we are very excited about the latest step in our energy-efficiency journey: turning to Arm for our server CPUs. It has been a long journey getting here; we started testing our first Arm CPUs all the way back in November 2017. It’s only recently, however, that the scale of the energy-efficiency improvement from Arm has become clear. Our first Arm CPU was deployed in production earlier this month, in July 2021.

Our most recently deployed generation of edge servers, Gen X, used AMD Rome CPUs. Compared with that, the newest Arm-based CPUs process an incredible 57% more Internet requests per watt. While AMD has a sequel, Milan (which Cloudflare will also be deploying), it doesn’t achieve the same degree of energy efficiency as the Arm processor, managing only 39% more requests per watt than the Rome CPUs in our existing fleet. As Arm-based CPUs become more widely deployed, and our software is further optimized to take advantage of the Arm architecture, we expect further improvements in the energy efficiency of Arm servers.

Using Arm, Cloudflare can now securely process over ten times as many Internet requests per watt of power consumed as we could with the servers we designed in 2013.

(In the graphic below, for 2021, the perforated data point refers to x86 CPUs, whereas the bold data point refers to Arm CPUs)

Designing Edge Servers with Arm CPUs to Deliver 57% More Performance Per Watt

As Arm server CPUs demonstrate their performance and become more widely deployed, we hope this will inspire x86 CPU manufacturers (such as Intel and AMD) to urgently take energy efficiency more seriously. This is especially important since, worldwide, x86 CPUs continue to represent the vast majority of global data center energy consumption.

Together, we can reduce the carbon impact of Internet use. The environment depends on it.

Cloudflare: 100% Renewable & Zeroing Out Emissions Back to Day 1

Post Syndicated from Patrick Day original https://blog.cloudflare.com/cloudflare-committed-to-building-a-greener-internet/

Cloudflare: 100% Renewable & Zeroing Out Emissions Back to Day 1

As we announced this week, Cloudflare is helping to create a clean slate for the Internet. Our goal is simple: help build a better, greener Internet with no carbon emissions that is powered by renewable energy.

To help us get there, Cloudflare is making two announcements. The first is that we’re committed to powering our network with 100% renewable energy. This builds on work we started back in 2018, and we think is clearly the right thing to do. We also believe it will ultimately lead to more efficient, more sustainable, and potentially cheaper products for our customers.

The second is that by 2025 Cloudflare aims to remove all greenhouse gases emitted as the result of powering our network since our launch in 2010. As we continue to improve the way we track and mitigate our carbon footprint, we want to help the Internet begin with a fresh start.

Finally, as part of our effort to track and mitigate our emissions, we’re also releasing our first annual carbon emissions inventory report. The report will provide detail on exactly how we calculate our carbon emissions as well as our renewable energy purchases. Transparency is one of Cloudflare’s core values. It’s how we work to build trust with our customers in everything we do, and that includes our sustainability efforts.

Purchasing Renewable Energy

Understanding Cloudflare’s commitment to power its network with 100% renewable energy requires some additional background on renewable energy markets, as well as international emissions accounting standards.

Companies that commit to powering their operations with 100% renewable energy are required to match their total energy used with electricity produced from renewable sources. The international standards that govern these types of commitments, such as the Greenhouse Gas (GHG) Protocol and ISO 14064, are the same ones used by governments for quantifying their carbon emissions for global climate treaties like the Paris Climate Agreement. There are also additional industry best practices like RE100, which are voluntary guidelines established by companies working to support renewable energy development and eliminate carbon emissions.

Actually purchasing renewable energy consistent with those requirements can be done in several ways: through self-generation, like rooftop solar panels or wind turbines; through contracts with wind or solar farms via Power Purchase Agreements (PPAs) or unbundled Renewable Energy Credits (RECs); or, in some cases, through local utility companies like CleanPowerSF in San Francisco, CA.

The goal of providing so many options to purchase renewable energy is to leverage as much investment as possible in new renewable sources. As our colleague Jess Bailey described after our first renewable energy purchase in 2018, because of the way electricity flows through electrical grids, it’s impossible for the individual consumer to know whether they are using electricity from conventional or renewable sources. However, in order to allow customers of all sizes to invest in renewable energy generally, these standards and accounting systems allow individuals or organizations to track their investments and enjoy the benefits of supporting renewable energy, even if the actual power comes from the standard electrical grid.

According to the IEA, in 2020 alone global renewable energy capacity additions increased 45 percent, the largest annual increase since 1997. In addition, close to 50 percent of corporate renewable energy investment over the last five years has come from Internet Communications Technology (ICT) companies alone.

Cloudflare’s Renewable Energy

Cloudflare’s new commitment to power its network with renewable energy means that we will continue to match 100 percent of our global energy usage by purchasing energy from renewable sources. Although Cloudflare made its first renewable energy purchase in 2018, and matched its total global operations in both 2019 and 2020, we thought it was important to make a public, forward-looking commitment so that all of our stakeholders, including customers, investors, employees, and suppliers have confidence that we will continue to build our network on renewable energy moving forward.

To determine how much renewable energy to buy, we separate our total electrical usage into two types: network and facilities. For our network, we pull data from all of our servers and networking equipment located all over the world twice a year. For our facilities (or offices), per the GHG Protocol, we record our actual energy usage wherever we have access to utility bills. For offices located in larger buildings with multiple tenants, we use energy usage intensity (EUI) estimates calculated by the U.S. Energy Information Agency.

We also purchase renewable energy in two ways. The vast majority of our purchases are RECs, which we purchase through our partner 3Degrees to help make sure we are aligned with relevant standards like the GHG Protocol. In 2020, to match the usage of our network, Cloudflare purchased RECs, I-RECs, REGOs, and other energy attribute certificates from the United States, United Kingdom, Brazil, Chile, Colombia, India, Malaysia, Mexico, Hungary, Romania, Ukraine, Bulgaria, South Africa, and Turkey, among others. Although Cloudflare has employed a regional purchasing strategy in the past, we also expect to be fully aligned with all RE100 criteria, including its market boundary criteria, by the end of 2021.

Removing our historic emissions

Cloudflare’s goal is to remove or offset all of our historical emissions resulting from powering our network by 2025. To meet that target, Cloudflare must first determine exactly how much carbon was emitted as the result of operating our network from 2010 to 2019, and then invest in carbon offsets or removals to match those emissions.

Determining carbon emissions from purchased electricity is a relatively straightforward calculation. In fact, it’s basically just a unit conversion:

Energy (kWh) × Emissions Factor (gCO2e/kWh) = Carbon Emissions (gCO2e)
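As a worked example with illustrative numbers: a facility that consumed 50,000 kWh over a year on a grid with an average emissions factor of 400 gCO2e/kWh would be responsible for 50,000 × 400 = 20,000,000 gCO2e, or 20 metric tons of CO2e.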

The key to accurate results is the emissions factors. Emissions factors are essentially measurements of the amount of GHGs emitted by a specific power supplier (e.g., power plant X in San Francisco) per unit of energy created. For our purposes, GHGs are those defined in the 1997 Kyoto Protocol (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulphur hexafluoride). To help ease reporting, the six GHGs are often expressed as a single unit, “carbon-dioxide equivalent” or “CO2e”, based on each gas’s Global Warming Potential (GWP). Emissions factors from individual power sources are often combined and averaged to create grid average emissions factors for cities, regions, or countries. Per the GHG Protocol, Cloudflare uses emissions factors from the U.S. EPA, U.K. DEFRA, and the IEA.

For our annual inventory report, which we are also releasing today, Cloudflare calculates carbon emissions scores for every single data center in our network. Cloudflare multiplies the actual energy used by the equipment by the applicable grid average emissions factors in each of the more than 100 countries where we have equipment.

For our historical calculations, we have data on our actual carbon emissions dating back to 2018, which was our first renewable energy purchase. Prior to 2018, we are combing through all of our purchasing, shipping, energy usage, and colocation agreements to reconstruct how much energy we consumed and when. It’s actually a pretty cool exercise to go back and watch our network grow. Although we do not have a final calculation to share yet, rest assured we will keep everyone posted, particularly as we get to the fun part of starting to work with organizations and companies working on carbon removal efforts.

Where we are going next

Although we’re proud of the steps we’re taking as a company with renewable energy and carbon emissions, we’re just getting started.

Cloudflare is also exploring new products and ideas that can help leverage the power of one of the world’s largest networks to drive better climate outcomes for our customers and for the Internet. To see a really cool example, check out our colleague’s blog post from earlier today on Green Compute on Cloudflare Workers, which is helping Cloudflare’s intelligent edge route some additional workloads to renewable energy facilities, or our Carbon Impact Reports, which are helping our customers optimize their carbon footprint.

Green Hosting with Cloudflare Pages

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/green-hosting-with-cloudflare-pages/

Green Hosting with Cloudflare Pages

At Cloudflare, we are continuing to expand our sustainability initiatives to build a greener Internet in more than one way. We are seeing a shift in attitudes towards eco-consciousness and have noticed that, all else being equal, if an option to reduce environmental impact is available, that’s the one our customers widely prefer. With Pages now Generally Available, we believe we have the power to help our customers reach their sustainability goals. That is why we are excited to partner with the Green Web Foundation and commit to making sure our Pages infrastructure is powered by 100% renewable energy.

The Green Web Foundation

As part of Cloudflare’s Impact Week, Cloudflare is proud to announce its collaboration with the Green Web Foundation (GWF), a not-for-profit organization with the mission of creating an Internet that one day will run on entirely renewable energy. GWF maintains an extensive and globally categorized Green Hosting Directory with over 320 certified hosts in 26 countries! In addition to this directory, the GWF also develops free online tools, APIs and open datasets readily available for companies looking to contribute to its mission.

What does it mean to be a Green Web Foundation partner?

All websites certified as operating on 100 percent renewable energy by GWF must provide evidence of their energy usage and renewable energy purchases. Cloudflare Pages has already taken care of that step for you, including by sharing our public Carbon Emissions Inventory report. As a result, all Cloudflare Pages sites are automatically listed in GWF’s public global directory as official green hosts.

Now that these claims have been approved by the team at GWF, what do I have to do to get certified?

If you’re hosting your site on Cloudflare Pages, absolutely nothing.

All existing and new sites created on Pages are automatically certified as “green” too! But don’t just take our word for it: through our partnership with GWF, as a Pages user you can enter your own pages.dev or custom domain into the Green Web Check to verify your site’s green hosting status. Once the domain is verified, you can display the Green Web Foundation badge on your webpage to showcase your contribution to a more sustainable Internet as a green-hosted site. You can obtain this badge in one of two ways:

  1. Saving the badge image directly.
  2. Adding the provided snippet of HTML to your existing code.

Helping to Build a Greener Internet

Cloudflare is committed to helping our customers achieve their sustainability goals through the use of our products. In addition to our initiative with the Green Web Foundation for this year’s Impact Week, we are thrilled to announce the other ways we are building a greener Internet, such as our Carbon Impact Report and Green Compute on Cloudflare Workers.

We can all play a small part in reducing our carbon footprint. Start today by setting up your site with Cloudflare Pages!

“Cloudflare’s recent climate disclosures and commitments are encouraging, especially given how much traffic flows through their network. Every provider should be at least this transparent when it comes to accounting for the environmental impact of their services. We see a growing number of users relying on CDNs to host their sites, and they are often confused when their sites no longer show as green, because they’re not using a green CDN. It’s good to see another more sustainable option available to users, and one that is independently verified.” – Chris Adams, Co-director of The Green Web Foundation

Understand and reduce your carbon impact with Cloudflare

Post Syndicated from Natasha Wissmann original https://blog.cloudflare.com/understand-and-reduce-your-carbon-impact-with-cloudflare/

Understand and reduce your carbon impact with Cloudflare

Today, as part of Cloudflare’s Impact Week, we’re excited to announce a new tool to help you understand the environmental impact of operating your websites, applications, and networks. Your Carbon Impact Report, available today for all Cloudflare accounts, will outline the carbon savings of operating your Internet properties on Cloudflare’s network.

Everyone has a role to play in reducing carbon impact and reversing climate change. We shared today how we’re approaching this, by committing to power our network with 100% renewable energy. But we’ve also heard from customers that want more visibility into the impact of the tools they use (also referred to as “Scope 3” emissions) — and we want to help!

The impact of running an Internet property

We’ve previously blogged about how Internet infrastructure affects the environment. At a high level, powering hardware (like servers) uses energy. Depending on its source, producing this energy may involve emitting carbon into the atmosphere, which contributes to climate change.

When you use Cloudflare, we use energy to power hardware to deliver content for you. But how does that energy we use compare to the energy it would take to deliver content without Cloudflare? As of today, you can go to the Cloudflare dashboard to see the (approximate) carbon savings from your usage of Cloudflare services versus Internet averages for your usage volume.

Calculating the carbon savings of your Cloudflare use

Most of the energy that Cloudflare uses comes from powering the servers at our edge to serve your content. We’ve outlined how we quantify the carbon impact of this energy in our emissions report. To determine the percentage of this impact derived from your Cloudflare usage specifically, we’ve used the following method:

When you use Cloudflare, data from requests destined to your Internet property goes through our edge. Data transfer for your Internet properties roughly represents a fraction of the energy consumed at Cloudflare’s edge. If we sum up the data transfer for your Internet properties and multiply that number by the energy it takes to power each request (derived from our emissions report and overall usage data), we can approximate the total carbon impact of powering your Internet properties with Cloudflare.

We already knew that delivering content takes some energy and therefore has some carbon impact. So how much energy does Cloudflare actually save you? To determine what your usage would look like without Cloudflare, we’ve used the following method:

Using public information on average data center energy usage and the International Energy Agency’s global average emissions for energy usage, we can calculate the carbon cost of data transfer through average (non-Cloudflare) networks. We can then compare these numbers to arrive at your carbon savings from using Cloudflare.
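A simplified sketch of that comparison is below. Every constant is a deliberately made-up placeholder (the real per-byte figures come from our emissions report and the IEA’s averages), so this only illustrates the shape of the calculation:

```typescript
// All constants below are illustrative placeholders, not Cloudflare's real figures.
const CF_KWH_PER_GB = 0.002;     // energy per GB served from Cloudflare's edge
const AVG_KWH_PER_GB = 0.01;     // energy per GB for an average data center
const GRID_GCO2E_PER_KWH = 475;  // a grid-average emissions factor

// Approximate CO2e (grams) saved by serving `gbTransferred` through Cloudflare
// instead of an average network.
function carbonSavingsGrams(gbTransferred: number): number {
  const cloudflareEmissions = gbTransferred * CF_KWH_PER_GB * GRID_GCO2E_PER_KWH;
  const averageEmissions = gbTransferred * AVG_KWH_PER_GB * GRID_GCO2E_PER_KWH;
  return averageEmissions - cloudflareEmissions;
}

// e.g. 1,000 GB of transfer: carbonSavingsGrams(1000) ≈ 3,800 g CO2e saved
```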

With our new Carbon Impact Report, available for all plans/users, we’ve given you this value for your account. It represents the carbon dioxide equivalent (CO2e) that you’ve saved as a result of using Cloudflare to serve requests to your Internet properties in 2020.

This raw number is great, but it isn’t the easiest to understand. What does a gram of carbon dioxide equivalent actually mean in practice? It’s not a unit of measurement most of us are used to seeing in our day-to-day lives. To make this number a little easier to digest, we’ve also provided a comparison to light bulbs.

Standard light bulbs are 60 watts, so we know that turning on a light bulb for an hour uses 0.06 kilowatt-hours of energy. According to the EPA, that’s about 42 grams of carbon dioxide equivalent. That means that if your carbon dioxide equivalent saving is 126 grams, that’s approximately the same impact as turning off a light bulb for three hours.

How does using Cloudflare impact the environment?

As explained in more detail here, Cloudflare purchases Renewable Energy Credits to account for the energy used by our network. This means that your use of Cloudflare’s services is powered by renewable energy.

Additionally, using Cloudflare helps you reduce your overall carbon footprint. Using Cloudflare’s cloud security and performance services, such as WAF, Network Firewall, and DDoS mitigation, allows you to decommission specialized hardware and transfer those functions to software running efficiently at our edge. This reduces your carbon footprint by significantly decreasing the energy used to operate your network stack, and improves your security, performance, and reliability along the way.

Optimizing your website also reduces your carbon footprint by requiring less energy for your end users to load a page. Using Cloudflare’s Image Resizing for visual content on your site to properly resize images reduces the energy it takes each of your end users to load a page, thus reducing downstream carbon emissions.

Lastly, since Cloudflare is a certified green host, any content you host on Pages or Workers KV is hosted green and certified powered by renewable energy.

What’s next

This dashboard is just a first step in giving our customers transparent information on their carbon use, savings, and ideas for improvement with Cloudflare. Right now, you can view data on your carbon savings from 2020 (aligned with our 2020 emissions report). As we continue to iterate on how we measure carbon impact, we’re working toward providing dynamic information on carbon savings at a quarterly or even monthly granularity.

Have other ideas on what we can provide to help you understand and reduce the carbon impact of your Internet properties? Please reach out to us in the comments on this post or on social media!

We hope that this data helps you with your sustainability goals, and we’re excited to keep providing you with transparent information for 2021 and beyond.

Understanding Where the Internet Isn’t Good Enough Yet

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/understanding-where-the-internet-isnt-good-enough-yet/

Understanding Where the Internet Isn’t Good Enough Yet

Understanding Where the Internet Isn’t Good Enough Yet

Since March 2020, the Internet has been the trusty sidekick that’s helped us through the pandemic. Or so it seems to those of us lucky enough to have fast, reliable (and often cheap) Internet access.

With a good connection you could keep working (if you were fortunate enough to have a job that could be done online), go to school or university, enjoy online entertainment like streaming movies and TV, games, keep up with the latest news, find out vital healthcare information, schedule a vaccination and stay in contact with loved ones and friends with whom you’d normally be spending time in person.

Without a good connection though, all those things were hard or impossible.

Sadly, access to the Internet is not uniformly distributed. Some have cheap, fast, low-latency, reliable connections; others have some combination of expensive, slow, high-latency, and unreliable connections; still others have no connection at all. Close to 60% of the world has Internet access, leaving a huge 40% without it.

This inequality of access to the Internet has real-world consequences. Without good access it is so much harder to communicate, to get vital information, to work and to study. Inequality of access isn’t a technical problem, it’s a societal problem.

This week, Cloudflare is announcing Project Pangea with the goal of helping reduce this inequality. We’re helping community networks get onto the Internet cheaply, securely and with good bandwidth and latency. We can’t solve all the challenges of bringing fast, cheap broadband access to everyone (yet) but we can give fast, reliable transit to ISPs in underserved communities to help move in that direction. Please refer to our Pangea announcement for more details.

The Tyranny of Averages

To understand why Project Pangea is important, you need to understand how different the experience of accessing the Internet is around the world. From a distance, the world looks blue and green. But we all know that our planet varies wildly from place to place: deserts and rainforests, urban jungles and placid rural landscapes, mountains, valleys and canyons, volcanos, salt flats, tundra, and verdant, rolling hills.

Cloudflare is in a unique position to measure the performance and reach of the Internet over this vast landscape. We have servers in more than 200 cities in over 100 countries, and we process tens of trillions of Internet requests every month. Our network, our customers, and their users span the globe: every country, every network.

Zoom out to the level of a city, county, state, or country, and average Internet performance can look good — or, at least, acceptable. Zoom in, however, and the inequalities start to show. Perhaps part of a county has great performance, and another limps along at barely dial-up speeds — or worse. Or perhaps a city has some neighborhoods with fantastic fiber service, and others that are underserved and struggling with spotty access.

Inequality of Internet access isn’t a distant problem; it’s not limited to developing countries. It exists in the richest countries in the world as well as the poorest. There are still many parts of the world where a Zoom call is hard or impossible to make. And if you’re reading this on a good Internet connection, you may be surprised to learn that places with poor or no Internet are not far from you at all.

Bandwidth and Latency in Eight Countries

For Impact Week, we’ve analyzed Internet data in the United States, Brazil, United Kingdom, Germany, France, South Africa, Japan, and Australia to build a picture of Internet performance.

Below, you’ll find detailed maps of where the Internet is fast and slow (focusing on available bandwidth) and far away from the end user (at least in terms of the latency between the client and server). We’d have loved to have used a single metric, however, it’s hard for a single number to capture the distribution of good, bad, and non-existent Internet traffic in a region. It’s for that reason that we’ve used two metrics to represent performance: latency and bandwidth (otherwise known as throughput). The maps below are colored to show the differences in bandwidth and latency and answer part of the question: “How good is the Internet in different places around the world?”

As we like to say, we’re just getting started with this; we intend to make more of this data and analysis available in the near future. In the meantime, if you’re a local official who wants to better understand your community’s relative performance, please reach out; we’d love to connect with you. Or, if you’re interested in your own Internet performance, you can visit speed.cloudflare.com to run a personalized test on your connection.

A Quick Refresher on Latency and Bandwidth

Before we begin, a quick reminder: latency (usually measured in milliseconds or ms) is the time it takes for communications to go to an Internet destination from your device and back, whereas bandwidth is the amount of data that can be transferred in a second (it’s usually measured in megabits per second or Mbps).

Both latency and bandwidth affect the performance of an Internet connection. High latency particularly affects things like online gaming where quick responses from servers are needed, but also shows up by slowing down the loading of complex web pages, and even interrupting some streaming video. Low bandwidth makes downloading anything slow: be it images on a webpage, the new app you want to try out on your phone, or the latest movie.
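A rough worked example: fetching a 5MB file (40 megabits) over a 50Mbps link takes about 0.8 seconds of transfer time, and each round trip to the server adds the connection’s latency on top of that. This is why a chatty page that makes many small requests can feel slow on a high-latency link even when bandwidth is plentiful.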

Blinking your eyes takes about 100ms. You’ll begin to notice performance changes at around 60ms of latency, while below 30ms is gold-class performance, with little to no delay in video streaming or gaming.

United States
United States median throughput: 50.27Mbps
United States median latency: 46.69ms

The US government has long recognized the importance of improving the Internet for underserved communities, but the Federal Communications Commission (FCC), the US agency responsible for determining where investment is most needed, has struggled to accurately map Internet access across the country.  Although the FCC has embarked on a new data collection effort to improve the accuracy of existing maps, the US government still lacks a comprehensive understanding of the areas that would most benefit from broadband investment.

Cloudflare’s data confirms the overall concerns with inconsistent access to the Internet and helps fill in some of the current gaps.  A glance at the two maps of the US below will show that, even zoomed out to county level, there is inequality across the country. High latency and low bandwidth stand out as red areas.

Understanding Where the Internet Isn’t Good Enough Yet

US locations with the lowest latency (best) and highest latency (worst) are as follows.

Best performing geographies by latency | Worst performing geographies by latency
La Habra, California | Parrottsville, Tennessee
Midlothian, Texas | Loganville, Wisconsin
Los Alamitos, California | Mackinaw City, Michigan
St Louis, Missouri | Reno, Nevada
Fort Worth, Texas | Eva, Tennessee
Sugar Grove, North Carolina | Milwaukee, Wisconsin
Rockwall, Texas | Grove City, Minnesota
Justin, Texas | Sacred Heart, Minnesota
Denton, Texas | Scottsboro, Alabama
Hampton, Georgia | Vesta, Minnesota

When thinking about bandwidth, 5 to 10Mbps is generally enough for video conferencing, but ultra-HD TV watching can easily consume 20Mbps. For context, the Federal Communications Commission (FCC) defines the minimum bandwidth for “Advanced Service” at 25Mbps.

Understanding Where the Internet Isn’t Good Enough Yet

The best performing geographies by throughput in the US tell an interesting story. New York City comes out on top, but if you were to zoom in on the city you’d find pockets of inequality. You can read more about our partnership with NYC Mesh in the Project Pangea post, and how they are helping bring better Internet to underserved parts of the Big Apple. Notice how the tyranny of averages can disguise a problem.

Best performing geographies by throughput | Worst performing geographies by throughput
New York, New York | Ozark, Missouri
Hartford, Connecticut | Stanly, North Carolina
Avery, North Carolina | Ellis, Kansas
Red Willow, Nebraska | Marion, West Virginia
McLean, Kentucky | Sedgwick, Kansas
Franklin, Alabama | Calhoun, West Virginia
Montgomery, Pennsylvania | Jasper, Georgia
Cook, Illinois | Buchanan, Missouri
Montgomery, Maryland | Wetzel, West Virginia
Monroe, Pennsylvania | North Slope, Alaska

Contrary to popular discourse about access to the Internet as a product of the rural-urban divide, we found that poor performance was not unique to rural areas. Los Angeles, Milwaukee, Florida’s Orange County, Fairfax, San Bernardino, Knox County, and even San Francisco have pockets of uniformly poor performance, often while adjoining ZIP codes have stronger performance.

Even in areas with excellent Internet connectivity, the same connectivity to the same resources can cost wildly different amounts. Internet prices for end users correlate with the number of ISPs in an area: the greater the consumer choice, the better the price. President Biden’s recent competition Executive Order called out the lack of choice for broadband, noting “More than 200 million U.S. residents live in an area with only one or two reliable high-speed internet providers, leading to prices as much as five times higher in these markets than in markets with more options.”

The following cities have the greatest choice of Internet providers:

Geography
New York, New York
Los Angeles, California
Chicago, Illinois
Dallas, Texas
Washington, District of Columbia
Jersey City, New Jersey
Newark, New Jersey
Secaucus, New Jersey
Columbus, Ohio

In as many as 9% of ZIP codes, average latency exceeds 150ms, the threshold beyond which a videoconferencing service such as Zoom no longer performs acceptably.

Australia
Australia median throughput: 33.34Mbps
Australia median latency: 42.04ms

In general, Australia seems to suffer from very poor broadband speeds: often not capable of sustaining a household streaming video, and possibly struggling with multiple simultaneous video calls. The problem isn’t just a rural one either. While the inner cities showed good broadband speeds, often with fiber-to-the-building Internet access, suburban areas suffered. Larger suburban areas like the Illawarra had speeds similar to more rural centers like Wagga Wagga, showing this is about more than just a rural-urban divide.

Understanding Where the Internet Isn’t Good Enough Yet

Best performing geographies by throughput | Worst performing geographies by throughput
Inner West Sydney, New South Wales | West Tamar, Tasmania
Port Phillip, Victoria | Bassendean, Western Australia
Woollahra, New South Wales | Alexandrina, South Australia
Brimbank, Victoria | Bayswater, Western Australia
Lake Macquarie, New South Wales | Augusta-Margaret River, Western Australia
Hawkesbury, New South Wales | Goulburn Mulwaree, New South Wales
Sydney, New South Wales | Goyder, South Australia
Wentworth, New South Wales | Kingborough, Tasmania
Hunters Hill, New South Wales | Cottesloe, Western Australia
Blacktown, New South Wales | Lithgow, New South Wales

The irony is that, from a latency perspective, Australia actually performs quite well.

Best performing geographies by latency | Worst performing geographies by latency
Port Phillip, Victoria | Narromine, New South Wales
Mornington Peninsula, Victoria | North Sydney, New South Wales
Whittlesea, Victoria | Northern Midlands, Tasmania
Penrith, New South Wales | Swan, Western Australia
Mid-Coast, New South Wales | Wanneroo, Western Australia
Campbelltown, New South Wales | Snowy Valleys, New South Wales
Northern Beaches, New South Wales | Parkes, New South Wales
Strathfield, New South Wales | Broome, Western Australia
Latrobe, Victoria | Griffith, New South Wales
Surf Coast, Victoria | Busselton, Western Australia

Japan
Japan median throughput: 61.4Mbps
Japan median latency: 31.89ms

Japan’s Internet has consistently low latency, even in distant areas such as Okinawa Prefecture, 1,000 miles from Tokyo.

Best performing geographies by latency | Worst performing geographies by latency
Nara | Yamagata
Osaka | Okinawa
Shiga | Miyazaki
Kōchi | Nagasaki
Kyoto | Ōita
Tochigi | Kagoshima
Tokushima | Yamaguchi
Wakayama | Tottori
Kanagawa | Saga
Aichi | Ehime

However, it’s a different story when it comes to bandwidth. Several prefectures on Kyushu Island, along with Okinawa Prefecture and western Honshu, have performance that falls behind the rest of the country. Unsurprisingly, the best throughput is concentrated around Osaka and Tokyo, home to the country’s highest concentrations of people and data centers.

Best performing geographies by throughput | Worst performing geographies by throughput
Osaka | Tottori
Tokyo | Shimane
Kanagawa | Yamaguchi
Nara | Okinawa
Chiba | Saga
Aomori | Miyazaki
Hyōgo | Kagoshima
Kyoto | Yamagata
Tokushima | Nagasaki
Kōchi | Fukui

United Kingdom
United Kingdom median throughput: 53.8Mbps
United Kingdom median latency: 34.12ms

The United Kingdom has good latency throughout most of the country; bandwidth, however, is a different story. The best performance is seen in inner London, as well as in some other large cities like Manchester. London and Manchester are also home to the UK’s largest Internet exchange points. More effort to localize data in other cities, like Edinburgh, would be an important step toward improving performance in those regions.

Best performing geographies by latency | Worst performing geographies by latency
Sutton | Brent
Milton Keynes | Ceredigion
Lambeth | Westminster
Cardiff | Scottish Borders
Harrow | Shetland Islands
Hackney | Middlesbrough
Islington | Fermanagh and Omagh
Kensington and Chelsea | Slough
Thurrock | Highland
Kingston upon Thames | Denbighshire

Best performing geographies by throughput | Worst performing geographies by throughput
City of London | Orkney Islands
Slough | Shetland Islands
Lambeth | Blaenau Gwent
Surrey | Ceredigion
Tower Hamlets | Isle of Anglesey
Coventry | Fermanagh and Omagh
Wrexham | Scottish Borders
Islington | Denbighshire
Vale of Glamorgan | Midlothian
Leicester | Rutland

Germany
Germany median throughput: 48.79Mbps
Germany median latency: 42.1ms

Germany’s best performance is centered on Frankfurt am Main, one of the world’s major Internet hubs. What was formerly East Germany, however, has higher latency and slower speeds, leading to poorer Internet performance overall.

Best performing geographies by latency | Worst performing geographies by latency
Erlangen | Harz
Coesfeld | Nordwestmecklenburg
Weißenburg-Gunzenhausen | Saale-Holzland-Kreis
Heinsberg | Elbe-Elster
Main-Taunus-Kreis | Vorpommern-Greifswald
Main-Kinzig-Kreis | Vorpommern-Rügen
Darmstadt | Kyffhäuserkreis
Peine | Barnim
Herzogtum Lauenburg | Rostock
Segeberg | Meißen

Best performing geographies by throughput | Worst performing geographies by throughput
Weißenburg-Gunzenhausen | Saale-Holzland-Kreis
Frankfurt am Main | Weimarer Land
Kassel | Vulkaneifel
Cochem-Zell | Kusel
Dingolfing-Landau | Spree-Neiße
Bodenseekreis | Eisenach
Sankt Wendel | Unstrut-Hainich-Kreis
Landshut | Saale-Orla-Kreis
Ludwigsburg | Weimar
Speyer | Südliche Weinstraße

France
France median throughput: 48.51Mbps
France median latency: 54.2ms

Paris has long been France’s Internet hub. Marseille has started to grow as a hub as well, helped by the large number of submarine cables landing there, and interconnection points in Lyon and Bordeaux are where we expect to see growth next. These four cities are also where we see the best performance, with the highest speeds and lowest latencies.

Best performing geographies by latency | Worst performing geographies by latency
Antony | Clamecy
Boulogne-Billancourt | Beaune
Lyon | Ambert
Lille | Commercy
Versailles | Vitry-le-François
Nogent-sur-Marne | Villefranche-de-Rouergue
Bobigny | Lure
Marseille | Avranches
Saint-Germain-en-Laye | Oloron-Sainte-Marie
Créteil | Privas

Best performing geographies by throughput | Worst performing geographies by throughput
Boulogne-Billancourt | Clamecy
Antony | Bellac
Marseille | Issoudun
Lille | Vitry-le-François
Nanterre | Sarlat-la-Canéda
Paris | Segré
Lyon | Rethel
Bobigny | Avallon
Versailles | Privas
Saverne | Sartène

Brazil
Brazil median throughput: 26.28Mbps
Brazil median latency: 49.25ms

Much of Brazil has good, low-latency Internet performance, thanks to geographic proximity to the major Internet hubs in São Paulo and Rio de Janeiro. Much of the Amazon, by contrast, has low speeds and high latency, at least in those parts that are connected to the Internet at all.

Campinas is one standout, with some of the best-performing Internet in Brazil; it is also the site of a recent Cloudflare data center launch.

Best performing geographies by latency | Worst performing geographies by latency
Vale do Paraiba Paulista | Vale do Acre
Assis | Sul Amazonense
Sudoeste Amazonense | Marajo
Litoral Sul Paulista | Vale do Jurua
Baixadas | Sul de Roraima
Centro Fluminense | Centro Amazonense
Sul Catarinense | Madeira-Guapore
Vale do Paraiba Paulista | Sul do Amapa
Noroeste Fluminense | Metropolitana de Belem
Bauru | Baixo Amazonas

Best performing geographies by throughput | Worst performing geographies by throughput
Metropolitana do Rio de Janeiro | Sudoeste Amazonense
Campinas | Marajo
Metropolitana de São Paulo | Norte Amazonense
Oeste Catarinense | Baixo Amazonas
Marilia | Sudeste Rio-Grandense
Vale do Itajaí | Sul Amazonense
Sul Catarinense | Centro-Sul Cearense
Sudoeste Paranaense | Sudoeste Paraense
Grande Florianópolis | Sertão Sergipano
Norte Catarinense | Sertoes Cearenses

South Africa
South Africa median throughput: 6.4Mbps
South Africa median latency: 59.78ms

Johannesburg has been the historical hub for South Africa’s Internet. It is where many Internet giants have built data centers, and it shows: latency grows with distance from Johannesburg. South Africa has since grown two more Internet hubs, in Cape Town and Durban, and Internet performance tracks proximity to these three cities. Across much of the country, however, Internet performance is not sufficient for high-definition video streaming or video conferencing.

Best performing geographies by latency | Worst performing geographies by latency
Siyancuma | Dr Beyers Naude
uMshwathi | Mogalakwena
City of Tshwane | Ulundi
Breede Valley | Modimolle/Mookgophong
City of Cape Town | Maluti a Phofung
Overstrand | Moqhaka
Local Municipality of Madibeng | Thulamela
Metsimaholo | Walter Sisulu
Stellenbosch | Dawid Kruiper
Ekurhuleni | Ga-Segonyana

Best performing geographies by throughput | Worst performing geographies by throughput
Siyancuma | Dr Beyers Naude
City of Cape Town | Walter Sisulu
City of Johannesburg | Lekwa-Teemane
Ekurhuleni | Dr Nkosazana Dlamini Zuma
Drakenstein | Emthanjeni
eThekwini | Dawid Kruiper
Buffalo City | Swellendam
uMhlathuze | Merafong City
City of Tshwane | Blue Crane Route
City of Matlosana | Modimolle/Mookgophong

Case Study on ISP Concentration’s Impact on Performance: Alabama, USA

One question we had as we went through a lot of this data: does ISP concentration impact Internet performance?

On one hand, there’s a case to be made that more ISP competition results in no one vendor being able to invest sufficient resources to build out a fast network. On the other hand, well, classical economics would suggest that monopolies are bad, right?

To investigate the question further, we did a deep dive into Alabama, the 24th most populous state in the US. We tracked two key metrics across its counties: Internet performance, as measured by average download speed, and ISP concentration, as measured by the largest ISP’s traffic share.

Here is the raw data:

County | Avg. Download Speed (Mbps) | Largest ISP’s Traffic Share | County | Avg. Download Speed (Mbps) | Largest ISP’s Traffic Share
Marion | 53.77 | 41% | Franklin | 32.01 | 83%
Escambia | 29.14 | 43% | Coosa | 82.15 | 83%
Etowah | 56.07 | 49% | Crenshaw | 44.49 | 84%
Jackson | 37.77 | 52% | Randolph | 21.4 | 86%
Winston | 59.25 | 56% | Lamar | 33.94 | 86%
Montgomery | 79.5 | 58% | Autauga | 65.55 | 86%
Baldwin | 49.06 | 58% | Choctaw | 23.97 | 87%
Houston | 73.73 | 61% | Butler | 29.86 | 90%
Dallas | 86.92 | 62% | Pike | 50.54 | 92%
Marshall | 59.93 | 62% | Sumter | 38.52 | 91%
Chambers | 72.05 | 63% | Pickens | 43.76 | 92%
Jefferson | 99.84 | 64% | Marengo | 42.89 | 92%
Elmore | 71.05 | 66% | Macon | 12.69 | 92%
Fayette | 41.7 | 68% | Lawrence | 62.87 | 92%
Lauderdale | 62.87 | 69% | Bullock | 23.89 | 92%
Colbert | 47.91 | 70% | Chilton | 17.13 | 95%
DeKalb | 58.55 | 70% | Wilcox | 62.12 | 93%
Morgan | 61.78 | 71% | Monroe | 20.74 | 96%
Washington | 5.14 | 72% | Dale | 55.46 | 97%
Geneva | 32.01 | 73% | Coffee | 58.18 | 97%
Lee | 78.1 | 73% | Conecuh | 34.94 | 97%
Tuscaloosa | 58.85 | 76% | Cleburne | 38.25 | 97%
Cullman | 61.03 | 77% | Clarke | 38.14 | 97%
Covington | 35.48 | 78% | Calhoun | 64.19 | 97%
Shelby | 69.66 | 79% | Lowndes | 9.91 | 98%
St. Clair | 33.05 | 79% | Russell | 49.48 | 98%
Blount | 40.58 | 80% | Henry | 4.69 | 98%
Mobile | 68.77 | 80% | Limestone | 71.6 | 98%
Walker | 39.36 | 81% | Bibb | 70.14 | 98%
Barbour | 51.48 | 82% | Cherokee | 17.13 | 99%
Tallapoosa | 60 | 82% | Greene | 4.76 | 99%
Madison | 99 | 83% | Clay | 3.42 | 100%

Across most of Alabama, we see very high ISP concentration. For the majority of counties, the largest ISP has an 80% (or higher) share of traffic, while all the other ISPs combined operate at considerably smaller scale. In only three counties (Marion, Escambia and Etowah) does no single ISP carry more than 50% of user traffic. Interestingly, Etowah is one of the better performing counties in the state, while Clay, the one county where 100% of Internet traffic sits behind a single ISP, is the worst performing.

Where it gets interesting is when you plot the data, tracking the non-dominant ISPs’ combined traffic share (which is simply 100% minus the dominant ISP’s share) against performance (as measured by average download speed), and then fitting a linear line of best fit. Here’s what you get:

[Chart: combined non-dominant ISP traffic share vs. average download speed across Alabama counties, with a linear line of best fit]

As you can see, there is a strong positive relationship between the non-dominant ISPs’ traffic share and the average download speed: as the non-dominant ISPs gain traffic share, Internet speeds tend to improve. The conclusion is clear: if you want to improve Internet performance in a region, foster competition among multiple Internet service providers.
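
For the curious, the fit itself is a one-liner. Here is a minimal sketch in Python (using NumPy) that reproduces the analysis on a handful of rows from the table above; the full analysis used every county.

import numpy as np

# (county, largest ISP's traffic share %, avg. download speed in Mbps),
# sampled from the table above.
rows = [
    ("Marion", 41, 53.77), ("Etowah", 49, 56.07), ("Jefferson", 64, 99.84),
    ("Shelby", 79, 69.66), ("Macon", 92, 12.69), ("Monroe", 96, 20.74),
    ("Henry", 98, 4.69), ("Clay", 100, 3.42),
]

# The non-dominant share is simply 100% minus the dominant ISP's share.
x = np.array([100 - share for _, share, _ in rows], dtype=float)
y = np.array([speed for _, _, speed in rows], dtype=float)

slope, intercept = np.polyfit(x, y, 1)   # linear line of best fit
r = np.corrcoef(x, y)[0, 1]              # correlation coefficient
print(f"speed ~= {slope:.2f} * non_dominant_share + {intercept:.2f}  (r = {r:.2f})")

Even on this subsample the correlation comes out strongly positive, matching the chart above.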

The Other Performance Challenge: Limited ISP Exchanges, and Tromboning

There is more to the story, however, than just concentration. Alabama, like a lot of other regions that aren’t served well by ISPs, faces another performance challenge: poor routing, also sometimes known as “tromboning”.

Consider Tuskegee in Alabama, home to a local university.

In Tuskegee, choice is limited: consumers have only a single option for high-speed broadband. And even once an off-campus student has local access to the Internet, it isn’t truly local: Tuskegee students on a different ISP than their university will likely see their traffic detour all the way through Atlanta (two hours northeast by car!) before making its way back to school.

This doesn’t happen in isolation: today, the largest ISPs only exchange traffic with other networks in a handful of cities, notably Seattle, San Jose, Los Angeles, Dallas, Chicago, Atlanta, Miami, Ashburn, and New York City.

If you’re in one of these big cities, you’re unlikely to suffer from tromboning. But if you’re not? Your Internet traffic often has to travel far away before looping back, tracing the shape of a trombone, and your Internet performance suffers. Tromboning contributes to inefficiency and drives up the cost of Internet access: an increasing amount of traffic is wastefully carried to distant cities instead of being kept local.
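
To get a feel for what tromboning costs in latency, here is a back-of-the-envelope sketch in Python. The coordinates are approximate, and 200km per millisecond (roughly two-thirds the speed of light) is the usual rule of thumb for signal propagation in fiber; real paths add routing and queueing delay on top.

import math

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

FIBER_KM_PER_MS = 200.0  # ~200,000km/s signal propagation in fiber

tuskegee = (32.42, -85.69)   # approximate coordinates
atlanta = (33.75, -84.39)
amsterdam = (52.37, 4.90)

for name, dest in (("Atlanta", atlanta), ("Amsterdam", amsterdam)):
    rtt_ms = 2 * haversine_km(tuskegee, dest) / FIBER_KM_PER_MS
    print(f"best-case round trip to {name}: {rtt_ms:5.1f}ms")

If the nearest copy of the content sits in Atlanta, physics allows a round trip of about 2ms; trombone the same request through Amsterdam and propagation alone costs roughly 70ms, before any queueing or processing.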

You can visualize how your Internet traffic flows by using tools like traceroute.

As an example, we ran tests from Alabama to Facebook using RIPE Atlas probes and unfortunately found extremes where traffic takes a highly circuitous route: going to Atlanta, then Ashburn, Paris, and Amsterdam before making its way back to Alabama. The path begins on AT&T’s network, enters Telia’s network (an IP transit provider) in Atlanta, crosses the Atlantic, reaches Facebook, and then comes back.

Traceroute to 157.240.201.35 (157.240.201.35), 48 byte packets
 1  192.168.6.1  1.435ms  0.912ms  0.636ms
 2  99.22.36.1  99-22-36-1.lightspeed.dctral.sbcglobal.net  AS7018  1.26ms  1.134ms  1.107ms
 3  99.173.216.214  AS7018  3.185ms  3.173ms  3.099ms
 4  12.122.140.70  cr84.attga.ip.att.net  AS7018  11.572ms  13.552ms  15.038ms
 5  * * *
 6  192.205.33.42  AS7018  8.695ms  9.185ms  8.703ms
 7  62.115.125.129  ash-bb2-link.ip.twelve99.net  AS1299  23.53ms  22.738ms  23.012ms
 8  62.115.112.243  prs-bb1-link.ip.twelve99.net  AS1299  115.516ms  115.52ms  115.211ms
 9  62.115.134.96  adm-bb3-link.ip.twelve99.net  AS1299  113.487ms  113.405ms  113.25ms
10  62.115.136.195  adm-b1-link.ip.twelve99.net  AS1299  115.443ms  115.703ms  115.45ms
11  62.115.148.231  facebook-ic331939-adm-b1.ip.twelve99-cust.net  AS1299  134.149ms  113.885ms  114.246ms
12  129.134.51.84  po151.asw02.ams2.tfbnw.net  AS32934  113.27ms  113.078ms  113.149ms
13  129.134.48.101  po226.psw04.ams4.tfbnw.net  AS32934  114.529ms  114.439ms  117.257ms
14  157.240.38.227  AS32934  113.281ms  113.365ms  113.448ms
15  157.240.201.35  edge-star-mini-shv-01-ams4.facebook.com  AS32934  115.013ms  115.223ms  115.112ms
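
As a toy illustration, here is a minimal Python sketch that spots this trombone by scanning the trace for city hints that operators often embed in router hostnames. The hint table is an assumption inferred from this one trace (e.g. "attga" for AT&T’s Atlanta router, and Telia’s "ash"/"prs"/"adm" for Ashburn, Paris, and Amsterdam); a real analysis would use proper IP geolocation.

CITY_HINTS = {"attga": "Atlanta", "ash": "Ashburn",
              "prs": "Paris", "adm": "Amsterdam", "ams": "Amsterdam"}

def cities_on_path(traceroute_output: str) -> list:
    """Ordered, de-duplicated city hints found in router hostnames."""
    path = []
    for token in traceroute_output.split():
        if "." not in token:          # keep only hostname/IP-like tokens
            continue
        for label in token.split("."):
            for hint, city in CITY_HINTS.items():
                if label.startswith(hint) and (not path or path[-1] != city):
                    path.append(city)
    return path

# Feeding in the trace above prints: Atlanta -> Ashburn -> Paris -> Amsterdam
# print(" -> ".join(cities_on_path(open("trace.txt").read())))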

The intent here isn’t to shame AT&T, Telia, or Facebook — nor is this challenge unique to them. Facebook’s content is undoubtedly cached in Atlanta and the request from Alabama should go no further than that. While many possible conditions within and between these three networks could have caused this tromboning, in the end, the consumer suffers.

The solution? Have more major ISPs exchange traffic in more cities and with more networks. Of course, there’d be an upfront cost involved in doing so, even if it would reduce costs over the long run.

Conclusion

As William Gibson famously observed: the future is already here; it’s just not evenly distributed.

One of the clearest takeaways from the data and analysis presented here is that Internet access varies tremendously across geographies. But it’s not just a case of the developed world vs the developing, or even rural vs urban. There are underserved urban communities and regions of the developed world that do not score as highly as you might expect.

Furthermore, our case study of Alabama shows that the structure of the ISP market is incredibly important to promoting performance. We found a strong positive correlation between more competition and faster performance. Similarly, there’s a lot of opportunity for more networks to interconnect in more places, to avoid bad routing.

Finally, if we want to get the other 40% of the world online, we are going to need more initiatives that drive up access and drive down cost. There’s plenty of scope to help, and we’re excited to be launching Project Pangea to do our part.

Why I joined Cloudflare — and why I’m excited about Project Pangea

Post Syndicated from Roderick Fanou original https://blog.cloudflare.com/why-i-joined-cloudflare-and-why-im-excited-about-project-pangea/

If you are well-prepared to take up the challenge, you will get to experience a moment where you are stepping forward to help build a better world. Personally, I felt exactly that when, about a month ago, after a long and COVID-complicated visa process, I joined Cloudflare as a Systems Engineer in Austin, Texas.

In the early 2000s, while travelling throughout the Benin Republic (my home country) and West Africa more generally, I experienced first-hand how challenging accessing the Internet was. I recall that, as students, we often connected to the web from cybercafés through limited bandwidth purchased at high cost. It was a luxury to have a broadband connection at home. When access was free (say, from high school premises or at university) we still had bandwidth constraints, and often we could not stay connected for long. The Internet can efficiently help tackle issues encountered by populations in such regions (in areas like education, health, communications, …), but the lack of easy and affordable access made it difficult to leverage. It is in this context that I chose to pursue my studies in telecoms, with the hope of somehow giving back to the community by helping improve Internet access in the region.

My internship at Euphorbia Sarl, a local ISP, introduced me to the process of designing, finding, and deploying suitable technologies to satisfy the interconnection needs of the region. More than that, it showed me first-hand the day-to-day challenges encountered by network operators in Africa. It highlighted the need for more research on the Internet in developing regions, most notably measurement studies, to identify the root causes of the lack of connectivity in the (West) African region.

It was with this experience that I pursued my doctoral studies at IMDEA Networks Institute and UC3M (Spain), collaborating with stakeholders and researchers to investigate the characteristics and routing dynamics of the Internet in Africa, and then my postdoc at CAIDA/UCSD (US), looking at the occurrence of network congestion worldwide and the impact of the SACS cable deployment between Angola and Brazil on Internet routing. While studying the network in those underserved and geographically large regions, we noticed that much of the web content was still served from the US and Europe. We also identified a lack of physical infrastructure and interconnection between local and global networks, alongside a lack of local content, as the root causes of packet tromboning, high transit costs, and the persistently poor quality of service delivered to users in the region.

Of course, local communities, network operators, stakeholders, and Internet bodies such as the Internet Society or Packet Clearing House have been working towards bridging this gap. But there is still much room for improvement. I believe this (hopefully soon) post-pandemic era — where more and more activities are shifting online — represents the best opportunity to solve this persistent issue. COVID has forced us to reflect, and one of the critical questions I asked myself was: after so many years of research, how can I — like a frontline doctor or nurse in the pandemic — actively and effectively help mitigate these connectivity issues, creating a better Internet for everyone, notably for those in underserved areas? The answer for me was to switch out of academia into tech. But which company?

As I progressed through the interview process with Cloudflare, it soon became clear that this was the answer to my question above. I discovered that Cloudflare’s values and mission were very much aligned with my own. I also loved the culture: how welcoming and diverse the team is, and how attentive and approachable the C-level is. I was impressed by the network footprint, notably its spread across every Internet region, especially the growing number of data centers in Latin America and Africa. I had to travel back to West Africa during my visa process, and my experience there only reinforced what I already knew: we need more local content in developing regions, we need more support for local communities, and we need to better enable developing regions.

Fast-forward to my start date: I was pleased to find out that Cloudflare frequently organizes innovation weeks — like Birthday Week — during which the company gives back to the community. There have been several noteworthy initiatives, including Project Fair Shot, which enables communities to vaccinate fairly, and Project Galileo, which protects at-risk public interest groups.

But what has me truly excited is Project Pangea, which launches today as part of Impact Week. Project Pangea helps improve security and connectivity for community networks at no cost. Cloudflare’s network spans 200+ cities worldwide and has one of the largest numbers of interconnections and peers of any network. It also delivers a state-of-the-art DNS service designed with privacy in mind, and an intelligent routing system that constantly learns the best, least congested Internet routes to and from any region in the world. My research on Internet performance in developing regions makes me believe that community networks — and their end users — will benefit tremendously from such a partnership. It is so exciting to be part of this journey, which is why I am sharing my excitement through this post.

I would like to conclude by making an appeal to all stakeholders in developing regions, including network operators and bodies such as the ISOC and the RIRs. Please do not hesitate to enquire about Project Pangea. I truly believe that Cloudflare will be a tremendous partner to you, and that your network — and your community — will benefit from working with them.

Announcing Project Pangea: Helping Underserved Communities Expand Access to the Internet For Free

Post Syndicated from Marwan Fayed original https://blog.cloudflare.com/pangea/

Half of the world’s population has no access to the Internet, with many more limited to poor, expensive, and unreliable connectivity. This problem persists despite large levels of public investment, private infrastructure, and effort by local organizers.

Today, Cloudflare is excited to announce Project Pangea: a piece of the puzzle to help solve this problem. We’re launching a program that provides secure, performant, reliable access to the Internet for community networks that support underserved communities, and we’re doing it for free[1] because we want to help build an Internet for everyone.

What is Cloudflare doing to help?

Project Pangea is Cloudflare’s project to help bring underserved communities secure connectivity to the Internet through Cloudflare’s global and interconnected network.

Cloudflare is offering our suite of network services — Cloudflare Network Interconnect, Magic Transit, and Magic Firewall — for free to nonprofit community networks, local networks, or other networks primarily focused on providing Internet access to local underserved or developing areas. This service dramatically reduces the cost for communities to connect to the Internet, with industry-leading security and performance functions built in:

  • Cloudflare Network Interconnect provides access to Cloudflare’s edge in 200+ cities across the globe through physical and virtual connectivity options.
  • Magic Transit acts as a conduit to and from the broader Internet and protects community networks by mitigating DDoS attacks within seconds at the edge.
  • Magic Firewall gives community networks access to a network-layer firewall as a service, providing further protection from malicious traffic.

We’ve learned from working with customers that pure connectivity is not enough to keep a network sustainably connected to the Internet. Malicious traffic, such as DDoS attacks, can target a network and saturate Internet service links, which can lead to providers aggressively rate limiting or even entirely shutting down incoming traffic until the attack subsides. This is why we’re including our security services in addition to connectivity as part of Project Pangea: no attacker should be able to keep communities closed off from accessing the Internet.

What is a community network?

Community networks have existed almost as long as commercial Internet subscribership, which began with dial-up service. The Internet Society, or ISOC, describes community networks as happening “when people come together to build and maintain the necessary infrastructure for Internet connection.”

Most often, community networks emerge from need, in response to the lack or absence of available Internet connectivity. They consistently demonstrate success where public and private-sector initiatives have either failed or under-delivered. We’re not talking about stop-gap solutions here, either — community networks around the world have been providing reliable, sustainable, high-quality connections for years.

Many will operate only within their communities, but many others can grow, and have grown, to regional or national scale. The most common models of governance and operation are as not-for-profits or cooperatives, models that ensure reinvestment within the communities being served. For example, we see networks that reinvest their proceeds to replace Wi-Fi infrastructure with fibre-to-the-home.

Cloudflare celebrates these networks’ successes, and also the diversity of the communities that these networks represent. In that spirit, we’d like to dispel some myths we encountered during the launch of this program — many of which we ourselves wrongly assumed or believed to be true — because these myths turn out to be barriers that communities are so often forced to overcome. Community networks are built on knowledge sharing, and so we’re sharing some of that knowledge, so others can help accelerate community projects and policies, rather than rely on the assumptions that impede progress.

Myth #1: Only very rural or remote regions are underserved and in need. It’s true that remote regions are underserved. It is also true that underserved regions exist within 10 km (about six miles) of large city centers, and even within the largest cities themselves, as evidenced by the existence of some of our launch partners.

Myth #2: Remote, rural, or underserved is also low-income. This might just be the biggest myth of all. Rural and remote populations are often thriving communities that can afford service, but have no access. Urban community networks, in contrast, often emerge for egalitarian reasons: the access that is available is unaffordable to many.

Myth #3: Service is necessarily more expensive. This myth is sometimes expressed by statements such as, “if large service providers can’t offer affordable access, then no one can.”  More than a myth, this is a lie. Community networks (including our launch partners) use novel governance and cost models to ensure that subscribers pay rates similar to the wider market.

Myth #4: Technical expertise is a hard requirement and is unavailable. There is a rich body of evidence and examples showing that, with small amounts of training and support, communities can build their own local networks cheaply and reliably with commodity hardware and non-specialist equipment.

These myths aside, there is one truth: the path to sustainability is hard. The start and initial growth of community networks often consists of volunteer time or grant funding, which are difficult to sustain in the long-term. Eventually the starting models need to transition to models of “willing to charge and willing to pay” — Project Pangea is designed to help fill this gap.

What is the problem?

Communities around the world can and have put up Wi-Fi antennas and laid their own fibre. Even so, and however well-connected the community is to itself, Internet services are prohibitively expensive — if they can be found at all.

Two elements are required to connect to the Internet, and each incurs its own cost:

  • Backhaul connections to an interconnection point — the connection point may be anything from a local cabinet to a large Internet exchange point (IXP).
  • Internet Services are provided by a network that interfaces with the wider Internet and agrees to route traffic to and from the Internet on behalf of the community network.

These are distinct elements. Backhaul service carries data packets along a physical link (a fibre cable or wireless medium). Internet service is separate and may be provided over that link, or at its endpoint.

The cost of Internet service for networks is both dominant and variable (with usage), so in most cases it is cheaper to purchase both as a bundle from service providers that also own or operate their own physical network. Telecommunications and energy companies are prime examples.

However, the operating costs and complexity of long-distance backhaul are significantly lower than the costs of Internet service. If reliable, high-capacity service were affordable, then community networks could extend their knowledge and governance models sustainably to also provide their own backhaul.

For all that community networks can build, establish, and operate, the one element entirely outside their control is the cost of Internet service — a problem that Project Pangea helps to solve.

Why does the problem persist?

On this subject, I — Marwan — can only share insights drawn from prior experience as a computer science professor, and a co-founder of HUBS c.i.c., launched with talented professors and a network engineer. HUBS is a not-for-profit backhaul and Internet provider in Scotland. It is a cooperative of more than a dozen community networks — some serving communities with no roads in or out — across thousands of square kilometers along Scotland’s West Coast and Borders regions. As is true of many community networks, not least some of Pangea’s launch partners, HUBS is award-winning and engages in advocacy and policy work.

During that time my co-founders and I engaged with research funders, economic development agencies, three levels of government, and so many communities that I lost track. After all that, the answer to the question is still far from clear. There are, however, noteworthy observations and experiences that stood out, and often came from surprising places:

  • Cables on the ground get chewed by animals that, small or large, might never be seen.
  • Burying power and Ethernet cables, even 15 centimeters below soil, makes no difference because (we think) animals are drawn by the electrical current.
  • Property owners sometimes need convincing that giving up 8 to 10 square meters for a small tower, in exchange for free Internet and community benefit, is a good thing.
  • The raising of small towers, even ones no one will see, is sometimes blocked by legislation or regulation that assumes private non-residential structures can only be a shed, or never taller than a shed.
  • Private fibre backbones installed with public funds are often inaccessible, or are priced by distance, even though the cost to light 100 meters of fibre is identical to the cost of lighting 1km of fibre.
  • Civil service agencies may be enthusiastic, but are also cautious, even in the face of evidence. Be patient, suffer frustration, be more patient, and repeat. Success is possible.
  • If and where possible, it’s best to avoid attempts to deliver service where national telecommunications companies have plans to do so.
  • Never underestimate tidal fading — twice a day, wireless signals over water will be amazing, and will completely disappear. We should have known!

All anecdotes aside, the best policies and practices are non-trivial — but because of so many prior community efforts, and organizations such as ISOC, the APC, the A4AI, and more, the challenges and solutions are better understood than ever before.

How does a community network reach the Internet?

First, we’d like to honor the many organisations we’ve learned from who might say that there are no technical barriers to success. Connections within the community networks may be shaped by geographical features or regional regulations. For example, wireless lines of sight between antenna towers on personal property are guided by hills or restricted by regulations. Similarly, Ethernet cables and fibre deployments are guided by property ownership, digging rights, and the presence or migration of grazing animals that dig into soil and gnaw at cables — yes, they do, even small rabbits.

Once the community establishes its own area network, the connections out to Internet services are more conventional and familiar. In part, the choice is influenced or determined by proximity to Internet exchanges, PoPs, or regional fibre cabinet installations. The ways community networks connect fall into three broad categories.

Colocation. A community network may be fortunate enough to have service coverage that overlaps with, or is near to, an Internet eXchange Point (IXP), as shown in the figure below. In this case a natural choice is to colocate a router within the exchange, near the Internet service provider’s router (labeled as Cloudflare in the figure). Our launch partner NYC Mesh connects in this manner. Unfortunately, because exchanges are most often located in urban settings, colocation is unavailable to many, if not most, community networks.

[Figure: a community network colocated with the Internet service router at an Internet exchange point]

Conventional point-to-point backhaul. Community networks that are remote must establish a point-to-point backhaul connection to the Internet exchange. This connection method is shown in the figure below in which the community network in the previous figure has moved to the left, and is joined by a physical long-distance link to the Internet service router that remains in the exchange on the right.

[Figure: a remote community network joined to the Internet exchange by a long-distance point-to-point backhaul link]

Point-to-point backhaul is familiar. If the infrastructure is available — and this is a big ‘if’ — then backhaul is most often available from a utility company, such as a telecommunications or energy provider, that may also bundle Internet service as a way to reduce total costs. Even bundled, the total cost is variable and unaffordable to individual community networks, and is exacerbated by distance. Some community networks have succeeded in acquiring backhaul through university, research and education, or publicly-funded networks that are compelled or convinced to offer the service in the public interest. On the west coast of Scotland, for example, Tegola launched with service from the University of Highlands and Islands and the University of Edinburgh.

Start a backhaul cooperative for point-to-point and colocation. The last connection option we see among our launch partners overcomes the prohibitive costs by forming a cooperative network in which the individual subscriber community networks are also members. The cooperative model can be seen in the figure below. The exchange remains on the right. On the left the community network in the previous figure is now replaced by a collection of community networks that may optionally connect with each other (for example, to establish reliable routing if any link fails). Either directly or indirectly via other community networks, each of these community networks has a connection to a remote router at the near-end of the point-to-point connection. Crucially, the point-to-point backhaul service — as well as the co-located end-points — are owned and operated by the cooperative. In this manner, an otherwise expensive backhaul service is made affordable by being a shared cost.

[Figure: a cooperative of interconnected community networks sharing point-to-point backhaul to the exchange]

Two of our launch partners, Guifi.net and HUBS c.i.c., are organised this way and their 10+ years in operation demonstrate both success and sustainability. Since the backhaul provider is a cooperative, the community network members have a say in the ways that revenue is saved, spent, and — best of all — reinvested back into the service and infrastructure.

Why is Cloudflare doing this?

Cloudflare’s mission is to help build a better Internet, for everyone, not just those with privileged access based on their geographical location. Project Pangea aligns with this mission by extending the Internet we’re helping to build — a faster, more reliable, more secure Internet — to otherwise underserved communities.

How can my community network get involved?

Check out our landing page to learn more and apply for Project Pangea today.

The ‘community’ in Cloudflare

Lastly, in a blog post about community networks, we feel it is appropriate to acknowledge the ‘community’ at Cloudflare: Project Pangea is the culmination of multiple projects, and multiple peoples’ hours, effort, dedication, and community spirit. Many, many thanks to all.
______

[1] For eligible networks, free up to 5Gbps at p95 levels.

Introducing Flarability, Cloudflare’s Accessibility Employee Resource Group

Post Syndicated from Janae Frischer original https://blog.cloudflare.com/introducing-flarability-cloudflares-accessibility-employee-resource-group/

Hello, folks! I’m pleased to introduce myself and Cloudflare’s newest Employee Resource Group (ERG), Flarability, to the world. The 31st anniversary of the signing of the Americans with Disabilities Act (ADA), which happens to fall during Cloudflare’s Impact Week, is an ideal time to raise the subject of accessibility at Cloudflare and around the world.

There are multiple accessibility-related projects and programs at Cloudflare, including office space accessibility and website and product accessibility programs, some of which we will highlight in the stories below. I wanted to share my accessibility story, and the story of the birth and growth of our accessibility community, with you.

About Flarability

Flarability began with a conversation between a couple of colleagues, almost two years ago. Some of us had noticed things about the workspace that weren’t as inclusive of people with disabilities as they could have been. For example, the open floor plan in our San Francisco office, as well as the positioning of our interview rooms, made it difficult for some to concentrate in the space. To kick off a community discussion, we formed a chat room, spread the word about our existence, and started hosting meetings for interested employees and our allies. Before long, we were talking about what to name our group, what our mission should be, and what kind of logo would best represent us.

Our Mission: We curate and share resources about disabilities, provide a community space for those with disabilities and our allies to find support and thrive, and encourage and guide Cloudflare’s accessibility programs.

An example of how we have worked with the company was a recent Places Team consultation. As we redevelop our offices and workspaces for a return to what we are calling “back to better”, our Places Team wanted to be sure the way we design our future offices is as inclusive and accessible as possible. You may read more about how we have partnered with the Places Team in Nicole’s story below.

About the Disability Community

There is a lot of diversity amongst disabled people, as there are many types of physical or mental impairments. Flarability includes employees with many of them. Some of us have developmental or mental health conditions such as autism and depression. Some of us have physical disabilities such as deafness and blindness. Several of us are not “out” about our disabilities, and that’s definitely okay. The idea of our community is to provide a space where people feel they can express themselves and feel comfortable. Historically, people with disabilities have been marginalized, even institutionalized. These days, there is much more awareness about and acceptance of disabilities, but there is a lot more work to be done. We are honored to take a central role in that work at Cloudflare.

Stories from Flarability

I am not the only person with a disability at Cloudflare or who works to make Cloudflare more accessible to those with disabilities. We are proud to have many people with disabilities working at our company and I wanted to enable some key individuals with disabilities and supportive team members to share their experiences and stories.

What does accessibility mean to you?

Watson: “Accessibility means integration, having the same opportunities as everyone else to participate in society. My disability was seen as shameful and limiting, and it was only a few years before I started elementary school that New Jersey integrated children with disabilities into the classroom, ensuring that they received an adequate education. Growing up I was taught to hide who I was, and it’s thanks to the self-advocacy that I am now proudly autistic.”

Do you have a story to share about how workplace accessibility initiatives have impacted you?

Nicole: “Workplace accessibility is one of the top priorities of Cloudflare’s Places Team while we design and build our future office spaces. Feedback from our teammates in all our offices has always been a collaborative experience at Cloudflare. In previous years when opening a new office, the Places Team would crowdsource feedback from the company to adjust, or repair office features. Today, the Places Team involves a sync with Flarability leaders in the office design/construction process to discuss feedback and requests from coworkers with accessibility needs.

We also have an ergonomics and work accommodations program to ensure each of our teammates is sorted with workplace equipment that fits their individual needs.

Lastly, we want to provide multiple outlets for our teams to advocate for change. The Places Team hosts an internal anonymous feedback form, available to any teammate who feels comfortable submitting requests in a safe space.”

Why is accessibility advocacy important?

Janae: “Accessibility is important in the workplace. However, when people are not advocating for themselves, accessibility initiatives might not be leveraged to their fullest extent. When you don’t communicate what is holding you back from being more productive, you are doing a disservice to the company, but most importantly you. Perhaps you work more efficiently with fewer distractions, yet your boss has assigned you a desk that is right next to a noisy area of the office. What would happen if you asked them for a different workspace? For example, I am hard of hearing. As an outsider, you may not notice, as I appear to be able to carry on a verbal, face-to-face conversation with ease. In reality, I am lip reading, attempting to filter ambient noise, and watching others’ body/facial movements to fully understand what is going on. I work best when in quieter, less distracting environments. However, I am able to work in loud, distracting environments, too; I am just not able to perform at my best in this kind of environment.

Lastly, I’d like to highlight that one day I was casually chatting with a co-worker about my struggles and a company co-founder overheard me. They offered to support me in any and all ways possible. The noisy, distracting office space I had was changed to a workspace in a corner, where less foot traffic and cross conversations happened. This simple adjustment and small deed that our co-founder acted on inspired me to help start Flarability. I want all employees to feel they can advocate for themselves and if they are not comfortable enough to do so, then to know that there are people who are willing and able to help them.”

What’s next for our group?

We are looking forward to growing our Flarability membership, globally. We have already come a long way in our brief history, but we have many more employees to reach and support, company initiatives to advise, and future employees to recruit.

Thank you for reading our personal stories and the story of Flarability. I encourage all of you who are reading this to do some more reading about accessibility and find at least one way to support people with disabilities in your own community.

We would also love to connect with accessibility ERG leaders from other companies. If you’re reading this and are interested in collaborating, please hit me up at [email protected].

Welcome to Cloudflare Impact Week

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/welcome-to-cloudflare-impact-week/

If I’m completely honest, Cloudflare didn’t start out as a mission-driven company. When Lee, Michelle, and I first started thinking about starting a company in 2009 we saw an opportunity as the world was shifting from on-premise hardware and software to services in the cloud. It seemed inevitable to us that the same shift would come to security, performance, and reliability services. And, getting ahead of that trend, we could build a great business.

[Photo: Matthew Prince, Michelle Zatlyn, and Lee Holloway, Cloudflare’s cofounders, in 2009.]

One problem we had: we knew that in order to have a great business, we needed to win large organizations with big IT budgets as customers. And, in order to do that, we needed to have the data to build a service that would keep them safe. But we could only get data on security threats once we had customers. So we had a chicken-and-egg problem.

Our solution was to provide a basic version of Cloudflare’s services for free. We reasoned that individual developers and small businesses would sign up for the free service. We’d learn a lot about security threats and performance and reliability opportunities based on their traffic data. And, from that, we would build a service we could sell to large businesses.

And, generally, Cloudflare’s business model made sense. We found that, for the most part, small companies got a low volume of cyber attacks, and so we could charge them a relatively small amount. Large businesses faced more attacks, so we could charge them more.

But what surprised us, and we only discovered because we were providing a free version of our service, was that there was a certain set of small organizations with very limited resources that received very large attacks. Servicing them was what made Cloudflare the mission-driven company we are today.

The Committee to Protect Journalists

If you ever want to be depressed, sign up for the newsletter of the Committee to Protect Journalists (CPJ). They’re the organization that, when a journalist is kidnapped or killed anywhere in the world, negotiates their release or, far too often, recovers their body.

I’d met the director of the organization at an event in early 2012. Not long after, he called me and asked if I wanted to meet three Cloudflare customers who were in town. I didn’t, I have to confess, but Michelle pushed me to take the meeting.

On a rainy San Francisco afternoon the director of CPJ brought three African journalists to our office. All three of them hugged me. One was from Ethiopia, another was from Angola, and the third they wouldn’t tell us his name or where he was from because he was “currently being hunted by death squads.”

For the next 90 minutes, I listened to stories of how the journalists were covering corruption in their home countries, how their work put them constantly in harm’s way, how powerful forces worked to silence them, how cyberattacks had been a constant struggle, and how, today, they depended on Cloudflare’s free service to keep their work online. That last bit hit me like a ton of bricks.

After our meeting finished, and we saw the journalists out, with Cloudflare T-shirts and other swag in hand, I turned to Michelle and said, “Whoa. What have we gotten ourselves into?”

Becoming Mission Driven

I’ve thought about that meeting often since. It was the moment I realized that Cloudflare had a mission beyond just being a good business. The Internet was a critically important resource for those three journalists and many others like them. At the same time, forces that sought to limit their work would use cyberattacks to shut them down. While we hadn’t set out to ensure everyone had world-class cybersecurity, regardless of their ability to pay, now it seemed critically important.

With that realization, Cloudflare’s mission came naturally: we aim to help build a better Internet. One where you don’t need to be a giant company to be fast and reliable. And where even a journalist, working on their own against daunting odds, can be secure online.

This is why we’ve prioritized projects that give back to the Internet. We launched Project Galileo, which provides our enterprise-grade services to organizations performing politically or artistically important work. We launched the Athenian Project to help protect elections against cyber attacks. We launched Project Fair Shot to make sure the organizations distributing the COVID-19 vaccine had the technical resources they needed to do so equitably.

And, even on the technical side, we work hard to make the Internet better even when there’s no clear economic benefit to us, or even when it’s against our economic benefit. We don’t monetize user data because it seems clear to us that a better Internet is a more private Internet. We enabled encryption for everyone even though, when we did it, it was the biggest differentiator between our free and paid plans and the number one reason people upgraded. But clearly a better Internet was an encrypted Internet, and it seemed silly that someone should have to pay extra for a little bit of math.

Our First Impact Week

This week we kick off Cloudflare’s first Impact Week. We originally conceived the idea of the week as a way to highlight some of the things we were doing as a company around our environmental, social, and governance (ESG) initiatives. But, as is the nature of innovation weeks at Cloudflare, as soon as we announced it internally our team started proposing new products and features to take some of our existing initiatives even further.

So, over the course of the week, in addition to talking about how we’ve built our network to consume less power, we’ll also be demonstrating how we’re increasingly using hyper power-efficient Arm-based servers to achieve even higher levels of efficiency in order to lessen the environmental impact of running the Internet. We’ll launch a new Workers option for developers who want to be more environmentally conscious. And we’ll announce an initiative in partnership with other leading Internet companies that we hope, if broadly adopted, could cut down as much as 25% of global web traffic and the corresponding energy wasted to serve it.

We’ll also focus on how we can bring the Internet to more people. While broadband has been a revolution where it’s available, rural and underserved-urban communities around the world still suffer from slow Internet speeds and limited ISP choice. We can’t completely solve that problem (yet) but we’ll be announcing an initiative that will help with some critical aspects.

Finally, as Cloudflare becomes a larger part of the Internet, we’ll be announcing programs to monitor the network’s health, affirm our commitments to human rights, and extend our protection of critical societal functions like elections.

When I first was trying to convince Michelle that we should start a business together, I pitched her a bunch of ideas. Most of them involved finding a clever way to extract rents from some group or another, often for not much benefit to society at large. Sitting in an Ethiopian restaurant in Central Square, I remember so clearly her saying to me, “Matthew, those are all great business ideas. But they’re not for me. I want to do something where I can be proud of the work we’re doing and the positive impact we’ve made.”

That sentence made me go back to the drawing board. The next business idea I pitched to her turned out to be Cloudflare. Today, Cloudflare’s mission remains helping build a better Internet. And, as we kick off Impact Week, we are proud to continue to live that mission in everything we do.