Tag Archives: Careers

Through the eyes of a Cloudflare Technical Support Engineer

Post Syndicated from Justina Wong original https://blog.cloudflare.com/through-the-eyes-tech-support-engineer/

This post originally appeared on Landing Jobs, where you can find open positions at Cloudflare Lisbon, under the title Mission: Protect the Internet.

Justina Wong, Technical Support Team Lead in Lisbon, talks about what it’s like working at Cloudflare, and everything you need to know if you want to join us.

Justina joined Cloudflare about three years ago in London as a Technical Support Engineer. Currently, she’s part of their Customer Support team working in Lisbon as a team lead.

I can’t speak for others, but I love the things you can learn from others. There are so many talented individuals who are willing and ready to teach and share. They are my inspiration and I want to become like them!

On a Mission to Protect the Internet

Justina’s favourite Cloudflare products are firewall-related ones. The company’s primary concern is its customers, and it wants to make attack mitigation as easy as possible. As she puts it, “the fact that these protections are on multiple layers, like L7, L3/4, is very important, and I’m proud to be someone who can help our customers when they face certain attacks.”

Cloudflare is constantly releasing new products to help build a better Internet, so product managers are always on top of tool updates to facilitate that. The company believes that it’s not only important to help customers from the product side, but it’s also as important to teach them how to help themselves so that they can fix their issues promptly without having to wait for an answer.

Company culture and Office vibes

According to Justina, one of the amazing things about Cloudflare is the unified company culture. As their SVP of Engineering, Usman, said in a recent meeting with the team, “Be helpful, look around for problems and help find solutions”.

Every Cloudflare office has its own little “flare”: London’s love of mince pies; Singapore’s super fun cultural richness in one location (they have four new years in one year, officially); and Lisbon’s forever love (and fight) for pastéis de nata.

Each office also has its own function or focus, so people working at Cloudflare get to meet very diverse individuals. For Justina, the things she has loved most are learning from all of the engineers in London, picking up new customer service skills in Singapore, and helping to build the new Lisbon office. She says that every time she visits a different office, it has grown at least 50% in headcount compared to when she was last there. Talk about growth!

As a hiring manager, she also says that the company is mindful of diversity.

Working remote

Like everywhere else, remote work has become the current normal at Cloudflare. As someone who enjoyed being in the office, Justina says “all the countless times I just walked over to someone to ask a question, now all turned into a chat message; or the random coffee chat when we waited for our coffee to be done.”

Funnily enough, the EMEA Customer Support (CSUP) team is working more closely together than before the pandemic. Previously, each office was somewhat in its own communication bubble; now it has turned into a collective conversation. This is great for getting to know colleagues during and beyond work hours.

What you need to know if you want to land a job at Cloudflare in Lisbon

For Cloudflare, growing the team is a continuous challenge, and Justina has never needed to do as many interviews as she has done in the Lisbon office. Although it’s a huge challenge for her, it’s also fun. Since the company is hiring aggressively despite the pandemic, their teams are eager to welcome anyone who’s ready to be part of Cloudflare Lisbon.

One of the things you can expect if you work at Cloudflare is for your manager to care and for your feedback to be heard. We know these are valuable things when considering where to work. So if you’re someone who’s willing to learn and is excited about Cloudflare’s technologies, this call is for you. The company is expanding in different markets, so they’re looking for tech candidates who can speak multiple languages.

Currently, Cloudflare has over 25 open positions for their offices in Lisbon. Categories include Security Engineers, Full-Stack Developers, Data Scientists, and more.

Starting a new job in the middle of a pandemic

Post Syndicated from Daniela Rodrigues original https://blog.cloudflare.com/starting-a-new-job-in-the-middle-of-a-pandemic/

It has now been more than 90 days since I joined Cloudflare’s EMEA Recruiting Team as a Recruiting Coordinator based in Lisbon. In a year filled with hardships for so many people around the world, I wanted to share my journey. I hope people will relate and feel encouraged to pursue their dreams, even during these challenging times.

When 2020 started, it was not in my plans to change jobs and start working at a new company, completely remote, without ever meeting my colleagues in person or visiting the office. However, that is exactly what happened, and I am so glad I did.

Interviewing with Cloudflare

The number of interviews in the hiring process at Cloudflare may feel overwhelming for some – in my case, I met 11 people during this process. For me, I was glad to have so many chances to get to know the people I would be working with. I believe I got as much out of the conversations as the interviewers did, which is great — a recruitment process should be as much about the company getting to know you, as you getting to know the company.

A great thing about interviewing remotely is that I got the chance to talk to people all around the globe, which enriched the process and my idea of Cloudflare as a company. I started to picture myself as an actual member of the team, definitely interested in working towards a better and safer Internet. Even though there were many interviews to get through, the constant communication with the team made me feel engaged and excited. In the end, the process went by quickly, even quicker than I expected.

The best thing was the outpouring of support I received from what would be my future teammates once I accepted the offer. I felt welcomed way before my actual start date!

Remote Onboarding: Adapting and Evolving

In all my previous companies, onboarding was done in person and in small groups. I was not prepared for a fully remote experience with a class of more than 20 people, yet it was so smooth and well-coordinated that you wouldn’t believe it had been run virtually for only a few months!

My onboarding class included people from all over the world — Lisbon, Austin, Miami, Washington, London, Munich, Singapore… And not only that, but we were all starting different roles, from Customer Success to Engineering, and even Legal Counsel! This gave me the opportunity to get to know people I otherwise wouldn’t have had the chance to meet, and it allowed me to establish bonds early on with my colleagues. Given the current situation, knowing that people were in the same boat with me felt reassuring. I felt that we were in it together, in a way. Not only that, but I got everything I needed for work (and more — like a pair of Cloudflare socks!) delivered to my home, making the whole experience very comfortable for me.

Ramping up and aiming for the stars

Starting in a new role can be a daunting experience — it’s a new environment, a new team, a new project, and lots of things that could go sideways. However, there are also a lot of things that can go right!

At Cloudflare, I found an extremely welcoming, supportive team that helped me ramp up and take ownership of my work quickly and effectively. I felt so supported that I took ownership of a big project right away — Cloudflare Careers Day. Right from the start, it was clear to me that Cloudflare has ambitious goals for the growth of our Lisbon office. I thought about the ways I could help with that, and a virtual careers day seemed like a great first step to drive brand awareness and let people know who we are and that we are hiring! The Recruitment Team set in motion a plan to turn this idea into reality in less than three months, resulting in a successful and fun first edition of the Cloudflare Careers Day in November 2020.

Of course, there were times when I felt unsure of myself and my abilities. But this is why it is so important to be able to rely on your team. In the end, I feel I have grown a lot in just three months — not only professionally, but personally as well!

I look forward to working on more projects. I’m excited to share this blog post, which I hope will inspire more people to take a chance, believe in themselves and just go for it! Even in these strange, stressful times, good things can and do happen, especially when you are surrounded by talented, inspiring people.

What does the future hold?

Lisbon! I am excited to help grow our Lisbon office, recruiting talented people who feel as strongly as I do about helping build a better Internet. We have many different open roles at the moment, so if you see one that suits you, take a chance and reach out. Maybe you’ll embark on a new journey, just like me.

Our Lisbon story is just beginning. I can’t wait to see all the amazing things we will accomplish in 2021, both as a team and as a company.

A Thanksgiving 2020 Reading List

Post Syndicated from Val Vesa original https://blog.cloudflare.com/a-thanksgiving-2020-reading-list/

While our colleagues in the US are celebrating Thanksgiving this week and taking a long weekend off, there is a lot going on at Cloudflare. The EMEA team is having a full day on CloudflareTV with a series of live shows celebrating #CloudflareCareersDay.

So if you want to relax this weekend while still learning something, here are some of the topics we’ve covered on the Cloudflare blog this past week that you may find interesting.

Improving Performance and Search Rankings with Cloudflare for Fun and Profit

Making things fast is one of the things we do at Cloudflare. More responsive websites, apps, APIs, and networks directly translate into improved conversion and user experience. On November 10, Google announced that Google Search will directly take web performance and page experience data into account when ranking results on their search engine results pages (SERPs), beginning in May 2021.

Rustam Lalkaka and Rita Kozlov explain in this blog post how Google Search will prioritize results based on how pages score on Core Web Vitals, a measurement methodology Cloudflare has worked closely with Google to establish, and we have implemented support for in our analytics tools. Read the full blog post.

Getting to the Core: Benchmarking Cloudflare’s Latest Server Hardware

At the Cloudflare Core, we process logs to analyze attacks and compute analytics. In 2020, our Core servers were in need of a refresh, so we decided to redesign the hardware to be more in line with our Gen X edge servers. We designed two major server variants for the core. The first is Core Compute 2020, an AMD-based server for analytics and general-purpose compute paired with solid-state storage drives. The second is Core Storage 2020, an Intel-based server with twelve spinning disks to run database workloads. This refresh of the hardware that Cloudflare uses to run analytics provided big efficiency improvements.

Read the full blog post by Brian Bassett.

Moving Quicksilver into production

We previously explained how and why we built Quicksilver. Quicksilver is the data store responsible for storing and distributing the billions of KV pairs used to configure the millions of sites and Internet services which use Cloudflare. This second blog post is about the long journey to production, which culminated with the removal of Kyoto Tycoon from Cloudflare’s infrastructure as it showed its first signs of obsolescence.

Geoffrey Plouviez takes you through the entire story of real-world engineering challenges and what it’s like to replace one of Cloudflare’s oldest critical components: read the full blog post here.

Building Black Friday e-commerce experiences with JAMstack and Cloudflare Workers

In this blog post, we explore how Cloudflare Workers continues to excel as a JAMstack deployment platform, and how it can be used to power e-commerce experiences, integrating with familiar tools like Stripe as well as newer technologies like Nuxt.js and Sanity.io.

Read the full blog post and get all the details and open-source code from Kristian Freeman.

A Byzantine failure in the real world

When we review design documents at Cloudflare, we are always on the lookout for Single Points of Failure (SPOFs). In this post, we present a timeline of a real-world incident, and how an interesting failure mode known as a Byzantine fault played a role in a cascading series of events.

Tom Lianza and Chris Snook’s full blog post describes the consequences of a malfunctioning switch on a system built for reliability.

ASICs at the Edge

At Cloudflare, we pride ourselves in our global network that spans more than 200 cities in over 100 countries. To accelerate all that traffic through our network, there are multiple technologies at play. So let’s have a look at one of the cornerstones that makes all of this work.

Tom Strickx’s epic deep dive into ASICs is here.

Let us know your thoughts and comments below, or feel free to reach out to us via our social media channels. And because we talked about careers at the beginning of this blog post, check out our available jobs if you are interested in joining Cloudflare.

My internship: Brotli compression using a reduced dictionary

Post Syndicated from Felix Hanau original https://blog.cloudflare.com/brotli-compression-using-a-reduced-dictionary/

Brotli is a state-of-the-art lossless compression format, supported by all major browsers. It is capable of achieving considerably better compression ratios than the ubiquitous gzip, and is rapidly gaining in popularity. Cloudflare uses the Google brotli library to dynamically compress web content whenever possible. In 2015, we took an in-depth look at how brotli works and its compression advantages.

One of the more interesting features of the brotli file format, in the context of textual web content compression, is the inclusion of a built-in static dictionary. The dictionary is quite large, and in addition to containing various strings in multiple languages, it also supports the option to apply multiple transformations to those words, increasing its versatility.

The open-source brotli library, which implements an encoder and decoder for brotli, has 11 predefined quality levels for the encoder, with higher quality levels demanding more CPU in exchange for a better compression ratio. The static dictionary feature is used to a limited extent starting with level 5, and to the full extent only at levels 10 and 11, due to the high CPU cost of this feature.

We improve on the limited dictionary use approach and add optimizations that improve compression at levels 5 through 9 with a negligible performance impact when compressing web content.

Brotli Static Dictionary

Brotli primarily uses the LZ77 algorithm to compress its data. Our previous blog post about brotli provides an introduction.

To improve compression on text files and web content, brotli also includes a static, predefined dictionary. If a byte sequence cannot be matched with an earlier sequence using LZ77, the encoder will try to match the sequence with a reference to the static dictionary, possibly using one of the multiple transforms. For example, every HTML file contains the opening <html> tag that cannot be compressed with LZ77, as it is unique, but it is contained in the brotli static dictionary and will be replaced by a reference to it. The reference generally takes less space than the sequence itself, which decreases the compressed file size.

The dictionary contains 13,504 words in six languages, with lengths from 4 to 24 characters. To improve the compression of real-world text and web data, some dictionary words are common phrases (“The current”) or strings common in web content (‘type=”text/javascript”’). Unlike usual LZ77 compression, a word from the dictionary can only be matched as a whole. Starting a match in the middle of a dictionary word, ending it before the end of a word or even extending into the next word is not supported by the brotli format.

Instead, the dictionary supports 120 transforms of dictionary words to support a larger number of matches and find longer matches. The transforms include adding suffixes (“work” becomes “working”), adding prefixes (“book” => “ the book”), making the first character uppercase (“process” => “Process”) or converting the whole word to uppercase (“html” => “HTML”). In addition to transforms that make words longer or capitalize them, the cut transform allows a shortened match (“consistently” => “consistent”), which makes it possible to find even more matches.
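
To make the transform idea concrete, here is a rough JavaScript sketch. The transform representation and the applyTransform helper are invented for illustration; they are not how the brotli library actually encodes its 120 transforms.

// Illustrative sketch (not the real brotli code) of how a transform wraps a
// dictionary word: an optional operation on the word plus a prefix and a suffix.
function applyTransform(word, transform) {
  let base = word
  if (transform.op === 'uppercase-first') {
    base = base.charAt(0).toUpperCase() + base.slice(1)
  } else if (transform.op === 'uppercase-all') {
    base = base.toUpperCase()
  } else if (transform.op === 'cut') {
    base = base.slice(0, base.length - transform.cut) // drop trailing characters
  }
  return transform.prefix + base + transform.suffix
}

// Examples mirroring the ones above:
applyTransform('work', { op: 'identity', prefix: '', suffix: 'ing' })          // "working"
applyTransform('book', { op: 'identity', prefix: ' the ', suffix: '' })        // " the book"
applyTransform('process', { op: 'uppercase-first', prefix: '', suffix: '' })   // "Process"
applyTransform('consistently', { op: 'cut', cut: 2, prefix: '', suffix: '' })  // "consistent"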

Methods

With the transforms included, the static dictionary contains 1,633,984 different words – too many for exhaustive search, except when used with the slow brotli compression levels 10 and 11. When used at a lower compression level, brotli either disables the dictionary or only searches through a subset of roughly 5,500 words to find matches in an acceptable time frame. It also only considers matches at positions where no LZ77 match can be found and only uses the cut transform.

Our approach to the brotli dictionary uses a larger, but more specialized subset of the dictionary than the default, using more aggressive heuristics to improve the compression ratio with negligible cost to performance. In order to provide a more specialized dictionary, we provide the compressor with a content type hint from our servers, relying on the Content-Type header to tell the compressor if it should use a dictionary for HTML, JavaScript or CSS. The dictionaries can be further refined by colocation language in the future.

Fast dictionary lookup

To improve compression without sacrificing performance, we needed a fast way to find matches if we want to search the dictionary more thoroughly than brotli does by default. Our approach uses three data structures to find a matching word directly. The radix trie is responsible for finding the word while the hash table and bloom filter are used to speed up the radix trie and quickly eliminate many words that can’t be matched using the dictionary.

[Figure: Lookup for a position starting with “type”]

The radix trie easily finds the longest matching word without having to try matching several words. To find the match, we traverse the graph based on the text at the current position and remember the last node with a matching word. The radix trie supports compressed nodes (having more than one character as an edge label), which greatly reduces the number of nodes that need to be traversed for typical dictionary words.

The radix trie is slowed down by the large number of positions where we can’t find a match. An important finding is that most mismatching strings have a mismatching character in the first four bytes. Even for positions where a match exists, a lot of time is spent traversing nodes for the first four bytes since the nodes close to the tree root usually have many children.

Luckily, we can use a hash table to look up the node equivalent to four bytes of matching if it exists, or to reject the possibility of a match. We thus look up the first four bytes of the string; if there is a matching node, we traverse the trie from there, which will be fast, as each four-byte prefix usually only has a few corresponding dictionary words. If there is no matching node, there will not be a matching word at this position and we do not need to consider it further.
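
As a rough illustration of that two-step lookup, here is a JavaScript sketch. It is not the actual implementation inside the brotli encoder; the prefixTable Map, the node layout, and the findLongestDictionaryMatch helper are assumptions made for readability.

// Illustrative two-step lookup: a hash table keyed on the first four bytes
// jumps straight to a trie node, then the radix trie is walked from there.
function findLongestDictionaryMatch(text, pos, prefixTable) {
  const prefix = text.slice(pos, pos + 4)
  let node = prefixTable.get(prefix)   // hash lookup on the 4-byte prefix
  if (!node) return null               // no dictionary word starts this way

  // Walk the trie, remembering the last node that corresponds to a complete
  // dictionary word: that is the longest match found so far.
  let offset = pos + 4
  let best = node.word || null
  for (;;) {
    const edge = node.children.find((e) => text.startsWith(e.label, offset))
    if (!edge) break
    offset += edge.label.length        // compressed edges cover several characters
    node = edge.target
    if (node.word) best = node.word
  }
  return best
}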

While the hash table is designed to reject mismatches quickly and avoid cache misses and high search costs in the trie, it still suffers from similar problems: We might search through several 4-byte prefixes with the hash value of the given position, only to learn that no match can be found. Additionally, hash lookups can be expensive due to cache misses.

To quickly reject words that do not match the dictionary, but might still cause cache misses, we use a k=1 bloom filter to quickly rule out most non-matching positions. In the k=1 case, the filter is simply a lookup table with one bit indicating whether any matching 4-byte prefixes exist for a given hash value. If the hash value for the given bit is 0, there won’t be a match. Since the bloom filter uses at most one bit for each four-byte prefix while the hash table requires 16 bytes, cache misses are much less likely. (The actual size of the structures is a bit different since there are many empty spaces in both structures and the bloom filter has twice as many elements to reject more non-matching positions.)

This is very useful for performance as a bloom filter lookup requires a single memory access. The bloom filter is designed to be fast and simple, but still rejects more than half of all non-matching positions and thus allows us to save a full hash lookup, which would often mean a cache miss.
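
A minimal JavaScript sketch of such a k=1 bloom filter follows; the hash function and table size here are placeholders rather than the values used in the real implementation.

// One bit per bucket, recording whether any dictionary 4-byte prefix hashes there.
const FILTER_BITS = 1 << 16
const filter = new Uint8Array(FILTER_BITS / 8)

// Hypothetical prefix hash; the real encoder uses its own hash function.
function hashPrefix(prefix) {
  let h = 0
  for (let i = 0; i < prefix.length; i++) h = (h * 31 + prefix.charCodeAt(i)) >>> 0
  return h % FILTER_BITS
}

// Called once per dictionary prefix when the tables are built.
function addPrefix(prefix) {
  const h = hashPrefix(prefix)
  filter[h >> 3] |= 1 << (h & 7)
}

// Called for every candidate position: a zero bit proves that no dictionary
// word can start here, so the hash table and trie lookups can be skipped.
function mightMatch(prefix) {
  const h = hashPrefix(prefix)
  return (filter[h >> 3] & (1 << (h & 7))) !== 0
}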

Heuristics

To improve the compression ratio without sacrificing performance, we employed a number of heuristics:

Only search the dictionary at some positions
This is also done using the stock dictionary, but we search more aggressively. While the stock dictionary only considers positions where the LZ77 match finder did not find a match, we also consider positions that have a bad match according to the brotli cost model: LZ77 matches that are short or have a long distance between the current position and the reference usually only offer a small compression improvement, so it is worth trying to find a better match in the static dictionary.

Only consider the longest match and then transform it
Instead of finding and transforming all matches at a position, the radix trie only gives us the longest match which we then transform. This approach results in a vast performance improvement. In most cases, this results in finding the best match.

Only include some transforms
While all transformations can improve the compression ratio, we only included those that work well with the data structures. The suffix transforms can easily be applied after finding a non-transformed match. For the upper case transforms, we include both the non-transformed and the upper case version of a word in the radix trie. The prefix and cut transforms do not play well with the radix trie; therefore, a cut of more than one byte and the prefix transforms are not supported.

Generating the reduced dictionary

At low compression levels, brotli searches a subset of ~5,500 out of 13,504 words of the dictionary, negatively impacting compression. To store the entire dictionary, we would need to store ~31,700 words in the trie considering the upper case transformed output of ASCII sequences and ~11,000 four-byte prefixes in the hash table. This would slow down the hash table and radix trie, so we needed to find a different subset of the dictionary that works well for web content.

For this purpose, we used a large data set containing representative content. We made sure to use web content from several world regions to reflect language diversity and optimize compression. Based on this data set, we identified which words are most common and result in the largest compression improvement according to the brotli cost model. We only include the most useful words based on this calculation. Additionally, we remove some words if they slow down hash table lookups of other, more common words based on their hash value.

We have generated separate dictionaries for HTML, CSS and JavaScript content and use the MIME type to identify the right dictionary to use. The dictionaries we currently use include about 15-35% of the entire dictionary including uppercase transforms. Depending on the type of data and the desired compression/speed tradeoff, different options for the size of the dictionary can be useful. We have also developed code that automatically gathers statistics about matches and generates a reduced dictionary based on them, which makes it easy to extend this approach to other textual formats, perhaps data that is majority non-English or XML data, and achieve better results for those types of data.

Results

We tested the reduced dictionary on a large data set of HTML, CSS and JavaScript files.

The improvement is especially big for small files, as LZ77 compression is less effective on them. Since the improvement on large files is a lot smaller, we only tested files up to 256KB. We used compression level 5, the same compression level we currently use for dynamic compression on our edge, and tested on an Intel Core i7-7820HQ CPU.

Compression improvement is defined as 1 – (compressed size using the reduced dictionary / compressed size without dictionary). This ratio is then averaged for each input size range. We also provide an average value weighted by file size. Our data set mirrors typical web traffic, covering a wide range of file sizes with small files being more common, which explains the large difference between the weighted and unweighted average.
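
For example, with made-up numbers: if a file compresses to 100 KB without the dictionary and to 92 KB with the reduced dictionary, the improvement for that file is 1 – (92 / 100) = 8%.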

[Chart: average compression improvement by input file size range]

With the improved dictionary approach, we are now able to compress HTML, JavaScript and CSS files as well as, or sometimes even better than, a higher compression level would allow, all while using only 1% to 3% more CPU. For reference, using compression level 6 instead of 5 would increase CPU usage by up to 12%.

My living room intern experience at Cloudflare

Post Syndicated from Kevin Frazier original https://blog.cloudflare.com/my-living-room-intern-experience-at-cloudflare/

This was an internship unlike any other. With a backdrop of a pandemic, protests, and a puppy that interrupted just about every Zoom meeting, it was also an internship that demonstrated Cloudflare’s leadership in giving students meaningful opportunities to explore their interests and contribute to the company’s mission: to help build a better Internet.

For the past twelve weeks, I’ve had the pleasure of working as a Legal Intern at Cloudflare. A few key things set this internship apart from even those in which I’ve been able to connect with people in-person:

  • Communication
  • Community
  • Commingling
  • Collaboration

Ever since I formally accepted my internship, the Cloudflare team has been in frequent and thorough communication about what to expect and how to make the most of my experience. This approach to communication was in stark contrast to the approach taken by several other companies and law firms. The moment COVID-19 hit, Cloudflare not only reassured me that I’d still have a job, but also doubled down on bringing on more interns. Comparatively, a bunch of my fellow law school students were left in limbo: unsure of whether they had a job, the extent to which they’d be able to do it remotely, and whether it would be a worthwhile experience.

This approach has continued through the duration of the internship. I know I speak for my fellow interns when I say that we were humbled to be included in company-wide initiatives to openly communicate about the trying times our nation and particularly members of communities of color have experienced this summer. We weren’t left on the sidelines but rather invited into the fold. I’m so grateful to my manager, Jason, for clearing my schedule to participate in Cloudflare’s “Day On: Learning and Inclusion.” On June 18, the day before Juneteenth, Cloudflare employees around the world joined together for transformative and engaging sessions on how to listen, learn, participate, and take action to be better members of our communities. That day illustrated Cloudflare’s commitment to fostering communication as well as to building community and diversity.

The company’s desire to foster a sense of community pervades each team. Case in point, members of the Legal, Policy, and Trust & Safety (LPT) team were ready and eager to help my fellow legal interns and me better understand the team’s mission and day-to-day activities. I went a perfect 11/11 on asks to LPT members for 1:1 Zoom meetings — these meetings had nothing to do with a specific project but were merely meant to create a stronger community by talking with employees about how they ended up at this unique company.

From what I’ve heard from fellow interns, this sense of community was a common thread woven throughout their experiences as well. Similarly, other interns shared my appreciation for being given more than just “shadowing” opportunities. We were invited to commingle with our teammates and encouraged to take active roles in meetings and on projects.

In my own case, I got to dive into exciting research on privacy laws such as the GDPR and so much more. This research required that I do more than just be a fly on the wall; I was invited to actively converse with and brief folks directly involved with making key decisions for the LPT. For instance, when Tilly came on in July as Privacy Counsel, I had the opportunity to brief her on the research I’d done related to Data Protection Impact Assessments (DPIAs). In the same way, when Edo and Ethan identified some domain names that likely infringed on Cloudflare’s trademark, my fellow intern, Elizabeth, and I were empowered to draft WIPO complaints per the Uniform Domain Name Dispute Resolution Policy. Fingers crossed our work continues Cloudflare’s strong record before the WIPO (here’s an example of a recent favorable decision). These seemingly small tasks introduced me to a wide range of fascinating legal topics that will inform my future coursework and, possibly, even my career goals.

Finally, collaboration distinguished this internship from other opportunities. By way of example, I was assigned projects that required working with others toward a successful outcome. In particular, I was excited to work with Jocelyn and Alissa on research related to the intersection of law and public policy. This dynamic duo fielded my queries, sent me background materials, and invited me to join meetings with stakeholders. This was a very different experience from previous internships in which collaboration was confined to just an email assigning the research and a cool invite to reach out if any questions came up. At Cloudflare, I had the support of a buddy, a mentor, and my manager on all of my assignments and general questions.

When I walked out of Cloudflare’s San Francisco office back in December after my in-person interview, I was thrilled to potentially have the opportunity to return and help build a better Internet. Though I’ve yet to make it back to the office due to COVID-19 and, therefore, worked entirely remotely, this internship nevertheless allowed me and my fellow interns to advance Cloudflare’s mission.

Whatever normal looks like in the following weeks, months, and years, so long as Cloudflare prioritizes communication, community, commingling, and collaboration, I know it will be a great place to work.

Diversity Welcome – A Latinx journey into Cloudflare

Post Syndicated from Pablo Viera original https://blog.cloudflare.com/diversity-welcome-a-latinx-journey-into-cloudflare/

I came to the United States chasing the love of my life, today my wife, in 2015.

A native Spanish speaker with Portuguese as my second language, born in the Argentine city of Córdoba more than 6,000 miles from San Francisco, there is no doubt that the definition of “Latino” fits me very well, and I wear it with pride.

Cloudflare was not my first job in this country, but it has been the organization in which I have learned many of the things that have allowed me to understand the corporate culture of a society totally alien to the one I come from.

I was hired in January 2018 as the first Business Development Representative for the Latin America (LATAM) region based in San Francisco. This was long before the company went public in September 2019. The organization was looking for a specialist in Latin American markets with not only good experience and knowledge beyond languages (Spanish/Portuguese), but also an understanding of the economy, politics, culture, history, go-to-market strategies, and so on. I was lucky enough to be chosen as “that person”. Cloudflare invested in me to a great extent and I was amazed at the freedom I had to propose ideas and bring them to reality. I have been able to experience far beyond my role as a sales representative: I have translated marketing materials, helped with campaigns, participated in various trainings, traveled to different countries to attend conferences and visit clients, and more.

Later, I was promoted to a sales executive role for the North America (NAMER) region.

[Photo: Cloudflare poster signed by colleagues after our Company retreat in 2018]

I have been very fortunate to be able to closely observe the growth and maturity of the organization throughout my time here.

Today, Cloudflare has three times more employees than when I started, and I can say that much of what makes this organization unique has remained intact: the core mission to help build a better Internet, transparency, the protection of vulnerable yet important voices online through Project Galileo, our open door policy, and the importance of investing in people, among many other things.

[Photo: Myself with Matthew Prince and Michelle Zatlyn, co-founders of Cloudflare]

In recent weeks I have participated in conversations around “how do we recruit more under-represented groups and avoid bias in the selection process”. This has really filled me with joy, but it is certainly not the first initiative of its kind at Cloudflare. The company takes pride in having several Employee Resource Groups (ERGs) created and led by employees and executive sponsors—and highly encouraged by the organization: Afroflare, Desiflare, Nativeflare, Latinflare, Proudflare, Soberflare and Vetflare are just some of those groups (we have over 16 ERGs to date!).

At Cloudflare I have found a space where I can develop professionally, where my ideas count, and where I am allowed to make mistakes—this is not something that I have experienced in my previous roles with other employers. I am not afraid to admit that in other organizations I have felt the stigma of being a person of color and that the working conditions were unfair compared to my colleagues.

Cloudflare’s values have continued to shine through during the current COVID-19 situation and we have strengthened overall as an organization.

As an immigrant and a person of color, it is a challenge to decide to work for organizations that don’t fully understand the value of adding more diversity to their workforce. Cloudflare is a company that does value diversity in its workforce and has demonstrated a genuine interest in recruiting as well as retaining under-represented groups and creating a collective learning environment for them and the rest of the teams within the organization.

The company is committed to increasing the diversity within our teams and we want more diverse candidates in our selection processes. To achieve this, we invite you (and encourage you to invite others) to visit our careers page for more information on full-time positions and internship roles at our locations across the globe, and apply.

And if you have questions, I will leave you my email: [email protected]. It would be a pleasure to be able to guide you and put you in touch with the right people within Cloudflare to better understand our technology and where we are going. Your experience and skills are what we need to continue improving the Internet. Come join me at Cloudflare!

[Photo: Our team culture lives inside and outside the company – here is our soccer team!]

Lessons from a 2020 intern assignment

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/lessons-from-the-2020-intern-assignment/

This summer, Cloudflare announced that we were doubling the size of our Summer 2020 intern class. Like everyone else at Cloudflare, our interns would be working remotely, and due to COVID-19, many companies had significantly reduced their intern class size, or outright cancelled their programs entirely.

With our announcement came a huge influx of students interested in coming to Cloudflare. For applicants seeking engineering internships, we opted to create an exercise based on our serverless product, Cloudflare Workers. I’m not a huge fan of timed coding exercises, which are a pretty traditional way for companies to gauge candidate skill, so when I was asked to help contribute an example project that would be used instead, I was excited to jump on the project. In addition, it was a rare chance to have literally thousands of eager pairs of eyes on Workers, and on our documentation, a project that I’ve been working on daily since I started at Cloudflare over a year ago.

In this blog post, I will explain the details of the full-stack take home exercise that we sent out to our 2020 internship applicants. We asked participants to spend no more than an afternoon working on it, and because it was a take home project, developers were able to look at documentation, copy-paste code, and generally solve it however they would like. I’ll show how to solve the project, as well as some common mistakes and some of the implementations that came from reviewing submissions. If you’re interested in checking out the exercise, or want to attempt it yourself, the code is open-source on GitHub. Note that applications for our internship program this year are closed, but it’s still a fun exercise, and if you’re interested in Cloudflare Workers, you should give it a shot!

What the project was: A/B Test Application

Workers as a serverless platform excels at many different use cases. For example, using the Workers runtime APIs, developers can directly generate responses and return them to the client: this is usually called an originless application. You can also make requests to an existing origin and enhance or alter the request or response in some way; this is known as an edge application.

In this exercise, we opted to have our applicants build an A/B test application, where the Workers code should make a request to an API and return the response of one of two URLs. Because the application doesn’t make a request to an origin, but serves a response (potentially with some modifications) from an API, it can be thought of as an originless application – everything is served from Workers.

Client <-----> Workers application <-------> API
                                   |-------> Route A
                                   |-------> Route B

A/B testing is just one of many potential things you can do with Workers. By picking something seemingly “simple”, we can hone in on how each applicant used the Workers APIs – making requests, parsing and modifying responses – as well as deploying their app using our command-line tool wrangler. In addition, because Workers can do all these things directly on the edge, it meant that we could provide a self-contained exercise. It felt unfair to ask applicants to spin up their own servers, or host files in some service. As I learned during this process, Cloudflare Workers projects can be a great way to gauge experience in take home projects, without the usual deployment headaches!

To provide a foundation for the project, I created my own Workers application with three routes – first, an API route that returns an array with two URLs, and two HTML pages, each slightly different from the other (referred to as “variants”).

With the API in place, the exercise could be completed with the following steps:

  1. Make a fetch request to the API URL (provided in the instructions)
  2. Parse the response from the API and transform it into JSON
  3. Randomly pick one of the two URLs from the variants array inside the JSON response
  4. Make a request to that URL, and return the response back from the Workers application to the client

The exercise was designed specifically to be a little past beginner JavaScript. If you know JavaScript and have worked on web applications, a lot of this stuff, such as making fetch requests, getting JSON responses, and randomly picking values in an array, should be things you’re familiar with, or have at least seen before. Again, remember that this exercise was a take-home test: applicants could look up code, read the Workers documentation, and find the solution to the problem in whatever way they could. However, because there was an external API, and the variant URLs weren’t explicitly mentioned in the prompt for the exercise, you still would need to correctly implement the fetch request and API response parsing in order to give a correct solution to the project.

Here’s one solution:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})


// URL returning a JSON response with variant URLs, in the format
//   { variants: [url1, url2] }
const apiUrl = `https://cfw-takehome.developers.workers.dev/api/variants`


const random = array => array[Math.floor(Math.random() * array.length)]


async function handleRequest(request) {
  const apiResp = await fetch(apiUrl)
  const { variants } = await apiResp.json()
  const url = random(variants)
  return fetch(url)
}

When an applicant completed the exercise, they needed to use wrangler to deploy their project to a registered Workers.dev subdomain. This falls under the free tier of Workers, so it was a great way to get people exploring wrangler, our documentation, and the deploy process. We saw a number of GitHub issues filed on our docs and in the wrangler repo from people attempting to install wrangler and deploy their code, so it was great feedback on a number of things across the Workers ecosystem!

Extra credit: using the Workers APIs

In addition to the main portion of the exercise, I added a few extra credit sections to the project. These were explicitly not required to submit the project (though the existence of the extra credit had an impact on submissions: see the next section of the blog post), but if you were able to quickly finish the initial part of the exercise, you could dive deeper into some more advanced topics (and advanced Workers runtime APIs) to build a more interesting submission.

Changing contents on the page

With the variant responses being returned to the client, the first extra credit portion asked developers to replace the content on the page using Workers APIs. This could be done in two ways: simple text replacement, or using the HTMLRewriter API built into the Workers runtime.

JavaScript has a string .replace function like most programming languages, and for simple substitutions, you could use it inside of the Worker to replace pieces of text inside of the response body:

// Rewrite using simple text replacement - this example modifies the CTA button
async function handleRequestWithTextReplacement(request) {
  const apiResponse = await fetch(apiUrl)
  const { variants } = await apiResponse.json()
  const url = random(variants)
  const response = await fetch(url)


  // Get the response as a text string
  const text = await response.text()


  // Replace the Cloudflare URL string and CTA text
  const replacedCtaText = text
    .replace('https://cloudflare.com', 'https://workers.cloudflare.com')
    .replace('Return to cloudflare.com', 'Return to Cloudflare Workers')
  return new Response(replacedCtaText, response)
}

If you’ve used string replacement at scale, on larger applications, you know that it can be fragile. The strings have to match exactly, and on a more technical level, reading response.text() into a variable means that Workers has to hold the entire response in memory. This problem is common when writing Workers applications, so in this exercise, we wanted to push people towards trying our runtime solution to this problem: the HTMLRewriter API.

The HTMLRewriter API provides a streaming selector-based interface for modifying a response as it passes through a Workers application. In addition, the API also allows developers to compose handlers to modify parts of the response using JavaScript classes or functions, so it can be a good way to test how people write JavaScript and their understanding of APIs. In the below example, we set up a new instance of the HTMLRewriter, and rewrite the title tag, as well as three pieces of content on the site: h1#title, p#description, and a#url:

// Rewrite text/URLs on screen with HTML Rewriter
async function handleRequestWithRewrite(request) {
  const apiResponse = await fetch(apiUrl)
  const { variants } = await apiResponse.json()
  const url = random(variants)
  const response = await fetch(url)


  // A collection of handlers for rewriting text and attributes
  // using the HTMLRewriter
  //
  // https://developers.cloudflare.com/workers/reference/apis/html-rewriter/#handlers
  const titleRewriter = {
    element: (element) => {
      element.setInnerContent('My Cool Application')
    },
  }
  const headerRewriter = {
    element: (element) => {
      element.setInnerContent('My Cool Application')
    },
  }
  const descriptionRewriter = {
    element: (element) => {
      element.setInnerContent(
        'This is the replaced description of my cool project, using HTMLRewriter',
      )
    },
  }
  const urlRewriter = {
    element: (element) => {
      element.setAttribute('href', 'https://workers.cloudflare.com')
      element.setInnerContent('Return to Cloudflare Workers')
    },
  }

  // Create a new HTMLRewriter and attach handlers for title, h1#title,
  // p#description, and a#url.
  const rewriter = new HTMLRewriter()
    .on('title', titleRewriter)
    .on('h1#title', headerRewriter)
    .on('p#description', descriptionRewriter)
    .on('a#url', urlRewriter)


  // Pass the variant response through the HTMLRewriter while sending it
  // back to the client.
  return rewriter.transform(response)
}

Persisting variants

A traditional A/B test application isn’t as simple as randomly sending users to different URLs: for it to work correctly, it should also persist a chosen URL per user. This means that when User A is sent to Variant A, they should continue to see Variant A on subsequent visits. In this portion of the extra credit, applicants were encouraged to use Workers’ close integration with the Request and Response classes to persist a cookie for the user, which can be parsed in subsequent requests to indicate a specific variant to be returned.

This exercise is dear to my heart, because surprisingly, I had no idea how to implement cookies before this year! I hadn’t worked with request/response behavior as closely as I do with the Workers API in my past programming experience, so it seemed like a good challenge to encourage developers to check out our documentation, and wrap their head around how a crucial part of the web works! Below is an example implementation for persisting a variant using cookies:

// Persist sessions with a cookie
async function handleRequestWithPersistence(request) {
  let url, response
  const cookieHeader = request.headers.get('Cookie')

  // If a Variant field is already set on the cookie...
  if (cookieHeader && cookieHeader.includes('Variant')) {
    // Parse the URL from it using regexp
    url = cookieHeader.match(/Variant=(.*)/)[1]
    // and return it to the client
    return fetch(url)
  } else {
    const apiResponse = await fetch(apiUrl)
    const { variants } = await apiResponse.json()
    url = random(variants)
    response = await fetch(url)

    // If the cookie isn't set, create a new Response
    // passing in all the information from the original response,
    // along with a Set-cookie header, setting the value `Variant`
    // to the randomly selected variant URL.
    return new Response(response.body, {
      ...response,
      headers: {
        'Set-cookie': `Variant=${url}`,
      },
    })
  }
}

Deploying to a domain

Workers makes a great platform for these take-home-style projects because the existence of workers.dev and the ability to claim your workers.dev subdomain means you can deploy your Workers application without needing to own any domains. That being said, wrangler and Workers do have the ability to deploy to a domain, so for another piece of extra credit, applicants were encouraged to deploy their project to a domain that they owned! We were careful here to tell people not to buy a domain for this project: that’s a potential financial burden that we don’t want to put on anyone (especially interns), but many web developers may already have test domains or even subdomains they could deploy their projects to.

This extra credit section is particularly useful because it also gives developers a chance to dig into other Cloudflare features outside of Workers. Because deploying your Workers application to a domain requires that it be set up as a zone in the Cloudflare Dashboard, it’s a great opportunity for interns to familiarize themselves with our onboarding process as they go through the exercise.

You can see an example Workers application deployed to a domain, as indicated by the wrangler.toml configuration file used to deploy the project:

name = "my-fullstack-example"
type = "webpack"
account_id = "0a1f7e807cfb0a78bec5123ff1d3"
zone_id = "9f7e1af6b59f99f2fa4478a159a4"

Where people went wrong

By far the place where applicants struggled the most was in writing clean code. While we didn’t evaluate submissions against a style guide, most people would have benefitted strongly from running their code through a “code prettifier”: this could have been as simple as opening the file in VS Code or something similar, and using the “Format Document” option. Inconsistent indentation and similar “readability” problems made some submissions, even though they were technically correct, very hard to read!

In addition, there were many applicants who dove directly into the extra credit, without making sure that the base implementation was working correctly. Opening the API URL in-browser, copying one of the two variant URLs, and hard-coding it into the application isn’t a valid solution to the exercise, but with that implementation in place, going and implementing the HTMLRewriter/content-rewriting aspect of the exercise makes it a pretty clear case of rushing! As I reviewed submissions, I found that this happened a ton, and it was a bummer to mark people down for incorrect implementations when it was clear that they were eager enough to approach some of the more complex aspects of the exercise.

On the topic of incorrect implementations, the most common mistake was misunderstanding or incorrectly implementing the solution to the exercise. A common version of this was hard-coding URLs as I mentioned above, but I also saw people copying the entire JSON array, misunderstanding how to randomly pick between two values in the array, or not preparing for a circumstance in which a third value could be added to that array. In addition, the second most common mistake around implementation was excessive bandwidth usage: instead of looking at the JSON response and picking a URL before fetching it, many people opted to get both URLs, and then return one of the two responses to the user. In a small serverless application, this isn’t a huge deal, but in a larger application, excessive bandwidth usage or being wasteful with request time can be a huge problem!
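
To illustrate that last point, the wasteful pattern looked roughly like the sketch below; it is illustrative only and reuses the apiUrl constant from the solution earlier. Both variant URLs are fetched and one response is simply thrown away, whereas the correct approach picks a URL first and fetches only that one.

// Anti-pattern (illustrative): fetching every variant up front and discarding
// all but one response doubles the outbound requests for no benefit.
async function handleRequestWasteful(request) {
  const apiResp = await fetch(apiUrl)
  const { variants } = await apiResp.json()
  const responses = await Promise.all(variants.map((url) => fetch(url)))
  return responses[Math.floor(Math.random() * responses.length)]
}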

Finding the solution and next steps

If you’re interested in checking out more about the fullstack example exercise we gave to our intern applicants this year, check out the source on GitHub: https://github.com/cloudflare-internship-2020/internship-application-fullstack.

If you tried the exercise and want to build more stuff with Cloudflare Workers, check out our docs! We have tons of tutorials and templates available to help you get up and running: https://workers.cloudflare.com/docs.

Internship Experience: Cryptography Engineer

Post Syndicated from Watson Ladd original https://blog.cloudflare.com/internship-experience-cryptography-engineer/

Back in the summer of 2017 I was an intern at Cloudflare. During the scholastic year I was a graduate student working on automorphic forms and computational Langlands at Berkeley: a part of number theory with deep connections to representation theory, aimed at uncovering some of the deepest facts about number fields. I had also gotten involved in Internet standardization and security research, but much more on the applied side.

While I had published papers in computer security and had coded for my dissertation, building and deploying new protocols to production systems was going to be new. Going from the academic environment of little day-to-day supervision to the industrial one of more direction; from greenfield code that would only ever be run by one person to large projects that had to be understandable by a team; from goals measured in years or even decades to goals measured in days, weeks, or quarters; these transitions would present some challenges.

Cloudflare at that stage was a very different company from what it is now. Entire products and offices simply did not exist. Argo, now a mainstay of our offering for sophisticated companies, was slowly emerging. Access, which has been helping safeguard employees working from home these past weeks, was then experiencing teething issues. Workers was being extensively developed for launch that autumn. Quicksilver was still in the slow stages of replacing KyotoTycoon. Lisbon wasn’t on the map, and Austin was very new.

Day 1

My first job was to get my laptop working. Quickly I discovered that despite the promise of using either Mac or Linux, only Mac was supported as a local development environment. Most Linux users would take a good part of a month to tweak all the settings and get the local development environment up. I didn’t have months. After three days, I broke down and got a Mac.

Needless to say I asked for some help. Like a drowning man in quicksand, I managed to attract three engineers to this near-insoluble problem of the edge dev stack, and after days of hacking on it, fixing problems that had long been ignored, we got it working well enough to test a few things. That development environment is now gone and replaced with one built on Kubernetes VMs, and works much better that way. When things work on your machine, you can now send everyone your machine.

Speeding up

With setup complete enough, it was on to the problem we needed to solve. Our goal was to implement a set of three interrelated Internet drafts, one defining secondary certificates, one defining external authentication with TLS certificates, and a third permitting servers to advertise the websites they could serve.

External authentication is a TLS feature that permits a server or a client on an already opened connection to prove its possession of the private key of another certificate. This proof of possession is tied to the TLS connection, avoiding attacks on bearer tokens caused by the lack of this binding.

Secondary certificates is an HTTP/2 feature enabling clients and servers to send certificates together with proof that they actually know the private key. This feature has many applications such as certificate-based authentication, but also enables us to prove that we are permitted to serve the websites we claim to serve.

The last draft was the HTTP/2 ORIGIN frame. The ORIGIN frame enables a website to advertise other sites that it could serve, permitting more connection reuse than allowed under the traditional rules. Connection reuse is an important part of browser performance as it avoids much of the setup of a connection.

These drafts solved an important problem for Cloudflare. Many resources such as JavaScript, CSS, and images hosted by one website can be used by others. Because Cloudflare proxies so many different websites, our servers have often cached these resources as well. Browsers, though, do not know that these different websites are made faster by Cloudflare, and as a result they repeat all the steps to request the subresources again. This takes unnecessary time, since there is already an established, perfectly usable connection. If the browser knew this, it could use that connection again.

We could only solve this problem by getting browsers and the broader community of TLS implementers on board. Some of these drafts, such as external authentication and secondary certificates, had a broader set of motivations, such as getting certificate-based authentication to work with HTTP/2 and TLS 1.3. All of these needs had to be addressed in the drafts, even if we were only implementing a subset of the uses.

Successful standards cover the use cases that are needed while being simple enough to implement and achieve interoperability. Implementation experience is essential to achieving this success: a standard with no implementations fails to incorporate hard won lessons. Computers are hard.

Prototype

My first goal was to set up a simple prototype to test the much more complex production implementation, as well as to share outside of Cloudflare so that others could have confidence in their implementations. But these drafts that had to be implemented in the prototype were incremental improvements to an already massive stack of TLS and HTTP standards.

I decided it would be easiest to build on top of an already existing implementation of TLS and HTTP. I picked the Go standard library as my base: it’s simple, readable, and in a language I was already familiar with. There was already a basic demo showcasing support in Firefox for the ORIGIN frame, and it would be up to me to extend it.

Using that as my starting point I was able, in three weeks, to set up a demonstration server and a client. This showed good progress, and that nothing in the specification was blocking implementation. But without integrating it into our servers for further experimentation, we might never discover the rare issues that could be showstoppers. This was a bitter lesson learned from TLS 1.3, where it took months to track down a single brand of printer that was incompatible with the standard, and forced a change.

From Prototype to Production

We also wanted to understand the benefits with some real world data, to convince others that this approach was worthwhile. Our position as a provider to many websites globally gives us diverse, real world data on performance that we use to make our products better, and perhaps more important, to learn lessons that help everyone make the Internet better. As a result we had to implement this in production: the experimental framework for TLS 1.3 development had been removed and we didn’t have an environment for experimentation.

At the time everything at Cloudflare was based on variants of NGINX. We had extended it with modules to implement features like Keyless and customized certificate handling to meet our needs, but much of the business logic was and is carried out in Lua via OpenResty.

Lua has many virtues, but at the time both the TLS termination and the core business logic lived in the same repo despite being different processes at runtime. This made it very difficult to understand what code was running when, and changes to basic libraries could create problems for both. The build system for this codebase had the significant disadvantage of building the same targets with different settings. Lua is also a very dynamic language, but unlike the dynamic languages I was used to, there was no way to interact with the system as it was running on requests.

The first step was implementing the ORIGIN frame. In implementing this, we had to figure out which sites hosted the subresources used by the page we were serving. Luckily, we already had this logic to enable server push support driven by Link headers. Building on this let me quickly get ORIGIN working.
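The exact plumbing is internal to our edge, but the shape of that logic is easy to sketch. Assuming a helper that walks a response's Link headers (the names below are hypothetical), collecting the origins to advertise might look roughly like this in Go:

```go
package originhint

import (
	"net/http"
	"net/url"
	"strings"
)

// originsFromLinkHeaders collects "scheme://host[:port]" origins referenced
// by Link headers such as: Link: <https://cdn.example.net/app.js>; rel=preload
func originsFromLinkHeaders(h http.Header) []string {
	seen := map[string]bool{}
	var origins []string
	for _, link := range h.Values("Link") {
		// A Link header can carry several comma-separated link-values.
		for _, part := range strings.Split(link, ",") {
			part = strings.TrimSpace(part)
			if !strings.HasPrefix(part, "<") {
				continue
			}
			end := strings.Index(part, ">")
			if end < 0 {
				continue
			}
			u, err := url.Parse(part[1:end])
			if err != nil || !u.IsAbs() {
				continue // relative URLs stay on the current origin
			}
			origin := u.Scheme + "://" + u.Host
			if !seen[origin] {
				seen[origin] = true
				origins = append(origins, origin)
			}
		}
	}
	return origins
}
```

The resulting list is what would then be serialized into the ORIGIN frame for the connection.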

This work wasn’t the only thing I was up to as an intern. I was also participating in weekly team meetings, attending our engineering presentations, and getting a sense of what life was like at Cloudflare. We had an excursion for interns to the Computer History Museum in Mountain View and Moffett Field, where we saw the base museum.

The next challenge was getting the CERTIFICATE frame to work. This was a much deeper problem. NGINX processes a request in phases, and some of the phases, like the header processing phase, do not permit network I/O without locking up the event loop. Since we are parsing the headers to determine what to send, the frame is created in the header processing phase. But finding a certificate and telling Keyless to sign it required network I/O.

The standard solution to this problem is to have Lua execute a timer callback, in which network I/O is possible. But this context doesn't have any data from the request: some serious refactoring was needed to create a way to get the Keyless module to function outside the context of a request.

Once the signature was created, the battle was half over. Formatting the CERTIFICATE frame was simple, but it had to be attached to the connection associated with the request that had demanded it be created. There was no reason to expect that request was still alive, and no way to know what state it was in while the Keyless module was handling it.

To handle this issue I made a shared btree, indexed by a request number, with space for the data to be passed back and forth. This let the request record that it was ready to send the CERTIFICATE frame, and let Keyless record that it had a frame ready to send. Whichever of these happened second would do the work of enqueuing the frame to send out.
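Our version lived in shared memory between NGINX and the Keyless module, but the pattern itself translates to any language. Here is a simplified Go sketch of the same rendezvous, keyed by a request ID, with hypothetical names:

```go
package rendezvous

import "sync"

type slot struct {
	frame        []byte // filled in by Keyless when the signed frame is ready
	requestReady bool   // set when the request is ready to send the frame
}

type Exchange struct {
	mu    sync.Mutex
	slots map[uint64]*slot
	send  func(requestID uint64, frame []byte) // enqueues the frame on the connection
}

func New(send func(uint64, []byte)) *Exchange {
	return &Exchange{slots: map[uint64]*slot{}, send: send}
}

// RequestReady is called from the request side once it can accept the frame.
func (e *Exchange) RequestReady(id uint64) {
	e.mu.Lock()
	defer e.mu.Unlock()
	s := e.slot(id)
	s.requestReady = true
	e.maybeSend(id, s)
}

// FrameReady is called from the Keyless side once the CERTIFICATE frame is built.
func (e *Exchange) FrameReady(id uint64, frame []byte) {
	e.mu.Lock()
	defer e.mu.Unlock()
	s := e.slot(id)
	s.frame = frame
	e.maybeSend(id, s)
}

func (e *Exchange) slot(id uint64) *slot {
	s, ok := e.slots[id]
	if !ok {
		s = &slot{}
		e.slots[id] = s
	}
	return s
}

// maybeSend runs with e.mu held; whichever side arrives second finds both
// halves present and triggers the send.
func (e *Exchange) maybeSend(id uint64, s *slot) {
	if s.requestReady && s.frame != nil {
		e.send(id, s.frame)
		delete(e.slots, id)
	}
}
```

Because both sides take the same lock and check both conditions, it doesn't matter which one finishes first; the second arrival always does the enqueuing.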

This was not an easy solution: the Keyless module had been written years before and left largely unmodified since. It fundamentally assumed it could access data from the request, and changing this assumption opened the door to difficult-to-diagnose bugs. It integrates into BoringSSL callbacks through some pretty tricky mechanisms.

However, I was able to test it using the client from the prototype, and it worked. Unfortunately, when I pushed the commit in which it worked upstream, the CI system could not find the git repo where the client prototype lived, due to a setting I had forgotten to change. The CI system unfortunately didn't associate this failure with the branch, but attempted to check out that repo whenever it checked out any other branch people were working on. Murphy ensured my accomplishment had happened on a Friday afternoon Pacific time, and the team that manages the SSL server was then exclusively in London…

Monday morning the issue was quickly fixed, and whatever tempers had frayed were smoothed over when we discovered the deficiency in the CI system that had enabled a single branch to break every build. It's always tricky to work in a global team. Later Alessandro flew to San Francisco for a number of projects with the team here, and we worked side by side trying to get a demonstration working on a test site. Unfortunately there was some difficulty tracking down a bug that prevented it from working in production. We had run out of time, and my internship was over. Alessandro flew back to London, and I flew to Idaho to see the eclipse.

The End

Ultimately we weren’t able to integrate this feature into the software at our edge: the risks of such intrusive changes for a very experimental feature outweighed the benefits. With little prospect of client support, it would have been difficult to realize the performance savings the drafts promised. There were also nontechnical issues in standardization that have made this approach more difficult to adopt: any form of traffic direction that doesn’t obey DNS creates issues for network debugging, and there were concerns about the impact of certificate misissuance.

While the project was less successful than I hoped it would be, I learned a lot of important skills: collaborating on large software projects, working with git, and communicating with other implementers about issues we found. I also got a taste of what it would be like to be on the Research team at Cloudflare, turning research from idea into practical reality, and this ultimately confirmed my choice to go into industrial research.

I’ve now returned to Cloudflare full-time, working on extensions for TLS as well as time synchronization. These drafts have continued to progress through the standardization process, and we’ve contributed some of the code I wrote as a starting point for other implementers to use. If we knew all our projects would work out, they wouldn’t be ambitious enough to be research worth doing.

If this sort of research experience appeals to you, we’re hiring.

Cloudflare Doubling Size of 2020 Summer Intern Class

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/cloudflare-doubling-size-of-2020-summer-intern-class/

Cloudflare Doubling Size of 2020 Summer Intern Class

We are living through extraordinary times. Around the world, the Coronavirus has caused disruptions to nearly everyone’s work and personal lives. It’s been especially hard to watch as friends and colleagues outside Cloudflare are losing jobs and businesses struggle through this crisis.

We have been extremely fortunate at Cloudflare. The superheroes of this crisis are clearly the medical professionals at the front lines saving people’s lives and the scientists searching for a cure. But the faithful sidekick that’s helping us get through this crisis — still connected to our friends, loved ones, and, for those of us fortunate enough to be able to continue working from home, our jobs — is the Internet. As we all need it more than ever, we’re proud of our role in helping ensure that the Internet continues to work securely and reliably for all our customers.

We plan to invest through this crisis. We are continuing to hire across all teams at Cloudflare and do not foresee any need for layoffs. I appreciate the flexibility of our team and new hires in adapting what was our well-oiled, in-person orientation process into something virtual that we’re continuing to refine weekly as new people join us.

Summer Internships

One group that has been significantly impacted by this crisis is students who were expecting internships over the summer. Many are, unfortunately, getting notice that the experiences they were counting on have been cancelled. These internships are not only a significant part of these students’ education, but in many cases provide an income that helps them get through the school year.

Cloudflare is not cancelling any of our summer internships. We anticipate that many of our internships will need to be remote to comply with public health recommendations around travel and social distancing. We also understand that some students may prefer a remote internship even if we do begin to return to the office so they can take care of their families and avoid travel during this time. We stand by every internship offer we have extended and are committed to making each internship a terrific experience whether remote, in person, or some mix of both.

Doubling the Size of the 2020 Internship Class

But, seeing how many great students were losing their internships at other companies, we wanted to do more. Today we are announcing that we will double the size of Cloudflare’s summer 2020 internship class. Most of the internships we offer are in our product, security, research and engineering organizations, but we also have some positions in our marketing and legal teams. We are reopening the internship application process and are committed to making decisions quickly so students can plan their summers. You can find newly open internships posted at the link below.

https://boards.greenhouse.io/cloudflare/jobs/2156436?gh_jid=2156436

Internships are jobs, and we believe people should be paid for the jobs they do, so every internship at Cloudflare is paid. That doesn’t change with these new internship positions we’re creating: they will all be paid.

Highlighting Other Companies with Opportunities

Even when we double the size of our internship class, we expect that we will receive far more qualified applicants than we will be able to accommodate. We hope that other companies that are in a fortunate enough position to be able to weather this crisis will consider expanding their internship classes as well. We plan to work with peer organizations and will highlight those that also have summer internship openings. If your company still has available internship positions, please let us know by emailing us so we can point students your way: [email protected]

Opportunity During Crisis

Cloudflare was born out of a time of crisis. Michelle and I were in school when the global financial crisis hit in 2008. Michelle had spent that summer at an internship at Google. That was the one year Google decided to extend no full-time offers to summer interns. So, in the spring of 2009, we were both still trying to figure out what we were going to do after school.

It didn’t feel great at the time, but had we not been in the midst of that crisis I’m not sure we ever would have started Cloudflare. Michelle and I remember the stress of that time very clearly. The recognition of the importance of planning for rainy days has been part of what has made Cloudflare so resilient. And it’s why, when we realized we could play a small part in ensuring some students who had lost the internships they thought they had could still have a rewarding experience, we knew it was the right decision.

Together, we can get through this. And, when we do, we will all be stronger.
