Tag Archives: Internships

How We Ran a Successful Remote Internship Program in 2020

Post Syndicated from Ellie Jamison original https://blog.cloudflare.com/how-we-ran-a-successful-remote-internship-program-in-2020and-how-we-are-planning-to-do-it-again-in-2021/

And how we are planning to do it again in 2021…


On March 5, I sat in a small conference room with a few of the key contributors who created and hired for the Cloudflare summer intern program. With the possibility of office shutdowns looming, the group discussed what an internship would look like without in-person mentorship. How would the managers cope? How would the interns cope? Would it even be worthwhile? After a few minutes of discussion, we settled on ‘absolutely’. A remote summer internship at Cloudflare would be worthwhile for students, mentors, buddies, and managers alike. After all, Cloudflare is an Internet company, and we were ready to trust the Internet with a whole lot more than we had anticipated.

The months leading up to the summer were a blur; all I remember is that we did a lot of planning, interviewing, and hiring. And I mean a lot. On April 2, Matthew Prince announced that Cloudflare would be doubling the size of our 2020 intern class in response to other companies cutting their intern programs altogether. Due to these cuts, many talented students lost their opportunities for the summer. We knew we couldn’t hire them all, so we created a list of peer organizations actively hiring and a three-month webinar program to give all applicants (whether they joined Cloudflare or not) mentorship from our executive team.

The program kicked off on May 18 with our first intern orientation group of the summer. Over the next month and a half, they were joined by six more orientation groups, bringing the total intern class to 90 students. Being remote had its advantages: we were able to welcome students from all over the world and on nearly every team at Cloudflare — from Product Management to Engineering to Legal. We were proud to host a diverse class, with nearly half of the students from an underrepresented gender identity or racial or ethnic minority.

A question I hear often is, “How do you replicate the benefits of an in-person experience in a remote one?”. The short answer is “You don’t”, but you adapt, stay positive, and get creative to bring students benefits that otherwise wouldn’t exist. Throughout last summer, we held weekly Zoom lunches, created bi-weekly newsletters to share the various intern projects, and hosted hour-long “fireside chats” where interns could meet our executive team directly. We had a buddy program and a mentor program, pairing each intern with a Cloudflare employee outside of their team and on their team, respectively. We celebrated “Intern Week” at the end of July to help students from different teams, regions, and continents get to know each other. To end the program, we hosted five “Intern Presentation Sessions” where each intern presented virtually about their projects and experiences to a company-wide audience.

To gauge the success of the program, we asked the students to anonymously submit how they felt about their teams, projects, and the intern events. In answering the question, “Did your internship experience meet your overall expectations?”, we received a 100% “Yes” response and 98% of the interns would consider working for Cloudflare again. See below for specific comments.

[Screenshot: selected comments from the intern survey]

Many of our interns also detailed their virtual experiences on The Cloudflare Blog, including Selina Cho, who reflected on personal and professional growth during her Product Management internship. Kevin Frazier, a Legal intern, highlighted the sense of community and inclusiveness as “a common thread woven throughout the internship experience.” Ryan Jacobs outlined his journey through the process, problem, and solution of how Cloudflare uses Cloudflare Spectrum during his internship on the Spectrum team. We are always grateful and eager to read what our interns have to say about their time at Cloudflare.

What’s next? Summer 2021, of course!

As we look ahead, we are planning for an even bigger intern class filled with more engaging programs, nurturing events, and challenging projects. This year we are thrilled to nearly double (again) our intern class size and hire talented students for the exciting summer ahead. If you are interested in applying to one of our summer internship opportunities, check out our Careers page under the “University” section or reach out directly to us at [email protected]. Special thanks to Judy Cheong, our Head of University Programs, the Executive team, the Recruiting team, and all intern hiring managers and mentors for not only making the remote internship program possible, but a success!

Announcing Spectrum DDoS Analytics and DDoS Insights & Trends

Post Syndicated from Selina Cho original https://blog.cloudflare.com/announcing-spectrum-ddos-analytics-and-ddos-insights-trends/


We’re excited to announce the expansion of the Network Analytics dashboard to Spectrum customers on the Enterprise plan. Additionally, this announcement introduces two major dashboard improvements for easier reporting and investigation.

Network Analytics

Cloudflare’s packet- and bit-oriented dashboard, Network Analytics, provides visibility into Internet traffic patterns and DDoS attacks at Layers 3 and 4 of the OSI model. This allows our users to better understand the traffic patterns and DDoS attacks observed at the Cloudflare edge.

When the dashboard was first released in January, these capabilities were only available to Bring Your Own IP (BYOIP) customers on the Spectrum and Magic Transit services, but now Spectrum customers using Cloudflare’s Anycast IPs are also supported.

Protecting L4 applications

Spectrum is Cloudflare’s L4 reverse-proxy service that offers unmetered DDoS protection and traffic acceleration for TCP and UDP applications. It provides enhanced traffic performance through faster TLS, optimized network routing, and high speed interconnection. It also provides encryption to legacy protocols and applications that don’t come with embedded encryption. Customers who typically use Spectrum operate services in which network performance and resilience to DDoS attacks are of utmost importance to their business, such as email, remote access, and gaming.

Spectrum customers can now view detailed traffic reports on DDoS attacks against their configured TCP/UDP applications, including the size of attacks, attack vectors, source location of attacks, and permitted traffic. What’s more, users can also configure and receive real-time alerts when their services are attacked.

Network Analytics: Rebooted


Since releasing the Network Analytics dashboard in January, we have been constantly improving its capabilities. Today, we’re announcing two major improvements that will make both reporting and investigation easier for our customers: DDoS Insights & Trends and Group-by Filtering for grouping-based traffic analysis.

First and foremost, we are adding a new DDoS Insights & Trends card, which provides dynamic insights into your attack trends over time. This feature provides a real-time view of the number of attacks, the percentage of attack traffic, the maximum attack rates, the total mitigated bytes, the main attack origin country, and the total duration of attacks, which can indicate the potential downtime that was prevented. These data points were surfaced as the most crucial ones by our customers in the feedback sessions. Along with the percentage of change period-over-period, our customers can easily understand how their security landscape evolves.
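The period-over-period figures the card surfaces boil down to a simple calculation. As a minimal sketch (the function name and the metric values are illustrative, not the dashboard's actual implementation):

```python
def percent_change(current: float, previous: float) -> float:
    """Period-over-period change, e.g. attack count this period vs. last."""
    if previous == 0:
        # No baseline: any attacks at all is an "infinite" increase.
        return float("inf") if current > 0 else 0.0
    return (current - previous) / previous * 100.0

# e.g. 12 attacks this period against 8 in the previous one: +50%
percent_change(12, 8)      # 50.0
# mitigated bytes that halved period-over-period: -50%
percent_change(600, 1200)  # -50.0
```

The same formula applies to each metric on the card (number of attacks, attack rates, mitigated bytes, and so on), giving the at-a-glance trend arrows described above.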

Trends Insights

Troubleshooting made easy

In the main time series chart seen in the dashboard, we added the ability for users to change the Group-by field, which customizes the Y axis. This way, a user can quickly identify traffic anomalies and sudden changes in traffic based on criteria such as IP protocol, TCP flags, or source country, and take action if needed with Magic Firewall, Spectrum, or BYOIP.
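Conceptually, changing the Group-by field re-buckets the same traffic into one series per value of the chosen field. A toy sketch of that aggregation (the record schema and field names here are hypothetical, not Cloudflare's actual data model):

```python
from collections import defaultdict

# Hypothetical per-flow records; field names are illustrative only.
flows = [
    {"ts": 0, "ip_protocol": "tcp", "src_country": "US", "bytes": 1200},
    {"ts": 0, "ip_protocol": "udp", "src_country": "BR", "bytes": 800},
    {"ts": 1, "ip_protocol": "tcp", "src_country": "US", "bytes": 1500},
]

def group_series(flows, group_by):
    """Build one time series per value of the group-by field,
    summing bytes per timestamp bucket."""
    series = defaultdict(lambda: defaultdict(int))
    for f in flows:
        series[f[group_by]][f["ts"]] += f["bytes"]
    return {k: dict(v) for k, v in series.items()}

by_proto = group_series(flows, "ip_protocol")
# {'tcp': {0: 1200, 1: 1500}, 'udp': {0: 800}}
```

Switching `group_by` to `"src_country"` re-partitions the same records by country instead, which is exactly the kind of pivot the dashboard's Group-by control performs.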

Time Series Group-By Filtering

Harnessing Cloudflare’s edge to empower our users

The DDoS Insights & Trends card, the new investigation tools, and the additional user interface enhancements can help your organization better understand its security landscape and take more meaningful action as needed. We have made more updates to the Network Analytics dashboard that are beyond the scope of this post, including:

  • Export logs as a CSV
  • Zoom-in feature in the time series chart
  • Drop-down view option for average rate and total volume
  • Increased Top N views for source and destination values
  • Addition of country and data center for source values
  • New visualisation of the TCP flag distribution

Details on these updates can be found in our Help Center, which you can now access via the dashboard as well.

In the near future, we will also expand Network Analytics to Spectrum customers on the Business plan, and WAF customers on the Enterprise and Business plans. Stay tuned!

If you are a Magic Transit, Spectrum, or BYOIP customer, go try out the Network Analytics dashboard yourself today.

If you operate your own network, try Cloudflare Magic Transit for free with a limited time offer: https://www.cloudflare.com/lp/better/.

A Virtual Product Management Internship Experience

Post Syndicated from Selina Cho original https://blog.cloudflare.com/a-virtual-product-management-internship-experience/


In July 2020, I joined Cloudflare as a Product Management Intern on the DDoS (Distributed Denial of Service) team to enhance the benefits that Network Analytics brings to our customers. In this post, I am excited to share my experience of working remotely as an intern and how I acclimatized into Cloudflare. I also describe what my work entailed and how we approached the process of Product Management.

Onboarding to Cloudflare during COVID-19

As a long-time user of Cloudflare’s Free CDN plan myself, I was thrilled to join the company and learn what happens behind the scenes as its products are made. The entering internship class consisted of students and recent graduates from various backgrounds around the world – all with a shared passion for helping build a better Internet.

The catch here was that 2020 would make the experience of being an intern very different. As was the case for many fellow interns, this was the first time I had started a job entirely remotely. The initial challenge was integrating into the working environment without ever meeting colleagues in a physical office. Because everything took place online, it was much harder to pick up the non-verbal cues that play a key role in communication, such as eye contact and body language.

To face this challenge, Cloudflare introduced creative and active ways for us to better interact with one another. From the very first day, I was welcomed with an abundance of knowledge-sharing talks and coffee chats with new and existing colleagues in different offices across the world. Whether it was data protection from the Legal team or going serverless with Workers, we were invited to afternoon seminars every week on a new area being pursued within Cloudflare.

Cloudflare not only retained the summer internship scheme, but in fact doubled the size of the class; this reinforced an optimistic mood within the entering class and a sense of personal responsibility. I was paired up with a mentor, a buddy, and a manager who helped me find my way quickly within Cloudflare, and without whom my experience would not have been the same. Thanks to Omer, Pat, Val and countless others for all your incredible support!

Social interactions took various forms and were scheduled for all global time zones. I was invited to weekly virtual yoga sessions and intern meetups to network and discover what other interns across the world were working on. We got to virtually mingle at an “Intern Mixer” where we shared answers to philosophical prompts – what’s more, this was accompanied by an UberEats coupon for us to enjoy refreshments in our work-from-home setting. We also had Pub Quizzes with colleagues in the EMEA region to brush up on our trivia skills. At this uncertain time of the year, part of which I spent in complete self-isolation, these gatherings helped create a sense of belonging within the community, as well as an affinity towards the colleagues I interacted with.

Product Management at Cloudflare

My internship also offered a unique learning experience from the Product Management perspective. I took on the task of increasing the value of Network Analytics by giving customers and internal stakeholders improved transparency into the traffic patterns and attacks taking place. Network Analytics is Cloudflare’s packet- and bit-oriented dashboard that provides visibility into network- and transport-layer attacks mitigated across the world. Among the visibility updates I led is the new Trends Insights card. During this time the dashboard was also extended to Enterprise customers on the Spectrum service, Cloudflare’s L4 reverse proxy that provides DDoS protection against attacks and facilitates network performance.

I was at the intersection of multiple teams that contributed to Network Analytics from different angles, including user interface, UX research, product design, product content and backend engineering, among many others. The key to a successful delivery of Network Analytics as a product, given its interdisciplinary nature, meant that I actively facilitated communication and collaboration across experts in these teams as well as reflected the needs of the users.

I spent the first month of the internship approaching internal stakeholders, namely Customer Support engineers, Solutions Engineers, Customer Success Managers, and Product Managers, to better understand the common pain points. Given their past experience with customers, their insights revealed how Network Analytics could both leverage the existing visibility features to reduce overhead costs on the internal support side and empower users with actionable insights. This process also helped ensure that I didn’t reinvent wheels that had already been explored by existing Product Managers.

I then approached customers to enquire about desired areas for improvements. An example of such a desired improvement was that the display of data in the dashboard was not helping users infer any meaning regarding next steps. It did not answer questions like: What do these numbers represent in retrospect, and should I be concerned? Discussing these aspects helped validate the needs, and we subsequently came up with rough solutions to address them, such as dynamic trends view. Over the calls, we confirmed that – especially from those who rarely accessed the dashboard – having an overview of these numbers in the form of a trends card would incentivize users to log in more often and get more value from the product.

Trends Insights

The 1:1 dialogues were incredibly helpful in understanding how Network Analytics could be more effectively utilized, and guided ways for us to better surface the performance of our DDoS mitigation tools to our customers. In the first few weeks of the internship, I shadowed customer calls for other products; this helped me gain the confidence, knowledge, and language appropriate for Cloudflare’s user research. I did a run-through of the interview questions with a UX Researcher, and was informed on the procedure for getting in touch with appropriate customers. We even had bilingual calls where the Customer Success Manager helped translate the dialogue in real time.

In the following weeks, I synthesized these findings into a Product Requirements Document and lined up the features according to quarterly goals that could now be addressed in collaboration with other teams. After a formal review and discussion with Product Managers, engineers, and designers, we developed and rolled out each feature to the customers on a bi-weekly basis. We always welcomed feedback before and after the feature releases, as the goal wasn’t to have an ultimate final product, but to deliver incremental enhancements to meet the evolving needs of our customers.

Of course, all my interactions, including customer and internal stakeholder calls, were held remotely. We embraced video conferencing and instant chat messengers to make it feel as though we were physically close. I had weekly check-ins with various colleagues, including my managers, the Network Analytics team, the DDoS engineering team, and the DDoS reports team, to ensure that things were on track. For me, the key to working remotely was the instant chat function, which was not as intrusive as a fully fledged meeting, but a quick and considerate way to communicate in a tightly-knit team.

Looking Back

Product Management is a growth process – both for the corresponding individual and the product. As an individual, you grow fast through creative thinking, problem solving, and the incessant curiosity needed to understand a product in the shoes of a customer. At the same time, the product continues to evolve and grow as a result of synergy between experts from diverse fields and customer feedback. Products are used and experienced by people, so it is a no-brainer that maintaining constant and direct feedback from our customers and internal stakeholders is what bolsters their quality.

It was an incredible opportunity to have been a part of an organization that represents one of the largest networks. Network Analytics is a window into the efforts led by Cloudflare engineers and technicians to help secure the Internet, and we are ambitious to scale the transparency across further mitigation systems in the future.

The internship was a successful immersive experience into the world of Network Analytics and Product Management, even in the face of a pandemic. Owing to Cloudflare’s flexibility and ready access to resources for remote work, I was able to adapt to the work environment from the first day onwards and gain an authentic learning experience into how products work. As I now return to university, I look back on an internship that significantly added to my personal and professional growth. I am happy to leave behind the latest evolution of Network Analytics dashboard with hopefully many more to come. Thanks to Cloudflare and all my colleagues for making this possible!

How Cloudflare uses Cloudflare Spectrum: A look into an intern’s project at Cloudflare

Post Syndicated from Ryan Jacobs original https://blog.cloudflare.com/how-cloudflare-uses-cloudflare-spectrum-a-look-into-an-interns-project-at-cloudflare/


Cloudflare extensively uses its own products internally in a process known as dogfooding. As part of my onboarding as an intern on the Spectrum (a layer 4 reverse proxy) team, I learned that many internal services dogfood Spectrum, as they are exposed to the Internet and benefit from layer 4 DDoS protection. One of my first tasks was to update the configuration for an internal service that was using Spectrum. The configuration was managed in Salt (used for configuration management at Cloudflare), which was not particularly user-friendly, and required an engineer on the Spectrum team to handle updating it manually.

This process took about a week. That should instantly raise some questions, as a typical Spectrum customer can create a new Spectrum app in under a minute through Cloudflare Dashboard. So why couldn’t I?

This question formed the basis of my intern project for the summer.

The Process

Cloudflare uses various IP ranges for its products. Some customers also authorize Cloudflare to announce their IP prefixes on their behalf (this is known as BYOIP). Collectively, we can refer to these IPs as managed addresses. To prevent Bad Stuff (defined later) from happening, we prohibit managed addresses from being used as Spectrum origins. To accomplish this, Spectrum had its own table of denied networks that included the managed addresses. For the average customer, this approach works great – they have no legitimate reason to use a managed address as an origin.
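The original deny-table check amounts to testing an origin IP for membership in a set of managed networks. A minimal sketch using Python's `ipaddress` module (the ranges listed are illustrative examples, not the real deny table):

```python
import ipaddress

# Illustrative managed ranges; the actual deny table was far larger.
DENIED_NETWORKS = [
    ipaddress.ip_network("104.16.0.0/13"),    # example Cloudflare-managed range
    ipaddress.ip_network("198.51.100.0/24"),  # example BYOIP range
]

def is_denied_origin(ip: str) -> bool:
    """Reject origins that fall inside any managed address range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DENIED_NETWORKS)

is_denied_origin("104.16.8.54")   # True: inside a managed range
is_denied_origin("203.0.113.10")  # False: ordinary public address
```

For the average customer this membership test is all that is needed; the complications described below arise only for internal customers whose origins legitimately live inside these ranges.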

Unfortunately, the services dogfooding Spectrum all use Cloudflare IPs, preventing those teams with a legitimate use case from creating a Spectrum app through the configuration service (i.e. the Cloudflare Dashboard). To bypass this check, these internal customers needed to define a custom Spectrum configuration, which had to be manually deployed to the edge via a pull request to our Salt repo, resulting in a time-consuming process.

If an internal customer wanted to change their configuration, the same time-consuming process had to be used. While this allowed internal customers to use Spectrum, it was tedious and error-prone.

Bad Stuff

Bad Stuff is quite vague and deserves a better definition. It may seem arbitrary that we deny Cloudflare managed addresses. To motivate this, consider two Spectrum apps, A and B, where the origin of app A is the Cloudflare edge IP of app B, and the origin of app B is the edge IP of app A. Essentially, app A will proxy incoming connections to app B, and app B will proxy incoming connections to app A, creating a cycle.

This could potentially crash the daemon or degrade performance. In practice, this configuration is useless, as the proxied connection never reaches an origin, and would only be created by a malicious user, so it is never allowed.


In fact, the more general case of setting another Spectrum app as an origin (even when the configuration does not result in a cycle) is (almost¹) never needed, so it also needs to be avoided.

As well, since we are providing a reverse proxy to customer origins, we do not need to allow connections to IP ranges that cannot be used on the public Internet, as specified in this RFC.

The Problem

To improve usability and allow internal Spectrum customers to create apps using the Dashboard instead of the static configuration workflow, we needed a way to give particular customers permission to use Cloudflare managed addresses in their Spectrum configuration. Solving this problem was my main project for the internship.

A good starting point ended up being the Addressing API. The Addressing API is Cloudflare’s solution to IP management, an internal database and suite of tools to keep track of IP prefixes, with the goal of providing a unified source of truth for how IP addresses are being used across the organization. This makes it possible to provide a cross-product platform for products and features such as BYOIP, BGP On Demand, and Magic Transit.

The Addressing API keeps track of all Cloudflare managed IP prefixes, along with who owns the prefix. As well, the owner of a prefix can give permission for someone else to use the prefix. We call this a delegation.

A user’s permission to use an IP address managed by the Addressing API is determined as follows:

  1. Is the user the owner of the prefix containing the IP address?
    a) Yes, the user has permission to use the IP.
    b) No, go to step 2.
  2. Has the user been delegated a prefix containing the IP address?
    a) Yes, the user has permission to use the IP.
    b) No, the user does not have permission to use the IP.
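That two-step rule can be sketched as follows (the prefix-record shape, with `owner` and `delegates` fields, is a hypothetical stand-in for the Addressing API's actual data model):

```python
import ipaddress

# Hypothetical prefix records; the real Addressing API schema will differ.
PREFIXES = [
    {"cidr": "104.16.0.0/13", "owner": "cloudflare", "delegates": ["spectrum-team"]},
]

def has_permission(user: str, ip: str, prefixes=PREFIXES) -> bool:
    """Apply the two-step rule: ownership first, then delegation."""
    addr = ipaddress.ip_address(ip)
    for p in prefixes:
        if addr in ipaddress.ip_network(p["cidr"]):
            if p["owner"] == user:      # step 1: user owns the prefix
                return True
            if user in p["delegates"]:  # step 2: prefix delegated to user
                return True
    return False

has_permission("spectrum-team", "104.16.8.54")  # True, via delegation
has_permission("someone-else", "104.16.8.54")   # False
```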

The Solution

With the information present in the Addressing API, the solution starts to become clear. For a given customer and IP, we use the following algorithm:

  1. Is the IP managed by Cloudflare (or contained in the previous RFC)?
    a) Yes, go to step 2
    b) No, allow as origin
  2. Does the customer have permission to use the IP address?
    a) Yes, allow as origin
    b) No, deny as origin

As long as the internal customer has been given permission to use the Cloudflare IP (through a delegation in the Addressing API), this approach would allow them to use it as an origin.

However, we run into a corner case here: since BYOIP customers also have permission to use their own ranges, they would be able to set their own IP as an origin, potentially causing a cycle. To mitigate this, we need to check whether the IP is a Spectrum edge IP. Fortunately, the Addressing API also contains this information, so all we have to do is check whether the given origin IP is already in use as a Spectrum edge IP, and if so, deny it. Since all of the denied-network checks now occur in the Addressing API, we were able to remove Spectrum’s own deny-network database, reducing the engineering workload to maintain it along the way.
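Putting the pieces together, the final validation logic might look like this sketch, where the three predicate arguments stand in for Addressing API lookups (all names here are hypothetical, not the actual implementation):

```python
def validate_origin(user, ip, *, is_managed, has_permission, is_spectrum_edge):
    """Decide whether `user` may use `ip` as a Spectrum origin."""
    if is_spectrum_edge(ip):
        return False                 # corner case: never allow another edge IP
    if not is_managed(ip):
        return True                  # ordinary public IP: always allowed
    return has_permission(user, ip)  # managed IP: needs ownership/delegation

# Toy predicates for illustration only:
managed = lambda ip: ip.startswith("104.16.")
perm = lambda user, ip: user == "internal-team"
edge = lambda ip: ip == "104.16.0.1"

validate_origin("internal-team", "104.16.8.54",
                is_managed=managed, has_permission=perm,
                is_spectrum_edge=edge)  # True: managed, but delegated
```

The edge-IP test runs first so that even a customer with a valid delegation (or a BYOIP customer using their own range) cannot create the proxy cycles described earlier.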

Let’s go through a concrete example. Consider an internal customer who wants to use 104.16.8.54/32 as an origin for their Spectrum app. This address is managed by Cloudflare, and suppose the customer has permission to use it, and the address is not already in use as an edge IP. This means the customer is able to specify this IP as an origin, since it meets all of our criteria.

For example, a request to the Addressing API could look like this:

curl --silent 'https://addr-api.internal/edge_services/spectrum/validate_origin_ip_acl?cidr=104.16.8.54/32' -H "Authorization: Bearer $JWT" | jq .
{
  "success": true,
  "errors": [],
  "result": {
    "allowed_origins": {
      "104.16.8.54/32": {
        "allowed": true,
        "is_managed": true,
        "is_delegated": true,
        "is_reserved": false,
        "has_binding": false
      }
    }
  },
  "messages": []
}

Now we have completely moved the responsibility of validating the use of origin IP addresses from Spectrum’s configuration service to the Addressing API.
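Given a response in the shape shown above, a caller only needs to inspect the `allowed` flag for the CIDR it asked about. A small sketch of that interpretation (the helper name is ours, not part of the API):

```python
def origin_allowed(resp: dict, cidr: str) -> bool:
    """Interpret a validate_origin_ip_acl response like the example above."""
    if not resp.get("success"):
        raise RuntimeError(resp.get("errors"))
    entry = resp["result"]["allowed_origins"].get(cidr)
    return bool(entry and entry["allowed"])

# The JSON body from the curl example, as a Python dict:
resp = {
    "success": True,
    "errors": [],
    "result": {"allowed_origins": {"104.16.8.54/32": {
        "allowed": True, "is_managed": True, "is_delegated": True,
        "is_reserved": False, "has_binding": False}}},
    "messages": [],
}
origin_allowed(resp, "104.16.8.54/32")  # True
```

The auxiliary flags (`is_managed`, `is_delegated`, `has_binding`) explain *why* a verdict was reached, which is useful when surfacing a helpful error message to the customer.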

Performance

This approach required making another HTTP request on the critical path of every create app request in the Spectrum configuration service. Some basic performance testing showed (as expected) increased response times for the API call (about 100ms). This led to discussion among the Spectrum team about the performance impact of different HTTP requests throughout the critical path. To investigate, we decided to use OpenTracing.

OpenTracing is a standard for providing distributed tracing of microservices. When an HTTP request is received, special headers are added to it to allow it to be traced across the different services. Within a given trace, we can see how long a SQL query took, the time a function took to complete, the amount of time a request spent at a given service, and more.

We have been deploying a tracing system for our services to provide more visibility into a complex system.
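To illustrate the core idea rather than the OpenTracing API itself, here is a toy tracer: a trace ID ties spans together (in a real system it travels between services in HTTP headers), and each span records how long one operation within the trace took:

```python
import time
import uuid
from contextlib import contextmanager

SPANS = []  # collected spans: (trace_id, operation_name, duration_ms)

@contextmanager
def span(trace_id, name):
    """Time one operation and record it against the trace."""
    start = time.monotonic()
    try:
        yield
    finally:
        SPANS.append((trace_id, name, (time.monotonic() - start) * 1000))

trace_id = uuid.uuid4().hex  # would be propagated via HTTP headers
with span(trace_id, "create_spectrum_app"):
    with span(trace_id, "addressing_api.validate_origin"):
        time.sleep(0.01)  # stand-in for the HTTP call being timed

# SPANS now shows the nested operations and their durations for this trace
```

A real tracing system adds context propagation, sampling, and a collector UI, but the output is conceptually the same: per-operation timings grouped by trace, which is what let us see how little of the request the Addressing API call actually accounted for.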

After instrumenting the Spectrum config service with OpenTracing, we were able to determine that the Addressing API accounted for a very small amount of time in the overall request, and allowed us to identify potentially problematic request times to other services.


Lessons Learned

Reading documentation is important! Having a good understanding of how the Addressing API and the config service worked allowed me to create and integrate an endpoint that made sense for my use-case.

Writing documentation is just as important. For the final part of my project, I had to onboard Crossbow – an internal Cloudflare tool used for diagnostics – to Spectrum, using the new features I had implemented. I had written an onboarding guide, but some steps turned out to be unclear during the onboarding process, so I gathered feedback from the Crossbow team to improve the guide.

Finally, I learned not to underestimate the complexity behind relatively simple validation logic. The implementation required understanding the entire system: how multiple microservices work together to validate the configuration, how data moves from the Core to the Edge, and how it is processed there. I found increasing my understanding of this system to be just as important and rewarding as completing the project.

Footnotes:

¹ Regional Services actually makes use of proxying a Spectrum connection to another colocation, and then proxying to the origin, but the configuration plane is not involved in this setup.

My living room intern experience at Cloudflare

Post Syndicated from Kevin Frazier original https://blog.cloudflare.com/my-living-room-intern-experience-at-cloudflare/


This was an internship unlike any other. With a backdrop of a pandemic, protests, and a puppy that interrupted just about every Zoom meeting, it was also an internship that demonstrated Cloudflare’s leadership in giving students meaningful opportunities to explore their interests and contribute to the company’s mission: to help build a better Internet.

For the past twelve weeks, I’ve had the pleasure of working as a Legal Intern at Cloudflare. A few key things set this internship apart from even those in which I’ve been able to connect with people in-person:

  • Communication
  • Community
  • Commingling
  • Collaboration

Ever since I formally accepted my internship, the Cloudflare team has been in frequent and thorough communication about what to expect and how to make the most of my experience. This approach to communication was in stark contrast to the approach taken by several other companies and law firms. The moment COVID-19 hit, Cloudflare not only reassured me that I’d still have a job, it also doubled down on bringing on more interns. Comparatively, many of my fellow law school students were left in limbo: unsure whether they had a job, to what extent they’d be able to work remotely, and whether it would be a worthwhile experience.

This approach has continued through the duration of the internship. I know I speak for my fellow interns when I say that we were humbled to be included in company-wide initiatives to openly communicate about the trying times our nation and particularly members of communities of color have experienced this summer. We weren’t left on the sidelines but rather invited into the fold. I’m so grateful to my manager, Jason, for clearing my schedule to participate in Cloudflare’s “Day On: Learning and Inclusion.” On June 18, the day before Juneteenth, Cloudflare employees around the world joined together for transformative and engaging sessions on how to listen, learn, participate, and take action to be better members of our communities. That day illustrated Cloudflare’s commitment to fostering communication as well as to building community and diversity.

The company’s desire to foster a sense of community pervades each team. Case in point, members of the Legal, Policy, and Trust & Safety (LPT) team were ready and eager to help my fellow legal interns and me better understand the team’s mission and day-to-day activities. I went a perfect 11/11 on asks to LPT members for 1:1 Zoom meetings — these meetings had nothing to do with a specific project but were merely meant to create a stronger community by talking with employees about how they ended up at this unique company.

From what I’ve heard from fellow interns, this sense of community was a common thread woven throughout their experiences as well. Similarly, other interns shared my appreciation for being given more than just “shadowing” opportunities. We were invited to commingle with our teammates and encouraged to take active roles in meetings and on projects.

In my own case, I got to dive into exciting research on privacy laws such as the GDPR and so much more. This research required that I do more than just be a fly on the wall; I was invited to actively converse with and brief folks directly involved with making key decisions for the LPT. For instance, when Tilly came on in July as Privacy Counsel, I had the opportunity to brief her on the research I'd done related to Data Privacy Impact Assessments (DPIAs). In the same way, when Edo and Ethan identified some domain names that likely infringed on Cloudflare's trademark, my fellow intern, Elizabeth, and I were empowered to draft WIPO complaints per the Uniform Domain Name Dispute Resolution Policy. Fingers crossed our work continues Cloudflare's strong record before the WIPO (here's an example of a recent favorable decision). These seemingly small tasks introduced me to a wide range of fascinating legal topics that will inform my future coursework and, possibly, even my career goals.

Finally, collaboration distinguished this internship from other opportunities. By way of example, I was assigned projects that required working with others toward a successful outcome. In particular, I was excited to work with Jocelyn and Alissa on research related to the intersection of law and public policy. This dynamic duo fielded my queries, sent me background materials, and invited me to join meetings with stakeholders. This was a very different experience from previous internships, in which collaboration was confined to an email assigning the research and a polite invitation to reach out if any questions came up. At Cloudflare, I had the support of a buddy, a mentor, and my manager on all of my assignments and general questions.

When I walked out of Cloudflare’s San Francisco office back in December after my in-person interview, I was thrilled to potentially have the opportunity to return and help build a better Internet. Though I’ve yet to make it back to the office due to COVID-19 and, therefore, worked entirely remotely, this internship nevertheless allowed me and my fellow interns to advance Cloudflare’s mission.

Whatever normal looks like in the following weeks, months, and years, so long as Cloudflare prioritizes communication, community, commingling, and collaboration, I know it will be a great place to work.

Lessons from a 2020 intern assignment

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/lessons-from-the-2020-intern-assignment/

This summer, Cloudflare announced that we were doubling the size of our Summer 2020 intern class. Due to COVID-19, many companies had significantly reduced their intern class sizes or cancelled their programs entirely, and like everyone else at Cloudflare, our interns would be working remotely.

With our announcement came a huge influx of students interested in coming to Cloudflare. For applicants seeking engineering internships, we opted to create an exercise based on our serverless product Cloudflare Workers. I'm not a huge fan of timed coding exercises, which are a pretty traditional way that companies gauge candidate skill, so when I was asked to help contribute an example project to use instead, I was excited to jump in. In addition, it was a rare chance to have literally thousands of eager pairs of eyes on Workers, and on our documentation, a project that I've been working on daily since I started at Cloudflare over a year ago.

In this blog post, I'll walk through the details of the full-stack take-home exercise that we sent out to our 2020 internship applicants. We asked participants to spend no more than an afternoon working on it, and because it was a take-home project, developers were able to look at documentation, copy-paste code, and generally solve it however they liked. I'll show how to solve the project, as well as some common mistakes and some of the implementations that came from reviewing submissions. If you're interested in checking out the exercise, or want to attempt it yourself, the code is open-source on GitHub. Note that applications for our internship program this year are closed, but it's still a fun exercise, and if you're interested in Cloudflare Workers, you should give it a shot!

What the project was: A/B Test Application

Workers as a serverless platform excels at many different use-cases. For example, using the Workers runtime APIs, developers can directly generate responses and return them to the client: this is usually called an originless application. You can also make requests to an existing origin and enhance or alter the request or response in some way, this is known as an edge application.

In this exercise, we opted to have our applicants build an A/B test application, where the Workers code should make a request to an API, and return the response of one of two URLs. Because the application doesn't pass requests through to a customer origin, but instead serves a response (potentially with some modifications) built from an API, it can be thought of as an originless application – everything is served from Workers.

Client <-----> Workers application <-------> API
                                   |-------> Route A
                                   |-------> Route B

A/B testing is just one of many potential things you can do with Workers. By picking something seemingly “simple”, we could home in on how each applicant used the Workers APIs – making requests, parsing and modifying responses – as well as deploying their app using our command-line tool wrangler. In addition, because Workers can do all these things directly on the edge, it meant that we could provide a self-contained exercise. It felt unfair to ask applicants to spin up their own servers, or host files in some service. As I learned during this process, Cloudflare Workers projects can be a great way to gauge experience in take-home projects, without the usual deployment headaches!

To provide a foundation for the project, I created my own Workers application with three routes – first, an API route that returns an array with two URLs, and two HTML pages, each slightly different from the other (referred to as “variants”).

With the API in place, the exercise could be completed with the following steps:

  1. Make a fetch request to the API URL (provided in the instructions)
  2. Parse the response from the API and transform it into JSON
  3. Randomly pick one of the two URLs from the array variants inside of the JSON response
  4. Make a request to that URL, and return the response back from the Workers application to the client

The exercise was designed specifically to be a little past beginner JavaScript. If you know JavaScript and have worked on web applications, a lot of this stuff, such as making fetch requests, getting JSON responses, and randomly picking values in an array, should be things you’re familiar with, or have at least seen before. Again, remember that this exercise was a take-home test: applicants could look up code, read the Workers documentation, and find the solution to the problem in whatever way they could. However, because there was an external API, and the variant URLs weren’t explicitly mentioned in the prompt for the exercise, you still would need to correctly implement the fetch request and API response parsing in order to give a correct solution to the project.

Here’s one solution:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

// URL returning a JSON response with variant URLs, in the format
//   { variants: [url1, url2] }
const apiUrl = `https://cfw-takehome.developers.workers.dev/api/variants`

const random = array => array[Math.floor(Math.random() * array.length)]

async function handleRequest(request) {
  const apiResp = await fetch(apiUrl)
  const { variants } = await apiResp.json()
  const url = random(variants)
  return fetch(url)
}

When an applicant completed the exercise, they needed to use wrangler to deploy their project to a registered Workers.dev subdomain. This falls under the free tier of Workers, so it was a great way to get people exploring wrangler, our documentation, and the deploy process. We saw a number of GitHub issues filed on our docs and in the wrangler repo from people attempting to install wrangler and deploy their code, so it was great feedback on a number of things across the Workers ecosystem!

Extra credit: using the Workers APIs

In addition to the main portion of the exercise, I added a few extra credit sections to the project. These were explicitly not required to submit the project (though the existence of the extra credit had an impact on submissions: see the next section of the blog post), but if you were able to quickly finish the initial part of the exercise, you could dive deeper into some more advanced topics (and advanced Workers runtime APIs) to build a more interesting submission.

Changing contents on the page

With the variant responses being returned to the client, the first extra credit portion asked developers to replace the content on the page using Workers APIs. This could be done in two ways: simple text replacement, or using the HTMLRewriter API built into the Workers runtime.

JavaScript has a string .replace function like most programming languages, and for simple substitutions, you could use it inside of the Worker to replace pieces of text inside of the response body:

// Rewrite using simple text replacement - this example modifies the CTA button
async function handleRequestWithTextReplacement(request) {
  const apiResponse = await fetch(apiUrl)
  const { variants } = await apiResponse.json()
  const url = random(variants)
  const response = await fetch(url)

  // Get the response as a text string
  const text = await response.text()

  // Replace the Cloudflare URL string and CTA text
  const replacedCtaText = text
    .replace('https://cloudflare.com', 'https://workers.cloudflare.com')
    .replace('Return to cloudflare.com', 'Return to Cloudflare Workers')
  return new Response(replacedCtaText, response)
}

If you’ve used string replacement at scale, on larger applications, you know that it can be fragile. The strings have to match exactly, and on a more technical level, reading response.text() into a variable means that Workers has to hold the entire response in memory. This problem is common when writing Workers applications, so in this exercise, we wanted to push people towards trying our runtime solution to this problem: the HTMLRewriter API.

The HTMLRewriter API provides a streaming selector-based interface for modifying a response as it passes through a Workers application. In addition, the API also allows developers to compose handlers to modify parts of the response using JavaScript classes or functions, so it can be a good way to test how people write JavaScript and their understanding of APIs. In the below example, we set up a new instance of the HTMLRewriter, and rewrite the title tag, as well as three pieces of content on the site: h1#title, p#description, and a#url:

// Rewrite text/URLs on screen with HTML Rewriter
async function handleRequestWithRewrite(request) {
  const apiResponse = await fetch(apiUrl)
  const { variants } = await apiResponse.json()
  const url = random(variants)
  const response = await fetch(url)

  // A collection of handlers for rewriting text and attributes
  // using the HTMLRewriter
  //
  // https://developers.cloudflare.com/workers/reference/apis/html-rewriter/#handlers
  const titleRewriter = {
    element: (element) => {
      element.setInnerContent('My Cool Application')
    },
  }
  const headerRewriter = {
    element: (element) => {
      element.setInnerContent('My Cool Application')
    },
  }
  const descriptionRewriter = {
    element: (element) => {
      element.setInnerContent(
        'This is the replaced description of my cool project, using HTMLRewriter',
      )
    },
  }
  const urlRewriter = {
    element: (element) => {
      element.setAttribute('href', 'https://workers.cloudflare.com')
      element.setInnerContent('Return to Cloudflare Workers')
    },
  }

  // Create a new HTMLRewriter and attach handlers for title, h1#title,
  // p#description, and a#url.
  const rewriter = new HTMLRewriter()
    .on('title', titleRewriter)
    .on('h1#title', headerRewriter)
    .on('p#description', descriptionRewriter)
    .on('a#url', urlRewriter)

  // Pass the variant response through the HTMLRewriter while sending it
  // back to the client.
  return rewriter.transform(response)
}

Persisting variants

A traditional A/B test application isn't as simple as randomly sending users to different URLs: for it to work correctly, it should also persist the chosen URL per user. This means that when User A is sent to Variant A, they should continue to see Variant A on subsequent visits. In this portion of the extra credit, applicants were encouraged to use Workers' close integration with the Request and Response classes to persist a cookie for the user, which can be parsed in subsequent requests to indicate the specific variant to be returned.

This exercise is dear to my heart, because surprisingly, I had no idea how to implement cookies before this year! I hadn’t worked with request/response behavior as closely as I do with the Workers API in my past programming experience, so it seemed like a good challenge to encourage developers to check out our documentation, and wrap their head around how a crucial part of the web works! Below is an example implementation for persisting a variant using cookies:

// Persist sessions with a cookie
async function handleRequestWithPersistence(request) {
  let url, resp
  const cookieHeader = request.headers.get('Cookie')

  // If a Variant field is already set on the cookie...
  if (cookieHeader && cookieHeader.includes('Variant')) {
    // Parse the URL from it using a regular expression
    url = cookieHeader.match(/Variant=(.*)/)[1]
    // and return it to the client
    return fetch(url)
  } else {
    const apiResponse = await fetch(apiUrl)
    const { variants } = await apiResponse.json()
    url = random(variants)
    resp = await fetch(url)

    // If the cookie isn't set, create a new Response with the body,
    // status, and headers from the original response, plus a Set-cookie
    // header setting the value `Variant` to the randomly selected
    // variant URL.
    const headers = new Headers(resp.headers)
    headers.set('Set-cookie', `Variant=${url}`)
    return new Response(resp.body, {
      status: resp.status,
      statusText: resp.statusText,
      headers,
    })
  }
}
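
One caveat on the regex above: `Variant=(.*)` grabs everything to the end of the header, so it breaks if any other cookie appears after `Variant`. A slightly more defensive parse (a sketch; `getCookie` is a hypothetical helper of ours, not a Workers API) splits the header and matches the cookie name exactly:

```javascript
// Split the Cookie header into name/value pairs and return the value
// for one cookie name, or null if it isn't present.
const getCookie = (cookieHeader, name) => {
  if (!cookieHeader) return null
  for (const pair of cookieHeader.split(';')) {
    // Split only on the first '=' so values containing '=' survive
    const [key, ...rest] = pair.trim().split('=')
    if (key === name) return rest.join('=')
  }
  return null
}
```

With this helper, the handler could call `getCookie(request.headers.get('Cookie'), 'Variant')` in place of the regex match.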

Deploying to a domain

Workers makes a great platform for these take-home-style projects because the existence of workers.dev and the ability to claim your workers.dev subdomain means you can deploy your Workers application without needing to own any domains. That being said, wrangler and Workers do have the ability to deploy to a domain, so for another piece of extra credit, applicants were encouraged to deploy their project to a domain that they owned! We were careful here to tell people not to buy a domain for this project: that's a potential financial burden that we don't want to put on anyone (especially interns), but many web developers may already have test domains or even subdomains they could deploy their project to.

This extra credit section is particularly useful because it also gives developers a chance to dig into other Cloudflare features outside of Workers. Because deploying your Workers application to a domain requires that it be set up as a zone in the Cloudflare Dashboard, it’s a great opportunity for interns to familiarize themselves with our onboarding process as they go through the exercise.

You can see an example Workers application deployed to a domain, as indicated by the wrangler.toml configuration file used to deploy the project:

name = "my-fullstack-example"
type = "webpack"
account_id = "0a1f7e807cfb0a78bec5123ff1d3"
zone_id = "9f7e1af6b59f99f2fa4478a159a4"

Where people went wrong

By far the place where applicants struggled the most was in writing clean code. While we didn't evaluate submissions against a style guide, most people would have benefited greatly from running their code through a code prettifier: this could have been as simple as opening the file in VS Code or something similar, and using the “Format Document” option. Inconsistent indentation and similar readability problems made some submissions, even though they were technically correct, very hard to read!

In addition, there were many applicants who dove directly into the extra credit without making sure that the base implementation was working correctly. Opening the API URL in-browser, copying one of the two variant URLs, and hard-coding it into the application isn't a valid solution to the exercise, but with that implementation in place, going and implementing the HTMLRewriter/content-rewriting aspect of the exercise makes it a pretty clear case of rushing! As I reviewed submissions, I found that this happened a ton, and it was a bummer to mark people down for incorrect implementations when it was clear that they were eager to tackle some of the more complex aspects of the exercise.

On the topic of incorrect implementations, the most common mistake was misunderstanding or incorrectly implementing the solution to the exercise. A common version of this was hard-coding URLs as I mentioned above, but I also saw people copying the entire JSON array, misunderstanding how to randomly pick between two values in the array, or not preparing for a circumstance in which a third value could be added to that array. The second most common mistake was excessive bandwidth usage: instead of looking at the JSON response and picking a URL before fetching it, many people opted to fetch both URLs, and then return one of the two responses to the user. In a small serverless application, this isn't a huge deal, but in a larger application, excessive bandwidth usage or being wasteful with request time can be a huge problem!
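
Both of those implementation mistakes have the same small fix: pick by the array's actual length, and only fetch the URL you picked. A minimal sketch:

```javascript
// Works for 2, 3, or N variants: no hard-coded indices, and only the
// chosen URL would ever need to be fetched afterwards.
const pickVariant = (variants) =>
  variants[Math.floor(Math.random() * variants.length)]
```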

Finding the solution and next steps

If you’re interested in checking out more about the fullstack example exercise we gave to our intern applicants this year, check out the source on GitHub: https://github.com/cloudflare-internship-2020/internship-application-fullstack.

If you tried the exercise and want to build more stuff with Cloudflare Workers, check out our docs! We have tons of tutorials and templates available to help you get up and running: https://workers.cloudflare.com/docs.

Doubling the intern class – and making it all virtual

Post Syndicated from Judy Cheong original https://blog.cloudflare.com/doubling-the-intern-class-and-making-it-all-virtual/

Earlier this month, we announced our plans to relaunch our intern hiring and double our intern class this summer to support more students who may have lost their internships due to COVID-19. You can find that story here. We've had interns joining us over the last few summers – students were able to find their way to us by applying to full-time roles and sometimes through Twitter. But it wasn't until last summer, in 2019, that we ran our first official Summer Internship Program. And this year, we are doubling down.

Why do we invest in interns?

We have found interns to be invaluable. Not only do they bring an electrifying new energy over the summer, but they also come with their curiosity to help solve problems, contribute to major projects, and bring refreshing perspectives to the company.

  1. Ship projects: Our interns are matched with a team and work on real and meaningful projects. They are expected to ramp up, contribute like other members of the team and ship by the end of their internship.
  2. Hire strong talent: The internship is the “ultimate interview” that allows us to better assess new grad talent. The 12 weeks they spend with us show us how they work with the team; their curiosity, passion, and interest in the company and mission; and their overall ability to execute and ship.
  3. Increase brand awareness: Some of the best interns and new grads we've hired come from referrals from past interns. Students go back to school and will share their summer experience with their peers and classmates, and it can spread like wildfire. This will make long term hiring much easier.
  4. Help grow future talent: Companies of all sizes should hire interns to help grow a more diverse talent pool, otherwise the future talent would be shaped by companies like Google, Facebook, Microsoft and the like. The experience gained from working at a small or mid-sized startup versus a behemoth company is very different.

Our founding principles. What makes a great internship?

How do we make sure we're prepared for interns? And what should companies and teams consider to ensure a great internship experience? Companies need to be ready to onboard interns before they arrive so that the experience is great and fruitful for everyone. These are general items to consider:

  1. Committed manager and/or mentor: Interns need a lot of support especially in the beginning, and it’s essential to have a manager or mentor who is willing to commit 30+% of their time to train, teach, and guide the intern for the entire duration of the summer. I would even advise managers/mentors to plan their summer vacations accordingly and if they’re not there for a week or more, they should have a backup support plan.
  2. Defined projects and goals: We ask managers to work with their interns to clearly identify projects and goals they would be interested in working on either before the internship starts, or within the first 2 weeks. By the end of the internship, we want each intern to have learned a lot, be proud of the work they've accomplished and present their work to executives and the whole company.
  3. Open environment and networking: Throughout the internship, we intentionally create opportunities to meet more people and allow a safe environment for them to ask questions and be curious. Interns connect with each other, employees across other teams, and executives through our Buddy Program, Executive Round Tables, and other social events and outings.
  4. Visibility and exposure: Near the end of the internship, all interns are encouraged and given the opportunity to present their work to the whole company and share their project or experience on the company blog. Because they are an integral part of the team, many times they’ll join meetings with our leaders and executives.

The pivot to virtual: what we changed

The above are general goals and best practices for an internship during normal times. These are far from normal times. Like many companies, we were faced with the daunting question of what to do with our internship program when it was apparent that all or most of it would be virtual. We leaned into that challenge and developed a plan to build a virtual internship program that still embodies the principles we mentioned and ensures a robust internship experience.

The general mantra will be to over-communicate and make sure interns are included in all the team's activities, communications, meetings, etc. Including interns isn't just important, it's essential: these members of our team will crave that connection the most. They'll lack the historical context existing employees share, and also won't have the breadth of general work experience that their team has. This is where mentors and managers will have to find ways to go above and beyond. Here are some tips below.

Onboarding

Interns will need to onboard in a completely remote environment, which may be new to both the manager and the company. If possible, check in with the interns before their first day to start building that relationship – understand what their remote work environment is like, how they're doing during COVID-19, and whether they're excited and prepared to start. Also, keep in mind that the first two weeks are critical to set expectations for goals and deliverables, to connect them with the right folks involved in their project, and to allow them to ask all their questions and get comfortable with the team.

Logistically, this may involve a laptop being mailed to them, or other accommodations for remote work. Verify that the intern has been onboarded correctly with access to necessary tools. Make a checklist. Some ideas to start with:

  1. Can they send/receive email on your company’s email address?
  2. Do you have their phone number if all else fails? And vice-versa?
  3. Do they have access to your team’s wiki space? Jira? Chat rooms?
  4. Can they join a Google Meet/Zoom meeting with you and the team? Including working camera and microphone?
  5. Can they access Google Calendar and have they been invited to team meetings? Do they know the etiquette for meetings (to accept and decline) and how to set up meetings with others?
  6. Have they completed the expected onboarding training provided by the company?
  7. Do they have access to the role-specific tools they’ll need to do their job? Source control, CI, Salesforce, Zendesk, etc. (make a checklist of these for your team!)

Cadence of Work

It’s critical to establish a normal work cadence, and that can be particularly challenging if someone starts off fully remote. For some interns, this may be their first time working in a professional environment and may need more guidance. Some suggestions for getting that established:

  1. Hold an explicit kickoff meeting between the intern and mentor in which they review the project/goals, and discuss how the team will work and interact (meeting frequency, chat room communication, etc).
  2. If an intern is located in a different timezone, establish what would be normal working hours and how the team will update them if they miss certain meetings.
  3. Ensure there’s a proper introduction to the team. This could be a dedicated 1:1 for each member, or a block of the team’s regular meeting to introduce the candidate to the team and vice-versa. Set up a social lunch or hour during the first week to have more casual conversations.
  4. Schedule weekly 1:1s and checkpoint meetings for the duration of the internship.
  5. Set up a very short-term goal that can be achieved quickly so the intern can get a sense for the end-to-end. Similar to how you might learn a new card game by “playing a few hands for fun” – the best way to learn is to dive right in.
  6. Consider having the mentor do an end-of-day check-in with the intern every day for at least the first week or two.
  7. Schedule at least one dedicated midpoint meeting to provide feedback. This is a time to evaluate how they’re progressing against their goals and deliverables and if they’re meeting their internship expectations. If they are, great. If not, it is essential at this point to inform them so they can improve.

Social Activities

A major part of a great internship also involves social activities and networking opportunities for interns to connect with different people. This becomes more difficult and requires ever more creativity to try to create those experiences. Here are some ideas:

  1. Hold weekly virtual intern lunches and if there’s budget, offer a food delivery gift card. Have themed lunches.
  2. Think about virtual social games, Netflix parties, and possibly other apps that can augment virtual networking experiences.
  3. Set up social hours for smaller groups of interns to connect, and rotate the groupings. Have interns meet with interns from their original office locations or from the same departments.
  4. Set up an intern group chat and have a topic, joke, picture, or meme of the day to keep the conversations alive.
  5. Create a constant “water cooler” Google Meet/Zoom room so folks can sign on anytime and see who is on.
  6. Host virtual conversations or round tables with executives and senior leaders.
  7. Involve them in other company activities, especially Employee Resource Groups (ERGs).
  8. Pair them with a buddy who is an employee from a different team or function. Also, pair them up with a peer intern buddy so they can share their experience.
  9. Send all the swag love you can so they can deck out their house and wardrobe. Maybe not all at once, so they can get some surprises.
  10. Find a way to highlight interns during regular all-hands meetings or other company events, so people are reminded they’re here.
  11. Survey the students and get their ideas! Very likely – they have better ideas on how to socialize in this virtual world.

Interns in the past have proven to be invaluable and have made huge contributions to Cloudflare. So, we are excited that we are able to double the program to give more students meaningful work this summer. Despite these very odd and not-so-normal times, we are committed to providing them the best experience possible and making it memorable.

We hope that by sharing our approach we can help other companies make the pivot to remote internships more easily. If you’re interested in collaborating and sharing ideas, please contact [email protected].

Helping sites get back online: the origin monitoring intern project

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/helping-sites-get-back-online-the-origin-monitoring-intern-project/

The most impactful internship experiences involve building something meaningful from scratch and learning along the way. Those can be tough goals to accomplish during a short summer internship, but our experience with Cloudflare’s 2019 intern program met both of them and more! Over the course of ten weeks, our team of three interns (two engineering, one product management) went from a problem statement to a new feature, which is still working in production for all Cloudflare customers.

The project

Cloudflare sits between customers’ origin servers and end users. This means that all traffic to the origin server runs through Cloudflare, so we know when something goes wrong with a server and sometimes reflect that status back to users. For example, if an origin is refusing connections and there’s no cached version of the site available, Cloudflare will display a 521 error. If customers don’t have monitoring systems configured to detect and notify them when failures like this occur, their websites may go down silently, and they may hear about the issue for the first time from angry users.

When a customer's origin server is unreachable, Cloudflare sends a 5xx error back to the visitor.

This problem became the starting point for our summer internship project: since Cloudflare knows when customers’ origins are down, let’s send them a notification when it happens so they can take action to get their sites back online and reduce the impact to their users! This work became Cloudflare’s passive origin monitoring feature, which is currently available on all Cloudflare plans.

Over the course of our internship, we ran into lots of interesting technical and product problems, like:

Making big data small

Working with data from all requests going through Cloudflare’s 26 million+ Internet properties to look for unreachable origins is unrealistic from a data volume and performance perspective. Figuring out what datasets were available to analyze for the errors we were looking for, and how to adapt our whiteboarded algorithm ideas to use this data, was a challenge in itself.

Ensuring high alert quality

Because only a fraction of requests show up in the sampled timing and error dataset we chose to use, false positives/negatives were disproportionately likely to occur for low-traffic sites. These are the sites that are least likely to have sophisticated monitoring systems in place (and therefore are most in need of this feature!). In order to make the notifications as accurate and actionable as possible, we analyzed patterns of failed requests throughout different types of Cloudflare Internet properties. We used this data to determine thresholds that would maximize the number of true positive notifications, while making sure they weren’t so sensitive that we end up spamming customers with emails about sporadic failures.
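A minimal sketch of this kind of thresholding is below. The window shape, minimum sample count, and failure-rate cutoff are made-up placeholder values, not the thresholds the team actually derived from their analysis:

```javascript
// Illustrative sketch (not the production algorithm): decide whether a
// site's sampled origin failures warrant a notification.
// samples: array of booleans, true = the sampled request to origin failed.
function shouldNotify(samples, { minSamples = 20, failureRate = 0.9 } = {}) {
  // Require a minimum amount of data: low-traffic sites produce too few
  // samples, and firing on a handful of failures would spam customers.
  if (samples.length < minSamples) return false;
  const failures = samples.filter(Boolean).length;
  return failures / samples.length >= failureRate;
}
```

The tension described in the text lives in those two parameters: a lower `minSamples` catches outages on small sites sooner but raises the false-positive risk from sporadic failures.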

Designing actionable notifications

Cloudflare has lots of different kinds of customers, from people running personal blogs with interest in DDoS mitigation to large enterprise companies with extremely sophisticated monitoring systems and global teams dedicated to incident response. We wanted to make sure that our notifications were understandable and actionable for people with varying technical backgrounds, so we enabled the feature for small samples of customers and tested many variations of the “origin monitoring email”. Customers responded right back to our notification emails, sent in support questions, and posted on our community forums. These were all great sources of feedback that helped us improve the message’s clarity and actionability.

We frontloaded our internship with lots of research (both digging into request data to understand patterns in origin unreachability problems and talking to customers/poring over support tickets about origin unreachability) and then spent the next few weeks iterating. We enabled passive origin monitoring for all customers with some time remaining before the end of our internships, so we could spend time improving the supportability of our product, documenting our design decisions, and working with the team that would be taking ownership of the project.

We were also able to develop some smaller internal capabilities that built on the work we’d done for the customer-facing feature, like notifications on origin outage events for larger sites to help our account teams provide proactive support to customers. It was super rewarding to see our work in production, helping Cloudflare users get their sites back online faster after receiving origin monitoring notifications.

Our internship experience

The Cloudflare internship program was a whirlwind ten weeks, with each day presenting new challenges and learnings! Some factors that led to our productive and memorable summer included:

A well-scoped project

It can be tough to find a project that’s meaningful enough to make an impact but still doable within the short time period available for summer internships. We’re grateful to our managers and mentors for identifying an interesting problem that was the perfect size for us to work on, and for keeping us on the rails if the technical or product scope started to creep beyond what would be realistic for the time we had left.

Working as a team of interns

The immediate team working on the origin monitoring project consisted of three interns: Annika in product management and Ilya and Zhengyao in engineering. Having a dedicated team with similar goals and perspectives on the project helped us stay focused and work together naturally.

Quick, agile cycles

Since our project faced strict time constraints and our team was distributed across two offices (Champaign and San Francisco), it was critical for us to communicate frequently and work in short, iterative sprints. Daily standups, weekly planning meetings, and frequent feedback from customers and internal stakeholders helped us stay on track.

Great mentorship & lots of freedom

Our managers challenged us, but also gave us room to explore our ideas and develop our own work process. Their trust encouraged us to set ambitious goals for ourselves and enabled us to accomplish way more than we may have under strict process requirements.

After the internship

In the last week of our internships, the engineering interns, who were based in the Champaign, IL office, visited the San Francisco office to meet with the team that would be taking over the project when we left and present our work to the company at our all hands meeting. The most exciting aspect of the visit: our presentation was preempted by Cloudflare’s co-founders announcing the company’s public S-1 filing at the all hands! 🙂


Over the next few months, Cloudflare added a notifications page for easy configurability and announced the availability of passive origin monitoring along with some other tools to help customers monitor their servers and avoid downtime.

Ilya is working for Cloudflare part-time during the school semester and heading back for another internship this summer, and Annika is joining the team full-time after graduation this May. We’re excited to keep working on tools that help make the Internet a better place!

Also, Cloudflare is doubling the size of the 2020 intern class—if you or someone you know is interested in an internship like this one, check out the open positions in software engineering, security engineering, and product management.

Internship Experience: Cryptography Engineer

Post Syndicated from Watson Ladd original https://blog.cloudflare.com/internship-experience-cryptography-engineer/

Internship Experience: Cryptography Engineer


Back in the summer of 2017 I was an intern at Cloudflare. During the scholastic year I was a graduate student working on automorphic forms and computational Langlands at Berkeley: a part of number theory with deep connections to representation theory, aimed at uncovering some of the deepest facts about number fields. I had also gotten involved in Internet standardization and security research, but much more on the applied side.

While I had published papers in computer security and had coded for my dissertation, building and deploying new protocols to production systems was going to be new. Going from the academic environment of little day to day supervision to the industrial one of more direction; from greenfield code that would only ever be run by one person to large projects that had to be understandable by a team; from goals measured in years or even decades, to goals measured in days, weeks, or quarters; these transitions would present some challenges.

Cloudflare at that stage was a very different company from what it is now. Entire products and offices simply did not exist. Argo, now a mainstay of our offering for sophisticated companies, was slowly emerging. Access, which has been helping safeguard employees working from home these past weeks, was then experiencing teething issues. Workers was being extensively developed for launch that autumn. Quicksilver was still in the slow stages of replacing KyotoTycoon. Lisbon wasn’t on the map, and Austin was very new.

Day 1

My first job was to get my laptop working. Quickly I discovered that despite the promise of using either Mac or Linux, only Mac was supported as a local development environment. Most Linux users would take a good part of a month to tweak all the settings and get the local development environment up. I didn’t have months. After three days, I broke down and got a Mac.

Needless to say I asked for some help. Like a drowning man in quicksand, I managed to attract three engineers to this near-insoluble problem of the edge dev stack, and after days of hacking on it, fixing problems that had long been ignored, we got it working well enough to test a few things. That development environment is now gone, replaced with one built on Kubernetes VMs, and it works much better that way. When things work on your machine, you can now send everyone your machine.

Speeding up

With setup complete enough, it was on to the problem we needed to solve. Our goal was to implement a set of three interrelated Internet drafts, one defining secondary certificates, one defining external authentication with TLS certificates, and a third permitting servers to advertise the websites they could serve.

External authentication is a TLS feature that permits a server or a client on an already opened connection to prove its possession of the private key of another certificate. This proof of possession is tied to the TLS connection, avoiding attacks on bearer tokens caused by the lack of this binding.

Secondary certificates is an HTTP/2 feature enabling clients and servers to send certificates together with proof that they actually know the private key. This feature has many applications such as certificate-based authentication, but also enables us to prove that we are permitted to serve the websites we claim to serve.

The last draft was the HTTP/2 ORIGIN frame. The ORIGIN frame enables a website to advertise other sites that it could serve, permitting more connection reuse than allowed under the traditional rules. Connection reuse is an important part of browser performance as it avoids much of the setup of a connection.
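The wire format of the ORIGIN frame payload (RFC 8336) is simple: a sequence of Origin-Entry fields, each a 16-bit big-endian length followed by the ASCII serialization of an origin. A minimal encoder, shown in JavaScript here although the prototype discussed below was written in Go:

```javascript
// Encode an HTTP/2 ORIGIN frame payload per RFC 8336: for each origin,
// a 2-byte big-endian length followed by the ASCII origin string.
function encodeOriginFramePayload(origins) {
  const parts = [];
  for (const origin of origins) {
    const bytes = Buffer.from(origin, 'ascii');
    const len = Buffer.alloc(2);
    len.writeUInt16BE(bytes.length, 0); // Origin-Len field
    parts.push(len, bytes);             // followed by ASCII-Origin
  }
  return Buffer.concat(parts);
}
```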

These drafts solved an important problem for Cloudflare. Many resources such as JavaScript, CSS, and images hosted by one website can be used by others. Because Cloudflare proxies so many different websites, our servers have often cached these resources as well. Browsers, though, do not know that these different websites are made faster by Cloudflare, and as a result they repeat all the steps to request the subresources again. This takes unnecessary time, since a perfectly good, established connection already exists. If the browser knew this, it could reuse that connection.

We could only solve this problem by getting browsers and the broader community of TLS implementers on board. Some of these drafts such as external authentication and secondary certificates had a broader set of motivations, such as getting certificate based authentication to work with HTTP/2 and TLS 1.3. All of these needs had to be addressed in the drafts, even if we were only implementing a subset of the uses.

Successful standards cover the use cases that are needed while being simple enough to implement and achieve interoperability. Implementation experience is essential to achieving this success: a standard with no implementations fails to incorporate hard won lessons. Computers are hard.

Prototype

My first goal was to set up a simple prototype to test the much more complex production implementation, as well as to share outside of Cloudflare so that others could have confidence in their implementations. But these drafts that had to be implemented in the prototype were incremental improvements to an already massive stack of TLS and HTTP standards.

I decided it would be easiest to build on top of an already existing implementation of TLS and HTTP. I picked the Go standard library as my base: it’s simple, readable, and in a language I was already familiar with. There was already a basic demo showcasing support in Firefox for the ORIGIN frame, and it would be up to me to extend it.

Using that as my starting point I was able in three weeks to set up a demonstration server and a client. This showed good progress, and that nothing in the specification was blocking implementation. But without integrating it into our servers for further experimentation, we might never discover rare issues that could be showstoppers. This was a bitter lesson learned from TLS 1.3, where it took months to track down a single brand of printer that was incompatible with the standard, and forced a change.

From Prototype to Production

We also wanted to understand the benefits with some real world data, to convince others that this approach was worthwhile. Our position as a provider to many websites globally gives us diverse, real world data on performance that we use to make our products better, and perhaps more important, to learn lessons that help everyone make the Internet better. As a result we had to implement this in production: the experimental framework for TLS 1.3 development had been removed and we didn’t have an environment for experimentation.

At the time everything at Cloudflare was based on variants of NGINX. We had extended it with modules to implement features like Keyless and customized certificate handling to meet our needs, but much of the business logic was and is carried out in Lua via OpenResty.

Lua has many virtues, but at the time both the TLS termination and the core business logic lived in the same repo despite being different processes at runtime. This made it very difficult to understand what code was running when, and changes to basic libraries could create problems for both. The build system for this creation had the significant disadvantage of building the same targets with different settings. Lua also is a very dynamic language, but unlike the dynamic languages I was used to, there was no way to interact with the system as it was running on requests.

The first step was implementing the ORIGIN frame. In implementing this, we had to figure out which sites hosted the subresources used by the page we were serving. Luckily, we already had this logic to enable server push support driven by Link headers. Building on this let me quickly get ORIGIN working.
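That reuse can be sketched as follows: the server-push logic already extracted URLs from Link headers, and the ORIGIN work only needed the distinct origins of those URLs. This is a simplified illustration, not the production Lua code:

```javascript
// Hypothetical sketch of reusing Link-header parsing to find the origins
// of a page's subresources. Header parsing is simplified for illustration.
function originsFromLinkHeader(linkHeader) {
  const origins = new Set();
  // Link headers carry URLs in angle brackets: <https://cdn.example/a.js>; rel=preload
  for (const match of linkHeader.matchAll(/<([^>]+)>/g)) {
    try {
      origins.add(new URL(match[1]).origin);
    } catch (_err) {
      // Relative URLs have no origin of their own; skip them.
    }
  }
  return [...origins];
}
```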

This work wasn’t the only thing I was up to as an intern. I was also participating in weekly team meetings, attending our engineering presentations, and getting a sense of what life was like at Cloudflare. We had an excursion for interns to the Computer History Museum in Mountain View and Moffett Field, where we saw the base museum.

The next challenge was getting the CERTIFICATE frame to work. This was a much deeper problem. NGINX processes a request in phases, and some of the phases, like the header processing phase, do not permit network I/O without locking up the event loop. Since we are parsing the headers to determine what to send, the frame is created in the header processing phase. But finding a certificate and telling Keyless to sign it required network I/O.

The standard solution to this problem is to have Lua execute a timer callback, in which network I/O is possible. But this context doesn’t have any data from the request: some serious refactoring was needed to create a way to get the keyless module to function outside the context of a request.

Once the signature was created, the battle was half over. Formatting the CERTIFICATE frame was simple, but it had to be stuck into the connection associated with the request that had demanded it be created. And there was no reason to expect the request was still alive, and no way to know what state it was in when the request was handled by the Keyless module.

To handle this issue I made a shared btree indexed by a number containing space for the data to be passed back and forth. This enabled the request to record that it was ready to send the CERTIFICATE frame and Keyless to record that it was ready with a frame to send. Whichever of these happened second would do the work to enqueue the frame to send out.
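The rendezvous logic above can be modeled compactly. This sketch is in JavaScript with a `Map` standing in for the shared btree, purely to illustrate the "whichever side arrives second does the send" pattern; the real implementation lived in the Lua/NGINX stack:

```javascript
// Illustrative model of the request/Keyless rendezvous described above.
// Each side records its readiness under a shared numeric key; whichever
// side arrives second enqueues the CERTIFICATE frame for sending.
function makeRendezvous(send) {
  const slots = new Map(); // stands in for the shared btree
  return {
    // Called when the request is ready to send the frame.
    requestReady(id) {
      const slot = slots.get(id);
      if (slot && slot.frame !== undefined) { // Keyless got here first
        slots.delete(id);
        send(id, slot.frame);
      } else {
        slots.set(id, { requestReady: true });
      }
    },
    // Called when Keyless has produced a signed frame.
    frameReady(id, frame) {
      const slot = slots.get(id);
      if (slot && slot.requestReady) {        // the request got here first
        slots.delete(id);
        send(id, frame);
      } else {
        slots.set(id, { frame });
      }
    },
  };
}
```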

This was not an easy solution: the Keyless module had been written years before and largely unmodified. It fundamentally assumed it could access data from the request, and changing this assumption opened the door to difficult to diagnose bugs. It integrates into BoringSSL callbacks through some pretty tricky mechanisms.

However, I was able to test it using the client from the prototype, and it worked. Unfortunately, when I pushed the commit in which it worked upstream, the CI system could not find the git repo hosting the client prototype, due to a setting I forgot to change. The CI system unfortunately didn’t associate this failure with my branch, but attempted to check the repo out whenever it checked out any other branch people were working on. Murphy ensured my accomplishment had happened on a Friday afternoon Pacific time, and the team that manages the SSL server was then exclusively in London…

Monday morning the issue was quickly fixed, and whatever tempers had frayed were smoothed over when we discovered the deficiency in the CI system that had enabled a single branch to break every build. It’s always tricky to work in a global team. Later Alessandro flew to San Francisco for a number of projects with the team here, and we worked side by side trying to get a demonstration working on a test site. Unfortunately there was some difficulty tracking down a bug that prevented it from working in production. We had run out of time, and my internship was over. Alessandro flew back to London, and I flew to Idaho to see the eclipse.

The End

Ultimately we weren’t able to integrate this feature into the software at our edge: the risks of such intrusive changes for a very experimental feature outweighed the benefits. With not much prospect of support by clients, it would be difficult to get the real savings in performance promised. There also were nontechnical issues in standardization that have made this approach more difficult to implement: any form of traffic direction that doesn’t obey DNS creates issues for network debugging, and there were concerns about the impact of certificate misissuance.

While the project was less successful than I hoped it would be, I learned a lot of important skills: collaborating on large software projects, working with git, and communicating with other implementers about issues we found. I also got a taste of what it would be like to be on the Research team at Cloudflare, turning research from idea into practical reality, and this ultimately confirmed my choice to go into industrial research.

I’ve now returned to Cloudflare full-time, working on extensions for TLS as well as time synchronization. These drafts have continued to progress through the standardization process, and we’ve contributed some of the code I wrote as a starting point for other implementers to use.  If we knew all our projects would work out, they wouldn’t be ambitious enough to be research worth doing.

If this sort of research experience appeals to you, we’re hiring.

Trailblazing a Development Environment for Workers

Post Syndicated from Avery Harnish original https://blog.cloudflare.com/trailblazing-a-development-environment-for-workers/

Trailblazing a Development Environment for Workers


When I arrived at Cloudflare for an internship in the summer of 2018, I was taken on a tour, introduced to my mentor who took me out for coffee (shoutout to Preston), and given a quick whiteboard overview of how Cloudflare works. Each of the interns would work on a small project of their own and they’d try to finish them by the end of the summer. The description of the project I was given on my very first day read something along the lines of “implementing signed exchanges in a Cloudflare Worker to fix the AMP URL attribution problem,” which was a lot to take in at once. I asked so many questions those first couple of weeks. What are signed exchanges? Can I put these stickers on my laptop? What’s a Cloudflare Worker? Is there a limit to how much Topo Chico I can take from the fridge? What’s the AMP URL attribution problem? Where’s the bathroom?

I got the answers to all of those questions (and more!) and eventually landed a full-time job at Cloudflare. Here’s the story of my internship and working on the Workers Developer Experience team at Cloudflare.

Getting Started with Workers in 2018

After doing a lot of reading, and asking a lot more questions, it was time to start coding. I set up a Cloudflare account with a Workers subscription, and was greeted with a page that looked something like this:

Trailblazing a Development Environment for Workers

I was able to change the code in the text area on the left, click “Update”, and the changes would be reflected on the right — fairly self-explanatory. There was also a testing tab which allowed me to handcraft HTTP requests with different methods and custom headers. So far so good.

As my project evolved, it became clear that I needed to leave the Workers editor behind. Anything more than a one-off script tends to require JavaScript modules and multiple files. I spent some time setting up a local development environment for myself with npm and webpack (see, purgatory: a place or state of temporary suffering. merriam-webster.com).

After I finally got everything working, my iteration cycle looked a bit like this:

  1. Make a change to my code
  2. Run npm run build (which ran webpack and bundled my code in a single script)
  3. Open ./dist/worker.min.js (the output from my build step)
  4. Copy the entire contents of the built Worker to my clipboard
  5. Switch to the Cloudflare Workers Dashboard
  6. Paste my script into the Workers editor
  7. Click update
  8. Investigate the behavior of my recently modified script
  9. Rinse and repeat

There were two main things here that were decidedly not a fantastic developer experience:

  1. Inspecting the value of a variable by adding a console.log statement would take me ~2-3 minutes and involved lots of manual steps to perform a full rebuild.
  2. I was unable to use familiar HTTP clients such as cURL and Postman without deploying to production. This was because the Workers Preview UI was an iframe nested in the dashboard.

Luckily for me, Cloudflare Workers deploy globally incredibly quickly, so I could push the latest iteration of my Worker, wait just a few seconds for it to go live, and cURL away.

A Better Workers Developer Experience in 2019

Shortly after we shipped AMP Real URL, Cloudflare released Wrangler, the official CLI tool for developing Workers, and I was hired full time to work on it. Wrangler came with a feature that automated steps 2-7 of my workflow by running the command wrangler preview, which was a significant improvement. Running the command would build my Worker and open the browser automatically for me so I could see log messages and test out HTTP requests. That summer, our intern Matt Alonso created wrangler preview --watch. This command automatically updates the Workers preview window when changes are made to your code. You can read more about that here. This was, yet again, another improvement over my old friend Build and Open and Copy and Switch Windows and Paste Forever and Ever, Amen. But there was still no way that I could test my Worker with any HTTP client I wanted without deploying to production — I was still locked in to using the nested iframe.

A few months ago we decided it was time to do something about it. To the whiteboard!

Enter wrangler dev

Most web developers are familiar with developing their applications on localhost, and since Wrangler is written in Rust, it means we could start up a server on localhost that would handle requests to a Worker. The idea was to somehow start a server on localhost and then transform incoming requests and send them off to a preview session running on a Cloudflare server.

Proof of Concept

What we came up with ended up looking a little something like this — when a developer runs wrangler dev, do the following:

Trailblazing a Development Environment for Workers

  1. Build the Worker
  2. Upload the Worker via the Cloudflare API as a previewable Worker
  3. The Cloudflare API takes the uploaded script and creates a preview session, and returns an access token
  4. Start listening for incoming HTTP requests at localhost:8787

Top secret fact: 8787 spells out Rust on a phone numpad. Happy Easter!

  5. All incoming requests to localhost:8787 are modified:

  • All headers are prepended with cf-ew-raw- (for instance, X-Auth-Header would become cf-ew-raw-X-Auth-Header)
  • The URL is changed to https://rawhttp.cloudflareworkers.com/${path}
  • The Host header is changed to rawhttp.cloudflareworkers.com
  • The cf-ew-preview header is added with the access token returned from the API in step 3

  6. After sending this request, the response is modified:

  • All headers not prefixed with cf-ew-raw- are discarded and headers with the prefix have it removed (for instance, cf-ew-raw-X-Auth-Success would become X-Auth-Success)
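The rewrite rules in steps 5 and 6 can be sketched concretely. This is shown in JavaScript for brevity (Wrangler itself does this in Rust); the prefix and host values come straight from the description above:

```javascript
// Sketch of the wrangler dev request/response rewrite described above.
const PREFIX = 'cf-ew-raw-';

function toPreviewRequest(path, headers, accessToken) {
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    out[PREFIX + name] = value;               // prepend cf-ew-raw- to every header
  }
  out['Host'] = 'rawhttp.cloudflareworkers.com';
  out['cf-ew-preview'] = accessToken;         // token returned by the preview API
  return { url: `https://rawhttp.cloudflareworkers.com${path}`, headers: out };
}

function fromPreviewResponse(headers) {
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    // Keep only prefixed headers, with the prefix stripped;
    // headers without the prefix are discarded.
    if (name.startsWith(PREFIX)) out[name.slice(PREFIX.length)] = value;
  }
  return out;
}
```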

The hard part here was already done — the Workers Core team had already implemented the API to support the Preview UI. We just needed to gently nudge Wrangler and the API to be the best of friends. After some investigation into Rust’s HTTP ecosystem, we settled on using the HTTP library hyper, which I highly recommend if you’re in need of a low level HTTP library — it’s fast, correct, and the ergonomics are constantly improving. After a bit of work, we got a prototype working and carved Wrangler ❤️ Cloudflare API into the old oak tree down by Lady Bird Lake.

Usage

Let’s say I have a Workers script that looks like this:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let message = "Hello, World!"
  return new Response(message)
}

If I created a Wrangler project with this code and ran wrangler dev, this is what it looked like:

$ wrangler dev
👂  Listening on http://127.0.0.1:8787

In another terminal session, I could run the following:

$ curl localhost:8787
Hello, World!

It worked! Hooray!

Just the Right Amount of Scope Creep

At this point, our initial goal was complete: any HTTP client could test out a Worker before it was deployed. However, wrangler dev was still missing crucial functionality. When running wrangler preview, it’s possible to view console.log output in the browser editor. This is incredibly useful for debugging Workers applications, and something with a name like wrangler dev should include a way to view those logs as well. “This will be easy,” I said, not yet knowing what I was signing up for. Buckle up!

console.log, V8, and the Chrome Devtools Protocol, Oh My!

My first goal was to get a Hello, World! message streamed to my terminal session so that developers can debug their applications using wrangler dev. Let’s take the script from earlier and add a console.log statement to it:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let message = "Hello, World!"
  console.log(message) // this line is new
  return new Response(message)
}

If you’d like to follow along, you can paste that script into the editor at cloudflareworkers.com using Google Chrome.

This is what the Preview editor looks like when that script is run:

Trailblazing a Development Environment for Workers

You can see that Hello, World! has been printed to the console. This may not be the most useful example, but in more complex applications logging different variables is helpful for debugging. If you’re following along, try changing console.log(message) to something more interesting, like console.log(request.url).

The console may look familiar to you if you’re a web developer because it’s the same interface you see when you open the Developer Tools in Google Chrome. Since Cloudflare Workers is built on top of V8 (more info about that here and here), the Workers runtime is able to create a WebSocket that speaks the Chrome Devtools Protocol. This protocol allows the client (your browser, Wrangler, or anything else that supports WebSockets) to send and receive messages that contain information about the script that is running.

In order to see the messages that are being sent back and forth between our browser and the Workers runtime:

  1. Open Chrome Devtools
  2. Click the Network tab at the top of the inspector
  3. Click the filter icon underneath the Network tab (it looks like a funnel and is nested between the cancel icon and the search icon)
  4. Click WS to filter out all requests but WebSocket connections

Your inspector should look like this:

Trailblazing a Development Environment for Workers

Then, reload the page, and select the /inspect item to view its messages. It should look like this:

Trailblazing a Development Environment for Workers

Hey look at that! We can see messages that our browser sent to the Workers runtime to enable different portions of the developer tools for this Worker, and we can see that the runtime sent back our Hello, World! Pretty cool!

On the Wrangler side of things, all we had to do to get started was initialize a WebSocket connection for the current Worker, and send a message with the method Runtime.enable so the Workers runtime would enable the Runtime domain and start sending console.log messages from our script.
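Those Chrome Devtools Protocol messages are plain JSON over the WebSocket. The sketch below constructs a `Runtime.enable` command and pulls the logged text out of a `Runtime.consoleAPICalled` event; the event shape follows the CDP Runtime domain, and the extraction is simplified (it only handles arguments with a primitive `value`):

```javascript
// Build the Runtime.enable command Wrangler sends over the WebSocket.
function enableRuntimeMessage(id) {
  return JSON.stringify({ id, method: 'Runtime.enable' });
}

// Pull the logged text out of a Runtime.consoleAPICalled event.
// Each argument is a CDP RemoteObject; we take its primitive value.
function extractConsoleLog(rawMessage) {
  const msg = JSON.parse(rawMessage);
  if (msg.method !== 'Runtime.consoleAPICalled') return null;
  return msg.params.args.map(arg => arg.value).join(' ');
}
```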

After those initial steps, it quickly became clear that a lot more work was needed to get to a useful developer tool. There’s a lot that goes into the Chrome Devtools Inspector, and most of the libraries for interacting with it are written in languages other than Rust (which we use for Wrangler). We spent a lot of time switching WebSocket libraries due to incompatibilities across operating systems (turns out TLS is hard) and implementing the parts of the Chrome Devtools Protocol that we needed in Rust. There’s a lot of work that still needs to be done in order to make wrangler dev a top notch developer tool, but we wanted to get it into the hands of developers as quickly as possible.

Try it Out!

wrangler dev is currently in alpha, and we’d love it if you could try it out! You should first check out the Quick Start and then move on to wrangler dev. If you run into issues or have any feedback, please let us know!

Signing Off

I’ve come a long way from where I started in 2018 and so has the Workers ecosystem. It’s been awesome helping to improve the developer experience of Workers for future interns, internal Cloudflare teams, and of course our customers. I can’t wait to see what we do next. I have some ideas for what’s next with Wrangler, so stay posted!

P.S. Wrangler is also open source, and we are more than happy to field bug reports, feedback, and community PRs. Check out our Contribution Guide if you want to help out!

Cloudflare Doubling Size of 2020 Summer Intern Class

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/cloudflare-doubling-size-of-2020-summer-intern-class/

Cloudflare Doubling Size of 2020 Summer Intern Class

Cloudflare Doubling Size of 2020 Summer Intern Class

We are living through extraordinary times. Around the world, the Coronavirus has caused disruptions to nearly everyone’s work and personal lives. It’s been especially hard to watch as friends and colleagues outside Cloudflare are losing jobs and businesses struggle through this crisis.

We have been extremely fortunate at Cloudflare. The super heroes of this crisis are clearly the medical professionals at the front lines saving people’s lives and the scientists searching for a cure. But the faithful sidekick that’s helping us get through this crisis — still connected to our friends, loved ones, and, for those of us fortunate enough to be able to continue work from home, our jobs — is the Internet. As we all need it more than ever, we’re proud of our role in helping ensure that the Internet continues to work securely and reliably for all our customers.

We plan to invest through this crisis. We are continuing to hire across all teams at Cloudflare and do not foresee any need for layoffs. I appreciate the flexibility of our team and new hires to adapt what was our well-oiled, in-person orientation process to something virtual we’re continuing to refine weekly as new people join us.

Summer Internships

One group that has been significantly impacted by this crisis is students who were expecting internships over the summer. Many are, unfortunately, getting notice that the experiences they were counting on have been cancelled. These internships are not only a significant part of these students’ education, but in many cases provide an income that helps them get through the school year.

Cloudflare is not cancelling any of our summer internships. We anticipate that many of our internships will need to be remote to comply with public health recommendations around travel and social distancing. We also understand that some students may prefer a remote internship even if we do begin to return to the office so they can take care of their families and avoid travel during this time. We stand by every internship offer we have extended and are committed to making each internship a terrific experience whether remote, in person, or some mix of both.

Doubling the Size of the 2020 Internship Class

But, seeing how many great students were losing their internships at other companies, we wanted to do more. Today we are announcing that we will double the size of Cloudflare’s summer 2020 internship class. Most of the internships we offer are in our product, security, research and engineering organizations, but we also have some positions in our marketing and legal teams. We are reopening the internship application process and are committed to making decisions quickly so students can plan their summers. You can find newly open internships posted at the link below.

https://boards.greenhouse.io/cloudflare/jobs/2156436?gh_jid=2156436

Internships are jobs, and we believe people should be paid for the jobs they do, so every internship at Cloudflare is paid. That doesn’t change with these new internship positions we’re creating: they will all be paid.

Highlighting Other Companies with Opportunities

Even after doubling the size of our internship class, we expect to receive far more qualified applicants than we will be able to accommodate. We hope that other companies fortunate enough to be able to weather this crisis will consider expanding their internship classes as well. We plan to work with peer organizations and will highlight those that also have summer internship openings. If your company still has available internship positions, please let us know by emailing so we can point students your way: [email protected]

Opportunity During Crisis

Cloudflare was born out of a time of crisis. Michelle and I were in school when the global financial crisis hit in 2008. Michelle had spent that summer at an internship at Google. That was the one year Google decided to extend no full-time offers to summer interns. So, in the spring of 2009, we were both still trying to figure out what we were going to do after school.

It didn’t feel great at the time, but had we not been in the midst of that crisis I’m not sure we ever would have started Cloudflare. Michelle and I remember the stress of that time very clearly. The recognition of the importance of planning for rainy days has been part of what has made Cloudflare so resilient. And it’s why, when we realized we could play a small part in ensuring some students who had lost the internships they thought they had could still have a rewarding experience, we knew it was the right decision.

Together, we can get through this. And, when we do, we will all be stronger.

https://boards.greenhouse.io/cloudflare/jobs/2156436?gh_jid=2156436