Tag Archives: dashboard

One of our most requested features is here: DNS record comments and tags

Post Syndicated from Hannes Gerhart original https://blog.cloudflare.com/dns-record-comments/

Starting today, we’re adding support on all zone plans for custom comments on your DNS records. Users on the Pro, Business, and Enterprise plans will also be able to tag DNS records.

DNS records are important

DNS records play an essential role when it comes to operating a website or a web application. In general, they are used to map human-readable hostnames to machine-readable information, most commonly IP addresses. Besides mapping hostnames to IP addresses, they also fulfill many other use cases, like:

  • Ensuring emails can reach your inbox, by setting up MX records.
  • Avoiding email spoofing and phishing by configuring SPF, DMARC and DKIM policies as TXT records.
  • Validating a TLS certificate by adding a TXT (or CNAME) record.
  • Specifying allowed certificate authorities that can issue certificates on behalf of your domain by creating a CAA record.
  • Validating ownership of your domain for other web services (website hosting, email hosting, web storage, etc.) – usually by creating a TXT record.
  • And many more.

With all these different use cases, it is easy to forget what a particular DNS record is for, and it is not always possible to derive the purpose from the name, type, and content of a record. Validation TXT records tend to be on seemingly arbitrary names with rather cryptic content. When you then also throw multiple people or teams into the mix who have access to the same domain, all creating and updating DNS records, it can quickly happen that someone modifies or even deletes a record, causing the on-call person to get paged in the middle of the night.

Enter: DNS record comments & tags 📝

Starting today, everyone with a zone on Cloudflare can add custom comments on each of their DNS records via the API and through the Cloudflare dashboard.

To add a comment, just click on the Edit action of the respective DNS record and fill out the Comment field. Once you hit Save, a small icon will appear next to the record name to remind you that this record has a comment. Hovering over the icon will allow you to take a quick glance at it without having to open the edit panel.

[Screenshot: the DNS record edit panel with the new Comment and Tags fields]

What you can also see in the screenshot above is the new Tags field. All users on the Pro, Business, or Enterprise plans now have the option to add custom tags to their records. These tags can be just a key like “important” or a key-value pair like “team:DNS”, separated by a colon. Neither comments nor tags have any impact on the resolution or propagation of the particular DNS record, and they’re only visible to people with access to the zone.
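
Comments and tags are also exposed through the API as ordinary fields on the DNS record object. Here is a minimal sketch (with a placeholder zone ID and API token) of creating a record that carries a comment and a few tags; check the API documentation for the full schema:

// Sketch: create an A record with a comment and tags via the Cloudflare v4 API.
// ZONE_ID and API_TOKEN are placeholders for your own values.
const ZONE_ID = "<zone-id>";
const API_TOKEN = "<api-token>";

async function createAnnotatedRecord(): Promise<void> {
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        type: "A",
        name: "sub1.mycoolwebpage.xyz",
        content: "192.0.2.1",
        ttl: 1,
        proxied: false,
        comment: "Production origin server.",
        tags: ["important", "prod", "team:DNS"],
      }),
    }
  );
  const result = await response.json();
  console.log(result.success, result.errors);
}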

Now, we know that some of our users love automation via our API. So if you want to create a number of zones and populate all their DNS records by uploading a zone file as part of your script, you can also directly include the DNS record comments and tags in that zone file. And when you export a zone file, either to back up all records of your zone or to easily move your zone to another account on Cloudflare, it will also contain comments and tags. Learn more about importing and exporting comments and tags on our developer documentation.

;; A Records
*.mycoolwebpage.xyz.     1      IN  A    192.0.2.3
mycoolwebpage.xyz.       1      IN  A    203.0.113.1 ; Contact Hannes for details.
sub1.mycoolwebpage.xyz.  1      IN  A    192.0.2.2 ; Test origin server. Can be deleted eventually. cf_tags=testing
sub1.mycoolwebpage.xyz.  1      IN  A    192.0.2.1 ; Production origin server. cf_tags=important,prod,team:DNS

;; MX Records
mycoolwebpage.xyz.       1      IN  MX   1 mailserver1.example.
mycoolwebpage.xyz.       1      IN  MX   2 mailserver2.example.

;; TXT Records
mycoolwebpage.xyz.       86400	IN  TXT  "v=spf1 ip4:192.0.2.0/24 -all" ; cf_tags=important,team:EMAIL
sub1.mycoolwebpage.xyz.  86400  IN  TXT  "hBeFxN3qZT40" ; Verification record for service XYZ. cf_tags=team:API

New filters

It might be that your zone has hundreds or thousands of DNS records, so how on earth would you find all the records that belong to the same team or that are needed for one particular application?

For this, we created a new filter option in the dashboard. It allows you to filter not only for comments or tags but also for other record data like name, type, content, or proxy status. The general search bar for a quick and broader search will still be available, but it cannot (yet) be used in conjunction with the new filters.

By clicking on the “Add filter” button, you can select individual filters that are connected with a logical AND. So if I wanted to only look at TXT records that are tagged as important, I would add one filter for the record type TXT and another for the tag “important”.

One more thing (or two)

Another change we made is to replace the Advanced button with two individual actions: Import and Export, and Dashboard Display Settings.

You can find them in the top right corner under DNS management. When you click on Import and Export you have the option to either export all existing DNS records (including their comments and tags) into a zone file or import new DNS records to your zone by uploading a zone file.

The action Dashboard Display Settings allows you to select which special record types are shown in the UI. There is also an option to either show the record tags inline under the respective DNS record or just show an icon when tags are present on the record.

And last but not least, we increased the width of the DNS record table as part of this release. The new table makes better use of the existing horizontal space and allows you to see more details of your DNS records, especially if you have longer subdomain names or content.

Try it now

DNS record comments and tags are available today. Just navigate to the DNS tab of your zone in the Cloudflare dashboard and create your first comment or tag. If you are not yet using Cloudflare DNS, sign up for free in just a few minutes.

Learn more about DNS record comments and tags on our developer documentation.

Project A11Y: how we upgraded Cloudflare’s dashboard to adhere to industry accessibility standards

Post Syndicated from Emily Flannery original https://blog.cloudflare.com/project-a11y/

At Cloudflare, we believe the Internet should be accessible to everyone. And today, we’re happy to announce a more inclusive Cloudflare dashboard experience for our users with disabilities. Recent improvements mean our dashboard now adheres to industry accessibility standards, including Web Content Accessibility Guidelines (WCAG) 2.1 AA and Section 508 of the Rehabilitation Act.

Over the past several months, the Cloudflare team and our partners have been hard at work to make the Cloudflare dashboard[1] as accessible as possible for every single one of our current and potential customers. This means incorporating accessibility features that comply with the latest Web Content Accessibility Guidelines (WCAG) and Section 508 of the US’s federal Rehabilitation Act. We are invested in working to meet or exceed these standards; to demonstrate that commitment and share openly about the state of accessibility on the Cloudflare dashboard, we have completed the Voluntary Product Accessibility Template (VPAT), a document used to evaluate our level of conformance today.

Conformance with a technical and legal spec is a bit abstract–but for us, accessibility simply means that as many people as possible can be successful users of the Cloudflare dashboard. This is important because each day, more and more individuals and businesses rely upon Cloudflare to administer and protect their websites.

For individuals with disabilities who work on technology, we believe that an accessible Cloudflare dashboard could mean improved economic and technical opportunities, safer websites, and equal access to tools that are shaping how we work and build on the Internet.

For designers and developers at Cloudflare, our accessibility remediation project has resulted in an overhaul of our component library. Our newly WCAG-compliant components expedite and simplify our work building accessible products. They make it possible for us to deliver on our commitment to an accessible dashboard going forward.

Our Journey to an Accessible Cloudflare Dashboard

In 2021, we initiated an audit with third party experts to identify accessibility challenges in the Cloudflare dashboard. This audit came back with a daunting 213-page document—a very, very long list of compliance gaps.

We learned from the audit that there were many users we had unintentionally failed to design and build for in Cloudflare dashboard user interfaces. Most especially, we had not done well accommodating keyboard users and screen reader users, who often rely upon these technologies because of a physical impairment. Those impairments include low vision or blindness, motor disabilities (examples include tremors and repetitive strain injury), or cognitive disabilities (examples include dyslexia and dyscalculia).

As a product and engineering organization, we had spent more than a decade in cycles of rapid growth and product development. While we’re proud of what we have built, the audit made clear to us that there was a great need to address the design and technical debt we had accrued along the way.

One year, four hundred Jira tickets, and over 25 new, accessible web components later, we’re ready to celebrate our progress with you. Major categories of work included:

  1. Forms: We re-wrote our internal form components with accessibility and developer experience top of mind. We improved form validation and error handling, labels, required field annotations, and made use of persistent input descriptions instead of placeholders. Then, we deployed those component upgrades across the dashboard.
  2. Data visualizations: After conducting a rigorous re-evaluation of their design, we re-engineered charts and graphs to be accessible to keyboard and screen reader users. See below for a brief case study.
  3. Heading tags: We corrected page structure throughout the dashboard by replacing all our heading tags (<h1>, <h2>, etc.) with a technique we borrowed from Heydon Pickering: an approach to heading level management that uses React Context and basic arithmetic (a sketch follows this list).
  4. SVGs: We reworked how we create SVGs (Scalable Vector Graphics), so that they are labeled properly and only exposed to assistive technology when useful.
  5. Node modules: We jumped several major versions of old, inaccessible node modules that our UI components depend upon (and we broke many things along the way).
  6. Color: We overhauled our use of color, and contributed a new volume of accessible sequential colors to our design system.
  7. Bugs: We squashed a lot of bugs that had made their way into the dashboard over the years. The most common type of bug we encountered related to incorrect or unsemantic use of HTML elements—for example, using a <div> where we should have used a <td> (table data) or <tr> (table row) element within a table.
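
For readers curious what the heading-level technique from item 3 looks like in practice, here is a minimal sketch (not our actual component code): a React Context tracks the current nesting depth, a Section component increments it, and an H component does the arithmetic and renders a real heading tag, clamped at h6.

import React, { createContext, useContext } from "react";

// Current heading depth; top-level pages start at 1.
const LevelContext = createContext(1);

// Each nested Section bumps the heading level by one.
export function Section({ children }: { children: React.ReactNode }) {
  const level = useContext(LevelContext);
  return (
    <LevelContext.Provider value={level + 1}>{children}</LevelContext.Provider>
  );
}

// H renders h1–h6 based on how deeply it is nested, never past h6.
export function H(props: React.HTMLAttributes<HTMLHeadingElement>) {
  const level = useContext(LevelContext);
  const Tag = `h${Math.min(level, 6)}` as keyof JSX.IntrinsicElements;
  return <Tag {...props} />;
}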

Case Study: Accessibility Work On Cloudflare Dashboard Data & Analytics

The Cloudflare dashboard is replete with analytics and data visualizations designed to offer deep insight into users’ websites’ performance, traffic, security, and more. Making those data visualizations accessible proved to be among the most complex and interdisciplinary issues we faced in the remediation work.

An example of a problem we needed to solve related to WCAG success criterion 1.4.1, which pertains to the use of color. 1.4.1 specifies that color cannot be the only means by which to convey information, such as the differentiation between two items compared in a chart or graph.

Our charts were clearly nonconforming with this standard, using color alone to represent different data being compared. For example, a typical graph might have used the color blue to show the number of requests to a website that were 200 OK, and the color orange to show 403 Forbidden, but failed to offer users another way to discern between the two status codes.

Our UI team went to work on the problem, and chose to focus our effort first on the Cloudflare dashboard time series graphs.

Interestingly, we found that design patterns recommended even by accessibility experts created wholly unusable visualizations when placed into the context of real world data. Examples of such recommended patterns include using different line weights, patterns (dashed, dotted or other line styles), and terminal glyphs (symbols set at the beginning and end of the lines) to differentiate items being compared.

We tried, and failed, to apply a number of these patterns; you can see the evolution of this work on our time series graph component in the three different images below.

v.1

Here is an early attempt at using both terminal glyphs and patterns to differentiate data in a time series graph. You can see that the terminal glyphs pile up and become indistinguishable; the differences among the line patterns are very hard to discern. This code never made it into production.

v.2

In this version, we eliminated terminal glyphs but kept line patterns. Additionally, we faded the unfocused items in the graph to help bring highlighted data to the forefront. This latter technique made it into our final solution.

v.3

Here we eliminated patterns altogether, simplified the user interface to only use the fading technique on unfocused items, and put our new, sequentially accessible colors to use. Finally, a visual design solution approved by accessibility and data visualization experts, as well as our design and engineering teams.

After arriving at our design solution, we had some engineering work to do.

In order to meet WCAG success criterion 2.1.1, we rewrote our time series graphs to be fully keyboard accessible by adding focus handling to every data point, and enabling the traversal of data using arrow keys.

Navigating time series data points by keyboard on the Cloudflare dashboard.
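
As a rough illustration of that pattern (a sketch, not our production chart code), each data point can be rendered as a focusable element with a roving tabindex, and a keydown handler moves focus between neighboring points:

import React, { useRef } from "react";

type Point = { x: string; y: number };

// Sketch: one tab stop for the whole series; ArrowLeft/ArrowRight move focus
// between individual data points, each of which carries an accessible label.
export function KeyboardSeries({ points }: { points: Point[] }) {
  const refs = useRef<Array<HTMLSpanElement | null>>([]);

  const onKeyDown = (index: number) => (e: React.KeyboardEvent) => {
    if (e.key === "ArrowRight") refs.current[index + 1]?.focus();
    if (e.key === "ArrowLeft") refs.current[index - 1]?.focus();
  };

  return (
    <div role="group" aria-label="Time series data points">
      {points.map((p, i) => (
        <span
          key={p.x}
          ref={(el) => { refs.current[i] = el; }}
          tabIndex={i === 0 ? 0 : -1}
          onKeyDown={onKeyDown(i)}
          aria-label={`${p.x}: ${p.y} requests`}
        />
      ))}
    </div>
  );
}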

We did some fine-tuning, specifically to support screen readers: we eliminated auditory “chartjunk” (unnecessary clutter or information in a chart or graph) and cleaned up decontextualized data (a scenario in which numbers are exposed to and read by a screen reader, but contextualizing information, like x- and y-axis labels, is not).

And lastly, to meet WCAG 1.1.1, we engineered new UI component wrappers to make chart and graph data downloadable in CSV format. We deployed this part of the solution across all charts and graphs, not just the time series charts like those shown above. No matter how you browse and interact with the web, we hope you’ll notice this functionality around the Cloudflare dashboard and find value in it.
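
The CSV piece can be as simple as serializing a chart's underlying data and handing it to the browser as a download. Here is a small sketch of that idea; the row shape and file name are illustrative:

// Sketch: turn chart data into a CSV string and trigger a browser download.
type Row = Record<string, string | number>;

function toCsv(rows: Row[]): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const lines = rows.map((row) =>
    headers.map((h) => JSON.stringify(row[h] ?? "")).join(",")
  );
  return [headers.join(","), ...lines].join("\n");
}

function downloadCsv(rows: Row[], filename: string): void {
  const blob = new Blob([toCsv(rows)], { type: "text/csv" });
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = filename;
  link.click();
  URL.revokeObjectURL(url);
}

// Example: downloadCsv([{ time: "2022-05-01", requests: 1200 }], "requests.csv");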

Making all of this data available to low vision, keyboard, and assistive technology users was an interesting challenge for us, and a true team effort. It necessitated a separate data visualization report conducted by another, more specialized team of third party experts, deep collaboration between engineering and design, and many weeks of development.

Applying this thorough treatment to all data visualizations on the Cloudflare dashboard is our goal, but it is still a work in progress. Please stay tuned for more accessible updates to our chart and graph components.

Conclusion

There’s a lot of nuance to accessibility work, and we were novices at the beginning: researching and learning as we were doing. We also broke a lot of things in the process, which (as any engineering team knows!) can be stressful.

Overall, our team’s biggest challenge was figuring out how to complete a high volume of cross-functional work in the shortest time possible, while also setting a foundation for these improvements to persist over time.

As a frontend engineering and design team, we are very grateful for having had the opportunity to focus on this problem space and to learn from truly world-class accessibility experts along the way.

Accessibility matters to us, and we know it does to you. We’re proud of our progress, and there’s always more to do to make Cloudflare more usable for all of our customers. This is a critical piece of our foundation at Cloudflare, where we are building the most secure, performant and reliable solutions for the Internet. Stay tuned for what’s next!

Not using Cloudflare yet? Get started today and join us on our mission to build a better Internet.

[1] All references to “dashboard” in this post are specific to the primary user-authenticated Cloudflare web platform. This does not include Cloudflare’s product-specific dashboards, marketing, support, educational materials, or third party integrations.

Now all customers can share access to their Cloudflare account with Role Based Access Controls

Post Syndicated from Joseph So original https://blog.cloudflare.com/rbac-for-everyone/

Cloudflare’s mission is to help build a better Internet. Pair that with our core belief that security is something that should be accessible to everyone, and the outcome is a better and safer Internet for all. Previously, our FREE and PAYGO customers didn’t have the flexibility to give someone control of just part of their account; they had to give access to everything.

Starting today, role based access controls (RBAC) and all of our additional roles will be rolled out to users on every plan! Whether you are a small business or even a single user, you can make sure you add users only to the parts of Cloudflare you deem appropriate.

Why should I limit access?

It is good security practice to limit access to what a team member needs to do their job. Restricting access reduces the overall threat surface if a given user is compromised, and limits the surface on which mistakes can be made.

If a malicious actor gains access to an account that only has read access, you’ll find yourself with less of a headache than if they had administrative access and could change how your site operates. Likewise, you can prevent users from accidentally making changes to critical features, like firewall or DNS configuration, that fall outside their role.

What are roles?

Roles are a grouping of permissions that make sense together. At Cloudflare, this means grouping permissions together by access to a product suite.

Cloudflare is a critical piece of infrastructure for customers, and roles ensure that you can give your team the access they need, scoped to what they’ll do, and which products they interact with.

Once Role Based Access Controls are enabled, go to “Manage Account” and then “Members” in the left sidebar, and you’ll have the following list of roles available, each of which grants access to a different subset of the Cloudflare offering.

  • Administrator: Can access the full account, except for membership management and billing.
  • Administrator Read Only: Can access the full account in read-only mode.
  • Analytics: Can read Analytics.
  • Audit Logs Viewer: Can view Audit Logs.
  • Billing: Can edit the account’s billing profile and subscriptions.
  • Cache Purge: Can purge the edge cache.
  • Cloudflare Access: Can edit Cloudflare Access policies.
  • Cloudflare Gateway: Can edit Cloudflare Gateway and read Access.
  • Cloudflare Images: Can edit Cloudflare Images assets.
  • Cloudflare Stream: Can edit Cloudflare Stream media.
  • Cloudflare Workers Admin: Can edit Cloudflare Workers.
  • Cloudflare Zero Trust: Can edit Cloudflare Zero Trust.
  • Cloudflare Zero Trust PII: Can access Cloudflare Zero Trust PII.
  • Cloudflare Zero Trust Read Only: Can access Cloudflare Zero Trust in read-only mode.
  • Cloudflare Zero Trust Reporting: Can access Cloudflare Zero Trust reporting data.
  • DNS: Can edit DNS records.
  • Firewall: Can edit WAF, IP Firewall, and Zone Lockdown settings.
  • HTTP Applications: Can view and edit HTTP Applications.
  • HTTP Applications Read: Can view HTTP Applications.
  • Load Balancer: Can edit Load Balancers, Pools, Origins, and Health Checks.
  • Log Share: Can edit Log Share configuration.
  • Log Share Reader: Can read Enterprise Log Share.
  • Magic Network Monitoring: Can view and edit MNM configuration.
  • Magic Network Monitoring Admin: Can view, edit, create, and delete MNM configuration.
  • Magic Network Monitoring Read-Only: Can view MNM configuration.
  • Network Services Read (Magic): Grants read access to network configurations for Magic services.
  • Network Services Write (Magic): Grants write access to network configurations for Magic services.
  • SSL/TLS, Caching, Performance, Page Rules, and Customization: Can edit most Cloudflare settings except for DNS and Firewall.
  • Trust and Safety: Can view and request reviews for blocks.
  • Zaraz Admin: Can edit Zaraz configuration.
  • Zaraz Readonly: Can read Zaraz configuration.

If you find yourself on a team that is growing, you may want to grant firewall and DNS access to a delegated network admin, billing access to your bookkeeper, and Workers access to your developer.

Each of these roles provides specific access to a portion of your Cloudflare account, scoping them to the appropriate set of products. Even Super Administrator is now available, allowing you to provide this access to somebody without handing over your password and 2FA.

How to use our roles

The first step to using RBAC is an analysis and review of the duties and tasks of your team. When a team member primarily interacts with a specific part of the Cloudflare offering, start by giving them access only to that part. Multiple roles can be assigned to a single user, so when they require more access, you can grant them an additional role.
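
If you manage membership programmatically, the same role assignment can be expressed through the API. The sketch below is an assumption-laden example, not a prescribed workflow: it uses placeholder IDs and a placeholder token, and it looks up the role ID by listing the account's roles first.

// Sketch: invite a member with only the "DNS" role (placeholder IDs and token).
const ACCOUNT_ID = "<account-id>";
const API_TOKEN = "<api-token>";
const API = "https://api.cloudflare.com/client/v4";

async function inviteDnsAdmin(email: string): Promise<void> {
  // Look up the role ID for the "DNS" role.
  const rolesRes = await fetch(`${API}/accounts/${ACCOUNT_ID}/roles`, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  const roles = (await rolesRes.json()).result as { id: string; name: string }[];
  const dnsRole = roles.find((r) => r.name === "DNS");
  if (!dnsRole) throw new Error("DNS role not found");

  // Invite the member with just that role.
  await fetch(`${API}/accounts/${ACCOUNT_ID}/members`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ email, roles: [dnsRole.id] }),
  });
}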

Rollout

We will be rolling out RBAC over the next few weeks. When the roles become available in your account, head over to our documentation to learn about each of the roles in detail.

We’ve shipped so many products the Cloudflare dashboard needed its own search engine

Post Syndicated from Emily Flannery original https://blog.cloudflare.com/quick-search-beta/

Today we’re proud to announce our first release of quick search for the Cloudflare dashboard, a beta version of our first ever cross-dashboard search tool to help you navigate our products and features. This first release is now available to a small percentage of our customers. Want to request early access? Let us know by filling out this form.

What we’re launching

We’re launching quick search to speed up common interactions with the Cloudflare dashboard. Our dashboard allows you to configure Cloudflare’s full suite of products and features, and quick search gives you a shortcut.

To get started, you can access the quick search tool from anywhere within the Cloudflare dashboard by clicking the magnifying glass button in the top navigation, or hitting Ctrl + K on Linux and Windows or ⌘ + K on Mac. (If you find yourself forgetting which key combination it is, just remember that it’s ⌘-K-wik or Ctrl-K-wik.) From there, enter a search term and then select from the results shown below.

Access quick search from the top navigation bar, or use keyboard shortcuts Ctrl + K on Linux and Windows or ⌘ + K on Mac.

Current supported functionality

What functionality will you have access to? Below you’ll learn about the three core capabilities of quick search that are included in this release, as well as helpful tips for using the tool.

Search for a page in the dashboard

Start typing in the name of the product you’re looking for, and we’ll load matching terms after each key press. You will see results for any dashboard page that currently exists in your sidebar navigation. Then, just click the desired result to navigate directly there.

Search for “page” and you’ll see results categorized into “website-only products” and “account-wide products.”
Search for “ddos” and you’ll see results categorized into “websites,” “website-only products” and “account-wide products.”

Search for website-only products

For our customers who manage a website or domain in Cloudflare, you have access to a multitude of Cloudflare products and features to enhance your website’s security, performance and reliability. Quick search can be used to easily find those products and features, regardless of where you currently are in the dashboard (even from within another website!).

You may easily search for your website by name to navigate to your website’s Overview page.

You may also navigate to the products and feature pages within your specific website(s). Note that you can perform a website-specific search from anywhere in your core dashboard using one of two different approaches, which are explained below.

First, you may search for your website by name, then navigate the search results from there.

Alternatively, you may search first for the product or feature you’re looking for, then filter down by your website.

Search for account-wide products

Many Cloudflare products and features are not tied directly to a website or domain that you have set up in Cloudflare, like Workers, R2, Magic Transit—not to mention their related sub-pages. Now, you may use quick search to more easily navigate to those sections of the dashboard.

Here’s an overview of what’s next on our quick search roadmap (and not yet supported today):

  • Search results do not currently return results for product- and feature-specific names or configurations, such as Worker names, specific DNS records, IP addresses, or Firewall Rules.
  • Search results do not currently return results from within the Zero Trust dashboard.
  • Search results do not currently return results for Cloudflare content living outside the dashboard, like Support or Developer documentation.

We’d love to hear what you think. What would you like to see added next? Let us know using the feedback link found at the bottom of the search window.

Our vision for the future of the dashboard

We’re excited to launch quick search and to continue improving our dashboard experience for all customers. Over time, we’ll mature our search functionality to index any and all content you might be looking for — including search results for all product content, Support and Developer docs, extending search across accounts, caching your recent searches, and more.

Quick search is one of many important user experience improvements we are planning to tackle over the coming weeks, months and years. The dashboard is central to your Cloudflare experience, and we’re fully committed to making your experience delightful, useful, and easy. Stay tuned for an upcoming blog post outlining the vision for the Cloudflare dashboard, from our in-app home experience to our global navigation and beyond.

For now, keep your eye out for the little search icon that will help you in your day-to-day responsibilities in Cloudflare, and if you don’t see it yet, don’t worry—we can’t wait to ship it to you soon.

If you don’t yet see quick search in your Cloudflare dashboard, you can request early access by filling out this form.

Internship Experience: Software Development Intern

Post Syndicated from Ulysses Kee original https://blog.cloudflare.com/internship-experience-software-development-intern/

Before we dive into my experience interning at Cloudflare, let me quickly introduce myself. I am currently a master’s student at the National University of Singapore (NUS) studying Computer Science. I am passionate about building software that improves people’s lives and making the Internet a better place for everyone. Back in December 2021, I joined Cloudflare as a Software Development Intern on the Partnerships team to help improve the experience that Partners have when using the platform. I was extremely excited about this opportunity and jumped at the prospect of working on serverless technology to build viable tools for our partners and customers. In this blog post, I detail my experience working at Cloudflare and the many highlights of my internship.

Interview Experience

The process began for me back when I was taking a software engineering module at NUS where one of my classmates had shared a job post for an internship at Cloudflare. I had known about Cloudflare’s DNS service before and was really excited to learn more about the internship opportunity because I really resonated with the company’s mission to help build a better Internet.

I knew right away that this would be a great opportunity and submitted my application. Soon after, I heard back from the recruiting team and went through the interview process – the whole experience was extremely accommodating and was definitely the most enjoyable interview experience I have had. Throughout the process, I was constantly asked about the kind of things I would like to work on and the relevance of the work that I would be doing. I felt that this thorough communication carried on throughout the internship and really was a cornerstone of my experience interning at Cloudflare.

My Internship

My internship began with onboarding and training, and after that, I had discussions with my mentor, Ayush Verma, on the projects we aimed to complete during the internship and the order of objectives. The main issue we wanted to address was the manual process that our internal teams and partners go through when they want to duplicate the configuration settings on a zone, or when they want to compare one zone to other zones to ensure that there are no misconfigurations. As you can imagine, with the number of different configurations offered on the Cloudflare dashboard, it could take significant time to copy over every setting and rule manually from one zone to another. Additionally, doing this manually poses a risk of misconfiguration due to human error. Furthermore, as more and more customers onboard different zones onto Cloudflare, there needs to be a more automated and improved way for them to set up these configurations.

Initially, we discussed using Terraform, as Cloudflare already supports Terraform automation. However, this approach would only cater to customers and users that have more technical resources and, in true Cloudflare spirit, we wanted to keep it simple enough that it could be used by anyone and everyone. Therefore, we decided to leverage the publicly available Cloudflare APIs and create a browser-based application that interacts with these APIs to display configurations and make changes easily from a simple UI.

With the end goal of simplifying the experience for our partners and customers in duplicating zone configurations, we decided to build a Zone Copier web application built solely on Cloudflare Workers. This tool would, at the click of a button, automatically copy over every setting that can be copied from one zone to another, significantly reducing the amount of time and effort required to make the changes.

Alongside the Zone Copier, we would have auxiliary tools such as a Zone Viewer and a Zone Comparison, so a customer can easily get a full view of their configuration on a single webpage and compare the different zones that they use. These applications improve upon the existing ways Cloudflare users can view their zone configurations, and allow for direct comparison between different zones.

Importantly, these applications are not to replace the Cloudflare Dashboard, but to complement it instead – for deeper dives into a single particular configuration setting, the Cloudflare Dashboard remains the way to go.

To begin building the web application, I spent the first few weeks diving into the publicly available APIs offered by Cloudflare as part of the v4 API to verify the outputs of each endpoint, and the type of data that would be sent as a response from a request. This took much longer than expected as certain endpoints provided different default responses for a zone that has either an empty setting – for example, not having any Firewall Rules created – or uses a nested structure for its relevant response. These different potential responses have to be examined so that when the web application calls the respective API endpoint, the responses are handled appropriately. This process was quite manual as each endpoint had to be verified individually to ensure the output would work seamlessly with the application.
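
To give a flavour of that normalization work, here is a hedged sketch (the endpoint path and response shapes are simplified for illustration, and the helper name is hypothetical) of treating a missing or empty result and a nested single object the same way, so the UI always receives an array:

// Sketch: call a v4 endpoint and normalize the "result" field so the UI
// always receives an array, whether the zone has zero, one, or nested entries.
async function fetchZoneSetting(zoneId: string, path: string, token: string) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/${path}`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  const body = await res.json();

  if (!body.success) throw new Error(JSON.stringify(body.errors));
  if (body.result == null) return [];                 // empty setting
  if (Array.isArray(body.result)) return body.result; // list endpoints
  return [body.result];                               // single nested object
}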

Once I completed my research, I was able to start designing the web application. Building the web application was a very interesting experience as the stack rested solely on Workers, a serverless application platform. My prior experience building web applications involved deploying a server built using Express and Node.js, whereas for my internship project, I relied entirely on a backend built using the itty-router library on Workers to interface with the publicly available Cloudflare APIs. I found this extremely exciting as building a serverless application required less overhead compared to setting up a server and deploying it, and using Workers itself has many other added benefits such as zero cold starts. This introduction to serverless technology and my experience deep-diving into the capabilities of Workers has really opened my eyes to the possibilities that Workers as a platform can offer. With Workers, you can deploy any application on Cloudflare’s global network like I did!
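
As a minimal sketch of that setup (not the actual project code, and CF_API_TOKEN is an assumed Worker secret binding), an itty-router Worker can expose a small JSON API that proxies to the Cloudflare v4 API:

import { Router } from "itty-router";

// Sketch of a Workers backend: an itty-router route proxies to the v4 API.
const router = Router();

router.get("/api/zones/:zoneId/dns_records", async (request, env) => {
  const { zoneId } = request.params;
  const upstream = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/dns_records`,
    { headers: { Authorization: `Bearer ${env.CF_API_TOKEN}` } }
  );
  return new Response(await upstream.text(), {
    headers: { "Content-Type": "application/json" },
  });
});

router.all("*", () => new Response("Not found", { status: 404 }));

export default {
  fetch: (request: Request, env: unknown, ctx: unknown) =>
    router.handle(request, env, ctx),
};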

For the frontend of the web application, I used React and the Chakra-UI library to build the user interface on which the Zone Viewer, Zone Comparison, and Zone Copier are based. The routing between different pages was done using React Router, and the application is deployed directly through Workers.

Here is a screenshot of the application:

[Screenshot: the zone configuration web application built on Workers]

Presenting the prototype application

As developers will know, the best way to obtain feedback for the tool that you’re building is to directly have your customers use them and let you know what they think of your application and the kind of features they want to have built on top of it. Therefore, once we had a prototype version of the web application for the Zone Viewer and Zone Comparison complete, we presented the application to the Solutions Engineering team to hear their thoughts on the impact the tool would have on their work and additional features they would like to see built on the application. I found this process very enriching as they collectively mentioned how impactful the application would be for their work and the value add this project provides to them.

Some interesting feedback and feature requests I received were:

  1. The Zone Copier would definitely be very useful for our partners who have to replicate the configuration of one zone to another regularly, and it’s definitely going to help make sure there are fewer human errors in the configuration process.
  2. Besides duplicating configurations from zone-to-zone, could we use this to replicate the configurations from a best-in-class setup for different use cases and allow partners to deploy this with a few clicks?
  3. Can we use this tool to generate quarterly reports?
  4. The Zone Viewer would be very helpful for us when we produce documentation on a particular zone’s configuration as part of a POC report.
  5. The Zone Viewer will also give us much deeper insight into current zone configurations, helping us better understand them and provide recommendations to improve them.

It was also a very cool experience speaking to the broad Solutions Engineering team as I found that many were very technically inclined and had many valid suggestions for improving the architecture and development of the applications. A special thanks to Edwin Wong for setting up the sharing session with the internal team, and many thanks to Xin Meng, AQ Jiao, Yonggil Choi, Steve Molloy, Kyouhei Hayama, Claire Lim and Jamal Boutkabout for their great insight and suggestions!

Impact of Cloudflare outside of work

While Cloudflare is known for its impeccable transparency throughout the company, and the stellar products it provides in helping make the Internet better, I wanted to take this opportunity to talk about the company’s other endeavors as well.

Cloudflare is part of the Pledge 1%, where the company dedicates 1% of products and 1% of our time to give back to the local communities as well as all the communities we support online around the world.

I took part in one of these activities, where we spent a morning cleaning up parts of the East Coast Park beach, picking up trash and litter that had been left behind by other park users. Here’s a picture of us from that morning:

[Photo: the team at the East Coast Park beach cleanup]

From day one, I have been thoroughly impressed by Cloudflare’s commitment to its culture and the effort everyone at Cloudflare puts in to make the company a great place to work and have a positive impact on the surrounding community.

In addition to giving back to the community, other aspects of company culture include having a good team spirit and a safe working environment where you feel appreciated and taken care of. At Cloudflare, I have found that everyone is very understanding of work commitments. I faced a few challenges during the internship where I had to spend additional time on university-related projects and work, and my manager was always very supportive and understanding when I required additional time to complete parts of the internship project.

Concluding takeaways

My experience interning at Cloudflare has been extremely positive, and I have seen first hand how transparent the company is with not only its employees but also its customers, and it truly is a great place to work. Cloudflare’s collaborative culture allowed me to access members from different teams, to obtain their thoughts and assistance with certain issues that I faced from time to time. I would not have been able to produce an impactful project without the help of the different brilliant, and motivated, people I worked with across the span of the internship, and I am truly grateful for such a rewarding experience.

We are getting ready to open intern roles for this coming Fall, so we encourage you to visit our careers page frequently to stay up to date on all the opportunities we have within our teams.

Query and visualize Amazon Redshift operational metrics using the Amazon Redshift plugin for Grafana

Post Syndicated from Sergey Konoplev original https://aws.amazon.com/blogs/big-data/query-and-visualize-amazon-redshift-operational-metrics-using-the-amazon-redshift-plugin-for-grafana/

Grafana is a rich interactive open-source tool by Grafana Labs for visualizing data across one or many data sources. It’s used in a variety of modern monitoring stacks, allowing you to have a common technical base and apply common monitoring practices across different systems. Amazon Managed Grafana is a fully managed, scalable, and secure Grafana-as-a-service solution developed by AWS in collaboration with Grafana Labs.

Amazon Redshift is the most widely used data warehouse in the cloud. You can view your Amazon Redshift cluster’s operational metrics on the Amazon Redshift console, use Amazon CloudWatch, and query Amazon Redshift system tables directly from your cluster. The first two options provide a set of predefined general metrics and visualizations. The last one allows you to use the flexibility of SQL to get deep insights into the details of the workload. However, querying system tables requires knowledge of system table structures. To address that, we came up with a consolidated Amazon Redshift Grafana dashboard that visualizes a set of curated operational metrics and works on top of the Amazon Redshift Grafana data source. You can easily add it to an Amazon Managed Grafana workspace, as well as to any other Grafana deployments where the data source is installed.

This post guides you through a step-by-step process to create an Amazon Managed Grafana workspace and configure an Amazon Redshift cluster with a Grafana data source for it. Lastly, we show you how to set up the Amazon Redshift Grafana dashboard to visualize the cluster metrics.

Solution overview

The following diagram illustrates the solution architecture.

Architecture Diagram

The solution includes the following components:

  • The Amazon Redshift cluster to get the metrics from.
  • Amazon Managed Grafana, with the Amazon Redshift data source plugin added to it. Amazon Managed Grafana communicates with the Amazon Redshift cluster via the Amazon Redshift Data API (a short sketch of that call pattern follows this list).
  • The Grafana web UI, with the Amazon Redshift dashboard using the Amazon Redshift cluster as the data source. The web UI communicates with Amazon Managed Grafana via an HTTP API.
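
For context on the Data API mentioned above, it lets a client run SQL against the cluster without managing drivers or persistent connections. A rough sketch using the AWS SDK for JavaScript, with the cluster and database user from this walkthrough and an assumed Region, looks like this:

import {
  RedshiftDataClient,
  ExecuteStatementCommand,
  GetStatementResultCommand,
} from "@aws-sdk/client-redshift-data";

// Sketch: run a query against the cluster the way a Data API client would.
const client = new RedshiftDataClient({ region: "us-east-1" }); // assumed Region

async function queryCluster(): Promise<void> {
  const exec = await client.send(
    new ExecuteStatementCommand({
      ClusterIdentifier: "redshift-demo-cluster-1",
      Database: "dev",
      DbUser: "redshift_data_api_user",
      Sql: "SELECT COUNT(*) FROM stv_inflight;",
    })
  );
  // In real code, poll DescribeStatement until the query finishes.
  const result = await client.send(
    new GetStatementResultCommand({ Id: exec.Id! })
  );
  console.log(result.Records);
}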

We walk you through the following steps during the configuration process:

  1. Configure an Amazon Redshift cluster.
  2. Create a database user for Amazon Managed Grafana on the cluster.
  3. Configure a user in AWS Single Sign-On (AWS SSO) for Amazon Managed Grafana UI access.
  4. Configure an Amazon Managed Grafana workspace and sign in to Grafana.
  5. Set up Amazon Redshift as the data source in Grafana.
  6. Import the Amazon Redshift dashboard supplied with the data source.

Prerequisites

To follow along with this walkthrough, you should have the following prerequisites:

  • An AWS account
  • Familiarity with the basic concepts of the following services:
    • Amazon Redshift
    • Amazon Managed Grafana
    • AWS SSO

Configure an Amazon Redshift cluster

If you don’t have an Amazon Redshift cluster, create a sample cluster before proceeding with the following steps. For this post, we assume that the cluster identifier is called redshift-demo-cluster-1 and the admin user name is awsuser.

  1. On the Amazon Redshift console, choose Clusters in the navigation pane.
  2. Choose your cluster.
  3. Choose the Properties tab.

To make the cluster discoverable by Amazon Managed Grafana, you must add a special tag to it.

  1. Choose Add tags.
  2. For Key, enter GrafanaDataSource.
  3. For Value, enter true.
  4. Choose Save changes.
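
If you prefer to script this step instead of using the console, a sketch using the AWS SDK for JavaScript could look like the following (the cluster ARN and Region are placeholders):

import { RedshiftClient, CreateTagsCommand } from "@aws-sdk/client-redshift";

// Sketch: tag the cluster so Amazon Managed Grafana can discover it.
const redshift = new RedshiftClient({ region: "us-east-1" }); // assumed Region

async function tagGrafanaDataSource(): Promise<void> {
  await redshift.send(
    new CreateTagsCommand({
      ResourceName:
        "arn:aws:redshift:us-east-1:123456789012:cluster:redshift-demo-cluster-1", // placeholder ARN
      Tags: [{ Key: "GrafanaDataSource", Value: "true" }],
    })
  );
}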

Create a database user for Amazon Managed Grafana

Grafana will be directly querying the cluster, and it requires a database user to connect to the cluster. In this step, we create the user redshift_data_api_user and apply some security best practices.

  1. On the cluster details page, choose Query data and Query in query editor v2.
  2. Choose the redshift-demo-cluster-1 cluster we created previously.
  3. For Database, enter the default dev.
  4. Enter the user name and password that you used to create the cluster.
  5. Choose Create connection.
  6. In the query editor, enter the following statements and choose Run:
CREATE USER redshift_data_api_user PASSWORD '<password>' CREATEUSER;
ALTER USER redshift_data_api_user SET readonly TO TRUE;
ALTER USER redshift_data_api_user SET query_group TO 'superuser';

The first statement creates a user with superuser privileges necessary to access system tables and views (make sure to use a unique password). The second prohibits the user from making modifications. The last statement isolates the queries the user can run to the superuser queue, so they don’t interfere with the main workload.

In this example, we use service managed permissions in Amazon Managed Grafana and a workspace AWS Identity and Access Management (IAM) role as an authentication provider in the Amazon Redshift Grafana data source. We create the database user redshift_data_api_user using the AmazonGrafanaRedshiftAccess policy.

Configure a user in AWS SSO for Amazon Managed Grafana UI access

Two authentication methods are available for accessing Amazon Managed Grafana: AWS SSO and SAML. In this example, we use AWS SSO.

  1. On the AWS SSO console, choose Users in the navigation pane.
  2. Choose Add user.
  3. In the Add user section, provide the required information.

In this post, we select Send an email to the user with password setup instructions. You need to be able to access the email address you enter because you use this email further in the process.

  1. Choose Next to proceed to the next step.
  2. Choose Add user.

An email is sent to the email address you specified.

  1. Choose Accept invitation in the email.

You’re redirected to sign in as a new user and set a password for the user.

  1. Enter a new password and choose Set new password to finish the user creation.

Configure an Amazon Managed Grafana workspace and sign in to Grafana

Now you’re ready to set up an Amazon Managed Grafana workspace.

  1. On the Amazon Grafana console, choose Create workspace.
  2. For Workspace name, enter a name, for example grafana-demo-workspace-1.
  3. Choose Next.
  4. For Authentication access, select AWS Single Sign-On.
  5. For Permission type, select Service managed.
  6. Choose Next to proceed.
  7. For IAM permission access settings, select Current account.
  8. For Data sources, select Amazon Redshift.
  9. Choose Next to finish the workspace creation.

You’re redirected to the workspace page.

Next, we need to enable AWS SSO as an authentication method.

  1. On the workspace page, choose Assign new user or group.
  2. Select the previously created AWS SSO user under the Users and Select users and groups tables.

You need to make the user an admin, because we set up the Amazon Redshift data source with it.

  1. Select the user from the Users list and choose Make admin.
  2. Go back to the workspace and choose the Grafana workspace URL link to open the Grafana UI.
  3. Sign in with the user name and password you created in the AWS SSO configuration step.

Set up an Amazon Redshift data source in Grafana

To visualize the data in Grafana, we need to access the data first. To do so, we must create a data source pointing to the Amazon Redshift cluster.

  1. On the navigation bar, choose the lower AWS icon (there are two) and then choose Redshift from the list.
  2. For Regions, choose the Region of your cluster.
  3. Select the cluster from the list and choose Add 1 data source.
  4. On the Provisioned data sources page, choose Go to settings.
  5. For Name, enter a name for your data source.
  6. By default, Authentication Provider should be set as Workspace IAM Role, Default Region should be the Region of your cluster, and Cluster Identifier should be the name of the chosen cluster.
  7. For Database, enter dev.
  8. For Database User, enter redshift_data_api_user.
  9. Choose Save & Test.

A success message should appear.

Import the Amazon Redshift dashboard supplied with the data source

As the last step, we import the default Amazon Redshift dashboard and make sure that it works.

  1. In the data source we just created, choose Dashboards on the top navigation bar and choose Import to import the Amazon Redshift dashboard.
  2. Under Dashboards on the navigation sidebar, choose Manage.
  3. In the dashboards list, choose Amazon Redshift.

The dashboard appears, showing operational data from your cluster. When you add more clusters and create data sources for them in Grafana, you can choose them from the Data source list on the dashboard.

Clean up

To avoid incurring unnecessary charges, delete the Amazon Redshift cluster, AWS SSO user, and Amazon Managed Grafana workspace resources that you created as part of this solution.

Conclusion

In this post, we covered the process of setting up an Amazon Redshift dashboard working under Amazon Managed Grafana with AWS SSO authentication and querying from the Amazon Redshift cluster under the same AWS account. This is just one way to create the dashboard. You can modify the process to set it up with SAML as an authentication method, use custom IAM roles to manage permissions with more granularity, query Amazon Redshift clusters outside of the AWS account where the Grafana workspace is, use an access key and secret or AWS Secrets Manager based connection credentials in data sources, and more. You can also customize the dashboard by adding or altering visualizations using the feature-rich Grafana UI.

Because the Amazon Redshift data source plugin is an open-source project, you can install it in any Grafana deployment, whether it’s in the cloud, on premises, or even in a container running on your laptop. That allows you to seamlessly integrate Amazon Redshift monitoring into virtually all your existing Grafana-based monitoring stacks.

For more details about the systems and processes described in this post, refer to the Amazon Redshift, Amazon Managed Grafana, and AWS SSO documentation.


About the Authors

Sergey Konoplev is a Senior Database Engineer on the Amazon Redshift team. Sergey has been focusing on automation and improvement of database and data operations for more than a decade.

Milind Oke is a Data Warehouse Specialist Solutions Architect based out of New York. He has been building data warehouse solutions for over 15 years and specializes in Amazon Redshift.

How to set up Amazon Quicksight dashboard for Amazon Pinpoint and Amazon SES engagement events

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-set-up-amazon-quicksight-dashboard-for-amazon-pinpoint-and-amazon-ses-events/

In this post, we will walk through using Amazon Pinpoint and Amazon QuickSight to create customizable messaging campaign reports. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service that allows customers to connect with users over channels like email, SMS, push, or voice. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. This solution allows event and user data from Amazon Pinpoint to flow into Amazon QuickSight. Once in QuickSight, customers can build their own reports that show campaign performance on a more granular level.

Engagement Event Dashboard

Customers want to view the results of their messaging campaigns in ever increasing levels of granularity and ensure their users see value from the email, SMS, or push notifications they receive. Customers also want to analyze how different user segments respond to different messages, and how to optimize subsequent user communication. Previously, customers could only view this data in Amazon Pinpoint analytics, which offers robust reporting on events, funnels, and campaigns. However, it does not allow analysis across these different parameters or the building of custom reports, such as showing campaign revenue across different user segments, or showing which events were generated after a user viewed a campaign in a funnel analysis. Customers would need to extract this data themselves and do the analysis in Excel.

Prerequisites

  • The Digital User Engagement Events Database solution must be set up first.
  • Customers should be prepared to purchase Amazon QuickSight, which has its own costs that are not covered by Amazon Pinpoint pricing.

Solution Overview

This solution uses the Athena tables created by the Digital User Engagement Events Database solution. The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about Amazon Pinpoint engagement events and expose them in Amazon Athena in the form of Athena views. You still need to manually configure Amazon QuickSight dashboards to link to these newly generated Athena views; the steps below walk you through that process.

Use case(s)

The event dashboard solution supports the following use cases:

  • Deep dives into engagement insights (for example, SMS events, email events, campaign events, and journey events).
  • The ability to view engagement events at the individual user level.
  • Data and process mining to turn raw event data into useful marketing insights.
  • User engagement benchmarking and end user event funneling.
  • Computing campaign conversions (post-campaign user analysis to show campaign effectiveness).
  • Building funnels that show user progression.

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

Step 1 – Create an AWS account and Amazon Pinpoint project, and implement the events database solution.
As part of this step, customers need to implement the DUE (Digital User Engagement) events database solution, because the current solution (the DUE event dashboard) is an extension of it. The basic assumption here is that the customer has already configured an Amazon Pinpoint project or Amazon SES within the required AWS Region before implementing this step.

The steps required to implement an event dashboard solution are as follows.

a. Follow the steps mentioned in the events database solution to implement the complete stack. Before installing the complete stack, copy and save the Athena events database name; in my case it is due_eventdb. The database name is required as an input parameter for the current event dashboard solution.

b. Once the solution is deployed, navigate to the Outputs page of the CloudFormation stack, then copy and save the following information, which will be required as input parameters in step 2 of the current event dashboard solution.

Step 2 – Deploy the CloudFormation template for the event dashboard solution
This step generates a number of new Amazon Athena views that will serve as a data source for Amazon QuickSight. Continue with the following actions.

  • Download the CloudFormation template (“Event-dashboard.yaml”) from AWS samples.
  • Navigate to the CloudFormation page in the AWS console, choose “Create stack” in the top right, and select the option “With new resources (standard)”.
  • Leave “Prerequisite – Prepare template” set to “Template is ready”, and for the “Specify template” option, select “Upload a template file”. On the same page, click “Choose file”, browse to the “Event-dashboard.yaml” file, and select it. Once the file is uploaded, click “Next” and deploy the stack.

  • Enter the following information under the section “Specify stack details”:
    • EventAthenaDatabaseName – as mentioned in Step 1-a.
    • S3DataLogBucket – as mentioned in Step 1-b.
    • This solution will create five additional Athena views (a query sketch follows this list):
      • All_email_events
      • All_SMS_events
      • All_custom_events (custom events can be mobile app, web app, or push events)
      • All_campaign_events
      • All_journey_events
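
Once the stack is deployed, you can sanity-check the generated views with a direct Athena query before wiring up QuickSight. The sketch below uses the AWS SDK for JavaScript; the view, column, Region, and results-bucket names are illustrative, so confirm the exact names created by your deployment:

import {
  AthenaClient,
  StartQueryExecutionCommand,
  GetQueryResultsCommand,
} from "@aws-sdk/client-athena";

// Sketch: query one of the generated views directly in Athena.
const athena = new AthenaClient({ region: "us-east-1" }); // assumed Region

async function countEmailEvents(): Promise<void> {
  const start = await athena.send(
    new StartQueryExecutionCommand({
      // event_type is an assumed column name; check the view schema.
      QueryString:
        "SELECT event_type, COUNT(*) FROM all_email_events GROUP BY event_type;",
      QueryExecutionContext: { Database: "due_eventdb" },
      ResultConfiguration: { OutputLocation: "s3://<your-athena-results-bucket>/" },
    })
  );
  // In real code, poll GetQueryExecution until the state is SUCCEEDED.
  const results = await athena.send(
    new GetQueryResultsCommand({ QueryExecutionId: start.QueryExecutionId! })
  );
  console.log(results.ResultSet?.Rows);
}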

Step 3 – Create the Amazon QuickSight engagement dashboard
This step walks you through the process of creating an Amazon QuickSight dashboard for Amazon Pinpoint engagement events using the Athena views you created in Step 2.

  1. To set up Amazon QuickSight for the first time, please follow this link (this is not needed if you have already set up Amazon QuickSight). Make sure you are an Amazon QuickSight administrator.
  2. Go to, or search for, Amazon QuickSight in the AWS console.
  3. Create a new analysis and then select “New dataset”.
  4. Select Athena as the data source.
  5. Next, select which analyses you need for the respective events. This solution provides the option to create five different sets of analyses, as mentioned in Step 2: a. all email events, b. all SMS events, c. all custom events (mobile/web app, web push, etc.), d. all campaign events, and e. all journey events. Dashboards can be created from a QuickSight analysis and shared with stakeholders across the organization. The following are the steps to create analyses and dashboards for the different event types.
  6. Email Events –
    • For all email events, name the analysis “All-emails-events” (or any naming convention you prefer), select the primary Athena workgroup, and then create the data source.
    • Once you create the data source, QuickSight lists all the views and tables available under the specified database (in our case, due_eventdb). Select the email_all_events view as the data source.
    • Select the event data location for analysis. There are two main options: a. import to SPICE for quicker analysis, or b. directly query your data. Select your preferred option and then click “Visualize the data”.
    • Import to SPICE for quicker analysis – SPICE is the Amazon QuickSight Super-fast, Parallel, In-memory Calculation Engine. It’s engineered to rapidly perform advanced calculations and serve data. In the Enterprise edition, data stored in SPICE is encrypted at rest. (1 GB of storage is available for free; extra storage incurs additional cost, as described in the cost section of this document.)
    • Directly query your data – QuickSight queries Athena (or the source database; in the current case it is Athena) directly, and QuickSight does not store any data.
    • Now that you have selected a data source, you will be taken to a blank QuickSight canvas (blank analysis page), as shown in the following image. Drag and drop the visualization type you need onto the auto-graph pane. Note that Amazon QuickSight is a business intelligence platform, so you are free to choose the visualization types that best suit the individual engagement events.
    • In this blog, we show how to create some simple analysis graphs to visualize the engagement events.
    • As an initial step, select the tabular visualization as shown in the image.
    • Select all the event dimensions that you want to include as table columns on the X axis. An Amazon QuickSight table can be extended to show as many columns as needed; how much data marketers want to visualize depends entirely on the business requirement.
    • Further filtering of the table can be done using QuickSight filters, applied to specific granular values. For example, to filter on the destination email ID: 1. select Filter from the left-hand menu, 2. add the destination field as the filtering criterion, 3. tick the destination value you are filtering for, or search for the destination email ID, and 4. all the results in the table are then filtered according to that criterion.
    • Next, add another visual from the top left corner (“Add -> Add visual”), then select the donut chart from the Visual types pane. Donut charts are well suited to displaying aggregations.
    • Then select “event_type” as the group to visualize the aggregated events. This helps marketers and business users see how many email events occurred and the aggregated success, click, complaint, and bounce ratios for the emails or campaigns sent to end users.
    • To create a QuickSight dashboard from the QuickSight analysis, click the Share menu option at the top right corner and select “Publish dashboard”. Provide the required dashboard name while publishing. The same dashboard can be shared with multiple audiences in the organization.
    • The following is the final version of the dashboard. As mentioned above, QuickSight dashboards can be shared with other stakeholders, and the complete dashboard can be exported as an Excel sheet.
  7. SMS Events –
    • As shown above, SMS events can be analyzed using QuickSight, and dashboards can be created from the analysis. Repeat all of the sub-steps listed in step 6. The following is a sample SMS dashboard.
  8. Custom Events –
    • After you integrate your application (app) with Amazon Pinpoint, Amazon Pinpoint can stream event data about user activity, different types of custom events, and message deliveries for the app, e.g. _session.start, Product_page_view, _session.stop. Repeat all of the sub-steps listed in step 6 to create custom event dashboards.
  9. Campaign Events –
    • As shown before, campaign events can also be included in the same dashboard, or you can create a new dashboard only for campaign events.

Cost for Event dashboard solution
You are responsible for the cost of the AWS services used while running this solution. As of the date of publication, the cost of running this solution with default settings in the US West (Oregon) Region is approximately $65 a month. The estimate includes the cost of AWS Lambda, Amazon Athena, and Amazon QuickSight. It assumes querying 1 TB of data per month, two authors managing Amazon QuickSight each month, four Amazon QuickSight readers viewing the events dashboard an unlimited number of times per month, and 50 GB of QuickSight SPICE capacity per month. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. On the CloudFormation console, select your stack and choose Delete. This cleans up all the resources created by the stack.
  2. Delete the Amazon Quicksight Dashboards and data sets that you have created.

Conclusion

In this blog post, I have demonstrated how marketers, business users, and business analysts can use Amazon QuickSight dashboards to evaluate and act on user engagement data from Amazon SES and Amazon Pinpoint event streams. Customers can also use this solution to understand how Amazon Pinpoint campaigns lead to business conversions, in addition to analyzing multi-channel communication metrics at the individual user level.

Next steps

The personas for this blog are both the tech team and the marketing analyst team, as it involves a code deployment to create very simple Athena views, as well as the steps to create an Amazon Quicksight dashboard to analyse Amazon SES and Amazon Pinpoint engagement events at the individual user level. Customers may then create their own Amazon Quicksight dashboards to illustrate the conversion ratio and propensity trends in real time by integrating campaign events with app-level events such as purchase conversions, order placement, and so on.

Extending the solution

You can download the AWS CloudFormation templates and code for this solution from our public GitHub repository and modify them to fit your needs.


About the Author


Satyasovan Tripathy works at Amazon Web Services as a Senior Specialist Solution Architect. He is based in Bengaluru, India, and specialises in the AWS Digital User Engagement product portfolio. He likes reading and travelling outside of work.

Dark Mode for the Cloudflare Dashboard

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/dark-mode/

Dark Mode for the Cloudflare Dashboard

Dark Mode for the Cloudflare Dashboard

Today, dark mode is available for the Cloudflare Dashboard in beta! From your user profile, you can configure the Cloudflare Dashboard in light mode, dark mode, or match it to your system settings.

For those unfamiliar, dark mode, or light on dark color schemes, uses light text on dark backgrounds instead of the typical dark text on light (usually white) backgrounds. In low-light environments, this can help reduce eyestrain and actually reduce power consumption on OLED screens. For many though, dark mode is simply a preference supported widely by applications and devices.

Dark Mode for the Cloudflare Dashboard
Side by side comparing the Cloudflare dashboard in dark mode and in light mode

How to enable dark mode

  1. Log into Cloudflare.
  2. Go to your user profile.
  3. Under Appearance, select an option: Light, Dark, or Use system setting. For the time being, your choice is saved into local storage (a small sketch of this pattern follows the screenshot below).
Dark Mode for the Cloudflare Dashboard
The appearance card in the dashboard for modifying color themes
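
For readers curious how such a preference can be wired up, here is a small, purely illustrative sketch (not the dashboard's actual code) that stores the choice in local storage and falls back to the operating system preference:

// theme-preference.ts: illustrative only; the storage key and function names are assumptions.
type ThemePreference = "light" | "dark" | "system";

const STORAGE_KEY = "theme-preference"; // hypothetical key

function savePreference(pref: ThemePreference): void {
  localStorage.setItem(STORAGE_KEY, pref);
}

function resolveTheme(): "light" | "dark" {
  const stored = (localStorage.getItem(STORAGE_KEY) as ThemePreference | null) ?? "system";
  if (stored === "system") {
    // matchMedia exposes the OS-level "prefers-color-scheme" setting.
    return window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light";
  }
  return stored;
}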

There are many primers and how-tos on implementing dark mode, and you can find articles talking about the general complications of implementing a dark mode including this straightforward explanation. Instead, we will talk about what enabled us to be able to implement dark mode in only a matter of weeks.

Cloudflare’s Design System – Our Secret Weapon

Before getting into the specifics of how we implemented dark mode, it helps to understand the system that underpins all product design and UI work at Cloudflare – the Cloudflare Design System.

Dark Mode for the Cloudflare Dashboard
The six pillars of the design system: logo, typography, color, layout, icons, videos

Cloudflare’s Design System defines and documents the interface elements and patterns used to build products at Cloudflare. The system can be used to efficiently build consistent experiences for Cloudflare customers. In practice, the Design System defines primitives like typography, color, layout, and icons in a clear and standard fashion. What this means is that anytime a new interface is designed, or new UI code is written, an easily referenceable, highly detailed set of documentation is available to ensure that the work matches previous work. This increases productivity, especially for new employees, and prevents repetitious discussions about style choices and interaction design.

Built on top of these design primitives, we also have our own component library. This is a set of ready to use components that designers and engineers can combine to form the products our customers use every day. They adhere to the design system, are battle tested in terms of code quality, and enhance the user experience by providing consistent implementations of common UI components. Any button, table, or chart you see looks and works the same because it is the same underlying code with the relevant data changed for the specific use case.

So, what does all of this have to do with dark mode? Everything, it turns out. Due to the widespread adoption of the design system across the dashboard, changing a set of variables like background color and text color in a specific way and seeing the change applied nearly everywhere at once becomes much easier. Let’s take a closer look at how we did that.

Turning Out the Lights

The use of color at Cloudflare has a well documented history. When we originally set out to build our color system, the tools we built and the extensive research we performed resulted in a ten-hue, ten-luminosity set of colors that can be used to build digital products. These colors were built to be accessible — not just in terms of internal use, but for our customers. Take our blue hue scale, for example.

Dark Mode for the Cloudflare Dashboard
Our blue color scale, as used on the Cloudflare Dashboard. This shows color-contrast accessible text and background pairings for each step in the scale.

Each hue in our color scale contains ten colors, ordered by luminosity in ten increasing increments from low luminosity to high luminosity. This color scale allows us to filter down the choice of color from the 16,777,216 hex codes available on the web to a much simpler choice of just hue and brightness. As a result, we now have a methodology where designers know the first five steps in a scale have sufficient color contrast with white or lighter text, and the last five steps in a scale have sufficient contrast with black or darker text.

Color scales also allow us to make changes while designing in a far more fluid fashion. If a piece of text is too bright relative to its surroundings, drop down a step on the scale. If an element is too visually heavy, take a step-up. With the Design System and these color scales in place, we’ve been able to design and ship products at a rapid rate.

So, with this color system in place, how do we begin to ship a dark mode? It turns out there’s a simple solution to this, and it’s built into the JS standard library. We call reverse() and flip the luminosity scales.

Dark Mode for the Cloudflare Dashboard
Our blue color scale after calling reverse on it. High luminosity colors are now at the start of the scale, making them contrast accessible with darker backgrounds (and vice-versa).

By performing this small change within our dashboard’s React codebase and shipping a production preview deploy, we were able to see the Cloudflare Dashboard in dark mode with a whole new set of colors in a matter of minutes.
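
As a rough illustration, the change amounts to something like the following. The scale values here are placeholders, not our real design tokens:

// Hypothetical token values; each scale is ordered from low to high luminosity.
const blueScale = [
  "#001847", "#002F80", "#0049B8", "#0062E0", "#0078F0",
  "#2F96FF", "#66B3FF", "#99CCFF", "#CCE5FF", "#EBF5FF",
];

// Light mode uses the scale as-is; dark mode simply flips it so that
// high-luminosity colors land where low-luminosity ones used to be.
const darkBlueScale = [...blueScale].reverse();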

Dark Mode for the Cloudflare Dashboard
An early preview of the Cloudflare Dashboard after flipping our color scales.

While not perfect, this brief prototype gave us an incredibly solid baseline and validated the approach with a number of benefits.

Every product built using the Cloudflare Design System now had a dark mode theme built in for free, with no additional work required by teams.

Our color contrast principles remain sound — just as the first five colors in a scale would be accessible with light text, when flipped, the first five colors in the scale are accessible with dark text. Our scales aren’t perfectly symmetrical, but when using white and black, the principle still holds.

In a traditional approach of “inverting” colors, we face the issue of a color’s hue being changed too. When a color is broken down into its constituent hue, saturation, and luminosity values, inverting it would mean a vibrant light blue would become a dull dark orange. Our approach of just inverting the luminosity of a color means that we retain the saturation and hue of a color, meaning we retain Cloudflare’s brand aesthetic and the associated meaning of each hue (blue buttons as calls-to-action, and so on).

Of course, shipping a dark mode for a product as complex as the Cloudflare Dashboard can’t just be done in a matter of minutes.

Not Quite Just Turning the Lights Off

Although our prototype did meet our initial requirements of facilitating the dashboard in a dark theme, some details just weren’t quite right. The data visualization and mapping libraries we use, our icons, text, and various button and link states all had to be audited and required further iterations. One of the most obvious and prominent examples was the page background color. Our prototype had simply changed the background color from white (#FFFFFF) to black (#000000). It quickly became apparent that black wasn’t appropriate. We received feedback that it was “too intense” and “harsh.” We instead opted for off black, specifically what we refer to as “gray.0” or #1D1D1D. The difference may not seem noticeable, but at larger dimensions, the gray background is much less distracting.

Here is what it looks like in our design system:

Dark Mode for the Cloudflare Dashboard
Black background color contrast for white text
Dark Mode for the Cloudflare Dashboard
Gray background color contrast for white text

And here is a more realistic example:

Dark Mode for the Cloudflare Dashboard
lorem ipsum sample text on black background and on gray background

The numbers at the end of each row represent the contrast of the text color on the background. According to the Web Content Accessibility Guidelines (WCAG), the standard contrast ratio for text should be at least 4.5:1. In our case, while both of the above examples exceed the standard, the gray background ends up being less harsh to use across an entire application. This is not the case with light mode as dark text on white (#FFFFFF) background works well.
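
The ratio itself comes straight from the WCAG definition: compute the relative luminance of each color, then divide the lighter value (plus 0.05) by the darker value (plus 0.05). Here is a short sketch of that check, written for this post rather than taken from the dashboard codebase (outputs rounded):

// Contrast check based on the WCAG 2.x formulas.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16) / 255);
  const linear = (c: number) =>
    c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

function contrastRatio(foreground: string, background: string): number {
  const [lighter, darker] = [relativeLuminance(foreground), relativeLuminance(background)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio("#FFFFFF", "#000000"); // 21     - white text on pure black
contrastRatio("#FFFFFF", "#1D1D1D"); // ~16.9  - white text on "gray.0", still well above 4.5:1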

Our technique during the prototyping stage involved flipping our color scale; however, we additionally created a tool to let us replace any color within the scale arbitrarily. As the dashboard is made up of charts, icons, links, shadows, buttons and certainly other components, we needed to be able to see how they reacted in their various possible states. Importantly, we also wanted to improve the accessibility of these components and pay particular attention to color contrast.

Dark Mode for the Cloudflare Dashboard
Color picker tool screenshot showing a color scale

For example, a button is made up of four distinct states:

1) Default
2) Focus
3) Hover
4) Active

Dark Mode for the Cloudflare Dashboard
Example showing the various colors for states of buttons in light and dark mode

We wanted to ensure that each of these states would be at least compliant with the AA accessibility standards according to the WCAG. Using a combination of our design systems documentation and a prioritized list of components and pages based on occurrence and visits, we meticulously reviewed each state of our components to ensure their compliance.

Dark Mode for the Cloudflare Dashboard
Side by side comparison of the navbar in light and dark modes

The navigation bar used to select between the different applications was a component we wanted to treat differently compared to light mode. In light mode, the app icons are a solid blue with an outline of the icon; it’s a distinct look and certainly one that grabs your attention. However, for dark mode, the consensus was that it was too bright and distracting for the overall desired experience. We wanted the overall aesthetic of dark mode to be subtle, but it’s important to not conflate aesthetic with poor usability. With that in mind, we made the decision for the navigation bar to use outlines around each icon, instead of being filled in. Only the selected application has a filled state. By using outlines, we are able to create sufficient contrast between the current active application and the rest. Additionally, this provided a visually distinct way to present hover states, by displaying a filled state.

After applying the same methodology as described to other components like charts, icons, and links, we end up with a nicely tailored experience without requiring a substantial overhaul of our codebase. For any new UI that teams at Cloudflare build going forward, they will not have to worry about extra work to support dark mode. This means we get an improved customer experience without any impact to our long term ability to keep delivering amazing new capabilities — that’s a win-win!

Welcome to the Dark Side

We know many of you have been asking for this, and we are excited to bring dark mode to all. Without the investment into our design system by many folks at Cloudflare, dark mode would not have seen the light of day. You can enable dark mode on the Appearance card in your user profile. You can give feedback to shape the future of the dark theme with the feedback form in the card.

If you find these types of problems interesting, come help us tackle them! We are hiring across product, design, and engineering!

Introducing logs from the dashboard for Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/workers-dashboard-logs/

Introducing logs from the dashboard for Cloudflare Workers

Introducing logs from the dashboard for Cloudflare Workers

If you’re writing code: what can go wrong, will go wrong.

Many developers know the feeling: “It worked in the local testing suite, it worked in our staging environment, but… it’s broken in production?” Testing can reduce mistakes and debugging can help find them, but logs give us the tools to understand and improve what we are creating.

if (this === undefined) {
  console.log("there’s no way… right?") // Narrator: there was.
}

While logging can help you understand when the seemingly impossible is actually possible, it’s something that no developer really wants to set up or maintain on their own. That’s why we’re excited to launch a new addition to the Cloudflare Workers platform: logs and exceptions from the dashboard.

Starting today, you can view and filter the console.log output and exceptions from a Worker… at no additional cost with no configuration needed!

View logs, just a click away

When you view a Worker in the dashboard, you’ll now see a “Logs” tab which you can click on to view a detailed stream of logs and exceptions. Here’s what it looks like in action:

Each log entry contains an event with a list of logs, exceptions, and request headers if it was triggered by an HTTP request. We also automatically redact sensitive URLs and headers such as Authorization, Cookie, or anything else that appears to have a sensitive name.

If you are in the Durable Objects open beta, you will also be able to view the logs and requests sent to each Durable Object. This is a great tool to help you understand and debug the interactions between your Worker and a Durable Object.

For now, we support filtering by event status and type. Though, you can expect more filters to be added to the dashboard very soon! Today, we support advanced filtering with the wrangler CLI, which will be discussed later in this blog.

console.log(), and you’re all set

It’s really simple to get started with logging for Workers. Simply invoke one of the standard console APIs, such as console.log(), and we handle the rest. That’s it! There’s no extra setup, no configuration needed, and no hidden logging fees.

function logRequest (request) {
  const { cf, headers } = request
  const { city, region, country, colo, clientTcpRtt  } = cf
  
  console.log("Detected location:", [city, region, country].filter(Boolean).join(", "))
  if (clientTcpRtt) {
     console.debug("Round-trip time from client to", colo, "is", clientTcpRtt, "ms")
  }

  // You can also pass an object, which will be interpreted as JSON.
  // This is great if you want to define your own structured log schema.
  console.log({ headers })
}

In fact, you don’t even need to use console.log to view an event from the dashboard. If your Worker doesn’t generate any logs or exceptions, you will still be able to see the request headers from the event.

Advanced filters, from your terminal

If you need more advanced filters you can use wrangler, our command-line tool for deploying Workers. We’ve updated the wrangler tail command to support sampling and a new set of advanced filters. You also no longer need to install or configure cloudflared to use the command. Not to mention it’s much faster, no more waiting around for logs to appear. Here are a few examples:

# Filter by your own IP address, and if there was an uncaught exception.
wrangler tail --format=pretty --ip-address=self --status=error

# Filter by HTTP method, then apply a 10% sampling rate.
wrangler tail --format=pretty --method=GET --sampling-rate=0.1

# Filter using a generic search query.
wrangler tail --format=pretty --search="TypeError"

We recommend using the “pretty” format, since wrangler will output your logs in a colored, human-readable format. (We’re also working on a similar display for the dashboard.)

However, if you want to access structured logs, you can use the “json” format. This is great if you want to pipe your logs to another tool, such as jq, or save them to a file. Here are a few more examples:

# Parses each log event, but only outputs the url.
wrangler tail --format=json | jq .event.request?.url

# You can also specify --once to disconnect the tail after receiving the first log.
# This is useful if you want to run tests in a CI/CD environment.
wrangler tail --format=json --once > event.json

Try it out!

Both logs from the dashboard and wrangler tail are available and free for existing Workers customers. If you would like more information or a step-by-step guide, check out any of the resources below.

Internationalizing the Cloudflare Dashboard

Post Syndicated from James Culveyhouse original https://blog.cloudflare.com/internationalizing-the-cloudflare-dashboard/

Internationalizing the Cloudflare Dashboard

Cloudflare’s dashboard now supports four new languages (and multiple locales): Spanish (with country-specific locales: Chile, Ecuador, Mexico, Peru, and Spain), Brazilian Portuguese, Korean, and Traditional Chinese. Our customers are global and diverse, so in helping build a better Internet for everyone, it is imperative that we bring our products and services to customers in their native language.

Since last year Cloudflare has been hard at work internationalizing our dashboard. At the end of 2019, we launched our first language other than US English: German. At the end of March 2020, we released three additional languages: French, Japanese, and Simplified Chinese. If you want to start using the dashboard in any of these languages, you can change your language preference in the top right of the Cloudflare dashboard. The preference selected will be saved and used across all sessions.

Internationalizing the Cloudflare Dashboard

In this blog post, I want to help those unfamiliar with internationalization and localization to better understand how it works. I also would like to tell the story of how we made internationalizing and localizing our application a standard and repeatable process along with sharing a few tips that may help you as you do the same.

Beginning the journey

The first step in internationalization is externalizing all the strings in your application. In concrete terms this means taking any text that could be read by a user and extracting it from your application code into separate, stand-alone files. This needs to be done for a few reasons:

  • It enables translation teams to work on translating these strings without needing to view or change any application code.
  • Most translators typically use Translation Management applications which automate aspects of the workflow and provide them with useful utilities (like translation memory, change tracking, and a number of useful parsing and formatting tools). These applications expect standardized text formats (such as json, xml, md, or csv files).

From an engineering perspective, separating application code from translations allows for making changes to strings without re-compiling and/or re-deploying code. In our React based application, externalizing most of our strings boiled down to changing blocks of code like this:

<Button>Cancel</Button>
<Button>Next</Button>

Into this:

<Button><Trans id="signup.cancel" /></Button>
<Button><Trans id="signup.next" /></Button>
 
// And in a separate catalog.json file for en_US:
{
 "signup.cancel": "Cancel",
 "signup.next": "Next",
 // ...many more keys
}

The <Trans> component shown above is the fundamental i18n building block in our application. In this scheme, translated strings are kept in large dictionaries keyed by a translation id. We call these dictionaries “translation catalogs”, and there is a set of translation catalogs for each language that we support.

At runtime, the <Trans> component looks up the translation in the correct catalog for the provided key and then inserts this translation into the page (via the DOM). All of an application’s static text can be externalized with simple transformations like these.
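
To make the mechanics concrete, here is a deliberately simplified, hypothetical version of such a component (not the actual implementation) that just looks up a key in the active catalog and falls back to the base locale:

// A toy illustration of the catalog lookup, not the real <Trans> component.
import React from "react";

type Catalog = Record<string, string>;

const baseCatalog: Catalog = { "signup.cancel": "Cancel", "signup.next": "Next" }; // en_US
let activeCatalog: Catalog = baseCatalog; // swapped at runtime for the user's locale

function Trans({ id }: { id: string }) {
  // Look up the active locale first, then fall back to the base locale.
  return <>{activeCatalog[id] ?? baseCatalog[id] ?? id}</>;
}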

However, when dynamic data needs to be intermixed with static text, the solution becomes a little more complicated. Consider the following seemingly straightforward example which is riddled with i18n landmines:

<span>You've selected { totalSelected } Page Rules.</span>

It may be tempting to externalize this sentence by chopping it up into a few parts, like so:

<span>
 <Trans id="selected.prefix" /> {totalSelected } <Trans id="pageRules" />
</span>
 
// English catalog.json
{
 "selected.prefix": "You've selected",
 "pageRules": "Page Rules",
 // ...
}
 
// Japanese catalog.json
{
 "selected.prefix": "選択しました",
 "pageRules": "ページ ルール",
 // ...
}
 
// German catalog.json
{
 "selected.prefix": "Sie haben ausgewählt",
 "pageRules": "Page Rules",
 // ...
}
 
// Portuguese (Brazil) catalog.json
{
 "selected.prefix": "Você selecionou",
 "pageRules": "Page Rules",
 // ...
}

This gets the job done and may even seem like an elegant solution. After all, both the selected.prefix and pageRules strings seem like they are destined to be reused. Unfortunately, chopping sentences up and then concatenating translated bits back together like this turns out to be the single largest pitfall when externalizing strings for internationalization.

The problem is that when translated, the various words that make up a sentence can be morphed in different ways based on context (singular vs plural contexts, due to word gender, subject/verb agreement, etc). This varies significantly from language to language, as does word order. For example in English, the sentence “We like them” follows a subject-verb-object order, while other languages might follow subject-object-verb (We them like), verb-subject-object (Like we them), or even other orderings. Because of these nuanced differences between languages, concatenating translated phrases into a sentence will almost always lead to localization errors.

The code example above contains actual translations we got back from our translation teams when we supplied them with “You’ve selected” and “Page Rules” as separate strings. Here’s how this sentence would look when rendered in the different languages:

Language Translation
Japanese 選択しました { totalSelected } ページ ルール。
German Sie haben ausgewählt { totalSelected } Page Rules
Portuguese (Brazil) Você selecionou { totalSelected } Page Rules.

To compare, we also gave them the sentence as a single string using a placeholder for the variable, and here’s the result:

Language Translation
Japanese %{ totalSelected } 件のページ ルールを選択しました。
German Sie haben %{ totalSelected } Page Rules ausgewählt.
Portuguese (Brazil) Você selecionou %{ totalSelected } Page Rules.

As you can see, the translations differ for Japanese and German. We’ve got a localization bug on our hands.

So, in order to guarantee that translators will be able to convey the true meaning of your text with fidelity, it’s important to keep each sentence intact as a single externalized string. Our <Trans> component allows for easy injection of values into template strings, which lets us do exactly that:

<span>
  <Trans id="pageRules.selected" values={{ count: totalSelected }} />
</span>

// English catalog.json
{
  "pageRules.selected": "You've selected %{ count } Page Rules.",
  // ...
}

// Japanese catalog.json
{
  "pageRules.selected": "%{ count } 件のページ ルールを選択しました。",
  // ...
}

// German catalog.json
{
  "pageRules.selected": "Sie haben %{ count } Page Rules ausgewählt.",
  // ...
}

// Portuguese(Brazil) catalog.json
{
  "pageRules.selected": "Você selecionou %{ count } Page Rules.",
  // ...
}

This allows translators to have the full context of the sentence, ensuring that all words will be translated with the correct inflection.

You may have noticed another potential issue. What happens in this example when totalSelected is just 1? With the above code, the user would see “You’ve selected 1 Page Rules”. We need to conditionally pluralize the sentence based on the value of our dynamic data. This turns out to be a fairly common use case, and our <Trans> component handles this automatically via the smart_count feature:

<span>
  <Trans id="pageRules.selected" values={{ smart_count: totalSelected }} />
</span>

// English catalog.json
{
  "pageRules.selected": "You've selected %{ smart_count } Page Rule. |||| You've selected %{ smart_count } Page Rules.",
}

// Japanese catalog.json
{
  "pageRules.selected": "%{ smart_count } 件のページ ルールを選択しました。 |||| %{ smart_count } 件のページ ルールを選択しました。",
}

// German catalog.json
{
  "pageRules.selected": "Sie haben %{ smart_count } Page Rule ausgewählt. |||| Sie haben %{ smart_count } Page Rules ausgewählt.",
}

// Portuguese (Brazil) catalog.json
{
  "pageRules.selected": "Você selecionou %{ smart_count } Page Rule. |||| Você selecionou %{ smart_count } Page Rules.",
}

Here, the singular and plural versions are delimited by ||||. <Trans> will automatically select the right translation to use depending on the value of the passed in totalSelected variable.
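
The |||| delimiter and the smart_count variable come from the Polyglot library that backs our <Trans> component (more on that below). For reference, here is a rough standalone sketch of the same behavior using node-polyglot directly, with illustrative phrases:

// Illustrative use of Airbnb's Polyglot.js (the node-polyglot package).
import Polyglot from "node-polyglot";

const polyglot = new Polyglot({
  locale: "en",
  phrases: {
    "pageRules.selected":
      "You've selected %{smart_count} Page Rule. |||| You've selected %{smart_count} Page Rules.",
  },
});

polyglot.t("pageRules.selected", { smart_count: 1 }); // "You've selected 1 Page Rule."
polyglot.t("pageRules.selected", { smart_count: 3 }); // "You've selected 3 Page Rules."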

Yet another stumbling block occurs when markup is mixed in with a block of text we’d like to externalize as a single string. For example, what if you need some phrase in your sentence to be a link to another page?

<VerificationReminder>
  Don't forget to <Link>verify your email address.</Link>
</VerificationReminder>

To solve for this use case, the <Trans> component allows for arbitrary elements to be injected into placeholders in a translation string, like so:

<VerificationReminder>
  <Trans id="notification.email_verification" Components={[Link]} componentProps={[{ to: '/profile' }]} />
</VerificationReminder>

// catalog.json
{
  "notification.email_verification": "Don't forget to <0>verify your email address.</0>",
  // ...
}

In this example, the <Trans> component will replace placeholder elements (<0>,<1>, etc.) with instances of the component type located at that index in the Components array. It also passes along any data specified in componentProps to that instance. The example above would boil down to the following in React:

// en-US
<VerificationReminder>
  Don't forget to <Link to="/profile">verify your email address.</Link>
</VerificationReminder>

// es-ES
<VerificationReminder>
  No olvide <Link to="/profile">verificar la dirección de correo electrónico.</Link>
</VerificationReminder>

Safety third!

The functionality outlined above was enough for us to externalize our strings. However, it did at times result in bulky, repetitive code that was easy to mess up. A couple of pitfalls quickly became apparent.

The first was that small hardcoded strings were now easier to hide in plain sight, and because they weren’t glaringly obvious to a developer until the rest of the page had been translated, the feedback loop in finding these was often days or weeks. A common solution to surfacing these issues is introducing a pseudolocalization mode into your application during development which will transform all properly internationalized strings by replacing each character with a similar looking unicode character.

For example You've selected 3 Page Rules. might be transformed to Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ.
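
A pseudolocalizer can be as simple as a character map; here is a small, hypothetical sketch of the idea:

// A tiny pseudolocalizer, just enough to make untranslated strings stand out.
const CHAR_MAP: Record<string, string> = {
  a: "á", c: "ç", d: "δ", e: "è", g: "ϱ", l: "ℓ", o: "ô",
  s: "ƨ", t: "ƭ", u: "ú", v: "Ʋ", P: "Þ", Y: "Ý",
};

function pseudolocalize(text: string): string {
  return [...text].map((ch) => CHAR_MAP[ch] ?? ch).join("");
}

pseudolocalize("You've selected 3 Page Rules."); // "Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ."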

Another handy feature at your disposal in a pseudolocalization mode is the ability to shrink or lengthen all strings by a fixed amount in order to plan for content width differences. Here’s the same pseudolocalized sentence increased in length by 50%: Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ. ℓôřè₥ ïƥƨú₥ δô. This is useful in helping both engineers as well as designers spot places where content length could potentially be an issue. We first recognized this problem when rolling out support for German, which at times tends to have somewhat longer words than English.

This meant that in a lot of places the text in page elements would overflow, such as in this “Add” button:

Internationalizing the Cloudflare Dashboard

There aren’t a lot of easy fixes for these types of problems that don’t compromise the user experience.

For best results, variable content width needs to be baked into the design itself. Since fixing these bugs often means sending it back upstream to request a new design, the process tends to be time consuming. If you haven’t given much thought to content design in general, an internationalization effort can be a good time to start. Having standards and consistency around the copy used for various elements in your app can not only cut down on the number of words that need translating, but also eliminate the need to think through the content length pitfalls of using a novel phrase.

The other pitfall we ran into was that the translation ids — especially long and repetitive ones — are highly susceptible to typos.

Pop quiz, which of these translation keys will break our app: traffic.load_balancing.analytics.filters.origin_health_title or traffic.load_balancing.analytics.filters.origin_heath_title?

Nestled among hundreds of other lines of changes, these are hard to spot in code review. Most apps have a fallback so missing translations don’t result in a page breaking error. As a result a bug like this might go unnoticed entirely if it’s hidden well enough (in say, a help text flyout).

Fortunately, with a growing percentage of our codebase in TypeScript, we were able to leverage the type-checker to give developers feedback as they wrote the code. Here’s an example where our code editor is helpfully showing us a red underline to indicate that the id property is invalid (due to the missing “l”):

Internationalizing the Cloudflare Dashboard

Not only did it make the problems more obvious, but it also meant that violations would cause builds to fail, preventing bad code from entering the codebase.
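
One common way to get this kind of checking, sketched here with hypothetical names rather than our real setup, is to derive the set of valid ids from the catalog object itself:

// Catalogs are authored in TypeScript, so their keys are visible to the type system.
import React from "react";

const enCatalog = {
  "traffic.load_balancing.analytics.filters.origin_health_title": "Origin health",
  "signup.cancel": "Cancel",
} as const;

// Every valid translation id, derived from the catalog keys.
type TranslationId = keyof typeof enCatalog;

function Trans({ id }: { id: TranslationId }) {
  return <>{enCatalog[id]}</>;
}

// <Trans id="traffic.load_balancing.analytics.filters.origin_heath_title" />
// ^ compile-time error: the missing "l" is caught before the code ever ships.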

Scaling locale files

In the beginning, you’ll probably start out with one translation file per locale that you support. In addition, the naming scheme you use for your keys can remain somewhat simple. As your app scales, your translation file will grow too large and need to be broken up into separate files. Files that are too large will overwhelm Translation Management applications, or if left unchecked, your code editor. All of our translation strings (not including keys), when lumped together into a single file, come to around 50,000 words. For comparison, that’s roughly the same size as a copy of “The Hitchhiker’s Guide to the Galaxy” or “Slaughterhouse Five”.

We break up our translations into a number of “catalog” files roughly corresponding to feature verticals (like Firewall or Cloudflare Workers). This works out well for our developers since it provides a predictable place to find strings, and keeps the line count of a translation catalog down to a manageable length. It also works out well for the outside translation teams since a single feature vertical is a good unit of work for a translator (or small team).

In addition to per-feature catalogs, we have a common catalog file to hold strings that are re-used throughout the application. It allows us to keep ids short ( common.delete vs some_page.some_tab.some_feature.thing.delete ) and lowers the likelihood of duplication since developers habitually check the common catalog before adding new strings.

Libraries

So far we’ve talked at length about our <Trans> component and what it can do. Now, let’s talk about how it’s built.

Perhaps unsurprisingly, we didn’t want to reinvent the wheel and come up with a base i18n library from scratch. Due to prior efforts to internationalize the legacy parts of our application written in Backbone, we were already using Airbnb’s Polyglot library, a “tiny I18n helper library written in JavaScript” which, among other things, “provides a simple solution for interpolation and pluralization, based off of Airbnb’s experience adding I18n functionality to its Backbone.js and Node apps”.

We took a look at a few of the most popular libraries that had been purpose-built for internationalizing React applications, but ultimately decided to stick with Polyglot. We created our <Trans> component to bridge the gap to React. We chose this direction for a few reasons:

  • We didn’t want to re-internationalize the legacy code in our application in order to migrate to a new i18n support library.
  • We also didn’t want the combined overhead of supporting 2 different i18n schemes for new vs legacy code.
  • Writing our own trans component gave us the flexibility to write the interface we wanted. Since Trans is used just about everywhere, we wanted to make sure it was as ergonomic as possible to developers.

If you’re just getting started with i18n in a new React-based web app, react-intl and i18next are two popular libraries that supply a component similar to the <Trans> described above.

The biggest pain point of the <Trans> component as outlined is that strings have to be kept in a separate file from your source code. Switching between multiple files as you author new code or modify existing features is just plain annoying. It’s even more annoying if the translation files are kept far away in the directory structure, as they often need to be.

There are some newer i18n libraries, such as js-lingui, that obviate this problem by taking an extraction-based approach to handling translation catalogs. In this scheme, you still use a <Trans> component, but you keep your strings in the component itself, not in a separate catalog:

<span>
  <Trans>Hmm... We couldn't find any matching websites.</Trans>
</span>

A tool that you run at build time then does the work of finding all of these strings and extracting them into catalogs for you. For example, the above would result in the following generated catalogs:

// locales/en_US.json
{
  "Hmm... We couldn't find any matching websites.": "Hmm... We couldn't find any matching websites.",
}

// locales/de_DE.json
{
  "Hmm... We couldn't find any matching websites.": "Hmm... Wir konnten keine übereinstimmenden Websites finden."
}

The obvious advantage to this approach is that we no longer have separate files! The other advantage is that there’s no longer any need for type checking ids since typos can’t happen anymore.

However, at least for our use case, there were a few downsides.

First, human translators sometimes appreciate the context of the translation keys. It helps with organization, and it gives some clues about the string’s purpose.

And although we no longer have to worry about typos in translation ids, we’re just as susceptible to slight copy mismatches (ex. “Verify your email” vs “Verify your e-mail”). This is almost worse, since in this case it would introduce a near duplication which would be hard to detect. We’d also have to pay for it.

Whichever tech stack you’re working with, there are likely a few i18n libraries that can help you out. Which one to pick is highly dependent on technical constraints of your application and the context of your team’s goals and culture.

Numbers, Dates, and Times

Earlier, when we talked about injecting data into translated strings, we glossed over a major issue: the data we’re injecting may also need to be formatted to conform to the user’s local customs. This is true for dates, times, numbers, currencies and some other types of data.

Let’s take our simple example from earlier:

<span>You've selected { totalSelected } Page Rules.</span>

Without proper formatting, this will appear correct for small numbers, but as soon as things get into the thousands, localization problems will arise, since the way that digits are grouped and separated with symbols varies by culture. Here’s how three-hundred thousand and three hundredths is formatted in a few different locales:

Language (Country) Code Formatted Number
German (Germany) de-DE 300.000,03
English (US) en-US 300,000.03
English (UK) en-GB 300,000.03
Spanish (Spain) es-ES 300.000,03
Spanish (Chile) es-CL 300.000,03
French (France) fr-FR 300 000,03
Hindi (India) hi-IN 3,00,000.03
Indonesian (Indonesia) in-ID 300.000,03
Japanese (Japan) ja-JP 300,000.03
Korean (South Korea) ko-KR 300,000.03
Portuguese (Brazil) pt-BR 300.000,03
Portuguese (Portugal) pt-PT 300 000,03
Russian (Russia) ru-RU 300 000,03


The way that dates are formatted varies significantly from country to country. If you’ve developed your UI mainly with a US audience in mind, you’re probably displaying dates in a way that will feel foreign and perhaps unintuitive to users from just about any other place in the world. Among other things, date formatting can vary in terms of separator choice, whether single digits are zero padded, and in the way that the day, month, and year portions are ordered. Here’s how March 4th of the current year is formatted in a few different locales:

Language (Country) Code Formatted Date
German (Germany) de-DE 4.3.2020
English (US) en-US 3/4/2020
English (UK) en-GB 04/03/2020
Spanish (Spain) es-ES 4/3/2020
Spanish (Chile) es-CL 04-03-2020
French (France) fr-FR 04/03/2020
Hindi (India) hi-IN 4/3/2020
Indonesian (Indonesia) in-ID 4/3/2020
Japanese (Japan) ja-JP 2020/3/4
Korean (South Korea) ko-KR 2020. 3. 4.
Portuguese (Brazil) pt-BR 04/03/2020
Portuguese (Portugal) pt-PT 04/03/2020
Russian (Russia) ru-RU 04.03.2020


Time format varies significantly as well. Here’s how time is formatted in a few selected locales:

Language (Country) Code Formatted Time
German (Germany) de-DE 14:02:37
English (US) en-US 2:02:37 PM
English (UK) en-GB 14:02:37
Spanish (Spain) es-ES 14:02:37
Spanish (Chile) es-CL 14:02:37
French (France) fr-FR 14:02:37
Hindi (India) hi-IN 2:02:37 pm
Indonesian (Indonesia) in-ID 14.02.37
Japanese (Japan) ja-JP 14:02:37
Korean (South Korea) ko-KR 오후 2:02:37
Portuguese (Brazil) pt-BR 14:02:37
Portuguese (Portugal) pt-PT 14:02:37
Russian (Russia) ru-RU 14:02:37


Libraries for Handling Numbers, Dates, and Times

Ensuring the correct format for all these types of data for all supported locales is no easy task. Fortunately, there are a number of mature, battle-tested libraries that can help you out.

When we kicked off our project, we were using the Moment.js library extensively for date and time formatting. This handy library abstracts away the details of formatting dates to different lengths (“Jul 9th 20”, “July 9th 2020”, vs “Thursday”), displaying relative dates (“2 days ago”), amongst many other things. Since almost all of our dates were already being formatted via Moment.js for readability, and since Moment.js already has i18n support for a large number of locales, it meant that we were able to flip a couple of switches and have properly localized dates with very little effort.

There are some strong criticisms of Moment.js (mainly bloat), but ultimately the benefits of switching to a lower-footprint alternative didn’t outweigh the cost of redoing every date and time.

Numbers were a very different story. We had, as you might imagine, thousands of raw, unformatted numbers being displayed throughout the dashboard. Hunting them down was a laborious and often manual process.

To handle the actual formatting of numbers, we used the Intl API (the Internationalization library defined by the ECMAScript standard):

var number = 300000.03;
var formatted = number.toLocaleString('hi-IN'); // 3,00,000.03
// This probably works in the browser you're using right now!
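
The same API family covers dates and times. A brief, illustrative sketch with Intl.DateTimeFormat and Intl.NumberFormat (exact output can vary slightly by browser and ICU version):

const date = new Date(2020, 2, 4, 14, 2, 37); // March 4th, 14:02:37 local time

new Intl.DateTimeFormat("de-DE").format(date);                          // "4.3.2020"
new Intl.DateTimeFormat("ja-JP").format(date);                          // "2020/3/4"
new Intl.DateTimeFormat("en-GB", { timeStyle: "medium" }).format(date); // "14:02:37"

new Intl.NumberFormat("fr-FR").format(300000.03);                       // "300 000,03"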

Fortunately, browser support for Intl has come quite a long way in recent years, with all modern browsers having full support.

Some modern JavaScript engines like V8 have even moved away from self-hosted JavaScript implementations of these libraries in favor of C++ based builtins, resulting in significant speedup.

Support for older browsers can be somewhat lacking however. Here’s a simple demo site (source code) that’s built with Cloudflare Workers and shows how dates, times, and numbers are rendered in a handful of locales.

Some combinations of old browsers and OS’s will yield less than ideal results. For example, here’s how the same dates and times from above are rendered on Windows 8 with IE 10:

Internationalizing the Cloudflare Dashboard Internationalizing the Cloudflare Dashboard

If you need to support older browsers, this can be solved with a polyfill.

Translating

With all strings externalized, and all injected data being carefully formatted to locale specific standards, the bulk of the engineering work is complete. At this point, we can now claim that we’ve internationalized our application, since we’ve adapted it in a way that makes it easy to localize.

Next comes the process of localization where we actually create varying content based on the user’s language and cultural norms.

This is no small feat. Like we mentioned before, the strings in our application added together are the size of a small novel. It takes a significant amount of coordination and human expertise to create a translated copy that both captures the information with fidelity and speaks to the user in a familiar way.

There are many ways to handle the translation work: leveraging multi-lingual staff members, contracting the work out to individual translators, agencies, or even going all in and hiring teams of in-house translators. Whatever the case may be, there needs to be a smooth process for both workflow signalling and moving assets between the translation and development teams.

A healthy i18n program will provide developers with a black-box interface to the process — they put new strings in a translation catalog file and commit the change, and without any more effort on their part, the feature code they wrote is available in production for all supported locales a few days later. Similarly, in a well run process translators will remain blissfully unaware of the particulars of the development process and application architecture. They receive files that easily load in their tools and clearly indicate what translation work needs to be done.

So, how does it actually work in practice?

We have a set of automated scripts that can be run on-demand by the localization team to package up a snapshot of our localization catalogs for all supported languages. During this process, a few things happen:

  • JSON files are generated from catalog files authored in TypeScript
  • If any new catalog files were added in English, placeholder copies are created for all other supported languages.
  • Placeholder strings are added for all languages when new strings are added to our base catalog

From there, the translation catalogs are uploaded to the Translation Management system via the UI or automated calls to the API. Before handing it off to translators, the files are pre-processed by comparing each new string against a Translation Memory (a cache of previously translated strings and substrings). If a match is found, the existing translation is used. Not only does this save cost by not re-translating strings, but it improves quality by ensuring that previously reviewed and approved translations are used when possible.

Suppose your locale files end up looking something like this:

{
 "verify.button": "Verify Email",
 "other.verify.button": "Verify Email",
 "verify.proceed.link": "Verify Email to proceed",
 // ...
}

Here, we have strings that are duplicated verbatim, as well as sub-strings that are copied. Translation services are billed by the word — you don’t want to pay for something twice and run the risk of a consistency issue arising. To this end, having a well-maintained Translation Memory will ensure that these strings are taken care of in the pre-translation steps before translators even see the file.

Once the translation job is marked as ready, it can take translation teams anywhere from hours to weeks to return translated copies, depending on a number of factors such as the size of the job, the availability of translators, and the contract terms. The concerns of this phase could constitute another blog article of similar length: sourcing the right translation team, controlling costs, ensuring quality and consistency, making sure the company’s brand is properly conveyed, etc. Since the focus of this article is largely technical, we’ll gloss over the details here, but make no mistake — getting this part wrong will tank your entire effort, even if you’ve achieved your technical objectives.

After translation teams signal that new files are ready for pickup, the assets are pulled from the server and unpacked into their correct locations in the application code. We then run a suite of automated checks to make sure that all files are valid and free of any formatting issues.

An optional (but highly recommended) step takes place at this stage — in-context review. A team of translation reviewers then look at the translated output in context to make sure everything looks perfect in its finalized state. Having support staff that are both highly proficient with the product and fluent in the target language are especially useful in this effort. Shoutout to all our team members from around the company that have taken the time and effort to do this. To make this possible for outside contractors, we prepare special preview versions of our app that allow them to test with development mode locales enabled.

And there you have it, everything it takes to deliver a localized version of your application to your users all around the world.

Continual Localization

It would be great to stop here, but what we’ve discussed up until this point is the effort required to do it once. As we all know, code changes. New strings will be gradually added, modified, and deleted over the course of time as new features are launched and tweaked.

Since translation is a highly human process that often involves effort from people in different corners of the world, there is a lower bound to the timeframe in which turnover is possible. Since our release cadence (daily) is often faster than this turnover rate (2-5 days), it means that developers making changes to features have to make a choice: slow down to match this cadence, or ship slightly ahead of the localization schedule without full coverage.

In order to ensure that features shipping ahead of translations don’t cause application-breaking errors, we fallback to our base locale (en_US) if a string doesn’t exist for the configured language.

Some applications have a slightly different fallback behavior: displaying raw translation keys (perhaps you’ve seen some.funny.dot.delimited.string in an app you’re using). There’s a tradeoff between velocity and correctness here, and we chose to optimize for velocity and minimal overhead. In some apps correctness is important enough to slow down cadence for i18n. In our case it wasn’t.

Finishing Touches

There are a few more things we can do to optimize the user experience in our newly localized application.

First, we want to make sure there isn’t any performance degradation. If our application made the user fetch all of its translated strings before rendering the page, this would surely happen. So, in order to keep everything running smoothly, the translation catalogs are fetched asynchronously and only as the application needs them to render some content on the page. This is easy to accomplish nowadays with the code splitting features available in module bundlers that support dynamic import statements such as Parcel or Webpack.
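
In code, that lazy loading can be as simple as a dynamic import keyed by locale. A simplified sketch (the path and function name are illustrative):

// Loads a translation catalog only when part of the app actually needs it.
// Bundlers such as Webpack and Parcel split each catalog into its own chunk.
async function loadCatalog(locale: string): Promise<Record<string, string>> {
  const mod = await import(`./locales/${locale}/catalog.json`);
  return mod.default;
}

// Later, when rendering a section of the dashboard for a French-speaking user:
const catalog = await loadCatalog("fr-FR");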

We also want to eliminate any friction the user might experience with needing to constantly select their desired language when visiting different Cloudflare properties. To this end, we made sure that any language preference a user selects on our marketing site or our support site persists as they navigate to and from our dashboard (all links are in French to belabor the point).

What’s next?

It’s been an exciting journey, and we’ve learned a lot from the process. It’s difficult (perhaps impossible) to call an i18n project truly complete.  Expanding into new languages will surface slippery bugs and expose new challenges. Budget pressure will challenge you to find ways of cutting costs and increasing efficiency. In addition, you will discover ways in which you can enhance the localized experience even more for users.

There’s a long list of things we’d like to improve upon, but here are some of the highlights:

  • Collation. String comparison is language sensitive, and as such, the code you’ve written to lexicographically sort lists and tables of data in your app is probably doing the wrong thing for some of your users. This is especially apparent in languages that use logographic writing systems (such as Chinese or Japanese) as opposed to languages that use alphabets (like English or Spanish). A small example of locale-aware sorting follows this list.
  • Support for right-to-left languages like Arabic and Hebrew.
  • Localizing API responses is harder than localizing static copy in your user interface, as it takes a coordinated effort between teams. In the age of microservices, finding a solution that works well across the myriad of tech stacks that power each service can be very challenging.
  • Localizing maps. We’ll be working on making sure all content in our map-based visualizations is translated.
  • Machine translation has come a long way in recent years, but not far enough to churn our translations unsupervised. We would however like to experiment more with using machine translation as a first pass that translation reviewers then edit for correctness and tone.
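
On the collation point, locale-aware comparison is already built into the platform; here is a small sketch of the difference using Intl.Collator (the sample data is made up):

const names = ['Ölsnitz', 'Oslo', 'Zürich'];

// Naive sort compares code points, so 'Ölsnitz' lands after 'Zürich', which is wrong for German readers.
const naive = [...names].sort();

// Locale-aware sort using the user's language.
const collator = new Intl.Collator('de');
const localized = [...names].sort(collator.compare);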

I hope you have enjoyed this overview of how Cloudflare internationalized and localized our dashboard. Check out our careers page for more information on full-time positions and internship roles across the globe.

Making DNS record changes more reliable

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/making-dns-record-changes-more-reliable/

Making DNS record changes more reliable

Making DNS record changes more reliable

DNS is the very first step in accessing any website, API, or pretty much anything on the Internet, which makes it mission-critical to keeping your site up and running. This week, we are launching two significant changes that allow our customers to better maintain and update their DNS records. For customers who use Cloudflare as their authoritative DNS provider, we’ve added a much-requested feature: confirmation for DNS record edits. For our secondary DNS customers, we’re excited to provide a brand new onboarding experience.

Confirm and Commit

One of the benefits of using Cloudflare DNS is that changes quickly propagate to our 200+ data centers. And I mean very quickly: DNS propagation typically takes <5 seconds worldwide. Our UI was set up to allow customers to edit records, click out of the input box, and boom! The record has propagated!

Making DNS record changes more reliable

There are a lot of advantages to fast DNS, but there’s also one clear downside – it leaves room for fat fingering. What if you accidentally toggle the proxy icon, or mistype the content of your DNS record? This could result in users not being able to access your website or API and could cause a significant outage. To protect customers from these kinds of mistakes, we’ve added a Save button for DNS record changes.

Now editing records in the DNS table allows you to take an extra look before committing the change.

Making DNS record changes more reliable

The new confirmation layout applies to all record types and affects any content, TTL, or proxy status changes.

Let us know what you think by filling out the feedback survey linked at the top of the DNS tab in the dashboard.

DeepLinks and ScrollAnchor

Post Syndicated from Drew Dowling original https://blog.cloudflare.com/deeplinks-and-scrollanchor/

DeepLinks and ScrollAnchor

To directly quote Wikipedia:

“Deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website (e.g. http://example.com/path/page), rather than the website’s home page (e.g., http://example.com). The URL contains all the information needed to point to a particular item.”

There are many user experiences in Cloudflare’s Dashboard that are enhanced by the use of deep linking, such as:

  • We’re able to direct users from marketing pages directly into the Dashboard so they can interact with new/changed features.
  • Troubleshooting docs can have clearer, more direct instructions, e.g. “Enable SSL encryption here” vs “Log into the Dashboard, choose your account and zone, navigate to the security tab, change SSL encryption level, blah blah blah”.

One of the interesting challenges with deep linking in the Dashboard is that most interesting resources are “locked” behind the context of an account and a zone/domain/website. To illustrate this, look at a tree of possible URL paths into Cloudflare’s Dashboard:

dash.cloudflare.com/ -> root-level resources: login, sign-up, forgot-password, two-factor

dash.cloudflare.com/<accountId>/ -> account-level resources: analytics, workers, domains, stream, billing, audit-log

dash.cloudflare.com/<accountId>/<zoneId> -> zone-level resources: dns, ssl-tls, firewall, speed, caching, page-rules, traffic, etc.

You might notice that in order to deep link to anything more interesting than logging in, a deep linker will need to know a user’s account or zone beforehand. A troubleshooting doc might want to send a user to the Page Rules tab in Dashboard to help a user fix their zone, but the linker doesn’t know what that zone is.

Another highly desired feature was the ability for a deep link to scroll to a particular piece of content on a Dashboard page, making it even easier for users to navigate. Instead of a troubleshooting doc asking a user to fumble around to find a setting, we could helpfully scroll that setting right into view. Now that would be slick!

The solution we came up with involves 3 main parts:

  • Deep link URLs expose an intuitive schema for dynamic value resolution.
  • A React component, DeepLink, consolidates routing/resolving deep links.
  • A React component, ScrollAnchor, encapsulates a simple algorithm which scrolls its content into view when the DOM has “finished loading”.

Just to prove that it works, here’s a GIF of us deep linking to the “TLS 1.3” setting on the security settings page:

DeepLinks and ScrollAnchor

It works! I was asked to select one of my several accounts, then our DeepLink routing component was smart enough to know that I have only one zone within that account and auto-filled the rest of the URL path. After the page was fully loaded, we were automatically scrolled to the TLS 1.3 setting. If you’re curious how all of this works and want to jump into the nitty gritty details, read on!

If you were paying attention to the URL bar in the GIF above, you already know what’s coming. In order to deal with dynamic account/zone resolution, a deep link can use a “to” query parameter to specify a path into the Dashboard. I think it reads quite nicely:

dash.cloudflare.com/?to=/:account/:zone/ssl-tls/edge-certificates

This example is saying that we’d like to link to the “Edge Certificates” section of the “SSL-TLS” product for some account and some zone that a user needs to manually resolve, as you saw above. It’s easy to imagine removing “?to=/” to transform the link URL into the resolved one:

dash.cloudflare.com/<resolvedAccount>/<resolvedZone>/ssl-tls/edge-certificates

The URL-like schema of the to parameter makes it very natural to support different variations, such as account-level resources:

dash.cloudflare.com/?to=/:account/billing

Or allowing the linker to supply known information:

dash.cloudflare.com/?to=/1234567890abcdef/:zone/traffic

This link takes the user to the “Traffic” product tab for some zone inside of account 1234567890abcdef. Indeed, the :account and :zone symbols are placeholders for user-supplied values, but they can be replaced with any permutation of real, known values to speed up resolution time to provide a better UX.

These links are parsed and resolved in our top-level routing component, DeepLink. At a high level, this component contains a series of “resolvers” for unknown symbols that need automatic or user-interactive resolution (i.e. :account and :zone). But before we dive in, let’s take a step back and gain appreciation for how cool this component is.

Cloudflare’s Dashboard is a single page React app, which means we use React Router to create routing components that handle what’s rendered on different URLs:

<Switch>
  <Route path="/login"><Login /></Route>
  <Route path="/sign-up"><Signup /></Route>
  ...
  <AccountRoutes />
</Switch>

When a page is loaded, a lot of things need to happen: API calls need to be made to fetch all the data needed to render a page, like account/user/zone info not cached in the browser. Many components need to be rendered. It turns out that we can improve the UX for many users by blocking React Router and making specific queries to our API, instead of rendering an entire page that incidentally fetches the information we need. For example, there’s no need to render a zone selection page if a user only has one zone, like in our GIF above ☝️.
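
Conceptually, the deep link handling sits above the router and gates it behind a piece of state while the ?to= path resolves. Here is a minimal sketch of such a gate, where DeepLinkGate, resolveDeepLink, and LoadingSpinner are illustrative names rather than our actual implementation:

const DeepLinkGate: React.FC<{ children: React.ReactNode }> = ({ children }) => {
  // Block routing while a ?to= deep link is being resolved.
  const [resolving, setResolving] = React.useState(() =>
    new URLSearchParams(window.location.search).has('to')
  );

  React.useEffect(() => {
    if (!resolving) return;
    // resolveDeepLink stands in for the resolver loop described below.
    resolveDeepLink().finally(() => setResolving(false));
  }, [resolving]);

  return resolving ? <LoadingSpinner /> : <>{children}</>;
};

Wrapped around the <Switch> above, nothing below the gate mounts (or fires API calls) until resolution finishes.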

Resolvers

When a deep link gets parsed and split into parts, the framework iterates over those parts and tries to build a URL string that is later used to redirect users to a specific location in the dashboard.

// to=/:account/:zone/traffic
// parts = [':account', ':zone', 'traffic']
for (const part of parts) {
  // do something with each part
}

We can build up the dynamic URL by looking at prefixes. If a part starts with “:”, it’s considered a symbol that needs to be resolved. Everything else is a static string that just gets appended.

const resolvedParts: string[] = [];
// parts = [':account', ':zone', 'traffic']
for (let part of parts) {
  if (part.startsWith(':')) {
    // resolve
  }

  resolvedParts.push(part);
}
const finalUrl = resolvedParts.join('/');

Symbols are handled by functions we call “resolvers”. A resolver is a function that:

  1. Is async.
  2. Has a context parameter.
  3. Always returns a string – the value it resolves to.

In JavaScript, async functions always return a promise. Return values that are not already Promises are implicitly wrapped in a resolved promise. Async functions also allow “await” to be used inside them. The async/await syntax is used for resolvers so they can perform any kind of asynchronous work, such as calling the API, while being able to “pause” JavaScript with “await” until that asynchronous work is done.
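
In TypeScript terms, the resolver signature used in the snippets below boils down to something like this (the exact type names in our codebase may differ; the context type is described in the “Resolver context” section):

type Resolver = (ctx: ResolverContext) => Promise<string>;
type Resolvers = Record<string, Resolver>;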

Each dynamic symbol has its own resolver. We currently have two resolvers – for account and for zone.

const RESOLVERS: Resolvers = {
  account: accountResolver,
  zone: zoneResolver
};

const resolvedParts: string[] = [];
// parts = [':account', ':zone', 'traffic']
for (let part of parts) {
  if (part.startsWith(':')) {
    // Call the matching resolver with the shared context (see "Resolver context" below).
    // For :account, accountResolver is awaited and returns "abc123";
    // for :zone, zoneResolver is awaited and returns "testsite.io".
    part = await RESOLVERS[part.slice(1)](ctx);
  }
  resolvedParts.push(part);
}
const finalUrl = resolvedParts.join('/');

The internal implementation is a little bit more complicated, but this is a rough overview of how our DeepLink works.

Resolver context

We mentioned that each resolver has a context parameter. Context is an object that is passed to resolvers from the DeepLink component and it contains a bunch of handy utilities that give resolvers control over any part of the app. For example, it has access to the Redux store (we use Redux.js in the Dashboard to help us manage the application’s state). It has access to previously resolved values, and to all other parts of the deep link. It also has functions to help with user interactions.
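
Pieced together from that description, the context looks roughly like this. The field names are approximations, not the exact interface:

interface ResolverContext {
  store: Store<DashState>;        // Redux store access
  parts: string[];                // every part of the deep link being resolved
  resolvedParts: string[];        // values resolved so far
  blockRouter: () => void;        // gate React Router while resolving
  unblockRouter: () => void;      // temporarily let React Router render a page
  waitForPageAction: (page: string, actionType: string) => Promise<AnyAction>;
  waitForAction: (actionType: string) => Promise<AnyAction>;
}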

User interactions

In many cases, a resolver is not able to resolve without the user’s help. For example, if a user has multiple zones, the resolver working on the :zone symbol needs to wait for the user to select a zone.

const zoneResolver: Resolver = async ctx => {
  const zones = await fetchZone();
  // Just one zone: the :zone symbol can be resolved to zone.name without the user's help
  if (zones.length === 1) return zones[0].name;
  if (zones.length > 1) {
    // need the user's help to pick a zone
  }
};

We already have a page in the dashboard with a zone list that looks like this.

DeepLinks and ScrollAnchor

What we need to do is give the resolver the ability to somehow show this page, and wait for the result of the user’s interaction. You might be asking: “But how do we show this page? You just told me that DeepLink blocks the entire page!” That’s true!

We decided to block the React Router to prevent unnecessary API calls and DOM updates while a deep link is resolving. But there is no harm in showing some part of the UI, if needed. To be able to do that, we added two functions to context – unblockRouter and blockRouter. These functions just toggle the state that is gating our Router component.

const zoneResolver: Resolver = async ctx => {
  // ...
  if (zones.length > 1) {
    // delegate to React Router to render the page with the zone picker
    ctx.unblockRouter();
    // need the user's help to pick a zone
    // block the router again
    ctx.blockRouter();
  }
};

Now, the last piece is to somehow observe user interactions from within the resolver. To be able to do that, we have written a powerful utility.

waitForPageAction

Resolvers are isolated functions that live outside of the application’s components. To be able to observe anything that happens in distant branches of React DOM, we created a function called waitForPageAction. This function takes two parameters:

1. pageToAwaitActionOn – URL string pointing to a page we want to await the user’s action on. For example, “dash.cloudflare.com/123abc”

2. actionType – Unique string describing the action. For example, ZONE_SELECTED.

As you may have guessed, waitForPageAction is an async function. It returns a promise that resolves with action metadata whenever that action happens on the page specified by pageToAwaitActionOn. The promise rejects when the user navigates away from pageToAwaitActionOn. Otherwise, it keeps waiting… forever.

This lets us write code that is very easy to understand.

const zoneResolver: Resolver = async ctx => {
  // ...
  if (zones.length > 1) {
    // delegate to React Router to render the page with the zone picker
    ctx.unblockRouter();
    // need the user's help to pick a zone: wait for the 'ZONE_SELECTED' action at 'dash.cloudflare.com/abc123'
    // action is an object with metadata about the zone; it contains zoneName, which this resolver uses to resolve the :zone symbol
    const action = await ctx.waitForPageAction(
      'dash.cloudflare.com/abc123',
      'ZONE_SELECTED'
    );
    // block the router again
    ctx.blockRouter();
    return action.zoneName;
  }
};

How does waitForPageAction work?

As mentioned above, we use Redux to manage our state. The actionType parameter is nothing more than the type of a Redux action. Whenever a zone is selected, React dispatches a Redux action in an onClick handler.

<ZoneCard onClick={zoneName => { dispatch({ type: 'ZONE_SELECTED', zoneName }) }} />

Now, how does waitForPageAction know that ‘ZONE_SELECTED’ has been dispatched? Aren’t we supposed to write a reducer?!

Not really. waitForPageAction is not changing any state, it’s just an observer that resolves whenever a dispatched action satisfies a predicate. And Redux has an API to subscribe to any store changes: store.subscribe(listener).

The listener will be called any time an action is dispatched, and some part of the state tree may have changed. Unfortunately, the listener does not have access to the currently dispatched action. We can only read the current state.

Solution? Store the action in the Redux store!

Redux actions are just plain objects (mostly), and thus easy to serialize. We added a simple reducer that stores all actions in the Redux state.

export function deepLinkReducer(
  state: State = DEFAULT_STATE,
  action: AnyAction
): State {
  // Record the latest dispatched action so observers can read it from the store.
  const nextState = { ...state, lastAction: action };
  return nextState;
}
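
For state.lastAction to be readable at the top of the state tree, as the snippets below assume, one way to wire this in is to wrap the existing root reducer rather than mount it as a slice. Here is a rough sketch, where appReducer stands in for the Dashboard's real root reducer and State is assumed to be the root state type:

const rootReducer = (state: DashState | undefined, action: AnyAction) =>
  deepLinkReducer(appReducer(state, action), action);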

Anytime an action is dispatched, we can read that action’s metadata in store.getState().lastAction. Now, we have everything we need to finally implement waitForPageAction.

export const waitForPageAction = (store: Store<DashState>) => (
  pageToAwaitActionOn: string,
  actionType: string
) =>
  new Promise<AnyAction>((resolve, reject) => {
    // Subscribe to the Redux store
    const unsubscribe = store.subscribe(() => {
      const state = store.getState();
      const currentPage = state.router.location.pathname;
      const lastAction = state.lastAction;
      if (currentPage !== pageToAwaitActionOn) {
        // user navigated away - unsubscribe and reject
        unsubscribe();
        reject('User navigated away');
      } else if (lastAction.type === actionType) {
        // Action types match! Unsubscribe and resolve with the action object
        unsubscribe();
        resolve(lastAction);
      }
    });
  });

The listener reads the current state and grabs the currentPage and lastAction data. If currentPage doesn’t match pageToAwaitActionOn, it means the user navigated away, and there’s no need to continue resolving the deep link – we unsubscribe, and reject the promise. Deep link resolvers are stopped, and React Router unblocked.

Else, if lastAction.type matches the actionType parameter, it means the action we are waiting on just happened! Unsubscribe, and resolve the promise with action metadata. The deep link keeps resolving.

That’s it! We also added a similar function – waitForAction – which does exactly the same thing, but is not restricted to a specific page.
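
That variant simply drops the page check; a minimal sketch under the same store shape:

export const waitForAction = (store: Store<DashState>) => (actionType: string) =>
  new Promise<AnyAction>(resolve => {
    const unsubscribe = store.subscribe(() => {
      const lastAction = store.getState().lastAction;
      if (lastAction.type === actionType) {
        // The awaited action was dispatched: stop listening and resolve with its metadata.
        unsubscribe();
        resolve(lastAction);
      }
    });
  });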

ScrollAnchor component

We implemented a wrapper component ScrollAnchor that will scroll to its wrapped content, making our deep links even more targeted. A client would wrap some content like this:

<ScrollAnchor id="super-important-setting-card">
  <SuperImportantSettingCard />
</ScrollAnchor>

And then reference it via a typical URL anchor:

dash.cloudflare.com/path/to/content#super-important-setting-card

Now I can hear you saying, “what’s the point? Can’t we get the same behavior with any old ID?”

<div id="super-important-setting-card">
  <SuperImportantSettingCard />
</div>

We thought so too! But it turns out that there are a few problems that prevent this super simple approach:

  • The Dashboard’s fixed header
  • DOM updates after page load

Since the Dashboard contains a fixed header at the top of the page, we can’t simply anchor to any ID, since the content will be scrolled to the top of the browser window behind the header. Fortunately, there’s a simple CSS solution using negative margins:

<div id="super-important-setting-card" style={{ paddingTop: headerOffset, marginTop: -headerOffset }}>
  <SuperImportantSettingCard />
</div>

DeepLinks and ScrollAnchor

This CSS trick alone would work for a static site with a fixed header, but the Dashboard is very dynamic. We found early on in testing that using a normal HTML ID anchor in a URL would cause the browser to jump to the tag on page load, but then the DOM would change in response to newly fetched information or re-rendering, and the anchored content would be pushed out of view.

A solution: scroll to the anchored content after the page content is fully loaded, i.e. after all API calls are resolved, spinners removed, content is rendered. Fortunately, there’s a good way to programmatically scroll a browser window: Element.scrollIntoView(). However, there isn’t a good way to tell when the DOM is finished changing, since it can be modified at any time after page load. Let’s consider two possible strategies for determining when to scroll anchored content into view.

Strategy #1: scroll after a fixed duration. If our goal is to make sure we only scroll to content after a page is “fully loaded”, we can simplify the problem by making some assumptions. Namely, we can assume a maximum amount of time it will take a given page to fetch resources from the backend and re-render the DOM. Let’s call this assumed max duration M milliseconds. We can then easily scroll to some content by running a timeout on page load:

setTimeout(() => scrollTo(htmlId), M)

The problem with this approach is that the DOM might finish updating before or after we scroll. We end up with vertical alignment problems (as the DOM is still settling) or a jarring, unexpected scroll (if we scroll long after the DOM is settled). Both options are bad UX, and in practice it’s difficult to choose a duration constant M that is “just right” for every single page.

Strategy #2: scroll after the DOM has “settled”. If we know that choosing a good duration M for every page isn’t practical, we should try to come up with an algorithm that can choose a better M:

  1. Define an arbitrary threshold of DOM “busyness”, B milliseconds.
  2. On page load, start a timer that will scroll to anchored content after B milliseconds.
  3. If we observe any changes to the DOM, reset the timer.
  4. Once the timer expires, we know that the DOM hasn’t changed in B milliseconds.

By varying our choice of B, we’re able to have some control over how long we’re willing to wait for a page to “finish loading”. If B is 0 milliseconds, we’ll scroll to the anchored content immediately. If it’s 1000 milliseconds, we’ll wait a full second after any DOM change before scrolling. This algorithm is more resilient than fixed threshold scrolling since it explicitly listens to the DOM, but the chosen threshold is somewhat arbitrary. After some trial and error loading a sample of Dashboard pages, we determined that a 500 millisecond busyness threshold was sufficient to allow all content to load onto a page. Here’s what the implementation looks like:

const SETTLE_THRESHOLD = 500;

// Scroll once the DOM has settled, then stop observing for further mutations.
const scrollThunk = (observer: MutationObserver) => {
  scrollToAnchor(id);
  observer.disconnect();
};

let domTimer: number;

// Every DOM mutation resets the timer, so we only scroll after the page has been
// quiet for SETTLE_THRESHOLD milliseconds.
const observer = new MutationObserver((_mutationsList, observer) => {
  domTimer = resetTimeout(domTimer, scrollThunk, SETTLE_THRESHOLD, observer);
});

observer.observe(document.body, { childList: true, subtree: true });

domTimer = window.setTimeout(scrollThunk, SETTLE_THRESHOLD, observer);
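
Neither scrollToAnchor nor resetTimeout is shown above; minimal versions might look like this (illustrative helpers, not the exact Dashboard code):

// Restart a pending timeout: clear the old timer and schedule the callback again.
function resetTimeout(
  timer: number,
  callback: (observer: MutationObserver) => void,
  delay: number,
  observer: MutationObserver
): number {
  window.clearTimeout(timer);
  return window.setTimeout(callback, delay, observer);
}

// Scroll the anchored element into view; the padding/margin trick above keeps it
// from hiding behind the fixed header.
function scrollToAnchor(id: string) {
  document.getElementById(id)?.scrollIntoView({ behavior: 'smooth', block: 'start' });
}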

A key assumption is that API calls take roughly the same amount of time to resolve. If most fetches take 250ms to resolve but others take 1500ms, we might see that the DOM hasn’t been changed for a while and think that it’s settled. Who knew there would be so much work involved in scrolling!

Conclusion

There you have it. A fully-featured deep linking solution with an intuitive schema, React Router blocking, autofilling, and scrolling. Thanks for reading.