Tag Archives: dashboard

Project A11Y: how we upgraded Cloudflare’s dashboard to adhere to industry accessibility standards

Post Syndicated from Emily Flannery original https://blog.cloudflare.com/project-a11y/

At Cloudflare, we believe the Internet should be accessible to everyone. And today, we’re happy to announce a more inclusive Cloudflare dashboard experience for our users with disabilities. Recent improvements mean our dashboard now adheres to industry accessibility standards, including Web Content Accessibility Guidelines (WCAG) 2.1 AA and Section 508 of the Rehabilitation Act.

Over the past several months, the Cloudflare team and our partners have been hard at work to make the Cloudflare dashboard1 as accessible as possible for every single one of our current and potential customers. This means incorporating accessibility features that comply with the latest Web Content Accessibility Guidelines (WCAG) and Section 508 of the US’s federal Rehabilitation Act. We are invested in working to meet or exceed these standards; to demonstrate that commitment and share openly about the state of accessibility on the Cloudflare dashboard, we have completed the Voluntary Product Accessibility Template (VPAT), a document used to evaluate our level of conformance today.

Conformance with a technical and legal spec is a bit abstract–but for us, accessibility simply means that as many people as possible can be successful users of the Cloudflare dashboard. This is important because each day, more and more individuals and businesses rely upon Cloudflare to administer and protect their websites.

For individuals with disabilities who work on technology, we believe that an accessible Cloudflare dashboard could mean improved economic and technical opportunities, safer websites, and equal access to tools that are shaping how we work and build on the Internet.

For designers and developers at Cloudflare, our accessibility remediation project has resulted in an overhaul of our component library. Our newly WCAG-compliant components expedite and simplify our work building accessible products. They make it possible for us to deliver on our commitment to an accessible dashboard going forward.

Our Journey to an Accessible Cloudflare Dashboard

In 2021, we initiated an audit with third party experts to identify accessibility challenges in the Cloudflare dashboard. This audit came back with a daunting 213-page document—a very, very long list of compliance gaps.

We learned from the audit that there were many users we had unintentionally failed to design and build for in Cloudflare dashboard user interfaces. Most especially, we had not done well accommodating keyboard users and screen reader users, who often rely upon these technologies because of a physical impairment. Those impairments include low vision or blindness, motor disabilities (examples include tremors and repetitive strain injury), or cognitive disabilities (examples include dyslexia and dyscalculia).

As a product and engineering organization, we had spent more than a decade in cycles of rapid growth and product development. While we’re proud of what we have built, the audit made clear to us that there was a great need to address the design and technical debt we had accrued along the way.

One year, four hundred Jira tickets, and over 25 new, accessible web components later, we’re ready to celebrate our progress with you. Major categories of work included:

  1. Forms: We re-wrote our internal form components with accessibility and developer experience top of mind. We improved form validation and error handling, labels, required field annotations, and made use of persistent input descriptions instead of placeholders. Then, we deployed those component upgrades across the dashboard.
  2. Data visualizations: After conducting a rigorous re-evaluation of their design, we re-engineered charts and graphs to be accessible to keyboard and screen reader users. See below for a brief case study.
  3. Heading tags: We corrected page structure throughout the dashboard by replacing all our heading tags (<h1>, <h2>, etc.) with a technique for heading level management we borrowed from Heydon Pickering, which uses React Context and basic arithmetic (a minimal sketch follows this list).
  4. SVGs: We reworked how we create SVGs (Scalable Vector Graphics), so that they are labeled properly and only exposed to assistive technology when useful.
  5. Node modules: We jumped several major versions of old, inaccessible node modules that our UI components depend upon (and we broke many things along the way).
  6. Color: We overhauled our use of color, and contributed a new volume of accessible sequential colors to our design system.
  7. Bugs: We squashed a lot of bugs that had made their way into the dashboard over the years. The most common type of bug we encountered related to incorrect or unsemantic use of HTML elements—for example, using a <div> where we should have used a <td> (table data) or <tr> (table row) element within a table.
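Here is the minimal sketch promised in item 3 above of the heading-level technique, assuming a React/TypeScript component library; the component names are illustrative, not our actual design system API. A context tracks the current depth, nested sections increment it, and a single heading component renders the result.

import React, { createContext, useContext } from "react";

// Current heading depth; a page starts at level 1.
const LevelContext = createContext(1);

// Each nested Section increments the heading level for everything inside it.
export function Section({ children }: { children: React.ReactNode }) {
  const level = useContext(LevelContext);
  return <LevelContext.Provider value={level + 1}>{children}</LevelContext.Provider>;
}

// H renders the correct tag (h1–h6) for wherever it sits in the component tree.
export function H(props: React.HTMLAttributes<HTMLHeadingElement>) {
  const level = useContext(LevelContext);
  const Tag = `h${Math.min(level, 6)}` as keyof JSX.IntrinsicElements;
  return <Tag {...props} />;
}

Used this way, <H>Page title</H> followed by <Section><H>Subsection</H></Section> renders an <h1> and then an <h2> without any component hard-coding its own level, so moving a component around a page cannot break the document outline.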

Case Study: Accessibility Work On Cloudflare Dashboard Data & Analytics

The Cloudflare dashboard is replete with analytics and data visualizations designed to offer deep insight into users’ websites’ performance, traffic, security, and more. Making those data visualizations accessible proved to be among the most complex and interdisciplinary issues we faced in the remediation work.

An example of a problem we needed to solve related to WCAG success criterion 1.4.1, which pertains to the use of color. 1.4.1 specifies that color cannot be the only means by which to convey information, such as the differentiation between two items compared in a chart or graph.

Our charts were clearly nonconforming with this standard, using color alone to represent different data being compared. For example, a typical graph might have used the color blue to show the number of requests to a website that were 200 OK, and the color orange to show 403 Forbidden, but failed to offer users another way to discern between the two status codes.

Our UI team went to work on the problem, and chose to focus our effort first on the Cloudflare dashboard time series graphs.

Interestingly, we found that design patterns recommended even by accessibility experts created wholly unusable visualizations when placed into the context of real world data. Examples of such recommended patterns include using different line weights, patterns (dashed, dotted or other line styles), and terminal glyphs (symbols set at the beginning and end of the lines) to differentiate items being compared.

We tried, and failed, to apply a number of these patterns; you can see the evolution of this work on our time series graph component in the three different images below.

v.1

Here is an early attempt at using both terminal glyphs and patterns to differentiate data in a time series graph. You can see that the terminal glyphs pile up and become indistinguishable; the differences among the line patterns are very hard to discern. This code never made it into production.

v.2

In this version, we eliminated terminal glyphs but kept line patterns. Additionally, we faded the unfocused items in the graph to help bring highlighted data to the forefront. This latter technique made it into our final solution.

v.3

Here we eliminated patterns altogether, simplified the user interface to only use the fading technique on unfocused items, and put our new, sequentially accessible colors to use. Finally, a visual design solution approved by accessibility and data visualization experts, as well as our design and engineering teams.

After arriving at our design solution, we had some engineering work to do.

In order to meet WCAG success criterion 2.1.1, we rewrote our time series graphs to be fully keyboard accessible by adding focus handling to every data point, and enabling the traversal of data using arrow keys.

Navigating time series data points by keyboard on the Cloudflare dashboard.
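To make the mechanics concrete, the following is a simplified sketch of that roving-focus pattern, assuming an SVG-based chart; it is not our production chart code, and the component and prop names are invented for illustration.

import React, { useRef } from "react";

interface Point { label: string; value: number }

// Every data point is focusable; the arrow keys move focus along the series.
export function AccessibleSeries({ points }: { points: Point[] }) {
  const refs = useRef<Array<SVGCircleElement | null>>([]);

  const onKeyDown = (index: number) => (event: React.KeyboardEvent) => {
    if (event.key !== "ArrowRight" && event.key !== "ArrowLeft") return;
    event.preventDefault();
    const next = event.key === "ArrowRight" ? index + 1 : index - 1;
    refs.current[next]?.focus(); // move focus to the neighboring data point
  };

  return (
    <svg role="group" aria-label="Requests over time">
      {points.map((point, i) => (
        <circle
          key={point.label}
          ref={(el) => { refs.current[i] = el; }}
          tabIndex={i === 0 ? 0 : -1} // roving tabindex: one Tab stop per chart
          aria-label={`${point.label}: ${point.value} requests`} // keep axis context for screen readers
          onKeyDown={onKeyDown(i)}
          cx={20 + i * 20}
          cy={120 - point.value}
          r={4}
        />
      ))}
    </svg>
  );
}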

We did some fine-tuning, specifically to support screen readers: we eliminated auditory “chartjunk” (unnecessary clutter or information in a chart or graph) and cleaned up decontextualized data (a scenario in which numbers are exposed to and read by a screen reader, but contextualizing information, like x- and y-axis labels, is not).

And lastly, to meet WCAG 1.1.1, we engineered new UI component wrappers to make chart and graph data downloadable in CSV format. We deployed this part of the solution across all charts and graphs, not just the time series charts like those shown above. No matter how you browse and interact with the web, we hope you’ll notice this functionality around the Cloudflare dashboard and find value in it.
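As a rough illustration of the CSV piece (not the actual dashboard component), a chart wrapper only needs to serialize whatever rows feed the visualization and hand them to the browser as a download:

// Serialize chart rows to CSV, quoting values so commas and quotes survive.
function toCsv(rows: Array<Record<string, string | number>>): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const escape = (value: string | number) => `"${String(value).replace(/"/g, '""')}"`;
  const lines = rows.map((row) => headers.map((h) => escape(row[h])).join(","));
  return [headers.join(","), ...lines].join("\n");
}

// Offer the CSV as a file download, e.g. downloadCsv(rows, "requests-by-status.csv").
export function downloadCsv(rows: Array<Record<string, string | number>>, filename: string) {
  const url = URL.createObjectURL(new Blob([toCsv(rows)], { type: "text/csv" }));
  const link = document.createElement("a");
  link.href = url;
  link.download = filename;
  link.click();
  URL.revokeObjectURL(url);
}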

Making all of this data available to low vision, keyboard, and assistive technology users was an interesting challenge for us, and a true team effort. It necessitated a separate data visualization report conducted by another, more specialized team of third party experts, deep collaboration between engineering and design, and many weeks of development.

Applying this thorough treatment to all data visualizations on the Cloudflare dashboard is our goal, but still work in progress. Please stay tuned for more accessible updates to our chart and graph components.

Conclusion

There’s a lot of nuance to accessibility work, and we were novices at the beginning: researching and learning as we were doing. We also broke a lot of things in the process, which (as any engineering team knows!) can be stressful.

Overall, our team’s biggest challenge was figuring out how to complete a high volume of cross-functional work in the shortest time possible, while also setting a foundation for these improvements to persist over time.

As a frontend engineering and design team, we are very grateful for having had the opportunity to focus on this problem space and to learn from truly world-class accessibility experts along the way.

Accessibility matters to us, and we know it does to you. We’re proud of our progress, and there’s always more to do to make Cloudflare more usable for all of our customers. This is a critical piece of our foundation at Cloudflare, where we are building the most secure, performant and reliable solutions for the Internet. Stay tuned for what’s next!

Not using Cloudflare yet? Get started today and join us on our mission to build a better Internet.

1All references to “dashboard” in this post are specific to the primary user authenticated Cloudflare web platform. This does not include Cloudflare’s product-specific dashboards, marketing, support, educational materials, or third party integrations.

Now all customers can share access to their Cloudflare account with Role Based Access Controls

Post Syndicated from Joseph So original https://blog.cloudflare.com/rbac-for-everyone/

Cloudflare’s mission is to help build a better Internet. Pair that with our core belief that security is something that should be accessible to everyone, and the outcome is a better and safer Internet for all. Previously, our FREE and PAYGO customers didn’t have the flexibility to give someone control of just part of their account; they had to give access to everything.

Starting today, role based access controls (RBAC) and all of our additional roles will be rolled out to users on every plan! Whether you are a small business or even a single user, you can add users only to the parts of Cloudflare you deem appropriate.

Why should I limit access?

It is good security practice to limit access to what a team member needs to do their job. Restricting access limits the overall threat surface if a given user is compromised, and limits the surface on which mistakes can be made.

If a malicious actor gains access to an account through a user who only had read access, you’ll find yourself with less of a headache than if that user had administrative access and could change how your site operates. Likewise, you can prevent users from accidentally making changes outside their role to critical features like firewall or DNS configuration.

What are roles?

Roles are a grouping of permissions that make sense together. At Cloudflare, this means grouping permissions together by access to a product suite.

Cloudflare is a critical piece of infrastructure for customers, and roles ensure that you can give your team the access they need, scoped to what they’ll do, and which products they interact with.

Once your account is enabled for Role Based Access Controls, you can go to “Manage Account” and then “Members” in the left sidebar to see the following list of available roles, each of which grants access to a different subset of the Cloudflare offering.

Role Name | Role Description
Administrator | Can access the full account, except for membership management and billing.
Administrator Read Only | Can access the full account in read-only mode.
Analytics | Can read Analytics.
Audit Logs Viewer | Can view Audit Logs.
Billing | Can edit the account’s billing profile and subscriptions.
Cache Purge | Can purge the edge cache.
Cloudflare Access | Can edit Cloudflare Access policies.
Cloudflare Gateway | Can edit Cloudflare Gateway and read Access.
Cloudflare Images | Can edit Cloudflare Images assets.
Cloudflare Stream | Can edit Cloudflare Stream media.
Cloudflare Workers Admin | Can edit Cloudflare Workers.
Cloudflare Zero Trust | Can edit Cloudflare Zero Trust.
Cloudflare Zero Trust PII | Can access Cloudflare Zero Trust PII.
Cloudflare Zero Trust Read Only | Can access Cloudflare Zero Trust in read-only mode.
Cloudflare Zero Trust Reporting | Can access Cloudflare Zero Trust reporting data.
DNS | Can edit DNS records.
Firewall | Can edit WAF, IP Firewall, and Zone Lockdown settings.
HTTP Applications | Can view and edit HTTP Applications.
HTTP Applications Read | Can view HTTP Applications.
Load Balancer | Can edit Load Balancers, Pools, Origins, and Health Checks.
Log Share | Can edit Log Share configuration.
Log Share Reader | Can read Enterprise Log Share.
Magic Network Monitoring | Can view and edit MNM configuration.
Magic Network Monitoring Admin | Can view, edit, create, and delete MNM configuration.
Magic Network Monitoring Read-Only | Can view MNM configuration.
Network Services Read (Magic) | Grants read access to network configurations for Magic services.
Network Services Write (Magic) | Grants write access to network configurations for Magic services.
SSL/TLS, Caching, Performance, Page Rules, and Customization | Can edit most Cloudflare settings except for DNS and Firewall.
Trust and Safety | Can view and request reviews for blocks.
Zaraz Admin | Can edit Zaraz configuration.
Zaraz Readonly | Can read Zaraz configuration.

If you find yourself on a team that is growing, you may want to grant firewall and DNS access to a delegated network admin, billing access to your bookkeeper, and Workers access to your developer.

Each of these roles provides specific access to a portion of your Cloudflare account, scoping them to the appropriate set of products. Even Super Administrator is now available, allowing you to provide this access to somebody without handing over your password and 2FA.

How to use our roles

The first step to using RBAC is an analysis and review of the duties and tasks of your team. When a team member primarily interacts with a specific part of the Cloudflare offering, start by giving them access to only that part. Our roles are built so that multiple roles can be assigned to a single user, so when they require more access, you can grant them an additional role.

Rollout

We will be rolling out RBAC over the next few weeks. When the roles become available in your account, head over to our documentation to learn about each of the roles in detail.

We’ve shipped so many products the Cloudflare dashboard needed its own search engine

Post Syndicated from Emily Flannery original https://blog.cloudflare.com/quick-search-beta/

Today we’re proud to announce our first release of quick search for the Cloudflare dashboard, a beta version of our first ever cross-dashboard search tool to help you navigate our products and features. This first release is now available to a small percentage of our customers. Want to request early access? Let us know by filling out this form.

What we’re launching

We’re launching quick search to speed up common interactions with the Cloudflare dashboard. Our dashboard allows you to configure Cloudflare’s full suite of products and features, and quick search gives you a shortcut.

To get started, you can access the quick search tool from anywhere within the Cloudflare dashboard by clicking the magnifying glass button in the top navigation, or hitting Ctrl + K on Linux and Windows or ⌘ + K on Mac. (If you find yourself forgetting which key combination it is, just remember that it’s ⌘-K-wik or Ctrl-K-wik.) From there, enter a search term and then select from the results shown below.

Access quick search from the top navigation bar, or use keyboard shortcuts Ctrl + K on Linux and Windows or ⌘ + K on Mac.
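Under the hood, a global shortcut like this typically comes down to a single keydown listener. The sketch below is illustrative only; openQuickSearch is a placeholder for whatever actually opens the dialog, not a real dashboard function.

// Open quick search on Ctrl + K (Linux/Windows) or ⌘ + K (Mac).
declare function openQuickSearch(): void; // placeholder for the real dialog opener

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const isMac = navigator.platform.toUpperCase().includes("MAC");
  const modifierPressed = isMac ? event.metaKey : event.ctrlKey;
  if (modifierPressed && event.key.toLowerCase() === "k") {
    event.preventDefault(); // keep the browser from claiming the shortcut
    openQuickSearch();
  }
});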

Current supported functionality

What functionality will you have access to? Below you’ll learn about the three core capabilities of quick search that are included in this release, as well as helpful tips for using the tool.

Search for a page in the dashboard

Start typing in the name of the product you’re looking for, and we’ll load matching terms after each key press. You will see results for any dashboard page that currently exists in your sidebar navigation. Then, just click the desired result to navigate directly there.

Search for “page” and you’ll see results categorized into “website-only products” and “account-wide products.”
Search for “ddos” and you’ll see results categorized into “websites,” “website-only products” and “account-wide products.”

Search for website-only products

For our customers who manage a website or domain in Cloudflare, you have access to a multitude of Cloudflare products and features to enhance your website’s security, performance and reliability. Quick search can be used to easily find those products and features, regardless of where you currently are in the dashboard (even from within another website!).

You may easily search for your website by name to navigate to your website’s Overview page:

You may also navigate to the products and feature pages within your specific website(s). Note that you can perform a website-specific search from anywhere in your core dashboard using one of two different approaches, which are explained below.

First, you may search first for your website by name, then navigate search results from there:

Alternatively, you may search first for the product or feature you’re looking for, then filter down by your website:

Search for account-wide products

Many Cloudflare products and features are not tied directly to a website or domain that you have set up in Cloudflare, like Workers, R2, Magic Transit—not to mention their related sub-pages. Now, you may use quick search to more easily navigate to those sections of the dashboard.

Here’s an overview of what’s next on our quick search roadmap (and not yet supported today):

  • Search results do not currently return results of product- and feature-specific names or configurations, such as Worker names, specific DNS records, IP addresses, Firewall Rules.
  • Search results do not currently return results from within the Zero Trust dashboard.
  • Search results do not currently return results for Cloudflare content living outside the dashboard, like Support or Developer documentation.

We’d love to hear what you think. What would you like to see added next? Let us know using the feedback link found at the bottom of the search window.

Our vision for the future of the dashboard

We’re excited to launch quick search and to continue improving our dashboard experience for all customers. Over time, we’ll mature our search functionality to index any and all content you might be looking for — including search results for all product content, Support and Developer docs, extending search across accounts, caching your recent searches, and more.

Quick search is one of many important user experience improvements we are planning to tackle over the coming weeks, months and years. The dashboard is central to your Cloudflare experience, and we’re fully committed to making your experience delightful, useful, and easy. Stay tuned for an upcoming blog post outlining the vision for the Cloudflare dashboard, from our in-app home experience to our global navigation and beyond.

For now, keep your eye out for the little search icon that will help you in your day-to-day responsibilities in Cloudflare, and if you don’t see it yet, don’t worry—we can’t wait to ship it to you soon.

If you don’t yet see quick search in your Cloudflare dashboard, you can request early access by filling out this form.

Internship Experience: Software Development Intern

Post Syndicated from Ulysses Kee original https://blog.cloudflare.com/internship-experience-software-development-intern/

Before we dive into my experience interning at Cloudflare, let me quickly introduce myself. I am currently a master’s student at the National University of Singapore (NUS) studying Computer Science. I am passionate about building software that improves people’s lives and making the Internet a better place for everyone. Back in December 2021, I joined Cloudflare as a Software Development Intern on the Partnerships team to help improve the experience that Partners have when using the platform. I was extremely excited about this opportunity and jumped at the prospect of working on serverless technology to build viable tools for our partners and customers. In this blog post, I detail my experience working at Cloudflare and the many highlights of my internship.

Interview Experience

The process began for me back when I was taking a software engineering module at NUS where one of my classmates had shared a job post for an internship at Cloudflare. I had known about Cloudflare’s DNS service prior and was really excited to learn more about the internship opportunity because I really resonated with the company’s mission to help build a better Internet.

I knew right away that this would be a great opportunity and submitted my application. Soon after, I heard back from the recruiting team and went through the interview process – the entire interview process was extremely accommodating and is definitely the most enjoyable interview experience I have had. Throughout the process, I was constantly asked about the kind of things I would like to work on and the relevance of the work that I would be doing. I felt that this thorough communication carried on throughout the internship and really was a cornerstone of my experience interning at Cloudflare.

My Internship

My internship began with onboarding and training, and afterwards I had discussions with my mentor, Ayush Verma, on the projects we aimed to complete during the internship and the order of objectives. The main issue we wanted to address was the manual process that our internal teams and partners go through when they want to duplicate the configuration settings on a zone, or when they want to compare one zone to other zones to ensure that there are no misconfigurations. As you can imagine, with the number of different configurations offered on the Cloudflare dashboard for customers, it could take significant time to copy every setting and rule manually from one zone to another. Additionally, this process, when done manually, poses a potential risk of misconfiguration due to human error. Furthermore, as more and more customers onboard different zones onto Cloudflare, there needs to be a more automated and streamlined way for them to make these configuration setups.

Initially, we discussed using Terraform, as Cloudflare already supports Terraform automation. However, this approach would only cater to customers and users that have more technical resources and, in true Cloudflare spirit, we wanted to keep it simple enough that it could be used by anyone and everyone. Therefore, we decided to leverage the publicly available Cloudflare APIs and create a browser-based application that interacts with these APIs to display configurations and make changes easily from a simple UI.

With the end goal of simplifying the experience for our partners and customers in duplicating zone configurations, we decided to build a Zone Copier web application solely built on Cloudflare Workers. This tool would, in a click of a button, automatically copy over every setting that can be copied from one zone to another, significantly reducing the amount of time and effort required to make the changes.

Alongside the Zone Copier, we would have auxiliary tools such as a Zone Viewer and a Zone Comparison, where a customer can easily see a full view of their configurations on a single webpage and compare the different zones that they use. These other applications improve upon the existing methods through which Cloudflare users can view their zone configurations, and allow for direct comparison between different zones.

Importantly, these applications are not to replace the Cloudflare Dashboard, but to complement it instead – for deeper dives into a single particular configuration setting, the Cloudflare Dashboard remains the way to go.

To begin building the web application, I spent the first few weeks diving into the publicly available APIs offered by Cloudflare as part of the v4 API to verify the outputs of each endpoint, and the type of data that would be sent as a response from a request. This took much longer than expected as certain endpoints provided different default responses for a zone that has either an empty setting – for example, not having any Firewall Rules created – or uses a nested structure for its relevant response. These different potential responses have to be examined so that when the web application calls the respective API endpoint, the responses are handled appropriately. This process was quite manual as each endpoint had to be verified individually to ensure the output would work seamlessly with the application.

Once I completed my research, I was able to start designing the web application. Building the web application was a very interesting experience as the stack rested solely on Workers, a serverless application platform. My prior experience building web applications involved deploying a server built using Express and Node.js, whereas for my internship project I relied completely on a backend built using the itty-router library on Workers to interface with the publicly available Cloudflare APIs. I found this extremely exciting, as building a serverless application required less overhead compared to setting up and deploying a server, and using Workers itself has many other added benefits, such as zero cold starts. This introduction to serverless technology and my experience deep-diving into the capabilities of Workers has really opened my eyes to the possibilities that Workers as a platform can offer. With Workers, you can deploy any application on Cloudflare’s global network like I did!
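To give a flavor of what such a backend can look like, here is a minimal sketch of a Workers route built with itty-router that reads a zone’s settings through the public v4 API. This is not the project’s actual code: the route shape and the API token binding (env.API_TOKEN) are assumptions for illustration.

import { Router } from "itty-router";

const router = Router();

// GET /zones/:id/settings — fetch the settings the tools would view, compare, or copy.
router.get("/zones/:id/settings", async (request: { params: { id: string } }, env: { API_TOKEN: string }) => {
  const upstream = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${request.params.id}/settings`,
    { headers: { Authorization: `Bearer ${env.API_TOKEN}` } }
  );
  const body: { result?: unknown } = await upstream.json();
  return new Response(JSON.stringify(body.result ?? []), {
    headers: { "content-type": "application/json" },
  });
});

router.all("*", () => new Response("Not found", { status: 404 }));

export default {
  fetch: (request: Request, env: { API_TOKEN: string }) => router.handle(request, env),
};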

For the frontend of the web application, I used React and the Chakra UI library to build the user interface on which the Zone Viewer, Zone Comparison, and Zone Copier are based. The routing between different pages was done using React Router, and the application is deployed directly through Workers.

Here is a screenshot of the application:

Presenting the prototype application

As developers will know, the best way to obtain feedback for the tool that you’re building is to directly have your customers use them and let you know what they think of your application and the kind of features they want to have built on top of it. Therefore, once we had a prototype version of the web application for the Zone Viewer and Zone Comparison complete, we presented the application to the Solutions Engineering team to hear their thoughts on the impact the tool would have on their work and additional features they would like to see built on the application. I found this process very enriching as they collectively mentioned how impactful the application would be for their work and the value add this project provides to them.

Some interesting feedback and feature requests I received were:

  1. The Zone Copier would definitely be very useful for our partners who have to replicate the configuration of one zone to another regularly, and it’s going to help make sure there are fewer human errors in the process of configuring the setups.
  2. Besides duplicating configurations from zone-to-zone, could we use this to replicate the configurations from a best-in-class setup for different use cases and allow partners to deploy this with a few clicks?
  3. Can we use this tool to generate quarterly reports?
  4. The Zone Viewer would be very helpful for us when we produce documentation on a particular zone’s configuration as part of a POC report.
  5. The Zone Viewer will also give us much deeper insight to better understand the current zone configurations and provide recommendations to improve it.

It was also a very cool experience speaking to the broad Solutions Engineering team as I found that many were very technically inclined and had many valid suggestions for improving the architecture and development of the applications. A special thanks to Edwin Wong for setting up the sharing session with the internal team, and many thanks to Xin Meng, AQ Jiao, Yonggil Choi, Steve Molloy, Kyouhei Hayama, Claire Lim and Jamal Boutkabout for their great insight and suggestions!

Impact of Cloudflare outside of work

While Cloudflare is known for its impeccable transparency throughout the company, and the stellar products it provides in helping make the Internet better, I wanted to take this opportunity to talk about the other endeavors that the company has too.

Cloudflare is part of the Pledge 1%, where the company dedicates 1% of products and 1% of our time to give back to the local communities as well as all the communities we support online around the world.

I took part in one of these activities, where we spent a morning cleaning up parts of the East Coast Park beach, by picking up trash and litter that had been left behind by other park users. Here’s a picture of us from that morning:

From day one, I have been thoroughly impressed by Cloudflare’s commitment to its culture and the effort everyone at Cloudflare puts in to make the company a great place to work and have a positive impact on the surrounding community.

In addition to giving back to the community, other aspects of company culture include having a good team spirit and safe working environment where you feel appreciated and taken care of. At Cloudflare, I have found that everyone is very understanding of work commitments. I faced a few challenges during the internship where I had to spend additional time on university related projects and work, and my manager has always been very supportive and understanding if I required additional time to complete parts of the internship project.

Concluding takeaways

My experience interning at Cloudflare has been extremely positive, and I have seen first hand how transparent the company is with not only its employees but also its customers, and it truly is a great place to work. Cloudflare’s collaborative culture allowed me to access members from different teams, to obtain their thoughts and assistance with certain issues that I faced from time to time. I would not have been able to produce an impactful project without the help of the different brilliant, and motivated, people I worked with across the span of the internship, and I am truly grateful for such a rewarding experience.

We are getting ready to open intern roles for this coming Fall, so we encourage you to visit our careers page frequently, to be up-to-date on all the opportunities we have within our teams.

Query and visualize Amazon Redshift operational metrics using the Amazon Redshift plugin for Grafana

Post Syndicated from Sergey Konoplev original https://aws.amazon.com/blogs/big-data/query-and-visualize-amazon-redshift-operational-metrics-using-the-amazon-redshift-plugin-for-grafana/

Grafana is a rich interactive open-source tool by Grafana Labs for visualizing data across one or many data sources. It’s used in a variety of modern monitoring stacks, allowing you to have a common technical base and apply common monitoring practices across different systems. Amazon Managed Grafana is a fully managed, scalable, and secure Grafana-as-a-service solution developed by AWS in collaboration with Grafana Labs.

Amazon Redshift is the most widely used data warehouse in the cloud. You can view your Amazon Redshift cluster’s operational metrics on the Amazon Redshift console, use Amazon CloudWatch, and query Amazon Redshift system tables directly from your cluster. The first two options provide a set of predefined general metrics and visualizations. The last one allows you to use the flexibility of SQL to get deep insights into the details of the workload. However, querying system tables requires knowledge of system table structures. To address that, we came up with a consolidated Amazon Redshift Grafana dashboard that visualizes a set of curated operational metrics and works on top of the Amazon Redshift Grafana data source. You can easily add it to an Amazon Managed Grafana workspace, as well as to any other Grafana deployments where the data source is installed.

This post guides you through a step-by-step process to create an Amazon Managed Grafana workspace and configure an Amazon Redshift cluster with a Grafana data source for it. Lastly, we show you how to set up the Amazon Redshift Grafana dashboard to visualize the cluster metrics.

Solution overview

The following diagram illustrates the solution architecture.

The solution includes the following components:

  • The Amazon Redshift cluster to get the metrics from.
  • Amazon Managed Grafana, with the Amazon Redshift data source plugin added to it. Amazon Managed Grafana communicates with the Amazon Redshift cluster via the Amazon Redshift Data Service API.
  • The Grafana web UI, with the Amazon Redshift dashboard using the Amazon Redshift cluster as the data source. The web UI communicates with Amazon Managed Grafana via an HTTP API.

We walk you through the following steps during the configuration process:

  1. Configure an Amazon Redshift cluster.
  2. Create a database user for Amazon Managed Grafana on the cluster.
  3. Configure a user in AWS Single Sign-On (AWS SSO) for Amazon Managed Grafana UI access.
  4. Configure an Amazon Managed Grafana workspace and sign in to Grafana.
  5. Set up Amazon Redshift as the data source in Grafana.
  6. Import the Amazon Redshift dashboard supplied with the data source.

Prerequisites

To follow along with this walkthrough, you should have the following prerequisites:

  • An AWS account
  • Familiarity with the basic concepts of the following services:
    • Amazon Redshift
    • Amazon Managed Grafana
    • AWS SSO

Configure an Amazon Redshift cluster

If you don’t have an Amazon Redshift cluster, create a sample cluster before proceeding with the following steps. For this post, we assume that the cluster identifier is called redshift-demo-cluster-1 and the admin user name is awsuser.

  1. On the Amazon Redshift console, choose Clusters in the navigation pane.
  2. Choose your cluster.
  3. Choose the Properties tab.

To make the cluster discoverable by Amazon Managed Grafana, you must add a special tag to it.

  1. Choose Add tags.
  2. For Key, enter GrafanaDataSource.
  3. For Value, enter true.
  4. Choose Save changes.
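If you prefer to apply the tag programmatically rather than through the console, a short script with the AWS SDK for JavaScript v3 can do the same thing; this is an optional sketch, and the Region, account ID, and cluster ARN below are placeholders you would replace with your own.

import { RedshiftClient, CreateTagsCommand } from "@aws-sdk/client-redshift";

const client = new RedshiftClient({ region: "us-west-2" }); // your cluster's Region

// Tag the cluster so Amazon Managed Grafana can discover it.
await client.send(
  new CreateTagsCommand({
    ResourceName: "arn:aws:redshift:us-west-2:123456789012:cluster:redshift-demo-cluster-1",
    Tags: [{ Key: "GrafanaDataSource", Value: "true" }],
  })
);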

Create a database user for Amazon Managed Grafana

Grafana will be directly querying the cluster, and it requires a database user to connect to the cluster. In this step, we create the user redshift_data_api_user and apply some security best practices.

  1. On the cluster details page, choose Query data and Query in query editor v2.
  2. Choose the redshift-demo-cluster-1 cluster we created previously.
  3. For Database, enter the default dev.
  4. Enter the user name and password that you used to create the cluster.
  5. Choose Create connection.
  6. In the query editor, enter the following statements and choose Run:
CREATE USER redshift_data_api_user PASSWORD '<password>' CREATEUSER;
ALTER USER redshift_data_api_user SET readonly TO TRUE;
ALTER USER redshift_data_api_user SET query_group TO 'superuser';

The first statement creates a user with superuser privileges necessary to access system tables and views (make sure to use a unique password). The second prohibits the user from making modifications. The last statement isolates the queries the user can run to the superuser queue, so they don’t interfere with the main workload.

In this example, we use service managed permissions in Amazon Managed Grafana and a workspace AWS Identity and Access Management (IAM) role as an authentication provider in the Amazon Redshift Grafana data source. We create the database user redshift_data_api_user using the AmazonGrafanaRedshiftAccess policy.

Configure a user in AWS SSO for Amazon Managed Grafana UI access

Two authentication methods are available for accessing Amazon Managed Grafana: AWS SSO and SAML. In this example, we use AWS SSO.

  1. On the AWS SSO console, choose Users in the navigation pane.
  2. Choose Add user.
  3. In the Add user section, provide the required information.

In this post, we select Send an email to the user with password setup instructions. You need to be able to access the email address you enter because you use this email further in the process.

  1. Choose Next to proceed to the next step.
  2. Choose Add user.

An email is sent to the email address you specified.

  1. Choose Accept invitation in the email.

You’re redirected to sign in as a new user and set a password for the user.

  1. Enter a new password and choose Set new password to finish the user creation.

Configure an Amazon Managed Grafana workspace and sign in to Grafana

Now you’re ready to set up an Amazon Managed Grafana workspace.

  1. On the Amazon Grafana console, choose Create workspace.
  2. For Workspace name, enter a name, for example grafana-demo-workspace-1.
  3. Choose Next.
  4. For Authentication access, select AWS Single Sign-On.
  5. For Permission type, select Service managed.
  6. Choose Next to proceed.
  7. For IAM permission access settings, select Current account.
  8. For Data sources, select Amazon Redshift.
  9. Choose Next to finish the workspace creation.

You’re redirected to the workspace page.

Next, we need to enable AWS SSO as an authentication method.

  1. On the workspace page, choose Assign new user or group.
  2. Select the previously created AWS SSO user under Users and Select users and groups tables.

You need to make the user an admin, because we set up the Amazon Redshift data source with it.

  1. Select the user from the Users list and choose Make admin.
  2. Go back to the workspace and choose the Grafana workspace URL link to open the Grafana UI.
  3. Sign in with the user name and password you created in the AWS SSO configuration step.

Set up an Amazon Redshift data source in Grafana

To visualize the data in Grafana, we need to access the data first. To do so, we must create a data source pointing to the Amazon Redshift cluster.

  1. On the navigation bar, choose the lower AWS icon (there are two) and then choose Redshift from the list.
  2. For Regions, choose the Region of your cluster.
  3. Select the cluster from the list and choose Add 1 data source.
  4. On the Provisioned data sources page, choose Go to settings.
  5. For Name, enter a name for your data source.
  6. By default, Authentication Provider should be set as Workspace IAM Role, Default Region should be the Region of your cluster, and Cluster Identifier should be the name of the chosen cluster.
  7. For Database, enter dev.
  8. For Database User, enter redshift_data_api_user.
  9. Choose Save & Test.

A success message should appear.

Import the Amazon Redshift dashboard supplied with the data source

As the last step, we import the default Amazon Redshift dashboard and make sure that it works.

  1. In the data source we just created, choose Dashboards on the top navigation bar and choose Import to import the Amazon Redshift dashboard.
  2. Under Dashboards on the navigation sidebar, choose Manage.
  3. In the dashboards list, choose Amazon Redshift.

The dashboard appears, showing operational data from your cluster. When you add more clusters and create data sources for them in Grafana, you can choose them from the Data source list on the dashboard.

Clean up

To avoid incurring unnecessary charges, delete the Amazon Redshift cluster, AWS SSO user, and Amazon Managed Grafana workspace resources that you created as part of this solution.

Conclusion

In this post, we covered the process of setting up an Amazon Redshift dashboard working under Amazon Managed Grafana with AWS SSO authentication and querying from the Amazon Redshift cluster under the same AWS account. This is just one way to create the dashboard. You can modify the process to set it up with SAML as an authentication method, use custom IAM roles to manage permissions with more granularity, query Amazon Redshift clusters outside of the AWS account where the Grafana workspace is, use an access key and secret or AWS Secrets Manager based connection credentials in data sources, and more. You can also customize the dashboard by adding or altering visualizations using the feature-rich Grafana UI.

Because the Amazon Redshift data source plugin is an open-source project, you can install it in any Grafana deployment, whether it’s in the cloud, on premises, or even in a container running on your laptop. That allows you to seamlessly integrate Amazon Redshift monitoring into virtually all your existing Grafana-based monitoring stacks.


About the Authors

Sergey Konoplev is a Senior Database Engineer on the Amazon Redshift team. Sergey has been focusing on automation and improvement of database and data operations for more than a decade.

Milind Oke is a Data Warehouse Specialist Solutions Architect based out of New York. He has been building data warehouse solutions for over 15 years and specializes in Amazon Redshift.

How to set up Amazon Quicksight dashboard for Amazon Pinpoint and Amazon SES engagement events

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-set-up-amazon-quicksight-dashboard-for-amazon-pinpoint-and-amazon-ses-events/

In this post, we will walk through using Amazon Pinpoint and Amazon Quicksight to create customizable messaging campaign reports. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service that allows customers to connect with users over channels like email, SMS, push, or voice. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. This solution allows event and user data from Amazon Pinpoint to flow into Amazon Quicksight. Once in Quicksight, customers can build their own reports that shows campaign performance on a more granular level.

Engagement Event Dashboard

Customers want to view the results of their messaging campaigns in ever increasing levels of granularity and ensure their users see value from the email, SMS or push notifications they receive. Customers also want to analyze how different user segments respond to different messages, and how to optimize subsequent user communication. Previously, customers could only view this data in Amazon Pinpoint analytics, which offers robust reporting on events, funnels, and campaigns, but does not allow analysis across these different parameters or the building of custom reports. For example, it cannot show campaign revenue across different user segments, or show what events were generated after a user viewed a campaign in a funnel analysis. Customers would need to extract this data themselves and do the analysis in Excel.

Prerequisites

  • The Digital User Engagement Events Database solution must be set up first.
  • Customers should be prepared to purchase Amazon QuickSight, because it has its own costs that are not covered within the Amazon Pinpoint cost.

Solution Overview

This solution uses the Athena tables created by the Digital User Engagement Events Database solution. The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about Amazon Pinpoint engagement events and log them in Amazon Athena in the form of Athena views. You still need to manually configure Amazon QuickSight dashboards to link to these newly generated Athena views. Follow the steps below, in order, for further information.

Use case(s)

The event dashboard solution has the following use cases:

  • Deep dive into engagement insights. (eg: SMS events, Email events, Campaign events, Journey events)
  • The ability to view engagement events at the individual user level.
  • Data/process mining to turn raw event data into useful marketing insights.
  • User engagement benchmarking and end user event funneling.
  • Compute campaign conversions (post campaign user analysis to show campaign effectiveness)
  • Build funnels that show user progression.

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

Step 1 – Create AWS account, Pinpoint Project, Implement Event-Database-Solution.
As part of this step, customers need to implement the DUE Events Database solution, as the current solution (the DUE event dashboard) is an extension of it. The basic assumption here is that the customer has already configured an Amazon Pinpoint project or Amazon SES within the required AWS Region before implementing this step.

The steps required to implement an event dashboard solution are as follows.

a/ Follow the steps mentioned in the Event Database solution to implement the complete stack. Before installing the complete stack, copy and save the Athena events database name as shown in the diagram. In my case it is due_eventdb. The database name is required as an input parameter for the current event dashboard solution.

b/ Once the solution is deployed, navigate to the Outputs page of the CloudFormation stack, then copy and save the following information, which will be required as input parameters in step 2 of the current event dashboard solution.

Step 2 – Deploy the CloudFormation template for the event dashboard solution
This step generates a number of new Amazon Athena views that will serve as a data source for Amazon QuickSight. Continue with the following actions.

  • Download the CloudFormation template (“Event-dashboard.yaml”) from AWS samples.
  • Navigate to the CloudFormation page in the AWS console, click “Create stack” at the top right, and select the option “With new resources (standard)”.
  • Leave “Prerequisite – Prepare template” set to “Template is ready” and, for the “Specify template” option, select “Upload a template file”. On the same page, click “Choose file”, browse to find the “Event-dashboard.yaml” file and select it. Once the file is uploaded, click “Next” and deploy the stack.

  • Enter the following information under the section “Specify stack details”:
    • EventAthenaDatabaseName – as mentioned in Step 1-a.
    • S3DataLogBucket – as mentioned in Step 1-b.
    • This solution will create five additional Athena views, which are:
      • All_email_events
      • All_SMS_events
      • All_custom_events (Custom events can be Mobile app/WebApp/Push Events)
      • All_campaign_events
      • All_journey_events

Step 3 – Create the Amazon QuickSight engagement dashboard
This step walks you through the process of creating an Amazon QuickSight dashboard for Amazon Pinpoint engagement events using the Athena views you created in step 2.

  1. To set up Amazon QuickSight for the first time, please follow this link (this step is not needed if you have already set up Amazon QuickSight). Please make sure you are an Amazon QuickSight administrator.
  2. Go to or search for Amazon QuickSight in the AWS console.
  3. Create a new analysis and then select “New dataset”.
  4. Select Athena as the data source.
  5. Next, select which analyses you need for the respective events. This solution provides the option to create five different sets of analyses, as mentioned in Step 2: a/ All email events, b/ All SMS events, c/ All custom events (mobile app, web app, web push, etc.), d/ All campaign events, e/ All journey events. Dashboards can be created from QuickSight analyses, and the same dashboards can be shared among the organization’s stakeholders. Following are the steps to create analyses and dashboards for the different types of events.
  6. Email Events –
    • For all email events, name the analysis “All-emails-events” (any customer-preferred naming works), select the primary Athena workgroup, and then create a data source.
    • Once you create the data source, QuickSight lists all the views and tables available under the specified database (in our case, due_eventdb). Select the email_all_events view as the data source.
    • Select the event data location for the analysis. There are mainly two options available: a/ Import to SPICE for quicker analysis, or b/ Directly query your data. Select the preferred option and then click “Visualize the data”.
    • Import to SPICE for quicker analysis – SPICE is the Amazon QuickSight Super-fast, Parallel, In-memory Calculation Engine. It’s engineered to rapidly perform advanced calculations and serve data. In Enterprise edition, data stored in SPICE is encrypted at rest. (1 GB of storage is available for free; for extra storage, customers need to pay extra. Please refer to the cost section in this document.)
    • Directly query your data – This option enables QuickSight to query the Athena (or other source) database directly, and QuickSight will not store any data.
    • Now that you have selected a data source, you will be taken to a blank QuickSight canvas (the blank analysis page) as shown in the following image. Drag and drop the visualization type you need onto the auto-graph pane. Note that Amazon QuickSight is a business intelligence platform, so customers are free to choose the desired visualization types to observe the individual engagement events.
    • As part of this blog, we show how to create some simple analysis graphs to visualize the engagement events.
    • As an initial step, select the tabular visualization as shown in the image.
    • Select all the event dimensions that you want to include as table columns on the X axis. An Amazon QuickSight table can be extended to show many columns; how much data to visualize depends entirely on the business requirements of the marketers.
    • Further filtering on the table can be done using QuickSight filters, which you can apply on specific granular values. For example, to filter on the destination email ID: 1/ Select the filter from the left-hand menu. 2/ Add the destination field as the filtering criterion. 3/ Tick the destination value you are trying to filter, or search for the destination email ID. 4/ All the results in the table are then filtered according to the filter criterion.
    • Next, add another visual from the top left corner (“Add -> Add Visual”), then select the donut chart from the visual types pane. Donut charts are commonly used for displaying aggregations.
    • Then select “event_type” as the group to visualize the aggregated events. This helps marketers and business users figure out how many email events occurred and what the aggregated success ratio, click ratio, complaint ratio, bounce ratio, etc. are for the emails or campaigns sent to end users.
    • To create a QuickSight dashboard from the QuickSight analysis, click the Share menu option at the top right corner, then select “Publish dashboard”. Provide the required dashboard name while publishing the dashboard. The same dashboard can be shared with multiple audiences in the organization.
    • Following is the final version of the dashboard. As mentioned above, QuickSight dashboards can be shared with other stakeholders, and the complete dashboard can also be exported as an Excel sheet.
  7. SMS Events-
    • As shown above, SMS events can be analyzed using QuickSight and dashboards can be created from the analysis. Repeat all of the sub-steps listed in step 6. Following is a sample SMS dashboard.
  8. Custom Events-
    • After you integrate your application (app) with Amazon Pinpoint, Amazon Pinpoint can stream event data about user activity, different types of custom events, and message deliveries for the app, for example _session.start, Product_page_view, and _session.stop. Repeat all of the sub-steps listed in step 6 to create a custom events dashboard.
  9. Campaign events
    • As shown before, campaign events can also be included in the same dashboard, or you can create a new dashboard only for campaign events.

Cost for Event dashboard solution
You are responsible for the cost of the AWS services used while running this solution. As of the date of publication, the cost of running this solution with default settings in the US West (Oregon) Region is approximately $65 a month. The cost estimate includes the cost of AWS Lambda, Amazon Athena, and Amazon QuickSight. The estimate assumes querying 1 TB of data in a month, two authors managing Amazon QuickSight every month, four Amazon QuickSight readers viewing the events dashboard an unlimited number of times in a month, and a QuickSight SPICE capacity of 50 GB per month. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. On the CloudFormation console, select your stack and choose Delete. This cleans up all the resources created by the stack.
  2. Delete the Amazon Quicksight Dashboards and data sets that you have created.

Conclusion

In this blog post, I have demonstrated how marketers, business users, and business analysts can utilize Amazon Quicksight dashboards to evaluate and exploit user engagement data from Amazon SES and Pinpoint event streams. Customers can also utilize this solution to understand how Amazon Pinpoint campaigns lead to business conversions, in addition to analyzing multi-channel communication metrics at the individual user level.

Next steps

This blog is aimed at both the tech team and the marketing analyst team, as it involves a code deployment to create simple Athena views as well as the steps to create an Amazon QuickSight dashboard for analyzing Amazon SES and Amazon Pinpoint engagement events at the individual user level. Customers can then create their own Amazon QuickSight dashboards to illustrate conversion ratios and propensity trends in real time by integrating campaign events with app-level events such as purchase conversions and order placements.

Extending the solution

You can download the AWS CloudFormation templates and code for this solution from our public GitHub repository and modify them to fit your needs.


About the Author


Satyasovan Tripathy works at Amazon Web Services as a Senior Specialist Solution Architect. He is based in Bengaluru, India, and specialises in the AWS Digital User Engagement product portfolio. Outside of work, he enjoys reading and travelling.

Dark Mode for the Cloudflare Dashboard

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/dark-mode/

Dark Mode for the Cloudflare Dashboard

Dark Mode for the Cloudflare Dashboard

Today, dark mode is available for the Cloudflare Dashboard in beta! From your user profile, you can configure the Cloudflare Dashboard in light mode, dark mode, or match it to your system settings.

For those unfamiliar, dark mode, or light on dark color schemes, uses light text on dark backgrounds instead of the typical dark text on light (usually white) backgrounds. In low-light environments, this can help reduce eyestrain and actually reduce power consumption on OLED screens. For many though, dark mode is simply a preference supported widely by applications and devices.

Dark Mode for the Cloudflare Dashboard
Side by side comparing the Cloudflare dashboard in dark mode and in light mode

How to enable dark mode

  1. Log into Cloudflare.
  2. Go to your user profile.
  3. Under Appearance, select an option: Light, Dark, or Use system setting. For the time being, your choice is saved into local storage (a minimal sketch of persisting this kind of preference follows below).
Dark Mode for the Cloudflare Dashboard
The appearance card in the dashboard for modifying color themes
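Under the hood, an appearance preference like this only needs a few lines of browser code to persist and to respect the system setting. Here is a minimal sketch, not the dashboard's actual implementation; the storage key and the applyTheme helper are assumptions for illustration:

// Persist the user's choice and resolve "system" via a media query.
const THEME_KEY = 'theme-preference'; // assumed storage key

function applyTheme(theme: 'light' | 'dark') {
  // Illustrative only: expose the theme so CSS can react to it.
  document.documentElement.dataset.theme = theme; // e.g. <html data-theme="dark">
}

function resolveTheme(preference: 'light' | 'dark' | 'system'): 'light' | 'dark' {
  if (preference === 'system') {
    return window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light';
  }
  return preference;
}

function setThemePreference(preference: 'light' | 'dark' | 'system') {
  localStorage.setItem(THEME_KEY, preference);
  applyTheme(resolveTheme(preference));
}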

There are many primers and how-tos on implementing dark mode, and plenty of articles discussing its general complications, including this straightforward explanation. Instead, we will talk about what enabled us to implement dark mode in only a matter of weeks.

Cloudflare’s Design System – Our Secret Weapon

Before getting into the specifics of how we implemented dark mode, it helps to understand the system that underpins all product design and UI work at Cloudflare – the Cloudflare Design System.

Dark Mode for the Cloudflare Dashboard
The six pillars of the design system: logo, typography, color, layout, icons, videos

Cloudflare’s Design System defines and documents the interface elements and patterns used to build products at Cloudflare. The system can be used to efficiently build consistent experiences for Cloudflare customers. In practice, the Design System defines primitives like typography, color, layout, and icons in a clear and standard fashion. What this means is that anytime a new interface is designed, or new UI code is written, an easily referenceable, highly detailed set of documentation is available to ensure that the work matches previous work. This increases productivity, especially for new employees, and prevents repetitious discussions about style choices and interaction design.

Built on top of these design primitives, we also have our own component library. This is a set of ready to use components that designers and engineers can combine to form the products our customers use every day. They adhere to the design system, are battle tested in terms of code quality, and enhance the user experience by providing consistent implementations of common UI components. Any button, table, or chart you see looks and works the same because it is the same underlying code with the relevant data changed for the specific use case.

So, what does all of this have to do with dark mode? Everything, it turns out. Due to the widespread adoption of the design system across the dashboard, changing a set of variables like background color and text color in a specific way and seeing the change applied nearly everywhere at once becomes much easier. Let’s take a closer look at how we did that.

Turning Out the Lights

The use of color at Cloudflare has a well documented history. When we originally set out to build our color system, the tools we built and the extensive research we performed resulted in a ten-hue, ten-luminosity set of colors that can be used to build digital products. These colors were built to be accessible — not just in terms of internal use, but for our customers. Take our blue hue scale, for example.

Dark Mode for the Cloudflare Dashboard
Our blue color scale, as used on the Cloudflare Dashboard. This shows color-contrast accessible text and background pairings for each step in the scale.

Each hue in our color scale contains ten colors, ordered by luminosity in ten increasing increments from low luminosity to high luminosity. This color scale allows us to filter down the choice of color from the 16,777,216 hex codes available on the web to a much simpler choice of just hue and brightness. As a result, we now have a methodology where designers know the first five steps in a scale have sufficient color contrast with white or lighter text, and the last five steps in a scale have sufficient contrast with black or darker text.

Color scales also allow us to make changes while designing in a far more fluid fashion. If a piece of text is too bright relative to its surroundings, drop down a step on the scale. If an element is too visually heavy, take a step-up. With the Design System and these color scales in place, we’ve been able to design and ship products at a rapid rate.

So, with this color system in place, how do we begin to ship a dark mode? It turns out there’s a simple solution to this, and it’s built into the JS standard library. We call reverse() and flip the luminosity scales.
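To make that concrete, here is a minimal sketch of the idea, assuming each hue is stored as an array ordered from lowest to highest luminosity; the hex values below are illustrative, not our real palette:

// An illustrative ten-step scale, ordered from low to high luminosity.
const blueScale: string[] = [
  '#001141', '#001D70', '#002A8F', '#0038A8', '#0045C0',
  '#2F6BDE', '#5B8DEF', '#8FB3F5', '#C2D6FA', '#EDF4FF'
];

// Light mode reads the scale as-is; dark mode flips it.
// Copy first, because Array.prototype.reverse() mutates in place.
const darkBlueScale = [...blueScale].reverse();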

Dark Mode for the Cloudflare Dashboard
Our blue color scale after calling reverse on it. High luminosity colors are now at the start of the scale, making them contrast accessible with darker backgrounds (and vice-versa).

By performing this small change within our dashboard’s React codebase and shipping a production preview deploy, we were able to see the Cloudflare Dashboard in dark mode with a whole new set of colors in a matter of minutes.

Dark Mode for the Cloudflare Dashboard
An early preview of the Cloudflare Dashboard after flipping our color scales.

While not perfect, this brief prototype gave us an incredibly solid baseline and validated the approach with a number of benefits.

Every product built using the Cloudflare Design System now had a dark mode theme built in for free, with no additional work required by teams.

Our color contrast principles remain sound — just as the first five colors in a scale would be accessible with light text, when flipped, the first five colors in the scale are accessible with dark text. Our scales aren’t perfectly symmetrical, but when using white and black, the principle still holds.

In a traditional approach of “inverting” colors, we face the issue of a color’s hue being changed too. When a color is broken down into its constituent hue, saturation, and luminosity values, inverting it would mean a vibrant light blue would become a dull dark orange. Our approach of just inverting the luminosity of a color means that we retain the saturation and hue of a color, meaning we retain Cloudflare’s brand aesthetic and the associated meaning of each hue (blue buttons as calls-to-action, and so on).

Of course, shipping a dark mode for a product as complex as the Cloudflare Dashboard can’t just be done in a matter of minutes.

Not Quite Just Turning the Lights Off

Although our prototype did meet our initial requirements of facilitating the dashboard in a dark theme, some details just weren’t quite right. The data visualization and mapping libraries we use, our icons, text, and various button and link states all had to be audited and required further iterations. One of the most obvious and prominent examples was the page background color. Our prototype had simply changed the background color from white (#FFFFFF) to black (#000000). It quickly became apparent that black wasn’t appropriate. We received feedback that it was “too intense” and “harsh.” We instead opted for off black, specifically what we refer to as “gray.0” or #1D1D1D. The difference may not seem noticeable, but at larger dimensions, the gray background is much less distracting.

Here is what it looks like in our design system:

Dark Mode for the Cloudflare Dashboard
Black background color contrast for white text
Dark Mode for the Cloudflare Dashboard
Gray background color contrast for white text

And here is a more realistic example:

Dark Mode for the Cloudflare Dashboard
lorem ipsum sample text on black background and on gray background

The numbers at the end of each row represent the contrast of the text color on the background. According to the Web Content Accessibility Guidelines (WCAG), the standard contrast ratio for text should be at least 4.5:1. In our case, while both of the above examples exceed the standard, the gray background ends up being less harsh to use across an entire application. This is not the case with light mode as dark text on white (#FFFFFF) background works well.
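For reference, the contrast numbers above come from the WCAG relative-luminance formula. Here is a sketch of that calculation; the helper names are ours for illustration, not part of any Cloudflare library:

// Convert "#RRGGBB" into an [r, g, b] triple of 0-255 values.
function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.slice(1), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

// WCAG relative luminance of an sRGB color.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const [R, G, B] = [r, g, b].map((v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio between two colors; WCAG AA asks for at least 4.5:1 for body text.
function contrastRatio(a: string, b: string): number {
  const [lighter, darker] = [relativeLuminance(hexToRgb(a)), relativeLuminance(hexToRgb(b))]
    .sort((x, y) => y - x);
  return (lighter + 0.05) / (darker + 0.05);
}

// Both pairings comfortably clear the 4.5:1 threshold:
// contrastRatio('#FFFFFF', '#000000') is roughly 21
// contrastRatio('#FFFFFF', '#1D1D1D') is roughly 17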

Our technique during the prototyping stage involved flipping our color scale; however, we additionally created a tool to let us replace any color within the scale arbitrarily. As the dashboard is made up of charts, icons, links, shadows, buttons and certainly other components, we needed to be able to see how they reacted in their various possible states. Importantly, we also wanted to improve the accessibility of these components and pay particular attention to color contrast.

Dark Mode for the Cloudflare Dashboard
Color picker tool screenshot showing a color scale

For example, a button is made up of four distinct states:

1) Default
2) Focus
3) Hover
4) Active

Dark Mode for the Cloudflare Dashboard
Example showing the various colors for states of buttons in light and dark mode

We wanted to ensure that each of these states would be at least compliant with the AA accessibility standards according to the WCAG. Using a combination of our design systems documentation and a prioritized list of components and pages based on occurrence and visits, we meticulously reviewed each state of our components to ensure their compliance.

Dark Mode for the Cloudflare Dashboard
Side by side comparison of the navbar in light and dark modes

The navigation bar used to select between the different applications was a component we wanted to treat differently compared to light mode. In light mode, the app icons are a solid blue with an outline of the icon; it’s a distinct look and certainly one that grabs your attention. However, for dark mode, the consensus was that it was too bright and distracting for the overall desired experience. We wanted the overall aesthetic of dark mode to be subtle, but it’s important to not conflate aesthetic with poor usability. With that in mind, we made the decision for the navigation bar to use outlines around each icon, instead of being filled in. Only the selected application has a filled state. By using outlines, we are able to create sufficient contrast between the current active application and the rest. Additionally, this provided a visually distinct way to present hover states, by displaying a filled state.

After applying the same methodology as described to other components like charts, icons, and links, we end up with a nicely tailored experience without requiring a substantial overhaul of our codebase. For any new UI that teams at Cloudflare build going forward, they will not have to worry about extra work to support dark mode. This means we get an improved customer experience without any impact to our long term ability to keep delivering amazing new capabilities — that’s a win-win!

Welcome to the Dark Side

We know many of you have been asking for this, and we are excited to bring dark mode to all. Without the investment into our design system by many folks at Cloudflare, dark mode would not have seen the light of day. You can enable dark mode on the Appearance card in your user profile. You can give feedback to shape the future of the dark theme with the feedback form in the card.

If you find these types of problems interesting, come help us tackle them! We are hiring across product, design, and engineering!

Introducing logs from the dashboard for Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/workers-dashboard-logs/

Introducing logs from the dashboard for Cloudflare Workers

Introducing logs from the dashboard for Cloudflare Workers

If you’re writing code: what can go wrong, will go wrong.

Many developers know the feeling: “It worked in the local testing suite, it worked in our staging environment, but… it’s broken in production?” Testing can reduce mistakes and debugging can help find them, but logs give us the tools to understand and improve what we are creating.

if (this === undefined) {
  console.log("there’s no way… right?") // Narrator: there was.
}

While logging can help you understand when the seemingly impossible is actually possible, it’s something that no developer really wants to set up or maintain on their own. That’s why we’re excited to launch a new addition to the Cloudflare Workers platform: logs and exceptions from the dashboard.

Starting today, you can view and filter the console.log output and exceptions from a Worker… at no additional cost with no configuration needed!

View logs, just a click away

When you view a Worker in the dashboard, you’ll now see a “Logs” tab which you can click on to view a detailed stream of logs and exceptions. Here’s what it looks like in action:

Each log entry contains an event with a list of logs, exceptions, and request headers if it was triggered by an HTTP request. We also automatically redact sensitive URLs and headers such as Authorization, Cookie, or anything else that appears to have a sensitive name.
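As a rough illustration of that kind of header redaction, here is a hedged sketch; the pattern list and the replacement text are assumptions for the example, not the product's actual behavior:

// Replace the values of headers whose names look sensitive.
const SENSITIVE_HEADER = /authorization|cookie|secret|token|key/i;

function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const redacted: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    redacted[name] = SENSITIVE_HEADER.test(name) ? 'REDACTED' : value;
  }
  return redacted;
}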

If you are in the Durable Objects open beta, you will also be able to view the logs and requests sent to each Durable Object. This is a great tool to help you understand and debug the interactions between your Worker and a Durable Object.

For now, we support filtering by event status and type, though you can expect more filters to be added to the dashboard very soon. Today, we support advanced filtering with the wrangler CLI, which is discussed later in this blog.

console.log(), and you’re all set

It’s really simple to get started with logging for Workers. Simply invoke one of the standard console APIs, such as console.log(), and we handle the rest. That’s it! There’s no extra setup, no configuration needed, and no hidden logging fees.

function logRequest (request) {
  const { cf, headers } = request
  const { city, region, country, colo, clientTcpRtt  } = cf
  
  console.log("Detected location:", [city, region, country].filter(Boolean).join(", "))
  if (clientTcpRtt) {
     console.debug("Round-trip time from client to", colo, "is", clientTcpRtt, "ms")
  }

  // You can also pass an object, which will be interpreted as JSON.
  // This is great if you want to define your own structured log schema.
  console.log({ headers })
}

In fact, you don’t even need to use console.log to view an event from the dashboard. If your Worker doesn’t generate any logs or exceptions, you will still be able to see the request headers from the event.

Advanced filters, from your terminal

If you need more advanced filters you can use wrangler, our command-line tool for deploying Workers. We've updated the wrangler tail command to support sampling and a new set of advanced filters. You also no longer need to install or configure cloudflared to use the command. It's also much faster: no more waiting around for logs to appear. Here are a few examples:

# Filter by your own IP address, and if there was an uncaught exception.
wrangler tail --format=pretty --ip-address=self --status=error

# Filter by HTTP method, then apply a 10% sampling rate.
wrangler tail --format=pretty --method=GET --sampling-rate=0.1

# Filter using a generic search query.
wrangler tail --format=pretty --search="TypeError"

We recommend using the “pretty” format, since wrangler will output your logs in a colored, human-readable format. (We’re also working on a similar display for the dashboard.)

However, if you want to access structured logs, you can use the “json” format. This is great if you want to pipe your logs to another tool, such as jq, or save them to a file. Here are a few more examples:

# Parses each log event, but only outputs the url.
wrangler tail --format=json | jq .event.request?.url

# You can also specify --once to disconnect the tail after receiving the first log.
# This is useful if you want to run tests in a CI/CD environment.
wrangler tail --format=json --once > event.json

Try it out!

Both logs from the dashboard and wrangler tail are available and free for existing Workers customers. If you would like more information or a step-by-step guide, check out any of the resources below.

Internationalizing the Cloudflare Dashboard

Post Syndicated from James Culveyhouse original https://blog.cloudflare.com/internationalizing-the-cloudflare-dashboard/

Internationalizing the Cloudflare Dashboard

Cloudflare’s dashboard now supports four new languages (and multiple locales): Spanish (with country-specific locales: Chile, Ecuador, Mexico, Peru, and Spain), Brazilian Portuguese, Korean, and Traditional Chinese. Our customers are global and diverse, so in helping build a better Internet for everyone, it is imperative that we bring our products and services to customers in their native language.

Since last year Cloudflare has been hard at work internationalizing our dashboard. At the end of 2019, we launched our first language other than US English: German. At the end of March 2020, we released three additional languages: French, Japanese, and Simplified Chinese. If you want to start using the dashboard in any of these languages, you can change your language preference in the top right of the Cloudflare dashboard. The preference selected will be saved and used across all sessions.

Internationalizing the Cloudflare Dashboard

In this blog post, I want to help those unfamiliar with internationalization and localization to better understand how it works. I also would like to tell the story of how we made internationalizing and localizing our application a standard and repeatable process along with sharing a few tips that may help you as you do the same.

Beginning the journey

The first step in internationalization is externalizing all the strings in your application. In concrete terms this means taking any text that could be read by a user and extracting it from your application code into separate, stand-alone files. This needs to be done for a few reasons:

  • It enables translation teams to work on translating these strings without needing to view or change any application code.
  • Most translators typically use Translation Management applications which automate aspects of the workflow and provide them with useful utilities (like translation memory, change tracking, and a number of useful parsing and formatting tools). These applications expect standardized text formats (such as json, xml, md, or csv files).

From an engineering perspective, separating application code from translations allows for making changes to strings without re-compiling and/or re-deploying code. In our React based application, externalizing most of our strings boiled down to changing blocks of code like this:

<Button>Cancel</Button>
<Button>Next</Button>

Into this:

<Button><Trans id="signup.cancel" /></Button>
<Button><Trans id="signup.next" /></Button>
 
// And in a separate catalog.json file for en_US:
{
 "signup.cancel": "Cancel",
 "signup.next": "Next",
 // ...many more keys
}

The <Trans> component shown above is the fundamental i18n building block in our application. In this scheme, translated strings are kept in large dictionaries keyed by a translation id. We call these dictionaries “translation catalogs”, and there are a set of translation catalogs for each language that we support.

At runtime, the <Trans> component looks up the translation in the correct catalog for the provided key and then inserts this translation into the page (via the DOM). All of an application’s static text can be externalized with simple transformations like these.
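As a rough sketch of that lookup (a simplified illustration, not our production component; the catalog object below is an assumption):

import React from 'react';

// A simplified, illustrative catalog for a single locale.
const catalog: Record<string, string> = {
  'signup.cancel': 'Cancel',
  'signup.next': 'Next',
};

// A bare-bones <Trans>: look up the key and render the result.
function Trans({ id }: { id: string }) {
  // Fall back to the raw key if the translation is missing.
  return <>{catalog[id] ?? id}</>;
}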

However, when dynamic data needs to be intermixed with static text, the solution becomes a little more complicated. Consider the following seemingly straightforward example which is riddled with i18n landmines:

<span>You've selected { totalSelected } Page Rules.</span>

It may be tempting to externalize this sentence by chopping it up into a few parts, like so:

<span>
 <Trans id="selected.prefix" /> {totalSelected } <Trans id="pageRules" />
</span>
 
// English catalog.json
{
 "selected.prefix": "You've selected",
 "pageRules": "Page Rules",
 // ...
}
 
// Japanese catalog.json
{
 "selected.prefix": "選択しました",
 "pageRules": "ページ ルール",
 // ...
}
 
// German catalog.json
{
 "selected.prefix": "Sie haben ausgewählt",
 "pageRules": "Page Rules",
 // ...
}
 
// Portuguese (Brazil) catalog.json
{
 "selected.prefix": "Você selecionou",
 "pageRules": "Page Rules",
 // ...
}

This gets the job done and may even seem like an elegant solution. After all, both the selected.prefix and pageRules strings seem like they are destined to be reused. Unfortunately, chopping sentences up and then concatenating translated bits back together like this turns out to be the single largest pitfall when externalizing strings for internationalization.

The problem is that when translated, the various words that make up a sentence can be morphed in different ways based on context (singular vs plural contexts, due to word gender, subject/verb agreement, etc). This varies significantly from language to language, as does word order. For example in English, the sentence “We like them” follows a subject-verb-object order, while other languages might follow subject-object-verb (We them like), verb-subject-object (Like we them), or even other orderings. Because of these nuanced differences between languages, concatenating translated phrases into a sentence will almost always lead to localization errors.

The code example above contains actual translations we got back from our translation teams when we supplied them with “You’ve selected” and “Page Rules” as separate strings. Here’s how this sentence would look when rendered in the different languages:

Language Translation
Japanese 選択しました { totalSelected } ページ ルール。
German Sie haben ausgewählt { totalSelected } Page Rules
Portuguese (Brazil) Você selecionou { totalSelected } Page Rules.

To compare, we also gave them the sentence as a single string using a placeholder for the variable, and here’s the result:

Language Translation
Japanese %{ totalSelected } 件のページ ルールを選択しました。
German Sie haben %{ totalSelected } Page Rules ausgewählt.
Portuguese (Brazil) Você selecionou %{ totalSelected } Page Rules.

As you can see, the translations differ for Japanese and German. We’ve got a localization bug on our hands.

So, in order to guarantee that translators will be able to convey the true meaning of your text with fidelity, it's important to keep each sentence intact as a single externalized string. Our <Trans> component supports easy injection of values into template strings, which lets us do exactly that:

<span>
  <Trans id="pageRules.selected" values={{ count: totalSelected }} />
</span>

// English catalog.json
{
  "pageRules.selected": "You've selected %{ count } Page Rules.",
  // ...
}

// Japanese catalog.json
{
  "pageRules.selected": "%{ count } 件のページ ルールを選択しました。",
  // ...
}

// German catalog.json
{
  "pageRules.selected": "Sie haben %{ count } Page Rules ausgewählt.",
  // ...
}

// Portuguese(Brazil) catalog.json
{
  "pageRules.selected": "Você selecionou %{ count } Page Rules.",
  // ...
}

This allows translators to have the full context of the sentence, ensuring that all words will be translated with the correct inflection.

You may have noticed another potential issue. What happens in this example when totalSelected is just 1? With the above code, the user would see “You've selected 1 Page Rules”. We need to conditionally pluralize the sentence based on the value of our dynamic data. This turns out to be a fairly common use case, and our <Trans> component handles this automatically via the smart_count feature:

<span>
  <Trans id="pageRules.selected" values={{ smart_count: totalSelected }} />
</span>

// English catalog.json
{
  "pageRules.selected": "You've selected %{ smart_count } Page Rule. |||| You've selected %{ smart_count } Page Rules.",
}

// Japanese catalog.json
{
  "pageRules.selected": "%{ smart_count } 件のページ ルールを選択しました。 |||| %{ smart_count } 件のページ ルールを選択しました。",
}

// German catalog.json
{
  "pageRules.selected": "Sie haben %{ smart_count } Page Rule ausgewählt. |||| Sie haben %{ smart_count } Page Rules ausgewählt.",
}

// Portuguese (Brazil) catalog.json
{
  "pageRules.selected": "Você selecionou %{ smart_count } Page Rule. |||| Você selecionou %{ smart_count } Page Rules.",
}

Here, the singular and plural versions are delimited by ||||. <Trans> will automatically select the right translation to use depending on the value of the passed in totalSelected variable.
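To make the selection concrete, here is a sketch of how a two-form language could pick between the delimited variants; Polyglot's real plural rules vary per language, so this is illustrative only:

// Pick a plural form from a "||||"-delimited template and substitute the count.
function selectPluralForm(template: string, smartCount: number): string {
  const forms = template.split('||||').map((s) => s.trim());
  const index = smartCount === 1 ? 0 : 1; // English-style two-form rule
  return forms[Math.min(index, forms.length - 1)].replace(
    /%\{\s*smart_count\s*\}/g,
    String(smartCount)
  );
}

// selectPluralForm("You've selected %{ smart_count } Page Rule. |||| You've selected %{ smart_count } Page Rules.", 1)
// -> "You've selected 1 Page Rule."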

Yet another stumbling block occurs when markup is mixed in with a block of text we’d like to externalize as a single string. For example, what if you need some phrase in your sentence to be a link to another page?

<VerificationReminder>
  Don't forget to <Link>verify your email address.</Link>
</VerificationReminder>

To solve for this use case, the <Trans> component allows for arbitrary elements to be injected into placeholders in a translation string, like so:

<VerificationReminder>
  <Trans id="notification.email_verification" Components={[Link]} componentProps={[{ to: '/profile' }]} />
</VerificationReminder>

// catalog.json
{
  "notification.email_verification": "Don't forget to <0>verify your email address.</0>",
  // ...
}

In this example, the <Trans> component will replace placeholder elements (<0>,<1>, etc.) with instances of the component type located at that index in the Components array. It also passes along any data specified in componentProps to that instance. The example above would boil down to the following in React:

// en-US
<VerificationReminder>
  Don't forget to <Link to="/profile">verify your email address.</Link>
</VerificationReminder>

// es-ES
<VerificationReminder>
  No olvide <Link to="/profile">verificar la dirección de correo electrónico.</Link>
</VerificationReminder>

Safety third!

The functionality outlined above was enough for us to externalize our strings. However, it did at times result in bulky, repetitive code that was easy to mess up. A couple of pitfalls quickly became apparent.

The first was that small hardcoded strings were now easier to hide in plain sight, and because they weren’t glaringly obvious to a developer until the rest of the page had been translated, the feedback loop in finding these was often days or weeks. A common solution to surfacing these issues is introducing a pseudolocalization mode into your application during development which will transform all properly internationalized strings by replacing each character with a similar looking unicode character.

For example You've selected 3 Page Rules. might be transformed to Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ.
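A transform like this can be a very small function. Here is an illustrative sketch; the character map is abbreviated and only approximates the output shown above:

// Map ASCII letters to similar-looking Unicode characters.
const PSEUDO_MAP: Record<string, string> = {
  a: 'á', c: 'ç', d: 'δ', e: 'è', g: 'ϱ', l: 'ℓ', o: 'ô',
  s: 'ƨ', t: 'ƭ', u: 'ú', v: 'Ʋ', y: 'ý', P: 'Þ', Y: 'Ý',
};

function pseudolocalize(input: string): string {
  return input.replace(/[A-Za-z]/g, (ch) => PSEUDO_MAP[ch] ?? ch);
}

// pseudolocalize("You've selected 3 Page Rules.")
// -> roughly "Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ."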

Another handy feature at your disposal in a pseudolocalization mode is the ability to shrink or lengthen all strings by a fixed amount in order to plan for content width differences. Here’s the same pseudolocalized sentence increased in length by 50%: Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ. ℓôřè₥ ïƥƨú₥ δô. This is useful in helping both engineers as well as designers spot places where content length could potentially be an issue. We first recognized this problem when rolling out support for German, which at times tends to have somewhat longer words than English.

This meant that in a lot of places the text in page elements would overflow, such as in this “Add” button:

Internationalizing the Cloudflare Dashboard

There aren’t a lot of easy fixes for these types of problems that don’t compromise the user experience.

For best results, variable content width needs to be baked into the design itself. Since fixing these bugs often means sending it back upstream to request a new design, the process tends to be time consuming. If you haven’t given much thought to content design in general, an internationalization effort can be a good time to start. Having standards and consistency around the copy used for various elements in your app can not only cut down on the number of words that need translating, but also eliminate the need to think through the content length pitfalls of using a novel phrase.

The other pitfall we ran into was that the translation ids — especially long and repetitive ones — are highly susceptible to typos.

Pop quiz: which of these translation keys will break our app, traffic.load_balancing.analytics.filters.origin_health_title or traffic.load_balancing.analytics.filters.origin_heath_title?

Nestled among hundreds of other lines of changes, these are hard to spot in code review. Most apps have a fallback so missing translations don’t result in a page breaking error. As a result a bug like this might go unnoticed entirely if it’s hidden well enough (in say, a help text flyout).

Fortunately, with a growing percentage of our codebase in TypeScript, we were able to leverage the type-checker to give developers feedback as they wrote the code. Here’s an example where our code editor is helpfully showing us a red underline to indicate that the id property is invalid (due to the missing “l”):

Internationalizing the Cloudflare Dashboard

Not only did it make the problems more obvious, but it also meant that violations would cause builds to fail, preventing bad code from entering the codebase.
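The trick is to derive the set of valid ids from the catalogs themselves, so anything else fails to compile. Here is a minimal sketch; the names and shapes are illustrative, not our actual type definitions:

// Deriving a union of valid translation ids from a catalog object.
const catalog = {
  'traffic.load_balancing.analytics.filters.origin_health_title': 'Origin Health',
  'pageRules.selected': "You've selected %{ smart_count } Page Rules.",
} as const;

type TranslationId = keyof typeof catalog;

interface TransProps {
  id: TranslationId; // 'origin_heath_title' (missing "l") is now a compile-time error
  values?: Record<string, unknown>;
}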

Scaling locale files

In the beginning, you'll probably start out with one translation file per locale that you support. In addition, the naming scheme you use for your keys can remain somewhat simple. As your app scales, your translation files will grow too large and need to be broken up into separate files. Files that are too large will overwhelm Translation Management applications or, if left unchecked, your code editor. All of our translation strings (not including keys), when lumped together into a single file, add up to around 50,000 words. For comparison, that's roughly the same size as a copy of "The Hitchhiker's Guide to the Galaxy" or "Slaughterhouse-Five".

We break up our translations into a number of “catalog” files roughly corresponding to feature verticals (like Firewall or Cloudflare Workers). This works out well for our developers since it provides a predictable place to find strings, and keeps the line count of a translation catalog down to a manageable length. It also works out well for the outside translation teams since a single feature vertical is a good unit of work for a translator (or small team).

In addition to per-feature catalogs, we have a common catalog file to hold strings that are re-used throughout the application. It allows us to keep ids short ( common.delete vs some_page.some_tab.some_feature.thing.delete ) and lowers the likelihood of duplication since developers habitually check the common catalog before adding new strings.

Libraries

So far we’ve talked at length about our <Trans> component and what it can do. Now, let’s talk about how it’s built.

Perhaps unsurprisingly, we didn’t want to reinvent the wheel and come up with a base i18n library from scratch. Due to prior efforts to internationalize the legacy parts of our application written in Backbone, we were already using Airbnb’s Polyglot library, a “tiny I18n helper library written in JavaScript” which, among other things, “provides a simple solution for interpolation and pluralization, based off of Airbnb’s experience adding I18n functionality to its Backbone.js and Node apps”.

We took a look at a few of the most popular libraries that had been purpose-built for internationalizing React applications, but ultimately decided to stick with Polyglot. We created our <Trans> component to bridge the gap to React. We chose this direction for a few reasons:

  • We didn’t want to re-internationalize the legacy code in our application in order to migrate to a new i18n support library.
  • We also didn’t want the combined overhead of supporting 2 different i18n schemes for new vs legacy code.
  • Writing our own trans component gave us the flexibility to write the interface we wanted. Since Trans is used just about everywhere, we wanted to make sure it was as ergonomic as possible to developers.

If you're just getting started with i18n in a new React based web-app, react-intl and i18next are two popular libraries that supply a component similar to the <Trans> described above.

The biggest pain point of the <Trans> component as outlined is that strings have to be kept in a separate file from your source code. Switching between multiple files as you author new code or modify existing features is just plain annoying. It’s even more annoying if the translation files are kept far away in the directory structure, as they often need to be.

There are some newer i18n libraries, such as js-lingui, that obviate this problem by taking an extraction-based approach to handling translation catalogs. In this scheme, you still use a <Trans> component, but you keep your strings in the component itself, not in a separate catalog:

<span>
  <Trans>Hmm... We couldn't find any matching websites.</Trans>
</span>

A tool that you run at build time then does the work of finding all of these strings and extracting them into catalogs for you. For example, the above would result in the following generated catalogs:

// locales/en_US.json
{
  "Hmm... We couldn't find any matching websites.": "Hmm... We couldn't find any matching websites.",
}

// locales/de_DE.json
{
  "Hmm... We couldn't find any matching websites.": "Hmm... Wir konnten keine übereinstimmenden Websites finden."
}

The obvious advantage to this approach is that we no longer have separate files! The other advantage is that there’s no longer any need for type checking ids since typos can’t happen anymore.

However, at least for our use case, there were a few downsides.

First, human translators sometimes appreciate the context of the translation keys. It helps with organization, and it gives some clues about the string’s purpose.

And although we no longer have to worry about typos in translation ids, we’re just as susceptible to slight copy mismatches (ex. “Verify your email” vs “Verify your e-mail”). This is almost worse, since in this case it would introduce a near duplication which would be hard to detect. We’d also have to pay for it.

Whichever tech stack you’re working with, there are likely a few i18n libraries that can help you out. Which one to pick is highly dependent on technical constraints of your application and the context of your team’s goals and culture.

Numbers, Dates, and Times

Earlier, when we talked about injecting data into translated strings, we glossed over a major issue: the data we're injecting may also need to be formatted to conform to the user's local customs. This is true for dates, times, numbers, currencies, and some other types of data.

Let’s take our simple example from earlier:

<span>You've selected { totalSelected } Page Rules.</span>

Without proper formatting, this will appear correct for small numbers, but as soon as things get into the thousands, localization problems will arise, since the way that digits are grouped and separated with symbols varies by culture. Here’s how three-hundred thousand and three hundredths is formatted in a few different locales:

Language (Country) Code Formatted Number
German (Germany) de-DE 300.000,03
English (US) en-US 300,000.03
English (UK) en-GB 300,000.03
Spanish (Spain) es-ES 300.000,03
Spanish (Chile) es-CL 300.000,03
French (France) fr-FR 300 000,03
Hindi (India) hi-IN 3,00,000.03
Indonesian (Indonesia) in-ID 300.000,03
Japanese (Japan) ja-JP 300,000.03
Korean (South Korea) ko-KR 300,000.03
Portuguese (Brazil) pt-BR 300.000,03
Portuguese (Portugal) pt-PT 300 000,03
Russian (Russia) ru-RU 300 000,03


The way that dates are formatted varies significantly from country to country. If you've developed your UI mainly with a US audience in mind, you're probably displaying dates in a way that will feel foreign and perhaps un-intuitive to users from just about any other place in the world. Among other things, date formatting can vary in terms of separator choice, whether single digits are zero padded, and in the way that the day, month, and year portions are ordered. Here's how March 4th of the current year is formatted in a few different locales:

Language (Country) Code Formatted Date
German (Germany) de-DE 4.3.2020
English (US) en-US 3/4/2020
English (UK) en-GB 04/03/2020
Spanish (Spain) es-ES 4/3/2020
Spanish (Chile) es-CL 04-03-2020
French (France) fr-FR 04/03/2020
Hindi (India) hi-IN 4/3/2020
Indonesian (Indonesia) in-ID 4/3/2020
Japanese (Japan) ja-JP 2020/3/4
Korean (South Korea) ko-KR 2020. 3. 4.
Portuguese (Brazil) pt-BR 04/03/2020
Portuguese (Portugal) pt-PT 04/03/2020
Russian (Russia) ru-RU 04.03.2020


Time format varies significantly as well. Here’s how time is formatted in a few selected locales:

Language (Country) Code Formatted Time
German (Germany) de-DE 14:02:37
English (US) en-US 2:02:37 PM
English (UK) en-GB 14:02:37
Spanish (Spain) es-ES 14:02:37
Spanish (Chile) es-CL 14:02:37
French (France) fr-FR 14:02:37
Hindi (India) hi-IN 2:02:37 pm
Indonesian (Indonesia) in-ID 14.02.37
Japanese (Japan) ja-JP 14:02:37
Korean (South Korea) ko-KR 오후 2:02:37
Portuguese (Brazil) pt-BR 14:02:37
Portuguese (Portugal) pt-PT 14:02:37
Russian (Russia) ru-RU 14:02:37


Libraries for Handling Numbers, Dates, and Times

Ensuring the correct format for all these types of data for all supported locales is no easy task. Fortunately, there are a number of mature, battle-tested libraries that can help you out.

When we kicked off our project, we were using the Moment.js library extensively for date and time formatting. This handy library abstracts away the details of formatting dates to different lengths (“Jul 9th 20”, “July 9th 2020”, vs “Thursday”), displaying relative dates (“2 days ago”), amongst many other things. Since almost all of our dates were already being formatted via Moment.js for readability, and since Moment.js already has i18n support for a large number of locales, it meant that we were able to flip a couple of switches and have properly localized dates with very little effort.

There are some strong criticisms of Moment.js (mainly around bundle size), but ultimately the benefits of switching to a lower-footprint alternative didn't outweigh the cost of redoing every date and time in the dashboard.

Numbers were a very different story. We had, as you might imagine, thousands of raw, unformatted numbers being displayed throughout the dashboard. Hunting them down was a laborious and often manual process.

To handle the actual formatting of numbers, we used the Intl API (the Internationalization library defined by the ECMAScript standard):

var number = 300000.03;
var formatted = number.toLocaleString('hi-IN'); // 3,00,000.03
// This probably works in the browser you're using right now!

Fortunately, browser support for Intl has come quite a long way in recent years, with all modern browsers having full support.

Some modern JavaScript engines like V8 have even moved away from self-hosted JavaScript implementations of these libraries in favor of C++ based builtins, resulting in significant speedup.

Support for older browsers can be somewhat lacking, however. Here's a simple demo site (source code) built with Cloudflare Workers that shows how dates, times, and numbers are rendered in a handful of locales.

Some combinations of old browsers and operating systems will yield less-than-ideal results. For example, here's how the same dates and times from above are rendered on Windows 8 with IE 10:

Internationalizing the Cloudflare Dashboard Internationalizing the Cloudflare Dashboard

If you need to support older browsers, this can be solved with a polyfill.

Translating

With all strings externalized, and all injected data being carefully formatted to locale specific standards, the bulk of the engineering work is complete. At this point, we can now claim that we’ve internationalized our application, since we’ve adapted it in a way that makes it easy to localize.

Next comes the process of localization where we actually create varying content based on the user’s language and cultural norms.

This is no small feat. Like we mentioned before, the strings in our application added together are the size of a small novel. It takes a significant amount of coordination and human expertise to create a translated copy that both captures the information with fidelity and speaks to the user in a familiar way.

There are many ways to handle the translation work: leveraging multi-lingual staff members, contracting the work out to individual translators, agencies, or even going all in and hiring teams of in-house translators. Whatever the case may be, there needs to be a smooth process for both workflow signalling and moving assets between the translation and development teams.

A healthy i18n program provides developers with a black-box interface to the process: they put new strings in a translation catalog file and commit the change, and without any more effort on their part, the feature code they wrote is available in production for all supported locales a few days later. Similarly, in a well-run process, translators remain blissfully unaware of the particulars of the development process and application architecture. They receive files that load easily in their tools and clearly indicate what translation work needs to be done.

So, how does it actually work in practice?

We have a set of automated scripts that can be run on-demand by the localization team to package up a snapshot of our localization catalogs for all supported languages. During this process, a few things happen:

  • JSON files are generated from the catalog files authored in TypeScript.
  • If any new catalog files were added in English, placeholder copies are created for all other supported languages.
  • Placeholder strings are added for all languages when new strings are added to our base catalog (a minimal sketch of this step follows the list).
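For illustration, the placeholder step might look something like this sketch; the catalog shapes and the empty-string convention are assumptions, not our exact scripts:

// Add an empty placeholder for any key present in the base catalog
// but missing from a target locale's catalog.
function addPlaceholders(
  baseCatalog: Record<string, string>,
  targetCatalog: Record<string, string>
): Record<string, string> {
  const result = { ...targetCatalog };
  for (const key of Object.keys(baseCatalog)) {
    if (!(key in result)) {
      result[key] = ''; // empty string marks an untranslated entry
    }
  }
  return result;
}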

From there, the translation catalogs are uploaded to the Translation Management system via the UI or automated calls to the API. Before handing it off to translators, the files are pre-processed by comparing each new string against a Translation Memory (a cache of previously translated strings and substrings). If a match is found, the existing translation is used. Not only does this save cost by not re-translating strings, but it improves quality by ensuring that previously reviewed and approved translations are used when possible.

Suppose your locale files end up looking something like this:

{
 "verify.button": "Verify Email",
 "other.verify.button": "Verify Email",
 "verify.proceed.link": "Verify Email to proceed",
 // ...
}

Here, we have strings that are duplicated verbatim, as well as sub-strings that are copied. Translation services are billed by the word — you don’t want to pay for something twice and run the risk of a consistency issue arising. To this end, having a well-maintained Translation Memory will ensure that these strings are taken care of in the pre-translation steps before translators even see the file.

Once the translation job is marked as ready, it can take translation teams anywhere from hours to weeks to return translated copies, depending on a number of factors such as the size of the job, the availability of translators, and the contract terms. The concerns of this phase could constitute another blog article of similar length: sourcing the right translation team, controlling costs, ensuring quality and consistency, making sure the company's brand is properly conveyed, etc. Since the focus of this article is largely technical, we'll gloss over the details here, but make no mistake: getting this part wrong will tank your entire effort, even if you've achieved your technical objectives.

After translation teams signal that new files are ready for pickup, the assets are pulled from the server and unpacked into their correct locations in the application code. We then run a suite of automated checks to make sure that all files are valid and free of any formatting issues.

An optional (but highly recommended) step takes place at this stage: in-context review. A team of translation reviewers then looks at the translated output in context to make sure everything looks perfect in its finalized state. Having support staff who are both highly proficient with the product and fluent in the target language is especially useful in this effort. Shoutout to all our team members from around the company who have taken the time and effort to do this. To make this possible for outside contractors, we prepare special preview versions of our app that allow them to test with development mode locales enabled.

And there you have it, everything it takes to deliver a localized version of your application to your users all around the world.

Continual Localization

It would be great to stop here, but what we've discussed up until this point is the effort required to do it once. As we all know, code changes. New strings will be gradually added, modified, and deleted over the course of time as new features are launched and tweaked.

Since translation is a highly human process that often involves effort from people in different corners of the world, there is a lower bound to the timeframe in which turnover is possible. Since our release cadence (daily) is often faster than this turnover rate (2-5 days), it means that developers making changes to features have to make a choice: slow down to match this cadence, or ship slightly ahead of the localization schedule without full coverage.

In order to ensure that features shipping ahead of translations don’t cause application-breaking errors, we fallback to our base locale (en_US) if a string doesn’t exist for the configured language.

Some applications have a slightly different fallback behavior: displaying raw translation keys (perhaps you’ve seen some.funny.dot.delimited.string in an app you’re using). There’s a tradeoff between velocity and correctness here, and we chose to optimize for velocity and minimal overhead. In some apps correctness is important enough to slow down cadence for i18n. In our case it wasn’t.
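In code, that fallback is a simple lookup chain. A minimal sketch, assuming catalogs are plain keyed objects already loaded in memory:

// Look up a translation, falling back to the base locale, then to the raw key.
function translate(
  id: string,
  catalogs: Record<string, Record<string, string>>,
  locale: string
): string {
  return catalogs[locale]?.[id] ?? catalogs['en_US']?.[id] ?? id;
}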

Finishing Touches

There are a few more things we can do to optimize the user experience in our newly localized application.

First, we want to make sure there isn’t any performance degradation. If our application made the user fetch all of its translated strings before rendering the page, this would surely happen. So, in order to keep everything running smoothly, the translation catalogs are fetched asynchronously and only as the application needs them to render some content on the page. This is easy to accomplish nowadays with the code splitting features available in module bundlers that support dynamic import statements such as Parcel or Webpack.
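As a sketch of what that lazy loading can look like (the file layout here is an assumption; both Webpack and Parcel can split a dynamic import like this into its own chunk):

// Fetch a locale's catalog only when it is needed to render content.
async function loadCatalog(locale: string): Promise<Record<string, string>> {
  const mod = await import(`./locales/${locale}.json`);
  return mod.default;
}

// Usage: const catalog = await loadCatalog('de_DE');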

We also want to eliminate any friction the user might experience with needing to constantly select their desired language when visiting different Cloudflare properties. To this end, we made sure that any language preference a user selects on our marketing site or our support site persists as they navigate to and from our dashboard (all links are in French to belabor the point).

What’s next?

It’s been an exciting journey, and we’ve learned a lot from the process. It’s difficult (perhaps impossible) to call an i18n project truly complete.  Expanding into new languages will surface slippery bugs and expose new challenges. Budget pressure will challenge you to find ways of cutting costs and increasing efficiency. In addition, you will discover ways in which you can enhance the localized experience even more for users.

There’s a long list of things we’d like to improve upon, but here are some of the highlights:

  • Collation. String comparison is language sensitive, and as such, the code you’ve written to lexicographically sort lists and tables of data in your app is probably doing the wrong thing for some of your users. This is especially apparent in languages that use logographic writing systems (such as Chinese or Japanese) as opposed to languages that use alphabets (like English or Spanish).
  • Support for right-to-left languages like Arabic and Hebrew.
  • Localizing API responses is harder than localizing static copy in your user interface, as it takes a coordinated effort between teams. In the age of microservices, finding a solution that works well across the myriad of tech stacks that power each service can be very challenging.
  • Localizing maps. We’ll be working on making sure all content in our map-based visualizations is translated.
  • Machine translation has come a long way in recent years, but not far enough to churn out our translations unsupervised. We would, however, like to experiment more with using machine translation as a first pass that translation reviewers then edit for correctness and tone.

I hope you have enjoyed this overview of how Cloudflare internationalized and localized our dashboard.  Check out our careers page for more information on full-time positions and internship roles across the globe.

Making DNS record changes more reliable

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/making-dns-record-changes-more-reliable/

Making DNS record changes more reliable

Making DNS record changes more reliable

DNS is the very first step in accessing any website, API, or pretty much anything on the Internet, which makes it mission-critical to keeping your site up and running. This week, we are launching two significant changes that allow our customers to better maintain and update their DNS records. For customers who use Cloudflare as their authoritative DNS provider, we’ve added a much asked for feature: confirmation to DNS record edits. For our secondary DNS customers, we’re excited to provide a brand new onboarding experience.

Confirm and Commit

One of the benefits of using Cloudflare DNS is that changes quickly propagate to our 200+ data centers. And I mean very quickly: DNS propagation typically takes <5 seconds worldwide. Our UI was set up to allow customers to edit records, click out of the input box, and boom! The record has propagated!

Making DNS record changes more reliable

There are a lot of advantages to fast DNS, but there’s also one clear downside – it leaves room for fat fingering. What if you accidentally toggle the proxy icon, or mistype the content of your DNS record? This could result in users not being able to access your website or API and could cause a significant outage. To protect customers from these kinds of mistakes, we’ve added a Save button for DNS record changes.

Now editing records in the DNS table allows you to take an extra look before committing the change.

Making DNS record changes more reliable

The new confirmation layout applies to all record types and affects any content, TTL, or proxy status changes.

Let us know what you think by filling out the feedback survey linked at the top of the DNS tab in the dashboard.

DeepLinks and ScrollAnchor

Post Syndicated from Drew Dowling original https://blog.cloudflare.com/deeplinks-and-scrollanchor/

DeepLinks and ScrollAnchor

To directly quote Wikipedia:

“Deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website (e.g. http://example.com/path/page), rather than the website’s home page (e.g., http://example.com). The URL contains all the information needed to point to a particular item.”

There are many user experiences in Cloudflare’s Dashboard that are enhanced by the use of deep linking, such as:

  • We’re able to direct users from marketing pages directly into the Dashboard so they can interact with new/changed features.
  • Troubleshooting docs can give clearer, more direct instructions, e.g. "Enable SSL encryption here" vs "Log into the Dashboard, choose your account and zone, navigate to the security tab, change SSL encryption level, blah blah blah".

One of the interesting challenges with deep linking in the Dashboard is that most interesting resources are “locked” behind the context of an account and a zone/domain/website. To illustrate this, look at a tree of possible URL paths into Cloudflare’s Dashboard:

dash.cloudflare.com/ -> root-level resources: login, sign-up, forgot-password, two-factor

dash.cloudflare.com/<accountId>/ -> account-level resources: analytics, workers, domains, stream, billing, audit-log

dash.cloudflare.com/<accountId>/<zoneId> -> zone-level resources: dns, ssl-tls, firewall, speed, caching, page-rules, traffic, etc.

You might notice that in order to deep link to anything more interesting than logging in, a deep linker will need to know a user’s account or zone beforehand. A troubleshooting doc might want to send a user to the Page Rules tab in Dashboard to help a user fix their zone, but the linker doesn’t know what that zone is.

Another highly desired feature was the ability for a deep link to scroll to a particular piece of content on a Dashboard page, making it even easier for users to navigate. Instead of a troubleshooting doc asking a user to fumble around to find a setting, we could helpfully scroll that setting right into view. Now that would be slick!

The solution we came up with involves 3 main parts:

  • Deep link URLs expose an intuitive schema for dynamic value resolution.
  • A React component, DeepLink, consolidates routing/resolving deep links.
  • A React component, ScrollAnchor, encapsulates a simple algorithm which scrolls its content into view when the DOM has “finished loading”.

Just to prove that it works, here’s a GIF of us deep linking to the “TLS 1.3” setting on the security settings page:

DeepLinks and ScrollAnchor

It works! I was asked to select one of my several accounts, then our DeepLink routing component was smart enough to know that I have only one zone within that account and auto-filled the rest of the URL path. After the page was fully loaded, we were automatically scrolled to the TLS 1.3 setting. If you’re curious how all of this works and want to jump into the nitty gritty details, read on!

If you were paying attention to the URL bar in the GIF above, you already know what’s coming. In order to deal with dynamic account/zone resolution, a deep link can use a to query parameter to specify a path into Dashboard. I think it reads quite nicely:

dash.cloudflare.com/?to=/:account/:zone/ssl-tls/edge-certificates

This example is saying that we’d like to link to the “Edge Certificates” section of the “SSL-TLS” product for some account and some zone that a user needs to manually resolve, as you saw above. It’s easy to imagine removing “?to=/” to transform the link URL into the resolved one:

dash.cloudflare.com/<resolvedAccount>/<resolvedZone>/ssl-tls/edge-certificates

The URL-like schema of the to parameter makes it very natural to support different variations such as account-level resources

dash.cloudflare.com/?to=/:account/billing

Or allowing the linker to supply known information

dash.cloudflare.com/?to=/1234567890abcdef/:zone/traffic

This link takes the user to the “Traffic” product tab for some zone inside of account 1234567890abcdef. Indeed, the :account and :zone symbols are placeholders for user-supplied values, but they can be replaced with any permutation of real, known values to speed up resolution time to provide a better UX.

These links are parsed and resolved in our top-level routing component, DeepLink. At a high level, this component contains a series of “resolvers” for unknown symbols that need automatic or user-interactive resolution (i.e. :account and :zone). But before we dive in, let’s take a step back and gain appreciation for how cool this component is.

Cloudflare’s Dashboard is a single page React app, which means we use React Router to create routing components that handle what’s rendered on different URLs:

<Switch>
  <Route path="/login"><Login /></Route>
  <Route path="/sign-up"><Signup /></Route>
  ...
  <AccountRoutes />
</Switch>

When a page is loaded, a lot of things need to happen: API calls need to be made to fetch all the data needed to render a page, like account/user/zone info not cached in the browser, and many components need to be rendered. It turns out that we can improve the UX for many users by blocking React Router and making specific queries to our API, instead of rendering an entire page that incidentally fetches the information we need. For example, there’s no need to render a zone selection page if a user only has one zone, like in our GIF above ☝️.

Resolvers

When a deep link gets parsed and split into parts, the framework iterates over those parts and tries to build a URL string that is later used to redirect users to a specific location in the dashboard.

// to=/:account/:zone/traffic
// parts = [':account', ':zone', 'traffic']
for (const part of parts) {
  // do something with each part
}

We can build up the dynamic URL by looking at prefixes. If a part starts with “:”, it’s considered a symbol that needs to be resolved. Everything else is a static string that just gets appended.

const resolvedParts: string[] = [];
// parts = [':account', ':zone', 'traffic']
for (let part of parts) {
  if (part.startsWith(':')) {
    // resolve
  }

  resolvedParts.push(part);
}
const finalUrl = resolvedParts.join('/');

Symbols are handled by functions we call “resolvers”. A resolver is a function that:

  1. Is async.
  2. Has a context parameter.
  3. Always returns a string – the value it resolves to.

In JavaScript, async functions always return a promise: return values that are not already a Promise are implicitly wrapped in a resolved one. They also allow “await” to be used inside them. The async/await syntax is used for resolvers so they can perform any kind of asynchronous work – such as calling the API – while being able to “pause” execution with “await” until that asynchronous work is done.

Each dynamic symbol has its own resolver. We currently have two resolvers – for account and for zone.

const RESOLVERS: Resolvers = {
  account: accountResolver,
  zone: zoneResolver
};

const resolvedParts: string[] = [];
// parts = [':account', ':zone', 'traffic']
for (let part of parts) {
  if (part.startsWith(':')) {
    // for :account, accountResolver is awaited and returns "abc123"
    // for :zone, zoneResolver is awaited and returns "testsite.io"
    // ctx is the resolver context described below
    part = await RESOLVERS[part.slice(1)](ctx);
  }
  resolvedParts.push(part);
}
const finalUrl = resolvedParts.join('/');

The internal implementation is a little bit more complicated, but this is a rough overview of how our DeepLink works.

Resolver context

We mentioned that each resolver has a context parameter. Context is an object that is passed to resolvers from the DeepLink component and it contains a bunch of handy utilities that give resolvers control over any part of the app. For example, it has access to the Redux store (we use Redux.js in the Dashboard to help us manage the application’s state). It has access to previously resolved values, and to all other parts of the deep link. It also has functions to help with user interactions.
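
To make that shape concrete, here is a minimal TypeScript sketch of what a resolver signature and its context could look like. The exact property names are illustrative assumptions, not our production types.

// Illustrative only – the real context exposes more utilities.
import { Store, AnyAction } from 'redux';

interface ResolverContext {
  store: Store;                 // access to the Redux store
  resolvedParts: string[];      // parts of the deep link resolved so far
  allParts: string[];           // every part of the deep link
  blockRouter: () => void;      // gate React Router while resolving
  unblockRouter: () => void;    // let React Router render a page
  waitForPageAction: (
    pageToAwaitActionOn: string,
    actionType: string
  ) => Promise<AnyAction>;      // observe a user interaction on a page
}

// A resolver is async, receives the context, and resolves to a string.
type Resolver = (ctx: ResolverContext) => Promise<string>;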

User interactions

In many cases, a resolver is not able to resolve without the user’s help. For example, if a user has multiple zones, the resolver working on the :zone symbol needs to wait for the user to select a zone.

const zoneResolver: Resolver = async ctx => {
  const zones = await fetchZone();
  // Just one zone: the :zone symbol can be resolved to zone.name without the user's help
  if (zones.length === 1) return zones[0].name;
  if (zones.length > 1) {
    // need the user's help to pick a zone
  }
};

We already have a page in the dashboard with a zone list that looks like this.

DeepLinks and ScrollAnchor

What we need to do is give the resolver the ability to somehow show this page, and wait for the result of the user’s interaction. You might be asking: “But how do we show this page? You just told me that DeepLink blocks the entire page!” That’s true!

We decided to block the React Router to prevent unnecessary API calls and DOM updates while a deep link is resolving. But there is no harm in showing some part of the UI, if needed. To be able to do that, we added two functions to context – unblockRouter and blockRouter. These functions just toggle the state that is gating our Router component.

const zoneResolver: Resolver = async ctx => {
  // ...
  if (zones.length > 1) {
    // delegate to React Router to render the page with the zone picker
    ctx.unblockRouter();
    // need the user's help to pick a zone
    // block the router again
    ctx.blockRouter();
  }
};
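
As a sketch of how that gating can work, DeepLink can hold a single blocked flag and only render the router when it is unblocked. This is illustrative rather than our exact implementation; resolveDeepLink, history, and Spinner are hypothetical names here.

// Illustrative sketch: DeepLink resolves the ?to= link before letting the router render.
function DeepLink({ children }: { children: React.ReactNode }) {
  const [blocked, setBlocked] = React.useState(true);

  React.useEffect(() => {
    const ctx = {
      blockRouter: () => setBlocked(true),
      unblockRouter: () => setBlocked(false),
      // ...store, waitForPageAction, and the rest of the context
    };
    // resolveDeepLink is a hypothetical entry point that runs the resolvers
    resolveDeepLink(window.location.search, ctx)
      .then(finalUrl => history.push(finalUrl))
      .finally(() => setBlocked(false));
  }, []);

  // While blocked, render a spinner instead of the routed page.
  return blocked ? <Spinner /> : <>{children}</>;
}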

Now, the last piece is to somehow observe user interactions from within the resolver. To be able to do that, we have written a powerful utility.

waitForPageAction

Resolvers are isolated functions that live outside of the application’s components. To be able to observe anything that happens in distant branches of React DOM, we created a function called waitForPageAction. This function takes two parameters:

1. pageToAwaitActionOn – URL string pointing to a page we want to await the user’s action on. For example, “dash.cloudflare.com/123abc”

2. actionType – Unique string describing the action. For example, ZONE_SELECTED.

As you may have guessed, waitForPageAction is an async function. It returns a promise that resolves with action metadata whenever that action happens on the page specified by pageToAwaitActionOn. The promise rejects when the user navigates away from pageToAwaitActionOn. Otherwise, it keeps waiting… forever.

This helps us write code that is very easy to understand.

const zoneResolver: Resolver = async ctx => {
  // ...
  if (zones.length > 1) {
    // delegate to React Router to render the page with the zone picker
    ctx.unblockRouter();
    // need the user's help to pick a zone. Wait for the 'ZONE_SELECTED' action at 'dash.cloudflare.com/abc123'
    // action is an object with metadata about the zone. It contains zoneName, which this resolver uses to resolve the :zone symbol
    const action = await ctx.waitForPageAction(
      'dash.cloudflare.com/abc123',
      'ZONE_SELECTED'
    );
    // block the router again
    ctx.blockRouter();
    return action.zoneName;
  }
};

How does waitForPageAction work?

As mentioned above, we use Redux to manage our state. The actionType parameter is nothing more than the type of a Redux action. Whenever a zone is selected, React dispatches a Redux action in an onClick handler.

<ZoneCard onClick={zoneName => { dispatch({ type: 'ZONE_SELECTED', zoneName }) }} />

Now, how does waitForPageAction know that ‘ZONE_SELECTED’ has been dispatched? Aren’t we supposed to write a reducer?!

Not really. waitForPageAction does not change any state; it’s just an observer that resolves whenever a dispatched action satisfies a predicate. And Redux has an API to subscribe to any store changes – store.subscribe(listener).

The listener will be called any time an action is dispatched, and some part of the state tree may have changed. Unfortunately, the listener does not have access to the currently dispatched action. We can only read the current state.

Solution? Store the action in the Redux store!

Redux actions are just plain objects (mostly), and thus easy to serialize. We added a simple reducer that stores all actions in the Redux state.

export function deepLinkReducer(
  state: State = DEFAULT_STATE,
  action: AnyAction
) {
  const nextState = { ...state, lastAction: action };
  return nextState;
}

Anytime an action is dispatched, we can read that action’s metadata in store.getState().lastAction. Now, we have everything we need to finally implement waitForPageAction.

export const waitForPageAction = (store: Store<DashState>) => (
  pageToAwaitActionOn: string,
  actionType: string
) =>
  new Promise<AnyAction>((resolve, reject) => {
    // Subscribe to the Redux store
    const unsubscribe = store.subscribe(() => {
      const state = store.getState();
      const currentPage = state.router.location.pathname;
      const lastAction = state.lastAction;
      if (currentPage !== pageToAwaitActionOn) {
        // user navigated away - unsubscribe and reject
        unsubscribe();
        reject('User navigated away');
      } else if (lastAction.type === actionType) {
        // Action types match! Unsubscribe and resolve with the action object
        unsubscribe();
        resolve(lastAction);
      }
    });
  });

The listener reads the current state and grabs the currentPage and lastAction data. If currentPage doesn’t match pageToAwaitActionOn, it means the user navigated away, and there’s no need to continue resolving the deep link – we unsubscribe, and reject the promise. Deep link resolvers are stopped, and React Router unblocked.

Otherwise, if lastAction.type matches the actionType parameter, it means the action we are waiting on just happened! We unsubscribe and resolve the promise with the action metadata, and the deep link keeps resolving.

That’s it! We also added a similar function – waitForAction – which does exactly the same thing, but is not restricted to a specific page.
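
A minimal sketch of waitForAction (illustrative, not the exact production code) looks like the page-specific version above with the location check removed:

export const waitForAction = (store: Store<DashState>) => (actionType: string) =>
  new Promise<AnyAction>(resolve => {
    const unsubscribe = store.subscribe(() => {
      const { lastAction } = store.getState();
      if (lastAction.type === actionType) {
        // The awaited action was dispatched somewhere in the app
        unsubscribe();
        resolve(lastAction);
      }
    });
  });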

ScrollAnchor component

We implemented a wrapper component ScrollAnchor that will scroll to its wrapped content, making our deep links even more targeted. A client would wrap some content like this:

<ScrollAnchor id="super-important-setting-card">
  <SuperImportantSettingCard />
</ScrollAnchor>

And then reference it via a typical URL anchor:

dash.cloudflare.com/path/to/content#super-important-setting-card

Now I can hear you saying, “what’s the point? Can’t we get the same behavior with any old ID?”

<div id="super-important-setting-card">
  <SuperImportantSettingCard />
</div>

We thought so too! But it turns out that there are a few problems that prevent this super simple approach from working:

  • The Dashboard’s fixed header
  • DOM updates after page load

Because the Dashboard contains a fixed header at the top of the page, we can’t simply anchor to any ID: the content would be scrolled to the top of the browser window, behind the header. Fortunately, there’s a simple CSS solution using negative margins:

<div
  id="super-important-setting-card"
  style={{ paddingTop: headerOffset, marginTop: -headerOffset }}
>
  <SuperImportantSettingCard />
</div>

DeepLinks and ScrollAnchor

This CSS trick alone would work for a static site with a fixed header, but the Dashboard is very dynamic. We found early on in testing that using a normal HTML ID anchor in a URL would cause the browser to jump to the tag on page load, but then the DOM would change in response to newly fetched information or re-rendering, and the anchored content would be pushed out of view.

A solution: scroll to the anchored content after the page content is fully loaded, i.e. after all API calls are resolved, spinners removed, content is rendered. Fortunately, there’s a good way to programmatically scroll a browser window: Element.scrollIntoView(). However, there isn’t a good way to tell when the DOM is finished changing, since it can be modified at any time after page load. Let’s consider two possible strategies for determining when to scroll anchored content into view.

Strategy #1: scroll after a fixed duration. If our goal is to make sure we only scroll to content after a page is “fully loaded”, we can simplify the problem by making some assumptions. Namely, we can assume a maximum amount of time it will take a given page to fetch resources from the backend and re-render the DOM. Let’s call this assumed max duration M milliseconds. We can then easily scroll to some content by running a timeout on page load:

setTimeout(() => scrollTo(htmlId), M)

The problem with this approach is that the DOM might finish updating before or after we scroll. We end up with vertical alignment problems (as the DOM is still settling) or a jarring, unexpected scroll (if we scroll long after the DOM is settled). Both options are bad UX, and in practice it’s difficult to choose a duration constant M that is “just right” for every single page.

Strategy #2: scroll after the DOM has “settled”. If we know that choosing a good duration M for every page isn’t practical, we should try to come up with an algorithm that can choose a better M:

  1. Define an arbitrary threshold of DOM “busyness”, B milliseconds.
  2. On page load, start a timer that will scroll to anchored content after B milliseconds.
  3. If we observe any changes to the DOM, reset the timer.
  4. Once the timer expires, we know that the DOM hasn’t changed in B milliseconds.

By varying our choice of B, we’re able to have some control over how long we’re willing to wait for a page to “finish loading”. If B is 0 milliseconds, we’ll scroll to the anchored content immediately. If it’s 1000 milliseconds, we’ll wait a full second after any DOM change before scrolling. This algorithm is more resilient than fixed threshold scrolling since it explicitly listens to the DOM, but the chosen threshold is somewhat arbitrary. After some trial and error loading a sample of Dashboard pages, we determined that a 500 millisecond busyness threshold was sufficient to allow all content to load onto a page. Here’s what the implementation looks like:

const SETTLE_THRESHOLD = 500;

// Scroll to the anchor and stop observing once we do.
const scrollThunk = (observer: MutationObserver) => {
  scrollToAnchor(id);
  observer.disconnect();
};

// Clear the pending timer and start a new one.
const resetTimeout = (
  timer: number,
  fn: (observer: MutationObserver) => void,
  delay: number,
  observer: MutationObserver
) => {
  window.clearTimeout(timer);
  return window.setTimeout(fn, delay, observer);
};

let domTimer: number;

// Any DOM mutation resets the "settle" timer.
const observer = new MutationObserver((_mutationsList, observer) => {
  domTimer = resetTimeout(domTimer, scrollThunk, SETTLE_THRESHOLD, observer);
});

observer.observe(document.body, { childList: true, subtree: true });

domTimer = window.setTimeout(scrollThunk, SETTLE_THRESHOLD, observer);
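
The scrollToAnchor helper referenced above is the easy part. A rough sketch (assuming the padding/negative-margin trick handles the fixed header offset) is just a lookup plus scrollIntoView:

// Illustrative helper: find the anchored element and scroll it into view.
function scrollToAnchor(id: string) {
  const element = document.getElementById(id);
  if (!element) return;

  // The padding/negative-margin trick described earlier keeps the content
  // from hiding behind the fixed header once we scroll to it.
  element.scrollIntoView({ behavior: 'smooth', block: 'start' });
}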

A key assumption is that API calls take roughly the same amount of time to resolve. If most fetches take 250ms to resolve but others take 1500ms, we might see that the DOM hasn’t changed for a while, conclude that it has settled, and scroll too early. Who knew there would be so much work involved in scrolling!

Conclusion

There you have it. A fully-featured deep linking solution with an intuitive schema, React Router blocking, autofilling, and scrolling. Thanks for reading.

Thinking about color

Post Syndicated from Sam Mason de Caires original https://blog.cloudflare.com/thinking-about-color/

Color is my day-long obsession, joy and torment – Claude Monet

Thinking about color

Thinking about color

Over the last two years we’ve tried to improve our usage of color at Cloudflare. There were a number of forcing functions that made this work a priority. As a small team of designers and engineers we had inherited a bunch of design work that was a mix of color values built up by multiple teams. As a result, it was difficult and unnecessarily time-consuming to add new colors when building new components.

We also wanted to improve our accessibility. While we were doing pretty well, we had room for improvement, largely around how we used green. As our UI is increasingly centered around visualizations of large data sets we wanted to push the boundaries of making our analytics as visually accessible as possible.

Cloudflare had also undergone a rebrand around 2016. While our marketing site had rolled out an updated set of visuals, our product UI, as well as a number of existing web properties, were still using various versions of our old palette.

Our product palette wasn’t well balanced by itself. Many colors had been chosen one or two at a time. You can see how we chose blueberry, ice, and water at a different point in time than marine and thunder.

Thinking about color
The color section of our theme file was partially ordered chronologically

Lacking visual cohesion within our own product, we definitely weren’t providing a cohesive visual experience between our marketing site and our product. The transition from the nice blues and purples to our green CTAs wasn’t the streamlined experience we wanted to afford our users.

Thinking about color
Our app dashboard in 2017

Reworking our Palette

Our first step was to audit what we already had. Cloudflare has been around long enough to have more than one website. Beyond cloudflare.com, we have dozens of publicly accessible web properties: from our community forums, support docs, blog, and status page to numerous micro-sites.

All-in-all we have dozens of front-end codebases that each represent one more chance to introduce entropy to our visual language. So we were curious to answer the question – what colors were we currently using? Were there consistent patterns we could document for further reuse? Could we build a living style guide that didn’t cover just one site, but all of them?

Thinking about color
Screenshots of pages from cloudflare.com contrasted with screenshots from our product in 2017

Our curiosity got the best of us and we went about exploring ways we could visualize our design language across all of our sites.

Thinking about color
Above – our product palette. Below – our marketing palette.

A time machine for color

As we first started to identify the scale of our color problems, we tried to think outside the box on how we might explore the problem space. After an initial brainstorming session, we combined the Internet Archive’s Wayback Machine with the CSS Stats API to build an audit tool that shows how our various websites’ visual properties change over time. We can dynamically select which sites we want to compare and scrub through time to see the changes.
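
As a rough illustration of the idea (a simplified sketch, not the audit tool itself), you can ask the Wayback Machine’s availability API for the snapshot of a site closest to a given date, fetch that snapshot, and pull out the colors it used – the real tool leaned on CSS Stats for properly parsed declarations:

// Simplified sketch: list the hex colors a site used at a point in time.
async function colorsAtPointInTime(url: string, timestamp: string /* YYYYMMDD */) {
  // The Wayback Machine availability API returns the closest archived snapshot.
  const res = await fetch(
    `https://archive.org/wayback/available?url=${encodeURIComponent(url)}&timestamp=${timestamp}`
  );
  const { archived_snapshots } = await res.json();
  const snapshotUrl = archived_snapshots?.closest?.url;
  if (!snapshotUrl) return [];

  // Crude color extraction: scrape hex values out of the snapshot's markup and CSS.
  const html = await (await fetch(snapshotUrl)).text();
  const hexColors = html.match(/#(?:[0-9a-fA-F]{3}){1,2}\b/g) ?? [];
  return [...new Set(hexColors.map(color => color.toLowerCase()))];
}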

Below is a visualization of palettes from 9 different websites changing over a period of 6 years. Above the palettes is a component that spits out the common colors across all of these sites. The only two common colors across all properties (appearing for only a brief flash) were #ffffff (white) and transparent. Over time we haven’t been very consistent with ourselves.

Thinking about color

If we drill in to look at our marketing site compared to our dashboard app – it looks like the video below. We see a bit more overlap at first and then a significant divergence at the 16 second mark when our product palette grew significantly. At the 22 second mark you can see the marketing palette completely change as a result of the rebrand while our product palette stays the same. As time goes on you can see us becoming more and more inconsistent across the two code bases.

Thinking about color

As a product team, we had some catching up to do to improve our usage of color and to align ourselves with the company brand. The good news was, there was nowhere to go but up.

This style of historical audit gives us a visual indication backed by real data. We can show stakeholders how consistent our usage of color is across products and whether we are getting better or worse over time. Having this type of feedback loop was invaluable for us, as auditing this manually is incredibly time-consuming and so often doesn’t get done. Hopefully, just as it’s now standard to track various performance metrics over time at a company, it will become standard to be able to visualize your current levels of design entropy.

Picking colors

After our initial audit revealed there wasn’t a lot of consistency across sites, we went to work to try and construct a color palette that could potentially be used for sites the product team owned. It was time to get our hands dirty and start “picking colors.”

Hindsight of course is always 20/20. We didn’t start out on day one trying to generate scales based on our brand palette. No, our first bright idea was to generate the entire palette from a single color.

Our logo is made up of two oranges. Both of these seemed like prime candidates to generate a palette from.

Thinking about color

We played around with a number of algorithms that took a single color and created a palette. From the initial color, we generated an array of scales for each hue. Initial attempts applied the exact same luminosity curve to each hue, but because perceived brightness varies so much across hues, this resulted in wildly different contrasts at each step of the scale.
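
A simplified sketch of that first approach (not our actual generator) rotates the hue of a seed color around the wheel and applies the same lightness curve to every hue – which is exactly what produces the uneven contrast described below:

// Naive palette generation: the same lightness steps for every hue.
function generatePalette(seedHue: number, hueCount = 9, steps = 10): string[][] {
  const palette: string[][] = [];
  for (let i = 0; i < hueCount; i++) {
    const hue = Math.round((seedHue + (360 / hueCount) * i) % 360);
    const scale: string[] = [];
    for (let step = 0; step < steps; step++) {
      // Identical curve per hue – perceived contrast ends up wildly different.
      const lightness = 95 - step * (90 / (steps - 1));
      scale.push(`hsl(${hue}, 70%, ${Math.round(lightness)}%)`);
    }
    palette.push(scale);
  }
  return palette;
}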

Below are a few of our initial attempts at palette generation. Jeeyoung Jung did a brilliant writeup around designing palettes last year.

Thinking about color
Visualizing peaks of intensity across hues

We can see the intensity of the colors change across hue in peaks, with yellow and green being the most dominant. One of the downsides of this is that when you are rapidly iterating through theming options, the inconsistent relationships between steps across hues can make it time-consuming or impossible to keep visual harmony in your interface.

The video below is another way to visualize this phenomenon. The dividing line in the color picker indicates which part of the palette will be accessible with black and white. Notice how drastically the line changes around green and yellow. And then look back at the charts above.

Thinking about color
Demo of https://kevingutowski.github.io/color.html

After fiddling with a few different generative algorithms (we made a lot of ugly palettes…) we decided to try a more manual approach. We pursued creating custom curves for each hue in an effort to keep the contrast scales as optically balanced as possible.

Thinking about color
Heavily muted palette

Thinking about color

Generating different color palettes makes you confront a basic question. How do you tell if a palette is good? Are some palettes better than others? In an effort to answer this question we constructed various feedback loops to help us evaluate palettes as quickly as possible. We tried a few methods to stress test a palette. At first we attempted to grab the “nearest color” for a bunch of our common UI colors. This wasn’t always helpful as sometimes you actually want the step above or below the closest existing color. But it was helpful to visualize for a few reasons.

Thinking about color
Generated palette above a set of components previewing the old and new palette for comparison

Sometime during our exploration in this space, we stumbled across this tweet thread about building a palette for pixel art. There are a lot of places web and product designers can draw inspiration from game designers.

Thinking about color
Two color palettes visualized to create 3d objects

Thinking about color
A color palette applied in a few different contexts

Here we see a similar concept where a number of different palettes are applied to the same component. This view shows us two things, the different ways a single palette can be applied to a sphere, and also the different aesthetics across color palettes.

Thinking about color
Different color palettes previewed against a common component

It’s almost surprising that the default way to construct a color palette for apps and sites isn’t to build it while previewing its application against the most common UI patterns. As designers, there are a lot of consistent uses of color we could have baselines for. Many enterprise apps are centered around a white background with blue as the primary color, with mixtures of grays to add depth around cards and page sections. Red is often used for destructive actions like deleting some type of record. Gray for secondary actions. Maybe it’s an outline button with the primary color for secondary actions. Either way – the margins between the patterns aren’t that large in the grand scheme of things.

Consider the use case of designing UI while the palette or usage of color hasn’t been established. Given a single palette, you might want to experiment with applying that palette in a variety of ways that will output a wide variety of aesthetics. Alternatively, you may need to test out several different palettes. These are two different modes of exploration that can be extremely time-consuming to work through. It can be non-trivial to keep an in-progress design synced with several different options for color application, even with the best use of layer comps or symbols.

How do we visualize the various ways a palette will look when applied to an interface? Here are examples of how palettes are shown on a palette list for pixel artists.

Thinking about color

Thinking about color
https://lospec.com/palette-list/vines-flexible-linear-ramps

One method of visualization is to define a common set of primitive ui elements and show each one of them with a single set of colors applied. In isolation this can be helpful. This mode would make it easy to vet a single combination of colors and which ui elements it might be best applied to.

Alternatively we might want to see a composed interface with the closest colors from the palette applied. Consider a set of buttons that includes red, green, blue, and gray button styles. Seeing all of these together can help us visualize the relative nature of these buttons side by side. Given a baseline palette for common UI, we could swap to a new palette and replace each color with the “closest” color. This isn’t always a foolproof solution, as there are many edge cases to cover – e.g. what happens when replacing a palette of 134 colors with a palette of 24 colors? Even still, this could allow us to quickly take a stab at automating how existing interfaces would change their appearance given a change to the underlying system. Whether locally or against a live site, this mode of working would allow designers to view a color in multiple contexts to truly assess its quality.

Thinking about color

After moving on from the idea of generating a palette from a single color, we attempted to use our logo colors as well as our primary brand colors to drive the construction of modular scales. Our goal was to create a palette that would improve contrast for accessibility, stay true to our visual brand, work predictably for developers, work for data visualizations, and provide the ability to design visually balanced and attractive interfaces. No sweat.

Thinking about color
Brand colors showing Hue and Saturation level

While we knew going in we might not use every step in every hue, we wanted full coverage across the spectrum so that each hue had a consistent optical difference between each step. We also had no idea which steps across which hues we were going to need just yet. As they would just be variables in a theme file it didn’t add any significant code footprint to expose the full generated palette either.

One of the more difficult parts was deciding on the number of steps for the scales. Settling on a fixed number would allow us to edit the palette in the future to a variety of aesthetics and swap the palette out at the theme level without needing to update anything else.

In the future, if and when we did need to augment the available colors, we could edit the entire palette instead of making a one-off addition, which we had found was a difficult way to work over time. In addition to our primary brand colors, we also explored adding scales for yellow/gold, violet, and teal, as well as a gray scale.

The first interface we built for this work output all of the scales vertically, with their contrast scores against both white and black on the right-hand side. To aid scannability, we bolded the values that were above the 4.5:1 contrast threshold. As we edited the curves, we could see how the contrast ratios were affected at each step. Below you can see an early starting point before the scales were balanced: red has 6 accessible combos with white, while yellow only has 1. We initially explored having the gray scale be larger than the others.
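
The contrast scores themselves follow the WCAG formula. A small sketch of the calculation for a pair of 6-digit hex colors:

// WCAG relative luminance for a 6-digit hex color.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [0, 2, 4].map(i => {
    const channel = parseInt(hex.replace('#', '').slice(i, i + 2), 16) / 255;
    return channel <= 0.03928
      ? channel / 12.92
      : Math.pow((channel + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colors; >= 4.5 passes WCAG AA for normal text.
function contrastRatio(a: string, b: string): number {
  const [lighter, darker] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (lighter + 0.05) / (darker + 0.05);
}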

Thinking about color
Early iteration of palette preview during development

As both screen luminosity and ambient light can affect perception of color we developed on two monitors, one set to maximum and one set to minimum brightness levels. We also replicated the color scales with a grayscale filter immediately below to help illustrate visual contrast between steps AND across hues. Bouncing back and forth between the grayscale and saturated version of the scale serves as a great baseline reference. We found that going beyond 10 steps made it difficult to keep enough contrast between each step to keep them distinguishable from one another.

Taking a page from our game design friends – as we were balancing the scales and exploring how many steps we wanted in the scales, we were also stress testing the generated colors against various analytics components from our component library.

Our slightly random collection of grays had been a particular pain point as they appeared muddy in a number of places within our interface. For our new palette we used the slightest hint of blue to keep our grays consistent and just a bit off from being purely neutral.

Thinking about color
Optically balanced scales

With a palette consisting of 90 colors, the amount of combinations and permutations that can be applied to data visualizations is vast and can result in a wide variety of aesthetic directions. The same palette applied to both line and bar charts with different data sets can look substantially different, enough that they might not be distinguishable as being the exact same palette. Working with some of our engineering counterparts, we built a pipeline that would put up the same components rendered against different data sets, to simulate the various shapes and sizes the graph elements would appear in. This allowed us to rapidly test the appearance of different palettes. This workflow gave us amazing insights into how a palette would look in our interface. No matter how many hours we spent staring at a palette, we couldn’t get an accurate sense of how the colors would look when composed within an interface.

Thinking about color
Analytics charts with blues and oranges. Telling the colors of the lines apart is a different visual experience than separating out the dots in sequential order as they appear in the legend.

We experimented with a number of ideas on visualizing different sizes and shapes of colors and how they affected our perception of how much a color was changing element to element. In the first frame it is most difficult to tell the values at 2% and 6% apart given the size and shape of the elements.

Thinking about color
Stress testing the application of a palette to many shapes and sizes

We’ve begun to package up some of this work into a web app others can use to create or import a palette and preview multiple depths of accessible combinations against a set of UI elements.

The goal is to make it easier for anyone to work seamlessly with color and build beautiful interfaces with accessible color contrasts.

Thinking about color
Color by Cloudflare Design

In an effort to make sure everything we are building will be visually accessible – we built a react component that will preview how a design would look if you were colorblind. The component overlays SVG filters to simulate alternate ways someone can perceive color.

Thinking about color
Analytics component previewed against 8 different types of color blindness

While this is previewing an analytics component, really any component or page can be previewed with this method.

import React from "react"

// The SVG filters that simulate different types of color vision deficiency
const filters = [
  'achromatopsia',
  'protanomaly',
  'protanopia',
  'deuteranomaly',
  'deuteranopia',
  'tritanomaly',
  'tritanopia',
  'achromatomaly',
]

// Renders its children once per filter, each wrapped in a container that
// applies the corresponding SVG filter from filters.svg
const ColorBlindFilter = ({ itemPadding, itemWidth, ...props }) => {
  return (
    <div {...props}>
      {filters.map((filter, i) => (
        <div
          style={{ filter: 'url(/filters.svg#' + filter + ')' }}
          width={itemWidth}
          px={itemPadding}
          key={i + filter}
        >
          {props.children}
        </div>
      ))}
    </div>
  )
}

ColorBlindFilter.defaultProps = {
  display: 'flex',
  justifyContent: 'space-around',
  flexWrap: 'wrap',
  width: 1,
  itemWidth: 1/4
}

export default ColorBlindFilter

We’ve also released a Figma plugin that simulates this visualization for a component.

After quite a few iterations, we had finally come up with a color palette. Each scale was optically aligned with our brand colors. The 5th step in each scale is the closest to the original brand color, but adjusted slightly so it’s accessible with both black and white.

Thinking about color
Our preview panel for palette development, showing a fully desaturated version of the palette for reference

Lyft’s writeup “Re-approaching color” and Jeeyoung Jung’s “Designing Systematic Colors” are some of the best write-ups on how to work with color at scale that you can find.

Color migrations

Thinking about color
A visual representation of how the legacy palette colors would translate to the new scales.

Getting a team of people to agree on a new color palette is a journey in and of itself. By the time you get everyone to consensus it’s tempting to just collapse into a heap and never think about colors ever again. Unfortunately the work doesn’t stop at this point. Now that we’ve picked our palette, it’s time to get it implemented so this bike shed is painted once and for all.

If you are porting an old legacy part of your app to be updated to the new style guide like we were, even the best color documentation can fall short in helping someone make the necessary changes.

We found it was more common than expected for engineers and designers to want to know the new equivalent of a color they were familiar with. During the transition between palettes, we had an interface where people could input any color and get the closest color within our palette.

When migrating colors, there are times when the closest color isn’t actually what you want. In the scenario where your brand color has changed from blue to purple, you might want to port all blues to the closest purple within the palette, not to the closest blues that might still exist in your palette. To help visualize migrations, as well as get suggestions on how to consolidate values within the old scale, we of course built a little tool. Here we can define those translations and import a color palette from a URL. As we still have a number of web properties to update to our palette, this simple tool has continued to prove useful.
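
The “closest color” suggestion behind both of these tools can be as simple as a nearest-neighbour search, sketched here in RGB space (a perceptual color space would give better matches):

// Find the palette entry closest to a target color using squared RGB distance.
function hexToRgb(hex: string): [number, number, number] {
  const clean = hex.replace('#', '');
  return [0, 2, 4].map(i => parseInt(clean.slice(i, i + 2), 16)) as [number, number, number];
}

function closestColor(target: string, palette: string[]): string {
  const [tr, tg, tb] = hexToRgb(target);
  let best = palette[0];
  let bestDistance = Infinity;
  for (const candidate of palette) {
    const [r, g, b] = hexToRgb(candidate);
    const distance = (r - tr) ** 2 + (g - tg) ** 2 + (b - tb) ** 2;
    if (distance < bestDistance) {
      bestDistance = distance;
      best = candidate;
    }
  }
  return best;
}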

Thinking about color

We wanted to be as gentle as possible in transitioning usage to the new palette. While the developers found string names for colors brittle and unpredictable, that was still a more familiar system for some than this new one. We first added our new palette to the existing theme for use going forward, and then started to port colors for existing components and pages.

For our colleagues, we wrote out desired translations and offered warnings in the console that a color was deprecated, with a reference to the new theme value to use.

Thinking about color
Example of console warning when using deprecated color

Thinking about color
Example of how to check for usage of deprecated values
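
One way to implement that kind of warning (a sketch assuming colors are read off a theme object; the color names and values here are hypothetical) is to wrap deprecated entries in getters that log before returning the replacement value:

// Map deprecated color names to their replacements and warn on access.
const DEPRECATED_COLORS: Record<string, { replacement: string; value: string }> = {
  blueberry: { replacement: 'blue.5', value: '#2f7bbf' }, // hypothetical entry
};

function withDeprecationWarnings<T extends Record<string, string>>(themeColors: T): T {
  const proxied: Record<string, string> = { ...themeColors };
  for (const [name, { replacement, value }] of Object.entries(DEPRECATED_COLORS)) {
    Object.defineProperty(proxied, name, {
      get() {
        console.warn(`Color "${name}" is deprecated; use theme.colors.${replacement} instead.`);
        return value;
      },
    });
  }
  return proxied as T;
}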

While we had a few bugs along the way, the team was supportive and helped us fix bugs almost as quickly as we could find them.

We’re still in the process of updating our web properties with our new palette, largely prioritizing accessibility first while trying to create a more consistent visual brand as a nice by-product of the work. A small example of this is our system status page. In the first image, the blue links in the header, the green status bar, and the about copy were all inaccessible against their backgrounds.

A lot of the changes have been subtle. Most notably, the green we use in the dashboard is a lot more in line with our brand colors than before. In addition, we’ve been able to add visual balance by not just using straight black text on background colors; here we used one of the darker steps from the corresponding scale to give it a bit more visual balance.

Thinking about color
Example page within our Dashboard in 2017 vs 2019

While we aren’t perfect yet, we’re making progress towards more visual cohesion across our marketing materials and products.

2017

Thinking about color
Our app dashboard in 2017

2019

Next steps

Trying to keep dozens of sites all using the same palette in a consistent manner across time is a task that you can never complete. It’s an ongoing maintenance problem. Engineers familiar with the color system leave, new engineers join and need to learn how the system works. People still launch sites using a different palette that doesn’t meet accessibility standards. Our work continues to be cut out for us. As they say, a garden doesn’t tend itself.

If we do ever revisit our brand colors, we’re excited to have infrastructure in place to update our apps and several of our satellite sites with significantly less effort than our first time around.

Resources

Some of our favorite materials and resources we found while exploring this problem space.


Supercharging Firewall Events for Self-Serve

Post Syndicated from Alex Cruz Farmer original https://blog.cloudflare.com/supercharging-firewall-events-for-self-serve/

Supercharging Firewall Events for Self-Serve

Today, I’m very pleased to announce the release of a completely overhauled version of our Firewall Event log to our Free, Pro and Business customers. This new Firewall Events log is now available in your Dashboard, and you are not required to do anything to receive this new capability.

Supercharging Firewall Events for Self-Serve

No more modals!

We have done away with those pesky modals, providing a much smoother user experience. To review more detailed information about an event, you simply click anywhere on the event list row.

Supercharging Firewall Events for Self-Serve

In the expanded view, you are provided with all the information you may need to identify or diagnose issues with your Firewall or find more details about a potential threat to your application.

Additional matches per event

Cloudflare has several Firewall features to give customers granular control of their security. With this control comes some complexity when debugging why a request was stopped by the Firewall. To help clarify what happened, we have provided an “Additional matches” count at the bottom for events triggered by multiple services or rules for the same request. Clicking the number expands a list showing each rule and service along with the corresponding action.

Supercharging Firewall Events for Self-Serve

Search for any field within a Firewall Event

This is one of my favourite parts of our new Firewall Event Log. Many of our customers have expressed their frustration with the difficulty of pinpointing specific events. This is where our new search capabilities come into their own. Customers can now filter and freeform search for any field that is visible in a Firewall Event!

Let’s say you want to find all the requests originating from a specific ISP or country where your Firewall Rules issued a JavaScript challenge. There are two different ways to do this in the UI.

Firstly, when in the detail view, you can create an include or exclude filter for that field value.

Supercharging Firewall Events for Self-Serve

Secondly, you can create a freeform filter using the “+ Add Filter” button at the top, or edit one of the already filtered fields:

Supercharging Firewall Events for Self-Serve

As illustrated above, with our WAF Managed Rules enabled in log only, we can see all the rules which would have triggered if this was a legitimate attack. This allows you to confirm that your configuration is working as expected.

Scoping your search to a specific date and time

In our old Firewall Event log, to find an event, users had to traverse through many pages to find events from a specific date. The last major change we have added is the capability to select a time window to view events between two points in time within the last two weeks. In the time selection window, Free and Pro customers can choose a 24-hour time window, and our Business customers can view up to 72 hours.

Supercharging Firewall Events for Self-Serve

We want your feedback!

We need your help! Please feel free to leave any feedback on our Community forums, or open a Support ticket with any problems you find. Your feedback is critical to our product improvement process, and we look forward to hearing from you.

One more thing… new Speed Page

Post Syndicated from Andrew Galloni original https://blog.cloudflare.com/new-speed-page/

Congratulations on making it through Speed Week. In the last week, Cloudflare has: described how our global network speeds up the Internet, launched a HTTP/2 prioritisation model that will improve web experiences on all browsers, launched an image resizing service which will deliver the optimal image to every device, optimized live video delivery, detailed how to stream progressive images so that they render twice as fast – using the flexibility of our new HTTP/2 prioritisation model and finally, prototyped a new over-the-wire format for JavaScript that could improve application start-up performance especially on mobile devices. As a bonus, we’re also rolling out one more new feature: “TCP Turbo” automatically chooses the TCP settings to further accelerate your website.

As a company, we want to help every one of our customers improve web experiences. The growth of Cloudflare, along with the increase in features, has often made simple questions difficult to answer:

  • How fast is my website?
  • How should I be thinking about performance features?
  • How much faster would the site be if I were to enable a particular feature?

This post will describe the exciting changes we have made to the Speed Page on the Cloudflare dashboard to give our customers a much clearer understanding of how their websites are performing and how they can be made even faster. The new Speed Page consists of:

  • A visual comparison of your website loading on Cloudflare, with caching enabled, compared to connecting directly to the origin.
  • The measured improvement expected if any performance feature is enabled.
  • A report describing how fast your website is on desktop and mobile.

We want to simplify the complexity of making web experiences fast and give our customers control. Take a look – we hope you like it.

Why do fast web experiences matter?

Customer experience: No one likes slow service. Imagine going to a restaurant where the service is slow, especially when you arrive; you are not likely to go back or recommend it to your friends. It turns out the web works in the same way, and Internet customers are even more demanding. As many as 79% of customers who are “dissatisfied” with a website’s performance are less likely to buy from that site again.

Engagement and Revenue: There are many studies explaining how speed affects customer engagement, bounce rates and revenue.

Reputation: There is also brand reputation to consider, as customers associate an online experience with the brand. One study found that for 66% of those sampled, website performance influences their impression of the company.

Diversity: Mobile traffic has grown to be larger than its desktop counterpart over the last few years. Mobile customers have become increasingly demanding and expect seamless Internet access regardless of location.

Mobile provides a new set of challenges that includes the diversity of device specifications. When testing, be aware that the average mobile device is significantly less capable than the top-of-the-range models. For example, there can be orders-of-magnitude disparity in the time different mobile devices take to run JavaScript. Another challenge is the variance in mobile performance, as customers move from a strong, high-quality office network to mobile networks of different speeds (3G/5G) and quality, within the same browsing session.

New Speed Page

There is compelling evidence that a faster web experience is important for anyone online. Most of the major studies involve the largest tech companies, who have whole teams dedicated to measuring and improving web experiences for their own services. At Cloudflare we are on a mission to help build a better and faster Internet for everyone – not just the selected few.

Delivering fast web experiences is not a simple matter. That much is clear.
Knowing what to send and when requires a deep understanding of every layer of the stack, from TCP tuning, protocol-level prioritisation and content delivery formats through to the intricate mechanics of browser rendering. You will also need a global network that strives to be within 10 ms of every Internet user. The intrinsic value of such a network should be clear to everyone. Cloudflare has this network, but it also offers many additional performance features.

With the Speed Page redesign, we are emphasizing the performance benefits of using Cloudflare and the additional improvements possible from our features.

The de facto standard for measuring website performance has been WebPageTest. Having its creator in-house at Cloudflare encouraged us to use it as the basis for website performance measurement. So, what is the easiest way to understand how a web page loads? A list of statistics does not paint a full picture of the actual user experience. One of the cool features of WebPageTest is that it can generate a filmstrip of screen snapshots taken during a web page load, enabling us to quantify how a page loads, visually. This view makes it significantly easier to determine how long the page is blank for, and how long it takes for the most important content to render. Being able to look at the results in this way provides the ability to empathise with the user.

How fast on Cloudflare?

After moving your website to Cloudflare, you may have asked: How fast did this decision make my website? Well, now we provide the answer:

Comparison of website performance using Cloudflare. 

As well as the increase in speed, we provide filmstrips of before and after, so that it is easy to compare and understand how differently a user will experience the website. If our tests are unable to reach your origin and you are already set up on Cloudflare, we will test with development mode enabled, which disables caching and minification.

Site performance statistics

How can we measure the user experience of a website?

Traditionally, page load was the important metric. Page load is a technical measurement used by browser vendors that has no bearing on the presentation or usability of a page. The metric reports on how long it takes not only to load the important content but also all of the 3rd party content (social network widgets, advertising, tracking scripts etc.). A user may very well not see anything until after all the page content has loaded, or they may be able to interact with a page immediately, while content continues to load.

A user will not decide whether a page is fast by a single measure or moment. A user will perceive how fast a website is from a combination of factors:

  • when they see any response
  • when they see the content they expect
  • when they can interact with the page
  • when they can perform the task they intended

Experience has shown that if you focus on one measure, it will likely be to the detriment of the others.

Importance of Visual response

If an impatient user navigates to your site and sees no content for several seconds or no valuable content, they are likely to get frustrated and leave. The paint timing spec defines a set of paint metrics, when content appears on a page, to measure the key moments in how a user perceives performance.

First Contentful Paint (FCP) is the time when the browser first renders any DOM content.

First Meaningful Paint (FMP) is the point in time when the page’s “primary” content appears on the screen. This metric should relate to what the user has come to the site to see and is designed as the point in time when the largest visible layout change happens.

Speed Index attempts to quantify the value of the filmstrip rather than using a single paint timing. The Speed Index measures the rate at which content is displayed – essentially the area above the curve. In the chart below, from our progressive image feature, you can see that reaching 80% visual completeness happens much earlier for the parallelized (red) load than for the regular (blue) one.
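
In code terms, Speed Index can be approximated from a series of visual-completeness samples taken over time, like those WebPageTest derives from the filmstrip. A rough sketch of the calculation:

// Approximate Speed Index from (time, completeness) samples. Lower is better:
// it is the area above the visual-completeness curve.
interface Sample {
  time: number;         // milliseconds since navigation start
  completeness: number; // visual completeness, 0..1
}

function speedIndex(samples: Sample[]): number {
  let index = 0;
  for (let i = 1; i < samples.length; i++) {
    const interval = samples[i].time - samples[i - 1].time;
    index += interval * (1 - samples[i - 1].completeness);
  }
  return index;
}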


Importance of interactivity

The same impatient user is now happy that the content they want to see has appeared. They will still become frustrated if they are unable to interact with the site.
Time to Interactive is the time it takes for content to be rendered and for the page to be ready to receive input from the user. Technically, this is defined as when the browser’s main processing thread has been idle for several seconds after First Meaningful Paint.

The Speed Tab displays these key metrics for mobile and desktop.

How much faster on Cloudflare?

The Cloudflare Dashboard provides a list of performance features which can, admittedly, be both confusing and daunting. What would be the benefit of turning on Rocket Loader, and on which performance metrics will it have the most impact? If you upgrade to Pro, what will be the value of the enhanced HTTP/2 prioritisation? The optimization section answers these questions.

Tests are run with each performance feature turned on and off. The values of the appropriate performance metrics for each test are displayed, along with the improvement. You can enable or upgrade the feature from this view. Here are a few examples:

If Rocket Loader were enabled for this website, the render-blocking JavaScript would be deferred causing first paint time to drop from 1.25s to 0.81s – an improvement of 32% on desktop.

Image heavy sites do not perform well on slow mobile connections. If you enable Mirage, your customers on 3G connections would see meaningful content 1s sooner – an improvement of 29.4%.

So how about our new features?

We tested the enhanced HTTP/2 prioritisation feature on an Edge browser on desktop and saw meaningful content display 2s sooner – an improvement of 64%.

This is a more interesting result, taken from the blog example used to illustrate progressive image streaming. At first glance, the improvement of 29% in Speed Index is good. The filmstrip comparison shows a more significant difference: in this case, the page with no images shown is already 43% visually complete for both scenarios after 1.5s, but at 2.5s the difference is 77% compared to 50%.

This is a great example of how metrics do not tell the full story. They cannot completely replace viewing the page loading flow and understanding what is important for your site.

How to try

This is our first iteration of the new Speed Page and we are eager to get your feedback. We will be rolling this out to beta customers who are interested in seeing how their sites perform. To be added to the queue for activation of the new Speed Page please click on the banner on the overview page,

or click on the banner on the existing Speed Page.

Protecting coral reefs with Nemo-Pi, the underwater monitor

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/coral-reefs-nemo-pi/

The German charity Save Nemo works to protect coral reefs, and they are developing Nemo-Pi, an underwater “weather station” that monitors ocean conditions. Right now, you can vote for Save Nemo in the Google.org Impact Challenge.

Nemo-Pi — Save Nemo

Save Nemo

The organisation says there are two major threats to coral reefs: divers and climate change. To make diving safer for reefs, Save Nemo installs buoy anchor points where diving tour boats can anchor without damaging corals in the process.

reef damaged by anchor
boat anchored at buoy

In addition, they provide dos and don’ts for how to behave on a reef dive.

The Nemo-Pi

To monitor the effects of climate change, and to help divers decide whether conditions are right at a reef while they’re still on shore, Save Nemo is also in the process of perfecting Nemo-Pi.

Nemo-Pi schematic — Nemo-Pi — Save Nemo

This Raspberry Pi-powered device is made up of a buoy, a solar panel, a GPS device, a Pi, and an array of sensors. Nemo-Pi measures water conditions such as current, visibility, temperature, carbon dioxide and nitrogen oxide concentrations, and pH. It also uploads its readings live to a public webserver.

Inside the Nemo-Pi device — Save Nemo
Inside the Nemo-Pi device — Save Nemo
Inside the Nemo-Pi device — Save Nemo

The Save Nemo team is currently doing long-term tests of Nemo-Pi off the coast of Thailand and Indonesia. They are also working on improving the device’s power consumption and durability, and testing prototypes with the Raspberry Pi Zero W.

web dashboard — Nemo-Pi — Save Nemo

The web dashboard showing live Nemo-Pi data

Long-term goals

Save Nemo aims to install a network of Nemo-Pis at shallow reefs (up to 60 metres deep) in South East Asia. Then diving tour companies can check the live data online and decide day-to-day whether tours are feasible. This will lower the impact of humans on reefs and help the local flora and fauna survive.

Coral reefs with fishes

A healthy coral reef

Nemo-Pi data may also be useful for groups lobbying for reef conservation, and for scientists and activists who want to shine a spotlight on the awful effects of climate change on sea life, such as coral bleaching caused by rising water temperatures.

Bleached coral

A bleached coral reef

Vote now for Save Nemo

If you want to help Save Nemo in their mission today, vote for them to win the Google.org Impact Challenge:

  1. Head to the voting web page
  2. Click “Abstimmen” in the footer of the page to vote
  3. Click “JA” in the footer to confirm

Voting is open until 6 June. You can also follow Save Nemo on Facebook or Twitter. We think this organisation is doing valuable work, and that their projects could be expanded to reefs across the globe. It’s fantastic to see the Raspberry Pi being used to help protect ocean life.

The post Protecting coral reefs with Nemo-Pi, the underwater monitor appeared first on Raspberry Pi.

New – Pay-per-Session Pricing for Amazon QuickSight, Another Region, and Lots More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-pay-per-session-pricing-for-amazon-quicksight-another-region-and-lots-more/

Amazon QuickSight is a fully managed cloud business intelligence system that gives you Fast & Easy to Use Business Analytics for Big Data. QuickSight makes business analytics available to organizations of all shapes and sizes, with the ability to access data that is stored in your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, flat files in S3, and (via connectors) data stored in on-premises MySQL, PostgreSQL, and SQL Server databases. QuickSight scales to accommodate tens, hundreds, or thousands of users per organization.

Today we are launching a new, session-based pricing option for QuickSight, along with additional region support and other important new features. Let’s take a look at each one:

Pay-per-Session Pricing
Our customers are making great use of QuickSight and take full advantage of the power it gives them to connect to data sources, create reports, and explore visualizations.

However, not everyone in an organization needs or wants such powerful authoring capabilities. Having access to curated data in dashboards and being able to interact with the data by drilling down, filtering, or slicing-and-dicing is more than adequate for their needs. Subscribing them to a monthly or annual plan can be seen as an unwarranted expense, so a lot of such casual users end up not having access to interactive data or BI.

In order to allow customers to provide all of their users with interactive dashboards and reports, the Enterprise Edition of Amazon QuickSight now allows Reader access to dashboards on a Pay-per-Session basis. QuickSight users are now classified as Admins, Authors, or Readers, with distinct capabilities and prices:

Authors have access to the full power of QuickSight; they can establish database connections, upload new data, create ad hoc visualizations, and publish dashboards, all for $9 per month (Standard Edition) or $18 per month (Enterprise Edition).

Readers can view dashboards, slice and dice data using drill downs, filters and on-screen controls, and download data in CSV format, all within the secure QuickSight environment. Readers pay $0.30 for 30 minutes of access, with a monthly maximum of $5 per reader.

Admins have all authoring capabilities, and can manage users and purchase SPICE capacity in the account. The QuickSight admin now has the ability to set the desired option (Author or Reader) when they invite members of their organization to use QuickSight. They can extend Reader invites to their entire user base without incurring any up-front or monthly costs, paying only for the actual usage.

To learn more, visit the QuickSight Pricing page.

A New Region
QuickSight is now available in the Asia Pacific (Tokyo) Region.

The UI is in English, with a localized version in the works.

Hourly Data Refresh
Enterprise Edition SPICE data sets can now be set to refresh as frequently as every hour. In the past, each data set could be refreshed up to 5 times a day. To learn more, read Refreshing Imported Data.

Access to Data in Private VPCs
This feature was launched in preview form late last year, and is now available in production form to users of the Enterprise Edition. As I noted at the time, you can use it to implement secure, private communication with data sources that do not have public connectivity, including on-premises data in Teradata or SQL Server, accessed over an AWS Direct Connect link. To learn more, read Working with AWS VPC.

Parameters with On-Screen Controls
QuickSight dashboards can now include parameters that are set using on-screen dropdown, text box, numeric slider, or date picker controls. The default value for each parameter can be set based on the user name (QuickSight calls this a dynamic default). You could, for example, set an appropriate default based on each user’s office location, department, or sales territory.

To learn more, read about Parameters in QuickSight.

URL Actions for Linked Dashboards
You can now connect your QuickSight dashboards to external applications by defining URL actions on visuals. The actions can include parameters, and become available in the Details menu for the visual.

You can use this feature to link QuickSight dashboards to third party applications (e.g. Salesforce) or to your own internal applications. Read Custom URL Actions to learn how to use this feature.

Dashboard Sharing
You can now share QuickSight dashboards with every user in an account.

Larger SPICE Tables
The per-data set limit for SPICE tables has been raised from 10 GB to 25 GB.

Upgrade to Enterprise Edition
The QuickSight administrator can now upgrade an account from Standard Edition to Enterprise Edition with a click. This enables provisioning of Readers with pay-per-session pricing, private VPC access, row-level security for dashboards and data sets, and hourly refresh of data sets. Enterprise Edition pricing applies after the upgrade.

Available Now
Everything I listed above is available now and you can start using it today!

You can try QuickSight for 60 days at no charge, and you can also attend our June 20th Webinar.

Jeff;

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. The native integration between SNS and Amazon CloudWatch then provides visibility into the number of messages delivered, as well as the number of messages filtered out.
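
As a concrete illustration of that flow, here is a minimal boto3 sketch that attaches a filter policy to a subscription and then publishes a message whose attribute matches it. The ARNs and the attribute name are placeholders.

```python
import json

import boto3

sns = boto3.client("sns")

# Placeholder ARNs -- substitute your own topic and subscription.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:orders"
SUBSCRIPTION_ARN = TOPIC_ARN + ":00000000-0000-0000-0000-000000000000"

# Deliver only messages whose 'event_type' attribute is 'order_placed'.
sns.set_subscription_attributes(
    SubscriptionArn=SUBSCRIPTION_ARN,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"event_type": ["order_placed"]}),
)

# This message carries a matching attribute, so it is delivered;
# a message without it would be filtered out for this subscription.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"order_id": 1234}),
    MessageAttributes={
        "event_type": {"DataType": "String", "StringValue": "order_placed"}
    },
)
```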

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.
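
If you prefer to pull these numbers programmatically rather than through the console, a minimal boto3 sketch might look like this (the topic name is a placeholder):

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hourly sums of filtered-out notifications for a placeholder topic
# named 'orders' over the last 24 hours.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SNS",
    MetricName="NumberOfNotificationsFilteredOut",
    Dimensions=[{"Name": "TopicName", "Value": "orders"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```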

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”.

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.
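
If you would rather create that dashboard from code, a hedged sketch using the CloudWatch PutDashboard API follows; it assumes a topic named 'orders' in us-east-1 and graphs the three headline metrics.

```python
import json

import boto3

cloudwatch = boto3.client("cloudwatch")

# One metric widget graphing published, delivered, and filtered-out
# message counts for a placeholder topic named 'orders'.
widget = {
    "type": "metric",
    "x": 0, "y": 0, "width": 12, "height": 6,
    "properties": {
        "title": "SNS Message Filtering",
        "region": "us-east-1",
        "stat": "Sum",
        "period": 300,
        "metrics": [
            ["AWS/SNS", "NumberOfMessagesPublished", "TopicName", "orders"],
            ["AWS/SNS", "NumberOfNotificationsDelivered", "TopicName", "orders"],
            ["AWS/SNS", "NumberOfNotificationsFilteredOut", "TopicName", "orders"],
        ],
    },
}

cloudwatch.put_dashboard(
    DashboardName="sns-message-filtering",
    DashboardBody=json.dumps({"widgets": [widget]}),
)
```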

Summary

SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS-supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering are available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.

For more information, see: