Tag Archives: accessibility

GitHub’s Engineering Fundamentals program: How we deliver on availability, security, and accessibility

Post syndicated from Deepthi Rao Coppisetty. Original: https://github.blog/2024-02-08-githubs-engineering-fundamentals-program-how-we-deliver-on-availability-security-and-accessibility/


How do we ensure that over 100 million users across the world have uninterrupted access to GitHub’s products and services on a platform that is always available, secure, and accessible? From our beginnings as a platform for open source to now also supporting 90% of the Fortune 100, that is the ongoing challenge we face and one we hold ourselves accountable for, across our entire engineering organization.

Establishing engineering governance

To meet the needs of our growing number of enterprise customers and our continuing innovation across the GitHub platform, we needed to address tech debt, improve reliability, and enhance observability of our engineering systems. This led to the birth of GitHub’s engineering governance program, called the Fundamentals program. Our goal was to work cross-functionally to define, measure, and sustain engineering excellence with a vision to ensure our products and services are built right for all users.

What is the Fundamentals program?

For such a large-scale program to be successful, we needed to tackle not only our processes but also GitHub’s engineering culture. The Fundamentals program helps the company continue to build trust and lead the industry in engineering excellence by ensuring clear prioritization of the work needed to guarantee the success of our platform and the products that you love.

We do this via the lens of three program pillars, which help our organization understand the focus areas that we emphasize today:

  1. Accessibility (A11Y): Truly be the home for all developers
  2. Security: Serve as the most trustworthy platform for developers
  3. Availability: Always be available and on for developers

For this to be successful, we’ve relied on both grass-roots support from individual teams and strong, consistent sponsorship from our engineering leadership. It also requires meaningful investment in the tools and processes that make it easy for engineers to measure progress against their goals. No one in this industry loves manual processes, and here at GitHub we understand that anything done more than once should be automated to the best of our ability.

How do we measure progress?

We use Fundamentals scorecards to measure progress against our Availability, Security, and Accessibility goals across the engineering organization. The scorecards are designed to tell us whether a particular service or feature in GitHub has reached the expected level of performance against our standards. Each scorecard aligns to one of the Fundamentals pillars. For example, the secret scanning scorecard aligns to the Security pillar, durable ownership aligns to Availability, and so on. Scorecards evolve iteratively as we enhance or add requirements to ensure our services meet our customers’ changing needs. We expect that some scorecards will eventually become concrete technical controls, such that any deviation is treated as an incident and other automated safety and security measures may be taken, such as freezing deployments for a particular service until the issue is resolved.

Each service has a set of attributes that are captured and strictly maintained in a YAML file that lives right in the service’s repo, such as a service tier (tier 0 to 3, based on criticality to the business), quality of service (QoS values include critical, best effort, maintenance, and so on, based on the service tier), and service type. This file also holds the ownership information for the service, such as the sponsor, team name, and contact information. The Fundamentals scorecards read the service’s YAML file and start monitoring the applicable services based on their attributes. If a service does not meet the requirements of an applicable scorecard, an action item is generated with an SLA for resolution. A corresponding issue is automatically created in the service’s repository to tie into the developer’s workflow, meeting developers where they are and making it easy to find and resolve unmet fundamental action items.

Through the successful implementation of the Fundamentals program, we have effectively managed several scorecards that align with our Availability, Security, and Accessibility goals, including:

  • Durable ownership: maintains ownership of software assets and ensures communication channels are defined. Adherence to this fundamental supports GitHub’s Availability and Security.
  • Code scanning: tracks security vulnerabilities in GitHub software and uses CodeQL to detect vulnerabilities during development. Adherence to this fundamental supports GitHub’s Security.
  • Secret scanning: tracks secrets in GitHub’s repositories to mitigate risks. Adherence to this fundamental supports GitHub’s Security.
  • Incident readiness: ensures services are configured to alert owners, determine incident cause, and guide on-call engineers. Adherence to this fundamental supports GitHub’s Availability.
  • Accessibility: ensures products and services follow our accessibility standards. Adherence to this fundamental enables developers with disabilities to build on GitHub.
An example secret scanning scorecard showing 100% compliance for having no secrets in the repository, push protection enabled, and secret scanning turned on.
Example secret scanning scorecard

A culture of accountability

As much emphasis as we put on Fundamentals, it’s not the only thing we do: we ship products, too!

We call it the Fundamentals program because we also make sure that:

  1. We include Fundamentals in our strategic plans. This means our organization prioritizes this work and allocates resources to accomplish the fundamental goals we set each quarter. We track the goals on a weekly basis and address roadblocks.
  2. We surface and manage risks across all services to the leaders so they can actively address them before they materialize into actual problems.
  3. We provide support to teams as they work to mitigate fundamental action items.
  4. It’s clearly understood that all services, regardless of team, have a consistent set of requirements from Fundamentals.

Planning, managing, and executing fundamentals is a team affair, with a program management umbrella.

Designated Fundamentals champions and delegates help maintain scorecard compliance, and our regular check-ins with engineering leaders help us identify high-risk services and commit to actions that will bring them back into compliance. The key roles include:

  1. Executive sponsor. The executive sponsor is a senior leader who supports the program by providing resources, guidance, and strategic direction.
  2. Pillar sponsor. The pillar sponsor is an engineering leader who oversees the overarching focus of a given pillar (Availability, Security, or Accessibility) across the organization.
  3. Directly responsible individual (DRI). The DRI is an individual responsible for driving the program by collaborating across the organization to make the right decisions, determine the focus, and set the tempo of the program.
  4. Scorecard champion. The scorecard champion is an individual responsible for the maintenance of the scorecard. They add, update, and deprecate the scorecard requirements to keep the scorecard relevant.
  5. Service sponsors. The sponsor oversees the teams that maintain services and is accountable for the health of the service(s).
  6. Fundamentals delegate. The delegate is responsible for coordinating Fundamentals work with the service owners within their org and for supporting the sponsor to ensure the work is prioritized and resourced so that it gets completed.

Results-driven execution

Making the data readily available is a critical part of the puzzle. We created a Fundamentals dashboard that shows all the services with unmet scorecards, sorted by service tier and type and filterable by service owner and team. This makes it easier for our engineering leaders and delegates to monitor and drive adherence to Fundamentals scorecards within their orgs.

As a result:

  • Our services comply with durable ownership requirements. For example, each service must have an executive sponsor, a team, and a communication channel on Slack.
  • We resolved active secret scanning alerts in repositories affiliated with the services in the GitHub organization. Some of the repositories were 15 years old and as a part of this effort we ensured that these repos are durably owned.
  • Business critical services are held to greater incident readiness standards that are constantly evolving to support our customers.
  • Service tiers are audited and accurately updated so that critical services are held to the highest standards.

Example layout and contents of Fundamentals dashboard

Tier 1 Services Out of Compliance [Count: 2]
Service Name | Service Tier | Unmet Scorecard     | Exec Sponsor | Team
service_a    | 1            | incident-readiness  | john_doe     | github/team_a
service_x    | 1            | code-scanning       | jane_doe     | github/team_x

Continuous monitoring and iterative enhancement for long-term success

By setting standards for engineering excellence and providing pathways to meet those standards through culture and process, GitHub’s Fundamentals program has delivered business-critical improvements within the engineering organization and, as a by-product, to the GitHub platform. This success was made possible by setting the right organizational priorities and committing to them. We keep all levels of the organization engaged and involved. Most importantly, we celebrate the wins publicly, however small they may seem. Building a culture of collaboration, support, and true partnership has been key to sustaining the ongoing momentum of an organization-wide engineering governance program, and of the scorecards that monitor the availability, security, and accessibility of our platform, so you can consistently rely on us to achieve your goals.

Want to learn more about how we do engineering at GitHub? Check out how we build containerized services, how we’ve scaled our CI to 15,000 jobs every hour using GitHub Actions larger runners, and how we communicate effectively across time zones, teams, and tools.

Interested in joining GitHub? Check out our open positions or learn more about our platform.


Prompting GitHub Copilot Chat to become your personal AI assistant for accessibility

Post syndicated from Ed Summers. Original: https://github.blog/2023-10-09-prompting-github-copilot-chat-to-become-your-personal-ai-assistant-for-accessibility/

Large Language Models (LLMs) are trained on vast quantities of data. As a result, they have the capacity to generate a wide range of results. By default, the results from LLMs may not meet your expectations. So, how do you coax an LLM to generate results that are more aligned with your goals? You must use prompt engineering, which is a rapidly evolving art and science of constructing inputs, also known as prompts, that elicit the desired outputs.

For example, GitHub Copilot includes a sophisticated built-in prompt engineering facility that works behind the scenes. It uses contextual information from your code, comments, open tabs within your editor, and other sources to improve results. But wouldn’t it be great to optimize results and ask questions about coding by talking directly to the LLM?

That’s why we created GitHub Copilot Chat. GitHub Copilot Chat complements the code completion capabilities of GitHub Copilot by providing a chat interface directly within your favorite editor. GitHub Copilot Chat has access to all the context previously mentioned, and you can also provide additional context directly via prompts within the chat window. Together, these capabilities make GitHub Copilot a personal AI assistant that can help you learn faster and write code that not only meets functional requirements, but also the non-functional requirements that are table stakes for modern enterprise software, such as security, scalability, and accessibility.

In this blog, we’ll focus specifically on the non-functional requirement of accessibility. We’ll provide a sample foundational prompt that can help you learn about accessibility directly within your editor and suggest code that is optimized for better accessibility. We’ll break down the sample prompt to understand its significance. Finally, we’ll share specific examples of accessibility-related questions and results that demonstrate the power of prompting GitHub Copilot Chat to deliver results that improve accessibility. When using GitHub Copilot Chat, we recommend that you think of yourself as a lead developer who is working with a more junior developer (GitHub Copilot Chat). As the lead developer, it is your responsibility to verify information that is provided by GitHub Copilot Chat and ensure it meets all of your requirements.

Foundational accessibility prompt

You can copy this prompt and paste it directly into GitHub Copilot Chat for Visual Studio Code:

“I need to learn about accessibility and need to write code that conforms with the WCAG 2.1 level A and AA success criteria defined at https://www.w3.org/TR/WCAG21/. I want you to be my accessibility coach, a subject-matter expert that makes me think and account for all accessibility requirements and usability enhancements. When you answer questions about accessibility please use reputable sources such as w3.org, webaim.org, developer.mozilla.org, and https://www.ibm.com/able/. When possible, please provide links and references for additional learning. When you suggest code please use semantic HTML, ensure it is operable using the keyboard, follow WCAG 2.1 sufficient techniques, and follow the ARIA Authoring Practices Guide and related design patterns. Do you understand these instructions?”

What does this do?

This prompt can serve as your baseline in your codebase, whether you are just starting a project or in the thick of an extensive project.

The prompt consists of:

  1. Providing solid context and background information to help narrow down the scope of solutions.
  2. Defining GitHub Copilot’s role and setting expectations about what feedback we are looking for.
  3. Being specific in the quality of answers we are looking to reference from reputable sources.
  4. Asking for additional resources and context to learn more and ensure the answers we receive are correct. This helps provide feedback to GitHub Copilot.
  5. Dictating one final set of requirements to even further refine the scope based on the resources we have provided.

Note: At any time, you can ask GitHub Copilot whether it’s still referencing the foundational prompt you provided, and it will inform you of its scope.

Query input: Are you still using the instructions I provided?
GitHub Copilot response: Yes, I am still adhering to the instructions you provided. I will continue to assist you in learning about accessibility and writing code that conforms with the WCAG 2.1 level A and AA success criteria. I will use reputable sources for references and additional learning, and ensure that the code suggestions I provide use semantic HTML, are operable using the keyboard, follow WCAG 2.1 sufficient techniques, and follow the ARIA Authoring Practices Guide and related design patterns.

Examples after setting the foundational prompt

Note: These prompts were submitted in a brand new workspace that contained no source code or files to reference. Repeating these prompts in your codebase may produce different results as GitHub Copilot can use context provided from open tabs and existing files in the codebase. In addition, please keep in mind that results from LLMs are non-deterministic.

Accessibility testing

Query input: How can I test for web accessibility?
GitHub Copilot response: There are several ways to test for web accessibility:

  1. Manual Testing: This involves navigating your website using only a keyboard, checking color contrast, and resizing text to 200% to ensure readability.
  2. Automated Testing: There are several tools available for automated accessibility testing. Some popular ones include:
    1. WAVE: A free web accessibility evaluation tool provided by WebAIM.
    2. Axe: An accessibility testing toolkit built on a library of accessibility rules.
    3. Lighthouse: An open source, automated tool for improving the quality of web pages. It has audits for performance, accessibility, progressive web apps, SEO, and more.
  3. Screen Reader Testing: This involves testing your website using a screen reader tool. Some popular screen readers include NVDA, JAWS, and VoiceOver.
  4. User Testing: This involves testing your website with people with disabilities. This can provide valuable insights into how accessible your website is in practice.

Remember, no single tool or method can catch all accessibility issues, so a combination of methods is often the best approach.

For more information, you can refer to the following resources:

Tables

Query input: What are some best practices for creating accessible tables in HTML?
GitHub Copilot response: Creating accessible tables in HTML involves several best practices:

  1. Use <th> for table headers: This helps screen readers identify headers. Also, use the scope attribute to specify whether the header is for a row or a column.
    <table>
    <tr>
    <th scope="col">Header 1</th>
    <th scope="col">Header 2</th>
    </tr>
    <tr>
    <td>Data 1</td>
    <td>Data 2</td>
    </tr>
    </table>
    
  2. Provide captions: Use the <caption> element to provide a brief overview of the table. This is especially helpful for people using screen readers.
    <table>
    <caption>Overview of sales</caption>
    <!-- rest of the table -->
    </table>
    

  3. Use <thead>, <tbody>, and <tfoot>: These elements help to group different sections of the table and can improve accessibility.
    <table>
    <thead> <!-- header group -->
    <tbody> <!-- body group -->
    <tfoot> <!-- footer group -->
    </table>
    
  4. Avoid complex tables: If possible, avoid using tables for complex data structures. If you must, use rowspan and colspan attributes carefully and provide additional information in cell content or with the title attribute.

  5. Ensure readability without styles: The table should be readable and make sense even when styles are turned off.
  6. Use aria-labelledby for complex tables: If a table has multiple levels of headers, use the aria-labelledby attribute to associate data cells with headers.

For more information, you can refer to the WebAIM guide on creating accessible tables.

Forms

Query input: What are some best practices for creating accessible forms in HTML?
GitHub Copilot response: Creating accessible forms in HTML involves several best practices:

  1. Use <label> for form controls: Each form control should have a corresponding <label> element. This helps screen readers identify the purpose of the form control.
    <label for="name">Name:</label>
    <input id="name" name="name" type="text" />
    
  2. Group related form controls: Use the <fieldset> element to group related form controls and <legend> to provide a description for the group.
    <fieldset>
    <legend>Choose your favorite pet:</legend>
    <input type="radio" id="dog" name="pet">
    <label for="dog">Dog</label><br>
    <input type="radio" id="cat" name="pet">
    <label for="cat">Cat</label><br>
    </fieldset>
    
  3. Provide error messages and feedback: If the user makes an error, provide clear and specific feedback. Use the aria-describedby attribute to associate the form control with the error message.
    <input type="text" id="name" name="name" aria-describedby="nameError">
    <span id="nameError">Please enter your name.</span>
    
  4. Use correct input types: Use the correct input type for the form control (for example, email, number, date). This can provide the user with the appropriate keyboard on mobile devices.

  5. Make form controls keyboard accessible: Ensure that all form controls and buttons can be accessed and used with the keyboard.
  6. Use ARIA roles and properties when necessary: If a form control requires more complex interaction, use ARIA roles and properties to provide additional information to screen reader users.

For more information, you can refer to the WebAIM guide on creating accessible forms.

Keyboard support

Query input: What are some best practices for creating proper keyboard support in HTML?
GitHub Copilot response: Creating proper keyboard support in HTML involves several best practices:

  1. Logical Tab Order: Ensure that the tab order of the page follows the visual flow of the content. This can be achieved by structuring your HTML in a logical order. Avoid using positive tabindex values as they can disrupt the natural tab order.
  2. Keyboard Focus Indication: It’s important that users can visually identify which element currently has keyboard focus. This can be achieved using CSS to style the :focus state of interactive elements.
    button:focus, a:focus {
    outline: 2px solid blue;
    }
    
  3. Skip Links: Provide a mechanism to skip to main content or different sections of the page. This can be a link at the top of the page that becomes visible when it receives keyboard focus.
    <a href="#maincontent">Skip to main content</a>
    
  4. Accessible Interactive Elements: Ensure all interactive elements are accessible via keyboard. This includes links, buttons, form controls, and custom interactive widgets. Use semantic HTML elements whenever possible as they have built-in keyboard accessibility.

  5. Keyboard Support for Custom Widgets: If you’re creating custom interactive widgets (like a dropdown menu or modal dialog), you’ll need to provide additional keyboard support. This often involves handling keypress events for keys like Enter, Space, Escape, and the arrow keys. The WAI-ARIA Authoring Practices Guide provides detailed guidance on how to add keyboard support to various types of widgets.
  6. Avoid Keyboard Traps: Ensure that keyboard users can navigate to and from all parts of the page. Avoid situations where keyboard focus can be trapped in a particular page element.

For more information, you can refer to the WebAIM guide on keyboard accessibility.

ARIA

Query input: What are some best practices around using proper ARIA roles in HTML?
GitHub Copilot response: ARIA (Accessible Rich Internet Applications) roles can be used to improve accessibility, especially when creating complex interactive elements. Here are some best practices:

  1. Use Semantic HTML First: Before using ARIA, consider whether you can use semantic HTML elements. These elements have built-in accessibility features that often make ARIA unnecessary. For example, use a <button> element instead of a <div> with a role="button".
  2. Don’t Change Native Semantics: Avoid using ARIA to change the semantics of native HTML elements. For example, don’t use role="button" on a link (<a>). Instead, use a <button> element.
  3. Use Appropriate Roles: When you do need to use ARIA, ensure you’re using the appropriate role for the element. The ARIA roles model provides a list of roles and their intended use.
  4. Provide Required States and Properties: Some roles require additional ARIA states or properties. For example, a checkbox requires the aria-checked state.
    <div role="checkbox" aria-checked="false">Check me</div>
    
  5. Use aria-labelledby and aria-describedby: These attributes can be used to provide additional descriptive text for an element.
    <button aria-describedby="info">More Info</button>
    <div id="info">Clicking this button will open more information.</div>
    
  6. Test with Screen Readers: After adding ARIA roles, test your page with a screen reader to ensure it’s providing the intended information.

Remember, ARIA should be used as a last resort when semantic HTML isn’t sufficient. The WAI-ARIA Authoring Practices Guide provides detailed guidance on how to use ARIA correctly.

Analysis

Reviewing GitHub Copilot’s answers to our questions, we found that it suggested a lot of good information for learning more about accessibility and usability best practices. A few responses may be open to interpretation, while other responses lack details or specificity. Some examples include:

  • <tfoot> is an HTML element that calls attention to a set of rows summarizing the columns of the table. Including <tfoot> can help ensure the proper understanding of a table. However, not all tables require such rows, so <tfoot> is not mandatory for a usable table if the data doesn’t require it.
  • The response to the question about forms stressed the importance of the <label> element, which identifies the purpose of each input for screen reader users. However, that response didn’t include any description of adding “for” attributes with the same value as the <input>’s ID. The response did include “for” attributes in the code suggestion, but an explanation about proper use of the “for” attribute would have been helpful.
  • The response to our question about keyboard accessibility included an explanation about interactive controls and buttons that can be accessed and used with a keyboard. However, it did not include additional details that could benefit accessibility such as ensuring a logical tab order, testing tab order, or avoiding overriding default keyboard accessibility for semantic elements.

If you are ever unsure of a recommendation or wish to know more, ask GitHub Copilot more questions on those areas, ask to provide more references/links, and follow up on the documentation provided. As a developer, it is your responsibility to verify the information you receive, no matter the source.

Conclusion

In our exploration, GitHub Copilot Chat and a well-constructed foundational prompt come together to create a personal AI assistant for accessibility that can improve your understanding of accessibility. We invite you to try the sample prompt and work with a qualified accessibility expert to customize and improve it.

Limitations and considerations

While clever prompts can improve the accessibility of the results from GitHub Copilot Chat, it is not reasonable to expect generative AI tools to deliver perfect answers to your questions or generate code that fully conforms with accessibility standards. When working with GitHub Copilot Chat, we recommend that you think of yourself as a lead developer who is working with a more junior developer. As the lead developer, it is your responsibility to verify information that is provided by GitHub Copilot Chat and ensure it meets all of your requirements. We also recommend that you work with a qualified accessibility expert to review and test suggestions from GitHub Copilot. Finally, we recommend that you ensure that all GitHub Copilot Chat code suggestions go through proper code review, code security, and code quality channels to ensure they meet the standards of your team.

Learn more


Accessibility considerations behind code search and code view

Post syndicated from milemons. Original: https://github.blog/2023-07-06-accessibility-considerations-behind-code-search-and-code-view/

GitHub prides itself on being the home for all developers, including developers with disabilities. Accessibility is a core priority for all new projects at GitHub, so it was top of mind when we started our project to rework code search and the code view at GitHub. With the old code view, some developers preferred to look at the raw code rather than the rendered code view due to accessibility barriers, and that’s what we set out to fix. This blog post will shed light on our process and three major tasks that required thoughtful design and implementation: code search and the query builder, the file tree, and navigating code by keyboard.

Process

We worked with GitHub’s dedicated accessibility team because, while we are confident in our development skills, accessibility has many nuances and requires a large depth of knowledge to get right. This team helped us immensely in three main ways:

  1. External auditing
    In this project, we performed accessibility auditing on all of our features. This meant that another team of auditors reviewed our webpage with all the changes implemented to find accessibility errors. They used tools including screen readers, color contrast tools, and more, to identify areas that users with disabilities may find hard to use. Once those issues were identified, the accessibility team would take a look at those issues and suggest a proper solution to the problem. Then, it was up to us to implement the solution and collaborate with the team where we needed additional assistance.
  2. Design reviews
    The user interface for code search and code view were both entirely redesigned to support our new goals—to allow users to search, navigate, and understand code in a way they weren’t able to before. As a part of the design process, a team of accessibility designers reviewed the Figma mockups to determine proper HTML markup and tab order. Then, they included detailed explanations of what interactions should look like in order to meet our accessibility goals.
  3. Office hours
    The accessibility team at GitHub hosts weekly meetings where engineers can sign up and discuss how to solve problems with their features with the team and a consultant. The consultant is incredibly helpful and knowledgeable about how to properly address issues because he has lived experience with screen readers and accessibility. During these meetings, we were able to discuss complicated issues such as proper filtering for code search, making an accessible file tree, and navigating code by keyboard, along with other issues like tab order and focus management across the whole feature.

Implementation details

This has been written about frequently on our blog, from one post on the details of QueryBuilder, our implementation of an accessible finder component, to details about what navigating code search accessibly looks like. Those two posts are great reads and I strongly recommend checking them out, but they’re not the only things we’ve worked on. Thanks to @lindseywild, @smockle, @kendallgassner, @owenniblock, and @khiga8 for their guidance and help resolving issues with code search. This is still a work in progress, especially in areas like the filtering behavior on the search pages.

Two areas that also required careful consideration were managing focus after changing the sort order of search results and how to announce the result of the search for screen reader users.

Changing sort—changing focus?

When we first implemented this sort dropdown for non-code search types, if you navigated to it with your keyboard and then selected a different sort option, the whole page would reload and your focus moved back to the header. Our preferred, accessible behavior is that, when a dropdown menu is closed, focus returns to the button that opened the menu. However, our problem was that the “Sort” dropdown didn’t merely perform client-side operations like most dropdowns where you select an option. In this case, once a user selects an option, we do a full page navigation to perform the new search with the sort order added. This meant that the focus was being placed back on the header by default after the page navigation, instead of returning to the sort dropdown. For sighted mouse users, this is a nonissue; for sighted keyboard users, it is unexpected and annoying; for blind users, this is confusing and makes the product hard to use. To fix this, we had to make sure we returned focus to the dropdown after reloading the page.
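GitHub’s actual implementation isn’t shown in the post, but one common way to restore focus across a full page navigation is to stash an identifier for the triggering control before navigating and focus it again once the new page loads. A minimal sketch, with a made-up sessionStorage key and button ID:

    <button id="sort-menu-button" type="button">Sort</button>

    <script>
      // Hypothetical sketch, not GitHub's code: remember which control
      // triggered the navigation before performing the full page load.
      function onSortOptionSelected(url) {
        sessionStorage.setItem('restore-focus-to', 'sort-menu-button');
        window.location.assign(url);
      }

      // After the new page loads, move focus back to that control if present.
      window.addEventListener('DOMContentLoaded', function () {
        var id = sessionStorage.getItem('restore-focus-to');
        if (id) {
          sessionStorage.removeItem('restore-focus-to');
          var element = document.getElementById(id);
          if (element) element.focus();
        }
      });
    </script>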

Screenshot of search results. I have searched “react” and the “Repositories” filter by item is selected. There are 2.2M results, found in 1 second. The "Sort By" dropdown is open with “Best match” selected. The other options are “Most stars”, “Fewest stars”, “Most forks”, “Fewest forks”, “Recently updated”, and “Least recently updated”.

Announcing search results

When a sighted user types in the search bar and presses ‘Enter’ they quickly receive feedback that the search has completed and whether or not they found any results by looking at the page. For screen reader users, though, how to give the feedback that there were results, how many, or if there was an error requires more thought. One solution could be to place focus on the first result. That has a number of problems though.

  1. The user will not receive feedback about the number of search results and may think there’s only one.
  2. The user may miss important context like the Single sign on banner.
  3. The user will have to tab through more elements if they want to get back.

Another solution could be to use aria-live announcements to read out the number of results and success. This has its own problems.

  1. We already have an announcement on page navigation to read out the new title and these two announcements would compete or cause a race condition.
  2. There isn’t a way to programmatically force announcements to be read in order on page load.
  3. Some screen reader users have aria-live announcements turned off since they can be annoying.

After some discussion, we decided to focus and read out the header with the number of search results after a search and allow users to explore for errors, banners, or results on their own once they know how many results they received.
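The general technique here is to make the results heading programmatically focusable and move focus to it once results render, so the screen reader reads it out. A minimal sketch with illustrative markup, not GitHub’s actual code:

    <h1 id="search-results-heading" tabindex="-1">2.2M results</h1>

    <script>
      // Sketch only: tabindex="-1" makes the heading focusable from script
      // without adding it to the tab order; focusing it prompts screen
      // readers to announce the heading text, including the result count.
      document.getElementById('search-results-heading').focus();
    </script>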

A screenshot of search results showing 0 results and an error message saying, “Invalid repository name.” I have searched “repo:asdfghijkl." The “Code” filter item is selected. There is also some help text on the page that says “Your search did not match any code. However, we found 544k packages that matched your search query. Alternatively, try one of the tips below.

Tree navigation

We knew when redesigning the code viewing experience that we wanted to include the tree panel to allow users to quickly switch between folders or files, because understanding code often requires looking in multiple places. We started the project by building our own tree view to the ARIA spec, but it was too verbose. To fix this, our accessibility and design teams created the TreeView, which is open source. It supports various generic list elements and uses proper HTML structure to make navigating through the tree a breeze, including typeahead (the ability to type a bit and have focus move to the first matching element), proper announcements for asynchronous loading of deeply nested items, and proper IDs for all elements that are guaranteed to be unique (that is, not constructed from their contents, which may be the same). The valid markup for a tree view is very specific (a simplified sketch appears at the end of this section), and developing it requires careful review for common issues, like invalid child item types under a role="group" element or using nested list elements without considering screen reader support, which is unlike most TreeView implementations on the internet. For more information about the design details for the tree view, check out the markdown documentation. Thanks to @colebemis, @mperrotti, and @ericwbailey for their work on this.
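Here is that simplified sketch: the general shape of the ARIA tree pattern such a component has to produce, illustrative only and not TreeView’s actual output:

    <ul role="tree" aria-label="Files">
      <li role="treeitem" aria-expanded="true">
        <span>src</span>
        <!-- Children of an expanded item live in a nested role="group" list;
             only role="treeitem" children are valid inside it. -->
        <ul role="group">
          <li role="treeitem" aria-selected="true">index.js</li>
          <li role="treeitem">utils.js</li>
        </ul>
      </li>
      <li role="treeitem">README.md</li>
    </ul>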

Reading and navigating code by keyboard

Before the new code view, reading code on GitHub wasn’t a great experience for screen reader users. The old experience presented code as a table—a hidden detail for sighted users but a major hurdle for screen readers, since code isn’t a table, and the semantics didn’t make sense in that context. For example, in code, whitespace has meaning. Whether stylistic or functional, that whitespace gets lost for screen reader users when the code is presented as a table. In addition, table semantics force users into line-by-line navigation, instead of character-by-character or word-by-word navigation. For many users, these hurdles meant that they mostly used GitHub just enough to be able to access the raw code or use a code editor for reading code. Since we want to support all our users, we knew we needed to totally rethink the way we structured code in the DOM.

This problem became even more complicated when we introduced symbols and the symbols panel. Mouse users are able to click on a “symbol” in the code—a special code element, like a function name—to see extra details about it, including references and definitions, in the symbol panel. They can then explore the code more deeply, navigating between lines of code where that symbol is found to investigate and understand the code better. This ability to dive deep has been game changing for many developers. However, for keyboard users, it doesn’t work. At best, a keyboard user can use the “Open symbols panel” button in the code header and then filter all symbol definitions for one they are interested in, but this doesn’t allow users to access symbol references when no definitions are found in a file. In addition, this flow really isn’t the same—if we want to support all developers, then we need to offer keyboard users a way to navigate through the code and select symbols they are interested in.

In addition, for the many performance reasons mentioned in the post “Crafting a better, faster code view,” we introduced virtualization to the code viewer, which created its own accessibility problems: not having all elements in the DOM can interfere with screen readers, and overriding the cmd / ctrl + f shortcut is generally bad practice for screen readers as well. Virtualization also posed a problem for selecting text outside of the virtualization window.

This is when we came up with the solution of using a cursor from a <textarea> or a <div> that has the contentEditable property. This solution is modeled after Monaco, the text editor that powers VS Code. <textarea> elements, when not marked as readOnly, and contentEditable <div> elements have a built-in cursor that allows screen reader users to navigate code in their preferred manner using their screen reader’s built-in settings. While a contentEditable <div> would support syntax highlighting and the deep interactions we needed, screen readers don’t support them well [1], which defeats the purpose of the cursor. As a result, we decided to go with the <textarea>. However, <textarea> elements do not support syntax highlighting or deeper interactions like selecting symbols, which meant we needed to use both the <textarea> as a hidden element and syntax-highlighted elements aligned perfectly above it. Since we hide the text element, we need to add a visual “fake” cursor and keep it in sync with the “real” <textarea> cursor.
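A rough sketch of that layering, with made-up class names and inline styles purely for illustration (the real implementation is far more involved):

    <div class="code-view" style="position: relative; font: 14px/1.5 monospace;">
      <!-- The real text: a plain <textarea> whose text is transparent. Keyboard
           and screen reader users interact with this element and get the
           browser's built-in cursor and selection behavior. -->
      <textarea class="code-text" spellcheck="false"
                style="position: absolute; inset: 0; color: transparent;
                       background: none; border: none; resize: none;"
      >function greet() { return "hello"; }</textarea>

      <!-- A syntax-highlighted copy of the same text, aligned exactly on top
           of the textarea but hidden from assistive technology and ignoring
           mouse events so clicks and drags reach the textarea underneath. -->
      <div class="highlighted" aria-hidden="true"
           style="position: absolute; inset: 0; pointer-events: none;">
        <span class="keyword">function</span> greet() { <span class="keyword">return</span> <span class="string">"hello"</span>; }
      </div>

      <!-- A visual "fake" cursor that is kept in sync with the real one. -->
      <span class="fake-cursor" aria-hidden="true"></span>
    </div>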

While the <textarea> and cursor help with our goals of allowing screen reader users and keyboard users to navigate code and symbols easily, they also ended up helping us with some of the problems we had run into with virtualization. One example of this was the cmd + f problem that we talk about more in depth in the blog post here. Another problem this solved was the drag and select behavior (or select all) for long files. Since the <textarea> is just one DOM node, we are able to load the whole file contents and select the contents directly from the <textarea> instead of the virtualized syntax highlighted version.

Unfortunately, while the <textarea> solved many problems we had, it also introduced a few other tricky ones. Since we have two layers of text, one hidden unless selected and one visual, we need to make sure that the scroll states are aligned. To do this, we have written observers to watch when one layer scrolls and mimic that scroll on the other (a simplified sketch appears at the end of this section). We also often need to override the default <textarea> behavior for some events, such as middle click with the mouse, which was taking users all the way to the bottom of the code. Along with that, different browsers handle <textarea>s differently, and making sure our solution behaves properly in all of them has proven to be time intensive, to say the least. We also found that some browsers, like Firefox, allow users to customize their font size using Text zoom, which applies to the formatted text but not the <textarea>. This led to “ghost text” issues with selection. We were able to resolve that by measuring the height of the rendered text and passing it to the <textarea>, though there are still some issues with certain plugins that modify text, which we are working to resolve. The <textarea> also does not yet work with our Wrap lines view option, which we are working to fix. Thanks especially to @joycezhu, @andrialexandrou, @adamshwert, and @jbrown1618, who put in a ton of work here.
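Here is that simplified scroll-syncing sketch, reusing the illustrative class names from the earlier markup; GitHub’s production observers are more involved than this:

    <script>
      // Simplified sketch: mirror scroll offsets between the hidden <textarea>
      // and the highlighted layer so the two never drift apart. The selectors
      // refer to the illustrative markup above, not real GitHub class names.
      var textarea = document.querySelector('.code-text');
      var highlighted = document.querySelector('.highlighted');

      function mirrorScroll(from, to) {
        from.addEventListener('scroll', function () {
          to.scrollTop = from.scrollTop;
          to.scrollLeft = from.scrollLeft;
        });
      }

      mirrorScroll(textarea, highlighted);
      mirrorScroll(highlighted, textarea);
    </script>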

Always iterating

We have taken huge strides to improve accessibility for all developers, but we recognize that we aren’t all the way there. It can be challenging to accommodate all developers, but we are committed to improving and iterating on what we have until we get it right. We want our tools to support everyone to access and explore code. We still have work—a lot of work—to do across GitHub, but we are excited for this journey.

To learn more about accessibility at GitHub, go to accessibility.github.com.


  1. For more information about contentEditable and screen readers, see this article written by @joycezhu from our Accessibility Engineering team. 

GitHub celebrates developers with disabilities on Global Accessibility Awareness Day

Post syndicated from Ed Summers. Original: https://github.blog/2023-05-18-github-celebrates-developers-with-disabilities-on-global-accessibility-awareness-day/

At GitHub, our favorite people are developers. We love to make them happy and productive, and today, on Global Accessibility Awareness Day, we want to celebrate their achievements by sharing some great stories about a few developers with disabilities alongside news of recent accessibility improvements at GitHub that help them do the best work of their lives.

Amplifying the voices of disabled developers

People with disabilities frequently encounter biases that prevent their full and equal participation in all areas of life, including education and employment. That’s why GitHub and The ReadME Project are thrilled to provide a platform for disabled developers to showcase their contributions and counteract bias.

Paul Chiou, a developer who’s paralyzed from the neck down, is breaking new ground in the field of accessibility automation, while pursuing his Ph.D. Paul uses a computer with custom hardware and software he designed and built, and this lived experience gives him a unique insight into the needs of other people with disabilities. The barriers he encounters push him to innovate, both in his daily life and in his academic endeavors. Learn more about Paul and his creative solutions in this featured article and video profile.

Becky Tyler found her way to coding via gaming, but she games completely with her eyes, just like everything else she does on a computer, from painting to livestreaming to writing code. Her desire to play Minecraft led her down the path of open source software and collaboration, and now she’s studying computer science at the University of Dundee. Learn more about Becky in this featured article and video profile.

Dr. Annalu Waller leads the Augmentative and Alternative Communication Research Group at the University of Dundee. She’s also Becky’s professor. Becky calls her a “taskmaster,” but the profile of Annalu’s life shows how her lived experience informed her high expectations for her students—especially those with disabilities—and gave her a unique ability to absorb innovations and use them to benefit people with disabilities.

Anton Mirhorodchenko has difficulty speaking and typing with his hands, and speaks English as a second language. Anton has explored ways to use ChatGPT and GitHub Copilot to not only help him communicate and express his ideas, but also develop software from initial architecture all the way to code creation. Through creative collaboration with his AI teammates, Anton has become a force to be reckoned with, and he recently shared his insights in this guide on how to harness the power of generative AI for software development.

Removing barriers that block disabled developers

Success requires skills. That’s why equal access to education is a fundamental human right. The GitHub Global Campus team agrees. They are working to systematically find and remove barriers that might block future developers with disabilities.

npm is the default package manager for JavaScript and the largest software registry in the world. To empower every developer to contribute to and benefit from this amazing resource, the npm team recently completed an accessibility bug bash and removed hundreds of potential barriers. Way to go, npm team!

The GitHub.com team has also been hard at work on accessibility and recently shipped several improvements.

Great accessibility starts with design, requiring an in-depth understanding of the needs of users with disabilities and their assistive technologies. The GitHub Design organization has been leaning into accessibility for years, and this blog post explores how it has built a culture of accessibility and shifted accessibility left in the GitHub development process.

When I think about the future of technology, I think about GitHub Copilot—an AI pair programmer that boosts developers’ productivity and breaks down barriers to software development. The GitHub Copilot team recently shipped accessibility improvements for keyboard-only and screen reader users.

GitHub Next, the team behind GitHub Copilot, also recently introduced GitHub Copilot Voice, an experiment currently in technical preview. GitHub Copilot Voice empowers developers to code completely hands-free using only their voice. That’s a huge win for developers who have difficulty typing with their hands. Sign up for the technical preview if you can benefit from this innovation.

Giving back to our community

As we work to empower all developers to build on GitHub, we regularly contribute back to the broader accessibility community that has been so generous to us. For example, all accessibility improvements in Primer are available for direct use by the community.

Our accessibility team includes multiple Hubbers with disabilities—including myself. GitHub continually improves the accessibility and inclusivity of the processes we use to communicate and collaborate. One recent example is the process we use for retrospectives. At the end of our most recent retrospective, I observed that, as a person with blindness, it was the most accessible and inclusive retrospective I have ever attended. That observation prompted the team to share the process we use for inclusive retrospectives so other teams can benefit from our learnings.

More broadly, Hubbers regularly give back to the causes we care about. During a recent social giving event, I invited Hubbers to support the Seeing Eye because that organization has made such a profound impact in my life as a person with blindness. Our goal was to raise $5,000 so we could name and support a Seeing Eye puppy that will eventually provide independence and self-confidence to a person with blindness. I was overwhelmed by the generosity of my coworkers when they donated more than $15,000! So, we now get to name three puppies and I’m delighted to introduce you to the first one. Meet Octo!

A German Shepherd named Octo sits in green grass wearing a green scarf that says “The Seeing Eye Puppy Raising Program.” She is sitting tall in a backyard with a black fence and a red shed behind her.
Photo courtesy of The Seeing Eye

Looking ahead

GitHub CEO, Thomas Dohmke, frequently says, “GitHub thrives on developer happiness.” I would add that the GitHub accessibility program thrives on the happiness of developers with disabilities. Our success is measured by their contributions. Our job is to remove barriers from their path and celebrate their accomplishments. We’re delighted with our progress thus far, but we are just getting warmed up. Stay tuned for more great things to come! In the meantime, learn more about the GitHub accessibility program at accessibility.github.com.

Creating an accessible search experience with the QueryBuilder component

Post syndicated from Lindsey Wild. Original: https://github.blog/2022-12-13-creating-an-accessible-search-experience-with-the-querybuilder-component/

Overview

Throughout the GitHub User Interface (UI), there are complex search inputs that allow you to narrow the results you see based on different filters. For example, for repositories with GitHub Discussions, you can narrow the results to show only open discussions that you created. This is done with the search bar and a set of defined filters. The current implementation of this input has accessibility considerations that need to be examined at a deeper level, from the styled search input to the way items are grouped, neither of which is natively accessible, so we had to take some creative approaches. This led us to create the QueryBuilder component, a fully accessible component designed for these types of situations.

As we rethought this core pattern within GitHub, we knew we needed to make search experiences accessible so everyone can successfully use them. GitHub is the home for all developers, including those with disabilities. We don’t want to stop at making GitHub accessible; we want to empower other developers to make a similar pattern accessible, which is why we’ll be open sourcing this component!

Process

GitHub is a very large organization with many moving pieces. Making sure that accessibility is considered in every step of the process is important. Our process looked a little something like this:

The first step was that we, the Accessibility Team at GitHub, worked closely with the designers and feature teams to design and build the QueryBuilder component. We wanted to understand the intent of the component and what the user should be able to accomplish. We used this information to help construct the product requirements.

Our designers and accessibility experts worked together on several iterations of what this experience would look like and annotated how it should function. Once everyone agreed on a path forward, it was time to build a proof of concept!

The proof of concept helped to work out some of the trickier parts of the implementation, which we will get to in the following Accessibility Considerations section. An accessibility expert review was conducted at multiple points throughout the process.

The Accessibility Team built the reusable component in collaboration with the Primer Team (GitHub’s Design System), and then collaborated with the GitHub Discussions Team on what it’d take to integrate the component. At this point in time, we have a fully accessible MVP component that can be seen on any GitHub.com Discussions landing page.

Introducing the QueryBuilder component

The main purpose of the QueryBuilder is to allow a user to enter a query that will narrow their results or complete a search. When a user types, a list of suggestions appears based on their input. This is a common pattern on the web, and it doesn’t sound too complicated until you start to consider these desired features:

  • The input should contain visual styling that shows a user if they’ve typed valid input.

Text input with an icon of a magnifier at the beginning. The input text of "language:" is a dark gray and the value "C++" is a shade of medium blue with a highlight background of a lighter blue.

  • When a suggestion is selected, it can either take a user somewhere else (“Jump to”) or append the selection to the input (“Autocomplete”).

Two different search inputs with results. The results in the first example have "Autocomplete" appended to the end of the row of each suggestion. The results in the second example have "Jump to" appended to the end of the row of each suggestion.

  • The set of suggestions should change based on the entered input.

Text input example "is:" is giving a different list of results than "language:" did: Action, Discussion, Marketplace, Pull request, Project, Saved, Topic, User, and Wiki.

  • There should be groups of suggestions within the suggestion box.

Search input with results; first group of items is "Recent" with the Recent header on top. The second group is "Pages" with the Pages header on top of the second group. There is a line separator between each group of items.

Okay, now we’re starting to get more complicated. Let’s break these features down from an accessibility perspective.

Accessibility considerations

Note: these considerations do not cover every accessibility requirement for the new component. We wanted to highlight the trickier-to-solve issues that may not have been addressed before.

Semantics

We talked about this component needing to take a user’s input and provide suggestions that a user can select from in a listbox. We are using the Combobox pattern, which does exactly this.

Styled input

Zoomed in look at the styling between a qualifier, in this case "language:" and the value, "C++". The qualifier has a label of "color: $fg.default" which is a dark gray, and the value has a label of "color: $fg.accent; background: $bg.accent”, which are a lighter and darker shade of blue.

Natively, HTML inputs do not allow specific styling for individual characters unless you use contenteditable, which we didn’t consider to be an accessible pattern; even basic markup can disrupt the expected keyboard cursor movement, and contenteditable’s support for ARIA attributes is widely inconsistent. To achieve the desired styling, we have a styled element, a <div aria-hidden="true"> with <span> elements inside, that sits behind the real <input> element that a user interacts with. It is perfectly lined up visually, so all of the keyboard functionality works as expected, the cursor position is retained, the input text is duplicated inside, and we can individually style characters within the input. We also tested this at high zoom levels to make sure that everything scaled correctly. color: transparent was added to the real input’s text, so sighted users see the styled text from the <div>.
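As a stripped-down sketch of that structure (illustrative markup, class names, and inline styles only, not the component’s actual output):

    <div class="query-builder-input" style="position: relative; font: 14px monospace;">
      <!-- Styled copy of the typed text, hidden from assistive technology and
           lined up exactly behind the real input. -->
      <div aria-hidden="true" style="position: absolute; inset: 0;">
        <span class="qualifier">language:</span><span class="value">C++</span>
      </div>

      <!-- The real input the user interacts with. Its text is transparent so
           the styled layer shows through, while the native caret, cursor
           movement, and keyboard behavior are untouched. -->
      <input type="text" value="language:C++" spellcheck="false"
             style="position: relative; color: transparent; background: none;" />
    </div>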

While the styled input adds some context for sighted users, we also explored whether we could make this apparent for people relying on a screen reader. Our research led us to create a proof of concept with live-region-based announcements as the cursor was moved through the text. However, based on testing, the screen reader feedback proved to be quite overwhelming and occasionally flaky, and it would be a large effort to accurately detect and manage the cursor position and keyboard functionality for all types of assistive technology users. Particularly when internationalization was taken into account, we decided that this would not be overly helpful or provide good return on investment.

Items with different actions

Search results displaying the "Jump to" appended text to the results in the Recent group and "Autocomplete" appended to the results in the Saved searches group; there is a rectangular highlight over the appended words for emphasis.

Typical listbox items in a combobox pattern have only one action: to append the selected option’s value to the input. However, we needed something more. We wanted some selected option values to be appended to the input, but others to take you to a different page, such as search results.

For options that will append their values to the input when selected, there is no additional screen reader feedback since this is the default behavior of a listbox option. These options don’t have any visual indication (color, underline, etc.) that they will do anything other than append the selection to the input.

When an option will take a user to a new location, we’ve added an aria-label to that option explaining the behavior. For example, an option with the title README.md and description primer/react that takes you directly to https://github.com/primer/react/blob/main/README.md will have aria-label=”README.md, primer/react, jump to this file”. This explains the file (README.md), description/location of the file (primer/react), action (jump to), and type (this file). Since this is acting as a link, it will have visual text after the value stating the action. Since options may have two different actions, having a visual indicator is important so that a user knows what will happen when they make a selection.

Group support

A text input and an expanded list of suggestions. The group titles, "Recent" and "Saved searches," which contain list items related to those groups, are highlighted.

Groups are fully supported in an accessible way. role="group" is not widely supported inside a listbox across all assistive technologies, so our approach conveys the intent of grouped items to every user, but in different ways.

For sighted users, there is a visual header and separator for each group of items. The header is not focusable and has role="presentation" so that it's hidden from screen reader users, because this information is presented to them in a different way (described later in this post). The wrapping <ul> and <li> elements are also given role="presentation", since a listbox is traditionally a list of <li> items inside one parent <ul>.

For screen reader users, the grouped options are denoted by an aria-label containing the content of each list item plus the type of list item. This is the same aria-label described in the previous section about items with different actions. An example aria-label for a list item with the value primer/react that takes you to the Primer React repository when chosen is "primer/react, jump to this repository." In this example, adding "repository" to the aria-label gives the context that the item is part of the "Repository" group, the same way the visual heading helps sighted users determine the groups. We chose to add the item type at the end of the aria-label so that screen reader users hear the name of the item first and can navigate through the suggestions more quickly. Since the aria-label differs from the visible label, it has to contain the visible label's text at the beginning for voice recognition software users.
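
A hedged sketch of how this grouping might be marked up (again, not GitHub's actual markup, and the id is hypothetical): the visual header and the <ul>/<li> wrappers carry role="presentation", and each option's aria-label ends with its group type.

import React from "react";

// Sketch of one suggestion group: screen readers skip the presentational
// wrappers and hear the group type from each option's aria-label instead.
export function RepositoryGroup() {
  return (
    <ul role="presentation">
      {/* Visual-only group header for sighted users. */}
      <li role="presentation">Repositories</li>
      <li role="presentation">
        <div
          role="option"
          id="option-primer-react"
          aria-selected={false}
          aria-label="primer/react, jump to this repository"
        >
          primer/react
        </div>
      </li>
    </ul>
  );
}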

Screen reader feedback

By default, there is no indication to a screen reader user of how many suggestions are displayed, or of whether the input was successfully cleared via the optional clear button.

To address this, we added an aria-live region that updates its text whenever the suggestions change or the input is cleared. When a user presses the "Clear" button, focus is restored to the input and the screen reader announces that the input has been cleared and how many suggestions are currently visible.

While testing the aria-live updates, we noticed something interesting: if the same number of results is displayed as a user continues typing, the aria-live region does not update. For example, if a user types "zzz" and there are 0 results, and then they add an additional "z" to their query (still 0 results), the screen reader will not re-read "0 results," since the live region did not detect a change in its text. To address this, we add or remove a &nbsp; character whenever the new aria-live message is the same as the previous one. The &nbsp; causes the live region to register a change, so the screen reader re-reads the text with no audible indication that a space was added.
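
A minimal TypeScript sketch of that nudge (not the component's actual code; the element id in the usage comment is hypothetical):

// Update an aria-live region; if the text would be identical to the previous
// announcement, toggle a trailing non-breaking space so screen readers still
// detect a change and re-read the message.
function announce(liveRegion: HTMLElement, message: string): void {
  const NBSP = "\u00A0";
  const previous = liveRegion.textContent ?? "";
  liveRegion.textContent = previous === message ? message + NBSP : message;
}

// Hypothetical usage after the suggestion list re-renders:
// announce(document.getElementById("search-feedback")!, "0 results");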

Recap

In conclusion, this was a tremendous effort with a lot of teams involved. Thank you to the many Hubbers who collaborated on this effort, and to our accessibility friends at Prime Access Consulting (PAC). We are excited for users to get their hands on this new experience and accelerate their efficiency in complex searches. This component is currently in production in repositories with GitHub Discussions enabled, and it will be rolling out to more parts of the UI. Stay tuned for updates on the progress of open sourcing the component.

What’s next

We will integrate this component into additional parts of GitHub's UI, such as the new global search experience, so all users can benefit from this accessible, advanced searching capability, and we will address any bugs or feedback we receive along the way.

As mentioned at the beginning of this post, the component will be open sourced in Primer ViewComponents and Primer React, along with clear guidelines on how to use it. The base of the component is a Web Component, which allows us to share the functionality between ViewComponents and React. This will let developers easily create an advanced, accessible, custom search component without spending time researching how to make this pattern accessible or functional, since we've already done that work! It can work with any source of data as long as the data is in the expected format.

Many teams throughout GitHub are constantly working on accessibility improvements to GitHub.com. For more on our vision for accessibility at GitHub, visit accessibility.github.com.

Project A11Y: how we upgraded Cloudflare’s dashboard to adhere to industry accessibility standards

Post Syndicated from Emily Flannery original https://blog.cloudflare.com/project-a11y/

At Cloudflare, we believe the Internet should be accessible to everyone. And today, we’re happy to announce a more inclusive Cloudflare dashboard experience for our users with disabilities. Recent improvements mean our dashboard now adheres to industry accessibility standards, including Web Content Accessibility Guidelines (WCAG) 2.1 AA and Section 508 of the Rehabilitation Act.

Over the past several months, the Cloudflare team and our partners have been hard at work to make the Cloudflare dashboard[1] as accessible as possible for every single one of our current and potential customers. This means incorporating accessibility features that comply with the latest Web Content Accessibility Guidelines (WCAG) and Section 508 of the US’s federal Rehabilitation Act. We are invested in working to meet or exceed these standards; to demonstrate that commitment and share openly about the state of accessibility on the Cloudflare dashboard, we have completed the Voluntary Product Accessibility Template (VPAT), a document used to evaluate our level of conformance today.

Conformance with a technical and legal spec is a bit abstract, but for us, accessibility simply means that as many people as possible can be successful users of the Cloudflare dashboard. This is important because, each day, more and more individuals and businesses rely upon Cloudflare to administer and protect their websites.

For individuals with disabilities who work on technology, we believe that an accessible Cloudflare dashboard could mean improved economic and technical opportunities, safer websites, and equal access to tools that are shaping how we work and build on the Internet.

For designers and developers at Cloudflare, our accessibility remediation project has resulted in an overhaul of our component library. Our newly WCAG-compliant components expedite and simplify our work building accessible products. They make it possible for us to deliver on our commitment to an accessible dashboard going forward.

Our Journey to an Accessible Cloudflare Dashboard

In 2021, we initiated an audit with third party experts to identify accessibility challenges in the Cloudflare dashboard. This audit came back with a daunting 213-page document—a very, very long list of compliance gaps.

We learned from the audit that there were many users we had unintentionally failed to design and build for in Cloudflare dashboard user interfaces. Most especially, we had not done well accommodating keyboard users and screen reader users, who often rely upon these technologies because of an impairment. Those impairments include low vision or blindness, motor disabilities (such as tremors and repetitive strain injury), and cognitive disabilities (such as dyslexia and dyscalculia).

As a product and engineering organization, we had spent more than a decade in cycles of rapid growth and product development. While we’re proud of what we have built, the audit made clear to us that there was a great need to address the design and technical debt we had accrued along the way.

One year, four hundred Jira tickets, and over 25 new, accessible web components later, we’re ready to celebrate our progress with you. Major categories of work included:

  1. Forms: We rewrote our internal form components with accessibility and developer experience top of mind. We improved form validation and error handling, labels, and required field annotations, and made use of persistent input descriptions instead of placeholders. Then, we deployed those component upgrades across the dashboard.
  2. Data visualizations: After conducting a rigorous re-evaluation of their design, we re-engineered charts and graphs to be accessible to keyboard and screen reader users. See below for a brief case study.
  3. Heading tags: We corrected page structure throughout the dashboard by replacing all of our heading tags (<h1>, <h2>, etc.) with a heading-level management technique we borrowed from Heydon Pickering, which uses React Context and basic arithmetic (see the sketch after this list).
  4. SVGs: We reworked how we create SVGs (Scalable Vector Graphics), so that they are labeled properly and only exposed to assistive technology when useful.
  5. Node modules: We jumped several major versions of old, inaccessible node modules that our UI components depend upon (and we broke many things along the way).
  6. Color: We overhauled our use of color, and contributed a new volume of accessible sequential colors to our design system.
  7. Bugs: We squashed a lot of bugs that had made their way into the dashboard over the years. The most common type of bug we encountered related to incorrect or unsemantic use of HTML elements—for example, using a <div> where we should have used a <td> (table data) or <tr> (table row) element within a table.
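
As a rough sketch of the heading-level technique mentioned in item 3 above (not Cloudflare's actual component code), a React Context can carry the current depth, and each nested section adds one to it:

import React, {createContext, useContext} from "react";

// The current heading depth; top-level content starts at <h1>.
const LevelContext = createContext(1);

// Wrap a region of the page; headings rendered inside it go one level deeper.
export function Section({children}: {children: React.ReactNode}) {
  const level = useContext(LevelContext);
  return <LevelContext.Provider value={level + 1}>{children}</LevelContext.Provider>;
}

// Renders <h1> through <h6> based on nesting depth instead of hard-coded tags.
export function Heading({children}: {children: React.ReactNode}) {
  const level = useContext(LevelContext);
  const Tag = `h${Math.min(level, 6)}` as React.ElementType;
  return <Tag>{children}</Tag>;
}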

Case Study: Accessibility Work On Cloudflare Dashboard Data & Analytics

The Cloudflare dashboard is replete with analytics and data visualizations designed to offer deep insight into users’ websites’ performance, traffic, security, and more. Making those data visualizations accessible proved to be among the most complex and interdisciplinary issues we faced in the remediation work.

An example of a problem we needed to solve related to WCAG success criterion 1.4.1, which pertains to the use of color. 1.4.1 specifies that color cannot be the only means by which to convey information, such as the differentiation between two items compared in a chart or graph.

Our charts clearly did not conform to this standard, using color alone to distinguish the different data being compared. For example, a typical graph might have used the color blue to show the number of requests to a website that returned 200 OK, and the color orange to show 403 Forbidden, but failed to offer users another way to distinguish between the two status codes.

Our UI team went to work on the problem, and chose to focus our effort first on the Cloudflare dashboard time series graphs.

Interestingly, we found that design patterns recommended even by accessibility experts created wholly unusable visualizations when placed into the context of real-world data. Examples of such recommended patterns include using different line weights, patterns (dashed, dotted, or other line styles), and terminal glyphs (symbols set at the beginning and end of the lines) to differentiate the items being compared.

We tried, and failed, to apply a number of these patterns; you can see the evolution of this work on our time series graph component in the three different images below.

v.1

Here is an early attempt at using both terminal glyphs and patterns to differentiate data in a time series graph. You can see that the terminal glyphs pile up and become indistinguishable; the differences among the line patterns are very hard to discern. This code never made it into production.

v.2

In this version, we eliminated terminal glyphs but kept line patterns. Additionally, we faded the unfocused items in the graph to help bring highlighted data to the forefront. This latter technique made it into our final solution.

v.3

Here we eliminated patterns altogether, simplified the user interface to only use the fading technique on unfocused items, and put our new, sequentially accessible colors to use. Finally, a visual design solution approved by accessibility and data visualization experts, as well as our design and engineering teams.

After arriving at our design solution, we had some engineering work to do.

In order to meet WCAG success criterion 2.1.1, we rewrote our time series graphs to be fully keyboard accessible, adding focus handling to every data point and enabling traversal of the data using arrow keys.
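
The code below is a hedged sketch of that interaction pattern, not the dashboard's actual chart code: a roving tabindex keeps one data point tabbable, and the arrow keys move focus along the series.

import React, {useRef, useState} from "react";

type Point = {x: string; y: number};

// Sketch of keyboard traversal over a data series: Left/Right arrows move the
// active point, and each point exposes a contextual label to screen readers.
export function KeyboardSeries({points}: {points: Point[]}) {
  const [active, setActive] = useState(0);
  const refs = useRef<Array<HTMLLIElement | null>>([]);

  function onKeyDown(e: React.KeyboardEvent) {
    if (e.key !== "ArrowRight" && e.key !== "ArrowLeft") return;
    e.preventDefault();
    const next = e.key === "ArrowRight"
      ? Math.min(active + 1, points.length - 1)
      : Math.max(active - 1, 0);
    setActive(next);
    refs.current[next]?.focus();
  }

  return (
    <ol onKeyDown={onKeyDown} aria-label="Requests over time">
      {points.map((p, i) => (
        <li
          key={p.x}
          ref={el => { refs.current[i] = el; }}
          tabIndex={i === active ? 0 : -1}
          aria-label={`${p.x}, ${p.y} requests`}
        >
          {p.y}
        </li>
      ))}
    </ol>
  );
}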

Navigating time series data points by keyboard on the Cloudflare dashboard.

We did some fine-tuning, specifically to support screen readers: we eliminated auditory “chartjunk” (unnecessary clutter or information in a chart or graph) and cleaned up decontextualized data (a scenario in which numbers are exposed to and read by a screen reader, but contextualizing information, like x- and y-axis labels, is not).

And lastly, to meet WCAG 1.1.1, we engineered new UI component wrappers to make chart and graph data downloadable in CSV format. We deployed this part of the solution across all charts and graphs, not just the time series charts like those shown above. No matter how you browse and interact with the web, we hope you’ll notice this functionality around the Cloudflare dashboard and find value in it.
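
As a small illustration of the downloadable-data idea (a sketch under assumed data shapes, not Cloudflare's actual wrapper), the charted rows can be serialized to CSV and offered through an ordinary browser download:

// Serialize chart rows to CSV and trigger a download, so the same data the
// visualization shows is available without interpreting the graphics.
// Note: no escaping of commas or quotes here, for brevity.
function downloadCsv(filename: string, rows: Array<Record<string, string | number>>): void {
  const header = Object.keys(rows[0] ?? {}).join(",");
  const body = rows.map(row => Object.values(row).join(",")).join("\n");
  const blob = new Blob([`${header}\n${body}`], {type: "text/csv"});
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = filename;
  link.click();
  URL.revokeObjectURL(url);
}

// Hypothetical usage:
// downloadCsv("requests.csv", [{time: "2021-06-01T00:00Z", status200: 1200, status403: 37}]);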

Making all of this data available to low vision, keyboard, and assistive technology users was an interesting challenge for us, and a true team effort. It necessitated a separate data visualization report conducted by another, more specialized team of third party experts, deep collaboration between engineering and design, and many weeks of development.

Applying this thorough treatment to all data visualizations on the Cloudflare dashboard is our goal, but still work in progress. Please stay tuned for more accessible updates to our chart and graph components.

Conclusion

There’s a lot of nuance to accessibility work, and we were novices at the beginning: researching and learning as we were doing. We also broke a lot of things in the process, which (as any engineering team knows!) can be stressful.

Overall, our team’s biggest challenge was figuring out how to complete a high volume of cross-functional work in the shortest time possible, while also setting a foundation for these improvements to persist over time.

As a frontend engineering and design team, we are very grateful for having had the opportunity to focus on this problem space and to learn from truly world-class accessibility experts along the way.

Accessibility matters to us, and we know it does to you. We’re proud of our progress, and there’s always more to do to make Cloudflare more usable for all of our customers. This is a critical piece of our foundation at Cloudflare, where we are building the most secure, performant and reliable solutions for the Internet. Stay tuned for what’s next!

Not using Cloudflare yet? Get started today and join us on our mission to build a better Internet.

[1] All references to “dashboard” in this post are specific to the primary user-authenticated Cloudflare web platform. This does not include Cloudflare’s product-specific dashboards, marketing, support, educational materials, or third party integrations.

More devices, fewer CAPTCHAs, happier users

Post Syndicated from Wesley Evans original https://blog.cloudflare.com/cap-expands-support/

Earlier this year we announced that we are committed to making online human verification easier for more users, all around the globe. We want to end the endless loops of selecting buses, traffic lights, and convoluted word diagrams. Not just because humanity wastes 500 years per day on solving other people’s machine learning problems, but because we are dedicated to making an Internet that is fast, transparent, and private for everyone. CAPTCHAs are not very human-friendly, being hard to solve for even the most dedicated Internet users. They are extremely difficult to solve for people who don’t speak certain languages, and people who are on mobile devices (which is most users!).

Today, we are taking another step in helping to reduce the Internet’s reliance on CAPTCHAs to prove that you are not a robot. We are expanding the reach of our Cryptographic Attestation of Personhood experiment by adding support for a much wider range of devices, including biometric authenticators like Apple’s Face ID, Microsoft’s Windows Hello, and Android Biometric Authentication. This will let you solve challenges in under five seconds with just a touch of your finger or a view of your face, without sending this private biometric data to anyone.

You can try it out on our demo site cloudflarechallenge.com and let us know your feedback.

In addition to support for hardware biometric authenticators, we are also announcing support for many more hardware tokens today in the Cryptographic Attestation of Personhood experiment. We support all USB and NFC keys that are both certified by the FIDO Alliance and have no known security issues according to the FIDO Alliance Metadata Service (MDS 3.0).

Privacy First

Privacy is at the center of Cryptographic Attestation of Personhood. Information about your biometric data is never shared with, transmitted to, or processed by Cloudflare. We simply never see it, at all. All of your sensitive biometric data is processed on your device. While each operating system manufacturer has a slightly different process, they all rely on the idea of a Trusted Execution Environment (TEE) or Trusted Platform Module (TPM). TEEs and TPMs allow only your device to store and use your encrypted biometric information. Any apps or software running on the device never have access to your data, and can’t transmit it back to their systems. If you are interested in digging deeper into how each major operating system implements their biometric security, read the articles we have linked below:

Apple (Face ID and Touch ID)

Face ID and Touch ID Privacy
Secure Enclave Overview

Microsoft (Windows Hello)

Windows Hello Overview
Windows Hello Technical Deep Dive

Google (Android Biometric Authentication)

Biometric Authentication
Measuring Biometric Unlock Security

Our commitment to privacy doesn’t end at the device. The WebAuthn API also prevents the transmission of biometric information. If a user interacts with a biometric key (like Apple Touch ID or Windows Hello), no third party — such as Cloudflare — can receive any of that biometric data: it remains on your device and is never transmitted by your browser. To get an even deeper understanding of how the Cryptographic Attestation of Personhood protects your private information, we encourage you to read our initial announcement blog post.
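
For illustration, a browser-side WebAuthn registration call looks roughly like the sketch below; the challenge, relying-party, and parameter values are placeholders, not Cloudflare's. The promise resolves with an attestation about the authenticator itself, while the biometric match happens entirely on the device.

// Hedged sketch of a WebAuthn credential creation request. The authenticator
// verifies your fingerprint or face locally and returns only an attestation.
async function requestAttestation(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // In practice the challenge comes from the server, not the client.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: {name: "example.com"},
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),
        name: "anonymous",
        displayName: "anonymous",
      },
      pubKeyCredParams: [{type: "public-key", alg: -7}], // ES256
      authenticatorSelection: {userVerification: "preferred"},
      attestation: "direct", // requests the make/model attestation statement
    },
  });
}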

Making a Great Privacy Story Even Better

While building out this technology, we have always been committed to providing the best possible privacy guarantees. The existing assurances are already quite strong: as described in our previous post, when using CAP, you may disclose a small amount of information about your authentication hardware — at most, its make and model. By design, this information is not unique to you: the underlying standard requires that this be indistinguishable from thousands of others, which hides you in a crowd of identical hardware.

Even though this is a great start, we wanted to see if we could push things even further. Could we learn just the one fact that we need (that you have a valid security key) without learning anything else?

We are excited to announce that we have made great strides toward answering that question by using a form of next-generation cryptography called Zero Knowledge Proofs (ZKP). ZKPs prevent us from learning anything about you other than the fact that you have a device that can generate an approved certificate proving that you are human.

If you are interested in learning more about Zero Knowledge Proofs, we highly recommend reading about it here.

Driven by Speed, Ease of Use, and Bot Defense

At the start of this project we set out three goals:

  1. How can we dramatically improve the user experience of proving that you are a human?
  2. How can we make sure that any solution that we develop centers around ease of use, and increases global accessibility on the Internet?
  3. How do we make a solution that does not harm our existing bot management system?

Before we deployed this new evolution of our CAPTCHA solution, we did a proof of concept test with some real users, to confirm whether this approach was worth pursuing (just because something sounds great in theory doesn’t mean it will work out in practice!). Our testing group tried our Cryptographic Attestation of Personhood solution using Face ID and told us what they thought. We learned that solving times with Face ID and Touch ID were significantly faster than selecting pictures, with almost no errors. People vastly preferred this alternative, and provided suggestions for improvements in the user experience that we incorporated into our deployment.

This was even more evident in our production usage data. While we did not enable attestation with biometrics providers during the first phase of the CAP roll-out, we did record when people attempted to use a biometric authentication service. The highest recorded attempted-solve rates, aside from YubiKeys, were for Face ID on iOS and Android Biometric Authentication. We are excited to offer those mobile users their first choice of challenge for proving their humanity.

One of the things we heard about in testing was — not surprisingly — concerns around privacy. As we explained in our initial launch, the WebAuthn technology underpinning our solution provides a high level of privacy protection. Cloudflare is able to view an identifier shared by thousands of similar devices, which provides general make and model information. This cannot be used to uniquely identify or track you — and that’s exactly how we like it.

Now that we have opened up our solution to more device types, you’re able to use biometric readers as well. Rest assured that the same privacy protections apply — we never see your biometric data: that remains on your device. Once your device confirms a match, it sends only a basic attestation message, just as it would for any security key. In effect, your device sends a message proving that “yes, someone correctly entered a fingerprint on this trustworthy device”, and never sends the fingerprint itself.

We have also been reviewing feedback from people who have been testing out CAP on either our demo site or through routine browsing. We appreciate the helpful comments you’ve been sending, and we’d love to hear more, so we can continue to make improvements. We’ve set up a survey at our demo site. Your input is key to ensuring that we get usability right, so please let us know what you think.

New Raspberry Pi OS release — December 2020

Post Syndicated from original https://www.raspberrypi.org/blog/new-raspberry-pi-os-release-december-2020/

Well, in a year as disrupted and strange as 2020, it’s nice to know that there are some things you can rely on, such as the traditional end-of-year release of Raspberry Pi OS, which we launch today. Here’s a run-through of the main new features you’ll find in it.

Chromium

We’ve updated the Chromium browser to version 84. This has taken us a bit longer than we would have liked, but it’s always quite a lot of work to get our video hardware acceleration integrated with new releases of the browser. That’s done now, so you should see good-quality video playback on sites like YouTube. We’ve also, given events this year, done a lot of testing and tweaking on video conferencing clients such as Google Meet, Microsoft Teams, and Zoom, and they should all now work smoothly on your Raspberry Pi’s Chromium.

Version 84 of the Chromium web browser

There’s one more thing to mention on the subject of web browsers. We’ve been shipping Adobe’s Flash Player as part of our Chromium install for several years now. Flash Player is being retired by Adobe at the end of the year, so this release will be the last that includes it. Most websites have now stopped requiring Flash Player, so this hopefully isn’t something that anyone notices!

PulseAudio

From this release onwards, we are switching Raspberry Pi OS to use the PulseAudio sound server.

First, a bit of background. Audio on Linux is really quite complicated. There are multiple different standards for handling audio input and output, and it does sometimes seem that what has happened, historically, is that whenever anyone wanted to use audio in Linux, they looked at the existing libraries and programs and went “Hmmm… I don’t like that, I’ll write something new and better.” This has resulted in a confused mass of competing and conflicting software, none of which quite works the way anyone wants it to!

The most common audio interface, which lies underneath most Linux systems somewhere, is called ALSA, the Advanced Linux Sound Architecture. This is a fairly reliable low-level audio interface (indeed, it is what Raspberry Pi OS has used up until now), but it has quite a lot of limitations and is starting to show its age. For example, it can only handle one input and one output at a time. So if ALSA is being used by your web browser to play sound from a YouTube video to the HDMI output on your Raspberry Pi, nothing else can produce sound at the same time; if you were to try playing a video or an audio file in VLC, you’d hear nothing but the audio from YouTube. Similarly, if you want to switch the sound from your YouTube video from HDMI to a USB sound card, you can’t do it while the video is playing; it won’t change until the sound stops. These aren’t massive problems, but most modern operating systems do handle audio in a more flexible fashion.

More significant is that ALSA doesn’t handle Bluetooth audio at all, so various other extensions and additional bits of software are required to even get audio into and out of Bluetooth devices on an ALSA-based system. We’ve used a third-party library called bluez-alsa for a few years now, but it’s an additional piece of code to maintain and update, so this isn’t ideal.

PulseAudio deals with all of this. It’s a piece of software that sits as a layer between all the audio hardware and all the applications that send and receive audio, and it automatically routes everything to the right places. It can mix the audio from multiple applications together, so you can hear VLC at the same time as YouTube, and it allows the output to be moved around between different devices while it is playing. It knows how to talk to Bluetooth devices, and it greatly simplifies the job of managing default input and output devices, so it makes it much easier to make sure audio ends up where it is supposed to be!

One area where it is particularly helpful is in managing audio input and output streams to web browsers like Chromium; in our testing, the use of PulseAudio made setting up video conferencing sessions much easier and more reliable, particularly with Bluetooth headsets and webcam audio.

The good news for Raspberry Pi users is that, if we’ve got it right, you shouldn’t even notice the change. PulseAudio now runs by default, and while the volume control and audio input/output selector on the taskbar looks almost identical to the one in previous releases of the OS, it is now controlling PulseAudio rather than ALSA. You can use it just as before: select your output and input devices, adjust the volume, and you’re good to go.

The PulseAudio input selector

There is one small change to the input/output selector, which is the menu option at the bottom for Device Profiles. In PulseAudio, any audio device has one or more profiles, which select which outputs and inputs are used on any device with multiple connections. (For example, some audio HATs and USB sound cards have both analogue and digital outputs — there will usually be a profile for each output to select where the audio actually comes out.)

The PulseAudio profile selector

Profiles are more important for Bluetooth devices. If a Bluetooth device has both an input and an output (such as a headset with both a microphone and an earphone), it usually supports two different profiles. One of these is called HSP (HeadSet Profile), and this allows you to use both the microphone and the earphone, but with relatively low sound quality — equivalent to that you hear on a mobile phone call, so fine for speech but not great for music. The other profile is called A2DP (Advanced Audio Distribution Profile), which gives much better sound quality, but is output-only: it does not allow you to use the microphone. So if you are making a call, you want your Bluetooth device to use HSP, but if you are listening to music, you want it to use A2DP.

We’ve automated some of this, so if you select a Bluetooth device as the default input, then that device is automatically switched to HSP. If you want to switch a device which is in HSP back to A2DP, just reselect it from the output menu. Its microphone will then be deactivated, and it will switch to A2DP. But sometimes you might want to take control of profiles manually, and the Device Profiles dialog allows you to do that.

(Note that if you are only using the Raspberry Pi’s internal sound outputs, you don’t need to worry about profiles at all, as there is only one, and it’s automatically selected for you.)

Some people who have had experience of PulseAudio in the past may be a little concerned by this change, because PulseAudio hasn’t always been the most reliable piece of software. However, it has now reached the point where it solves far more problems than it creates, which is why many other Linux distributions, such as Ubuntu, now use it by default. Most users shouldn’t even notice the change; there may be occasional issues with some older applications such as Sonic Pi, but the developers of these applications will hopefully address any issues in the near future.

Printing

One thing which has always been missing from Raspberry Pi OS is an easy way to connect to and configure printers. There is a Linux tool for this, called CUPS, the Common Unix Printing System. (It’s actually owned by Apple and is the underlying printing system used by macOS, but it is still free software and available for use by Linux distributions.)

CUPS has always been available in apt, so it could be installed on any Raspberry Pi, but the standard web-based interface is a bit unfriendly. Various third-party front-end tools have been written to make CUPS a bit easier to use, and we have decided to use one called system-config-printer. (Like PulseAudio, this is also used as standard by Ubuntu.)

So both CUPS and system-config-printer are now installed as part of Raspberry Pi OS. If you are a glutton for punishment, you can access the CUPS web interface by opening the Chromium browser and going to http://localhost:631, but instead of doing that, we suggest just going into the Preferences section in the main menu and opening Print Settings.

The new Printer Settings dialog

This shows the system-config-printer dialog, from which you can add new printers, remove old ones, set one as the default, and access the print queue for each printer, just as you should be familiar with from other operating systems.

Like most things in Linux, this relies on user contributions, so not every printer is supported. We’ve found that most networked printers work fine, but USB printers are a bit hit-and-miss as to whether there is a suitable driver; in general, the older your printer is, the more likely it is to have a CUPS driver available. The best thing to do is to try it and see, and perhaps ask for help on our forums if your particular printer doesn’t seem to work.

This fills in one of the last things missing in making Raspberry Pi a complete desktop computer, by making it easy to set up a printer and print from applications such as LibreOffice.

Accessibility

One of the areas we have tried to improve in the Desktop this year is accessibility for those with visual impairments. We added support for the Orca screen reader at the start of the year, and the display magnifier plugin over the summer.

While there are no completely new accessibility features this time, we have made some improvements to Orca support in applications like Raspberry Pi Configuration and Appearance Settings, to make them read what they are doing in a more helpful fashion; we’ve also worked with the maintainers of Orca to raise and fix a few bugs. It’s still not perfect, but we’re doing our best!

One of the benefits of switching to PulseAudio is that it now means that screen reader audio can be played through Bluetooth devices; this was not possible using the old ALSA system, so visually-impaired users who wish to use the screen reader with a Bluetooth headset or external speaker can now do so.

One feature we have added is an easy way to install Orca. It is still available through Recommended Software as before, but given that Recommended Software is not easy to navigate for a visually-impaired person, there is now a keyboard shortcut: just hold down ctrl and alt and press the space bar to automatically install Orca. A dialog box will be shown on the screen, and voice prompts will let you know when the install has started and finished.

And if you can’t remember that shortcut, when you first boot a new image, if you don’t do anything for thirty seconds or so, the startup wizard will now speak to you to remind you how to do it…

Finally, we had hoped to be able to say that Chromium was now compatible with Orca; screen reader support was being added to versions 8x. Unfortunately, for now this seems to only have been added for Windows and Mac versions, not the Linux build we use. Hopefully Google will address this in a future release, but for now if you need a web browser compatible with Orca, you’ll need to install Firefox from apt.

New hardware options

We’ve added a couple of options to the Raspberry Pi Configuration tool.

On the System tab, if you are running on a Raspberry Pi with a single status LED (i.e. a Raspberry Pi Zero or the new Raspberry Pi 400), there is now an option to select whether the LED just shows that the power is on, or flickers off to show drive activity.

LED control in Raspberry Pi Configuration

On the Performance tab, there are options to allow you to control the new Raspberry Pi Case Fan: you can select the GPIO pin to which it is connected and set the temperature at which it turns on and off.

Fan controls in Raspberry Pi Configuration

How do I get it?

The latest image can be installed on a new card using the Raspberry Pi Imager, or can be downloaded from our Downloads page.

To apply the updates to an existing image, you’ll need to enter the usual commands in a terminal window:

sudo apt update
sudo apt full-upgrade

(It is safe to just accept the default answer to any questions you are asked during the update procedure.)

Then, to install the PulseAudio Bluetooth support, you will need to enter the following commands in the terminal window:

sudo apt purge bluealsa
sudo apt install pulseaudio-module-bluetooth

Now reboot.

To swap over the volume and input selector on the taskbar from ALSA to PulseAudio, after your Raspberry Pi has restarted, right-click a blank area on the taskbar and choose Add / Remove Panel Items. Find the plugin labelled Volume Control (ALSA/BT) in the list, select it and click Remove; then click the Add button, find the plugin labelled Volume Control (PulseAudio) and click Add. Alternatively, just open the Appearance Settings application from the Preferences section of the Main Menu, go to the Defaults tab and press one of the Set Defaults buttons.

As ever, do let us know what you think in the comments.
