Tag Archives: design

Accelerating GitHub theme creation with color tooling

Post Syndicated from Cole Bemis original https://github.blog/2022-06-14-accelerating-github-theme-creation-with-color-tooling/

Dark mode is no longer a nice-to-have feature. It’s an expectation. Yet, for many teams, implementing dark mode is still a daunting task.

Creating a palette for dark interfaces is not as simple as inverting colors, and the complexity increases if your team is planning multiple themes. Many people find themselves juggling a combination of disjointed color tools, which can be a painful experience.

GitHub dark mode (unveiled at GitHub Universe in December 2020) was the result of trial and error, copy and paste, as well as back and forth in a Figma file (with more than 370,000 layers!).

A screenshot of the Figma file we made while designing GitHub dark mode

A few months after shipping dark mode, we began working on a dark high contrast theme to provide an option that maximizes legibility. While we were designing this new theme, we set out to improve our workflow by building an experimental tool to solve some of the challenges we encountered while designing the original dark color palette.

We’re calling our experimental color tool Primer Prism.

A sneak peek of Primer Prism

Part of GitHub’s Primer ecosystem, Primer Prism is a tool for creating and maintaining cohesive, consistent, and accessible color palettes. It allows us to:

  • Create or import color scales.
  • Adjust colors in a perceptually uniform color space (HSLuv).
  • Check contrast of color pairs.
  • Edit lightness curves across multiple color scales at once.
  • Export color palettes to production-ready code (JSON).

Our workflow

Our improved workflow for creating color palettes with Primer Prism is an iterative cycle made up of three steps:

  1. Defining tones
  2. Choosing colors
  3. Testing colors

Defining tones

We start by defining the color palette’s tonal character and contrast needs:

  • How light or dark should the background be?
  • What should the contrast ratio between the foreground and background be?

Although each palette will have a unique tonal character, we make sure that every palette meets contrast accessibility guidelines.

In Primer Prism, we start a new color palette by creating a new color scale and adjusting the lightness curve. In this phase, we’re only concerned with lightness and contrast. We’ll revisit hue and saturation later.

As we change the lightness of each color, Primer Prism checks the contrast of potential color pairings in the scale using the WCAG 2 standard.
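
As a concrete reference for what that check involves (this is the WCAG 2 formula itself, not Primer Prism’s actual source), the contrast ratio between two sRGB hex colors can be computed like this:

```typescript
// WCAG 2 relative luminance for an sRGB hex color, then the contrast ratio
// between two colors. Formulas mirror the WCAG 2 specification.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(a: string, b: string): number {
  const [lighter, darker] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio('#ffffff', '#0d1117'); // ≈ 18.9, well above the 4.5:1 AA threshold
```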

Dragging lightness sliders up and down to adjust the lightness curve of a scale

Primer Prism also allows us to share curves across multiple color scales. So, when we have more scales, we can quickly change the tonal character of the entire color palette by adjusting a single lightness curve.
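
As a rough illustration of what sharing a curve means (the data model below is ours, not Primer Prism’s), applying one lightness curve across every scale of an HSLuv palette could look like this:

```typescript
type HsluvColor = { h: number; s: number; l: number };
type Scale = HsluvColor[]; // e.g. 10 steps, index 0 = lightest tint

// Overwrite the lightness of every step in every scale with a shared curve,
// so the whole palette keeps the same tonal structure.
function applySharedLightnessCurve(
  scales: Record<string, Scale>,
  curve: number[],
): Record<string, Scale> {
  const result: Record<string, Scale> = {};
  for (const [name, scale] of Object.entries(scales)) {
    result[name] = scale.map((step, i) => ({ ...step, l: curve[i] ?? step.l }));
  }
  return result;
}
```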

Adjusting the lightness curve of all color scales at once

Primer Prism uses the HSLuv color space to ensure that the lightness values are perceptually uniform across the entire palette. In the HSLuv color space, two colors with the same lightness value will look equally bright.

Choosing colors

Next, we define the overall color character of our palette:

  • What hues do we need (for example: red, blue, green, etc.)?
  • How vibrant do we want the colors to be?

We create a color scale for every hue using the same lightness curve we made earlier. Then, we compare and adjust the base color (the fifth step in the scale) across all the color scales until the palette feels cohesive and consistent.

A side-by-side comparison of every color scale

After deciding on the base color for each scale, we fine-tune the tints (lighter colors) and shades (darker colors). Blue, for example, shifts towards green hues in the tints and purple hues in the shades.

The hue, saturation, and lightness curves of the blue color scale

Fine-tuning color scales is more of an art than a science and often requires many micro-adjustments before the colors “feel right.” Check out Color in UI Design: A (Practical) Framework by Eric D. Kennedy to learn more about the fundamentals of designing color scales.

Testing colors

To test our colors in real-world scenarios, we export the palette from Primer Prism as a JSON object and add it to Primer Primitives, our repository of design tokens. We use pre-releases of the Primer Primitives package to test new color palettes on GitHub.com.
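
As a simplified sketch of that export step (the real Primer Primitives token format is richer, and the hex values below are placeholders rather than GitHub’s palette), the output is essentially a map of scale names to arrays of colors serialized to JSON:

```typescript
import { writeFileSync } from 'node:fs';

// Placeholder scales; a real palette would carry the tuned values from the tool.
const palette: Record<string, string[]> = {
  gray: ['#f7f9fb', '#e8ecf1', '#cfd6de', '#aab3bf', '#89939f', '#6b7581', '#535d68', '#3e4750', '#2c333a', '#1c2126'],
  blue: ['#e3f1ff', '#c6e2ff', '#9ccdff', '#6cb2ff', '#3f94f5', '#1f78d1', '#155fa8', '#0f4a85', '#0b3a69', '#072a4e'],
};

// Write a production-ready JSON artifact that downstream tooling can consume.
writeFileSync('palette.json', JSON.stringify({ scale: palette }, null, 2) + '\n');
```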

The dark color palette applied to GitHub.com

What’s next

We used Primer Prism to design several new color palettes, accelerating the creation of dark high contrast, light high contrast, and colorblind themes for GitHub. Next, we plan to improve our tooling to support the following key workflows.

Visual testing workflow

We plan to integrate visual testing directly into Primer Prism. Currently, visual testing of color palettes happens outside of Primer Prism, typically in Figma or production applications. However, we want a more convenient way to visualize how the colors will look when mapped to functional variables and used in actual user interfaces.

GitHub workflow

We plan to integrate GitHub into Primer Prism. Right now, it’s a hassle to edit existing color palettes because Primer Prism is not connected to the GitHub repository where we store color variables (Primer Primitives). A GitHub integration will allow us to directly pull from and push to the Primer Primitives repository.

Figma workflow

Our designers use Figma to explore and test new design ideas. We plan to create a Figma plugin to seamlessly integrate Primer Prism into their workflow.

Try it out

Primer Prism is open source and available for anyone to use at primer.style/prism.

We’d love to hear what you think. If you have feedback, please create an issue or start a discussion in the GitHub repository.

Warning: Primer Prism is experimental. Expect bugs and breaking changes as we continue to iterate.

Thanks

Huge shout-out to @Juliusschaeper, @auareyou, @edokoa, and @broccolini for their incredible work on the GitHub dark mode color palette.

Primer Prism was inspired by many existing color tools:
  • ColorBox by Lyft
  • Components AI
  • Huetone by Alexey Ardov
  • Leonardo by Adobe
  • Palettte by Gabriel Adorf
  • Palx by Brent Jackson
  • Scale by Hayk An


A new WAF experience

Post Syndicated from Zhiyuan Zheng original https://blog.cloudflare.com/new-waf-experience/

Around three years ago, we brought multiple features into the Firewall tab in our dashboard navigation, with the motivation “to make our products and services intuitive.” Having worked hard to expand our capabilities over the past three years, we want to take another opportunity to evaluate the intuitiveness of the Cloudflare WAF (Web Application Firewall).

Our customers lead the way to new WAF

The security landscape is moving fast; the types of web applications are growing rapidly; and within the industry there are various approaches to what a WAF includes and can offer. Cloudflare proxies not only enterprise applications, but also millions of personal blogs, community sites, and small business stores. This diversity of use cases is covered by the various products we offer; however, these products are currently scattered, which makes it hard to see which protection rules are active. This pushes us to reflect on how we can best support our customers in getting the most value out of the WAF by providing a clearer offering that meets their expectations.

A few months ago, we reached out to our customers to answer a simple question: what do you consider to be part of WAF? We employed a range of user research methods including card sorting, tree testing, design evaluation, and surveys to help with this. The results of this research illustrated how our customers think about WAF, what it means to them, and how it supports their use cases. This inspired the product team to expand scope and contemplate what (Web Application) Security means, beyond merely the WAF.

Based on what hundreds of customers told us, our user research and product design teams collaborated with product management to rethink the security experience. We examined our assumptions and assessed the effectiveness of design concepts to create a structure (or information architecture) that reflected our customers’ mental models.

This new structure consolidates firewall rules, managed rules, and rate limiting rules to become a part of WAF. The new WAF strives to be the one-stop shop for web application security as it pertains to differentiating malicious from clean traffic.

As of today, you will see the following changes to our navigation:

  1. Firewall is being renamed to Security.
  2. Under Security, you will now find WAF.
  3. Firewall rules, managed rules, and rate limiting rules will now appear under WAF.

From now on, when we refer to WAF, we will be referring to the above three features.

Further, some important updates are coming for these features. Advanced rate limiting rules will be launched as part of Security Week, and every customer will also get a free set of managed rules to protect all traffic from high profile vulnerabilities. And finally, in the next few months, firewall rules will move to the Ruleset Engine, adding more powerful capabilities thanks to the new Ruleset API. Feeling excited?

How customers shaped the future of WAF

Almost 500 customers participated in this user research study that helped us learn about needs and context of use. We employed four research methods, all of which were conducted in an unmoderated manner; this meant people around the world could participate remotely at a time and place of their choosing.

  • Card sorting involved participants grouping navigational elements into categories that made sense to them.
  • Tree testing assessed how well or poorly a proposed navigational structure performed for our target audience.
  • Design evaluation involved a task-based approach to measure effectiveness and utility of design concepts.
  • Survey questions helped us dive deeper into results, as well as paint a picture of our participants.

Results of this four-pronged study informed changes to both WAF and Security that are detailed below.

The new WAF experience

The final result reveals the WAF as part of a broader Security category, which also includes Bots, DDoS, API Shield and Page Shield. This destination enables you to create your rules (a.k.a. firewall rules), deploy Cloudflare managed rules, set rate limit conditions, and includes handy tools to protect your web applications.

All customers across all plans will now see the WAF products organized as below:

  1. Firewall rules allow you to create custom, user-defined logic to block or allow traffic, leveraging all the components of the HTTP request as well as dynamic fields computed by Cloudflare, such as the Bot score (see the sketch after this list).
  2. Rate limiting rules include the traditional IP-based product we launched back in 2018 and the newer Advanced Rate Limiting for ENT customers on the Advanced plan (coming soon).
  3. Managed rules allow customers to deploy sets of rules managed by the Cloudflare analyst team. These rulesets include a “Cloudflare Free Managed Ruleset” currently being rolled out for all plans including FREE, as well as Cloudflare Managed, OWASP implementation, and Exposed Credentials Check for all paying plans.
  4. Tools give access to IP Access Rules, Zone Lockdown and User Agent Blocking. Although still actively supported, these products cover specific use cases that can also be addressed with firewall rules. However, they remain a part of the WAF toolbox for convenience.
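
To make the first item more concrete, here is a hedged sketch of creating a firewall rule programmatically. The endpoint, payload shape, and field names come from Cloudflare’s public Firewall Rules API documentation of that era rather than from this post, and the bot score field assumes Bot Management is enabled on the zone; treat the whole thing as illustrative.

```typescript
// Illustrative: block likely-automated traffic on admin paths via the
// Firewall Rules API. Zone ID and token are supplied by the caller.
async function blockBotTrafficOnAdminPaths(zoneId: string, apiToken: string): Promise<void> {
  const rule = {
    action: 'block',
    filter: {
      // Expressions combine HTTP request fields with dynamic fields such as the bot score.
      expression: 'http.request.uri.path contains "/admin" and cf.bot_management.score lt 30',
      description: 'Block likely-automated traffic on admin paths',
    },
  };

  await fetch(`https://api.cloudflare.com/client/v4/zones/${zoneId}/firewall/rules`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([rule]), // the API accepts an array of rules
  });
}
```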

Redesigning the WAF experience

Gestalt design principles suggest that “elements which are close in proximity to each other are perceived to share similar functionality or traits.” This principle in addition to the input from our customers informed our design decisions.

After reviewing the responses of the study, we understood the importance of making it easy to find the security products in the Dashboard, and the need to make it clear how particular products were related to or worked together with each other.

Crucially, the page needed to:

  • Display each type of rule we support, i.e. firewall rules, rate limiting rules and managed rules
  • Show the usage amount of each type
  • Give the customer the ability to add a new rule and manage existing rules
  • Allow the customer to reprioritise rules using the existing drag and drop behavior
  • Be flexible enough to accommodate future additions and consolidations of WAF features

We iterated on multiple options, including predominantly vertical page layouts, table based page layouts, and even accordion based page layouts. Each of these options, however, would force us to replicate buttons of similar functionality on the page. With the risk of causing additional confusion, we abandoned these options in favor of a horizontal, tabbed page layout.

How can I get it?

As of today, we are launching this new design of WAF to everyone! In the meantime, we are updating documentation to walk you through how to maximize the power of Cloudflare WAF.

Looking forward

This is the starting point of our journey to make the Cloudflare WAF not only powerful but also easy to adapt to your needs. We are evaluating approaches to empower your decision-making process when protecting your web applications. With growing threat intelligence and more ways to create rules, we want to shorten your path from detecting a possible threat (for example, in the security overview) to setting up the right rule to mitigate it. Stay tuned!

How Grab built a scalable, high-performance ad server

Post Syndicated from Grab Tech original https://engineering.grab.com/scalable-ads-server

Why ads?

GrabAds is a service that provides businesses with an opportunity to market their products to Grab’s consumer base. During the pandemic, as the demand for food delivery grew, we realised that ads could be a service we offer to our small restaurant merchant-partners to expand their reach. This would allow them to not only mitigate the loss of in-person traffic but also grow by attracting more customers.

Many of these small merchant-partners had no experience with digital advertising and we provided an easy-to-use, scalable option that could match their business size. On the other side of the equation, our large network of merchant-partners provided consumers with more choices. For hungry consumers stuck at home, personalised ads and promotions helped them satisfy their cravings, thus fulfilling their intent of opening the Grab app in the first place!

Why build our own ad server?

Building an ad server is an ambitious undertaking and one might rightfully ask why we should invest the time and effort to build a technically complex distributed system when there are several reasonable off-the-shelf solutions available.

The answer is we didn’t, at least not at first. We used one of these off-the-shelf solutions to move fast and build a minimally viable product (MVP). The result of this experiment was a resounding success; we were providing clear value to our merchant-partners, our consumers and Grab’s overall business.

However, to take things to the next level meant scaling the ads business up exponentially. Apart from being one of the few companies with the user engagement to support an ads business at scale, we also have an ecosystem that combines our network of merchant-partners, an understanding of our consumers’ interactions across multiple services in the Grab superapp, and a payments solution, GrabPay, to close the loop. Furthermore, given the hyperlocal nature of our business, the in-app user experience is highly customised by location. In order to integrate seamlessly with this ecosystem, scale as Grab’s overall business grows and handle personalisation using machine learning (ML), we needed an in-house solution.

What we built

We designed and built a set of microservices, streams and pipelines which orchestrated the core ad serving functionality, as shown below.

Search data flow
  1. Targeting – This is the first step in the ad serving flow. We fetch a set of candidate ads specifically targeted to the request based on keywords the user searched for, the user’s location, the time of day, and the data we have about the user’s preferences or other characteristics. We chose ElasticSearch as the data store for our ads repository as it allows us to query based on a disparate set of targeting criteria.
  2. Capping – In this step, we filter out candidate ads which have exceeded various caps. This includes cases where an advertising campaign has already reached its budget goal, as well as custom requirements about how frequently an ad is allowed to be shown to the same user. In order to make this decision, we need to know how much budget has already been spent and how many times an ad has already been shown. We chose ScyllaDB to store these “stats” because it is scalable, low-cost, and can handle the large read and write requirements of this process (more on how this data gets written to ScyllaDB in the Tracking step).
  3. Pacing – In this step, we alter the probability that a matching ad candidate can be served, based on a specific campaign goal. For example, in some cases, it is desirable for an ad to be shown evenly throughout the day instead of exhausting the entire ad budget as soon as possible. Similar to Capping, we require access to information on how many times an ad has already been served and use the same ScyllaDB stats store for this.
  4. Scoring – In this step, we score each ad. There are a number of factors that can be used to calculate this score including predicted clickthrough rate (pCTR), predicted conversion rate (pCVR) and other heuristics that represent how relevant an ad is for a given user.
  5. Ranking – This is where we compare the scored candidate ads with each other and make the final decision on which candidate ads should be served. This can be done in several ways such as running a lottery or performing an auction. Having our own ad server allows us to customise the ranking algorithm in countless ways, including incorporating ML predictions for user behaviour. The team has a ton of exciting ideas on how to optimise this step and now that we have our own stack, we’re ready to execute on those ideas.
  6. Pricing – After choosing the winning ads, the final step before actually returning those ads in the API response is to determine what price we will charge the advertiser. In an auction, this is called the clearing price and can be thought of as the minimum bid price required to outbid all the other candidate ads. Depending on how the ad campaign is set up, the advertiser will pay this price if the ad is seen (i.e. an impression occurs), if the ad is clicked, or if the ad results in a purchase.
  7. Tracking – Here, we close the feedback loop and track what users do when they are shown an ad. This can include viewing an ad and ignoring it, watching a video ad, clicking on an ad, and more. The best outcome is for the ad to trigger a purchase on the Grab app, for example, placing a GrabFood order with a merchant-partner, which provides that merchant-partner with a new consumer. We track these events using a series of API calls, Kafka streams and data pipelines. The data ultimately ends up in our ScyllaDB stats store and can then be used by the Capping and Pacing steps above.
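
As a rough sketch of how such a pipeline can be composed (the types, names, and wiring below are illustrative, not Grab’s actual code), each step can be modelled as a stage that filters, annotates, or re-orders the surviving candidate ads:

```typescript
interface AdRequest {
  userId: string;
  keywords: string[];
  location: { lat: number; lng: number };
}

interface AdCandidate {
  adId: string;
  campaignId: string;
  bid: number;
  score?: number; // filled in by the scoring stage
}

// Each stage receives the request plus the surviving candidates and returns a
// (possibly smaller or re-ordered) candidate list.
type Stage = (req: AdRequest, candidates: AdCandidate[]) => Promise<AdCandidate[]>;

async function serveAds(req: AdRequest, stages: Stage[]): Promise<AdCandidate[]> {
  let candidates: AdCandidate[] = [];
  for (const stage of stages) {
    candidates = await stage(req, candidates);
    if (candidates.length === 0) break; // nothing left to rank or price
  }
  return candidates;
}

// Wiring the steps described above: targeting seeds the list, the remaining
// stages filter, score, rank, and price it.
// serveAds(request, [targeting, capping, pacing, scoring, ranking, pricing]);
```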

Principles

In addition to all the usual distributed systems best practices, there are a few key principles that we focused on when building our system.

  1. Latency – Latency is important for ads. If the user scrolls faster than an ad can load, the ad won’t be seen. The longer an ad remains on the screen, the more likely the user will notice it, have their interest piqued and click on it. As such, we set strict limits on the latency of the ad serving flow. We spent a large amount of effort tuning ElasticSearch so that it could return targeted ads in the shortest amount of time possible. We parallelised parts of the serving flow wherever possible and we made sure to A/B test all changes both for business impact and to ensure they did not increase our API latency.
  2. Graceful fallbacks – We need user-specific information to make personalised decisions about which ads to show to a given user. This data could come in the form of segmentation of our users, attributes of a single user or scores derived from ML models. All of these require the ad server to make dependency calls that could add latency to the serving flow. We followed the principle of setting strict timeouts and having graceful fallbacks when we can’t fetch the data needed to return the most optimal result. This could be due to network failures or dependencies operating slower than usual. It’s often better to return a non-personalised result than no result at all (see the sketch after this list).
  3. Global optimisation – Predicting supply (the number of users viewing the app) and demand (the number of advertisers wanting to show ads to those users) is difficult. As a superapp, we support multiple types of ads on various screens. For example, we have image ads, video ads, search ads, and rewarded ads. These ads could be shown on the home screen, when booking a ride, or when searching for food delivery. We intentionally decided to have a single ad server supporting all of these scenarios. This allows us to optimise across all users and app locations. This also ensures that engineering improvements we make in one place translate everywhere where ads or promoted content are shown.
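
A minimal sketch of the graceful-fallback principle from item 2, assuming a generic dependency call and an arbitrary timeout; the helper and names are illustrative:

```typescript
// Race a dependency call against a strict timeout and fall back to a
// non-personalised default if it loses or fails.
async function withFallback<T>(call: Promise<T>, fallback: T, timeoutMs = 50): Promise<T> {
  const timeout = new Promise<T>((resolve) => setTimeout(() => resolve(fallback), timeoutMs));
  try {
    return await Promise.race([call, timeout]);
  } catch {
    return fallback; // network failure: degrade gracefully instead of failing the request
  }
}

// Example: fall back to an empty segment list if the user-profile service is slow.
// const userSegments = await withFallback(fetchUserSegments(userId), []);
```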

What’s next?

Grab’s ads business is just getting started. As the number of users and use cases grow, ads will become a more important part of the mix. We can help our merchant-partners grow their own businesses while giving our users more options and a better experience.

Some of the big challenges ahead are:

  1. Optimising our real-time ad decisions, including exciting work on using ML for more personalised results. There are many factors that can be considered in ad personalisation such as past purchase history, the user’s location and in-app browsing behaviour. Another area of optimisation is improving our auction strategy to ensure we have the most efficient ad marketplace possible.
  2. Expanding the types of ads we support, including experimenting with new types of content, finding the best way to add value as Grab expands its breadth of services.
  3. Scaling our services so that we can match Grab’s velocity and handle growth while maintaining low latency and high reliability.

Join us

Grab is a leading superapp in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across over 400 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Designing products and services based on Jobs to be Done

Post Syndicated from Grab Tech original https://engineering.grab.com/designing-products-and-services-based-on-jtbd

Introduction

In 2016, Clayton Christensen, a Harvard Business School professor, wrote a book called Competing Against Luck. In his book, he talked about the kind of jobs that exist in our everyday life and how we can uncover hidden jobs through the act of non-consumption. Non-consumption is the inability of a consumer to fulfil an important Job to be Done (JTBD).

JTBD is a framework; it is a different way of looking at consumer goals and is based on the notion that people buy products and services to get a job done. In this article, we will walk through what the JTBD framework is, look at an example of a popular JTBD, and look at how we use the JTBD framework in one of Grab’s services.

JTBD framework

In his book, Clayton Christensen uses the milkshake as a JTBD example. In the mid-90s, a fast food chain was trying to understand how to improve the milkshakes they were selling and how they could sell more milkshakes. To sell more, they needed to improve the product. To understand the job of the milkshake, they interviewed their customers. They asked their customers why they were buying the milkshakes, and what progress the milkshake would help them make.

Job 1: To fill their stomachs

One of the key insights was the first job: customers wanted something that could fill their stomachs during their early morning commute to the office. Usually, these car drives would take one to two hours, so they needed something to keep them awake and keep themselves full.

In this scenario, the competition could be a banana, but think about the properties of a banana. A banana could fill your stomach but your hands get dirty and sticky after peeling it. Bananas cannot do a good job here. Another competitor could be a Snickers bar, but it is rather unhealthy, and depending on how many bites you take, you could finish it in one minute.

By understanding the job the milkshake was performing, the restaurant now had a specific way of improving the product. The milkshake could be made milkier so it takes time to drink through a straw. The customer can then enjoy the milkshake throughout the journey; the milkshake is optimised for the job.

Milkshake

Job 2: To make children happy

As part of the study, they also interviewed parents who came to buy milkshakes in the afternoon, around 3:00 PM. They found out that the parents were buying the milkshakes to make their children happy.

By knowing this, they were able to optimise the job by offering a smaller version of the milkshake which came in different flavours like strawberry and chocolate. From this milkshake example, we learn that multiple jobs can exist for one product. From that, we can make changes to a product to meet those different jobs.

JTBD at GrabFood

A team at GrabFood wanted to prioritise which features or products to build, and performed a prioritisation exercise. However, there was a lack of fundamental understanding of why our consumers were using GrabFood or any other food delivery services. To gain deeper insights on this, we conducted a JTBD study.

We applied the JTBD framework in our research investigation. We used the force diagram framework to find out what job a consumer wanted to achieve and the corresponding push and pull factors driving the consumer’s decision. A job here is defined as the progress that the consumer is trying to make in a particular context.

Force diagram

There were four key points in the force diagram:

  • What jobs are people using GrabFood for?
  • What did people use prior to GrabFood to get the jobs done?
  • What pushed them to seek a new solution? What is attractive about this new solution?
  • What are the things that will make them go back to the old product? What are the anxieties of the new product?

By applying this framework, we progressively asked these questions in our interview sessions:

  • Can you remind us of the last time you used GrabFood? — This was to uncover the situation or the circumstances.
  • Why did you order this food? — This was to get down to the core of the need.
  • Can you tell us, before GrabFood, what did you use to get the same job done?

From the interview sessions, we were able to uncover a number of JTBDs; one example was working parents buying food for their families. Before GrabFood, most of them were buying from food vendors directly, but that is a time-consuming activity and it adds additional friction to an already busy day. This led them to search for a new solution, and GrabFood provided that solution.

Let’s look at this JTBD in more depth. One anxiety that parents had when ordering GrabFood was the sheer number of choices they had to make in order to check out their order:

Force diagram – inertia, anxiety

There was already a solution for this problem: bundles! Food bundles are a well-known concept in the food and beverage industry; items that complement each other are bundled together for a more efficient checkout experience.

Force diagram – pull, push

However, not all GrabFood merchants created bundles to solve this problem for their consumers. This was an untapped opportunity for the merchants to solve a critical problem for their consumers. Eureka! We knew that we needed to help merchants create bundles in an efficient way to solve for the consumer’s JTBD.

We decided to add a functionality to the GrabMerchant app that allowed merchants to create bundles. We built an algorithm that matched complementary items and automatically suggested these bundles to merchants. The merchant only had to tap a button to create a bundle instantly.
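
One simple way such a matching could work, sketched here purely for illustration and not as Grab’s actual algorithm, is to count how often pairs of items appear together in past orders and suggest the strongest pairs as bundles:

```typescript
type Order = string[]; // item IDs in a single order

// Suggest bundles from co-occurrence counts in past orders.
function suggestBundles(orders: Order[], minCount = 20): [string, string][] {
  const pairCounts = new Map<string, number>();
  for (const order of orders) {
    for (let i = 0; i < order.length; i++) {
      for (let j = i + 1; j < order.length; j++) {
        const key = [order[i], order[j]].sort().join('|'); // order-independent pair key
        pairCounts.set(key, (pairCounts.get(key) ?? 0) + 1);
      }
    }
  }
  // Keep frequently co-ordered pairs, strongest first.
  return [...pairCounts.entries()]
    .filter(([, count]) => count >= minCount)
    .sort((a, b) => b[1] - a[1])
    .map(([key]) => key.split('|') as [string, string]);
}
```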

Bundle

The feature was released and thousands of restaurants started adding bundles to their menu. Our JTBD analysis proved to be correct: food and beverage entrepreneurs were now equipped with an essential tool to drive growth and we removed an obstacle for parents to choose GrabFood to solve for their JTBD.

Conclusion

At Grab, we understand the importance of research. We educate designers and other non-researcher employees to conduct research studies. We also encourage the sharing of research findings, and we ensure that research insights are consumable. By using the JTBD framework and asking questions specifically to understand the job of our consumers and partners, we are able to gain a fundamental understanding of why our consumers are using our products and services. This helps us improve our products and services, and optimise them for the jobs that need to be done throughout Southeast Asia.

This article was written based on an episode of the Grab Design Podcast – a conversation with Grab Lead Researcher Soon Hau Chua. Want to listen to the Grab Design Podcast? Join the team, we’re hiring!


Special thanks to Amira Khazali and Irene from Tech Learning.


Join us

Grab is a leading superapp in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across over 400 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Reshaping Chat Support for Our Users

Post Syndicated from Grab Tech original https://engineering.grab.com/reshaping-chat-support

Introduction

The Grab support team plays a key role in ensuring our users receive support when things don’t go as expected or whenever there are questions on our products and services.

In the past, when users required real-time support, their only option was to call our hotline and wait in the queue to talk to an agent. But voice support has its downsides: sometimes it is complex to describe an issue in the app, and it requires the user’s full attention on the call.

With chat messaging apps growing massively in recent years, chat has become the expected support channel users are familiar with. It offers real-time support with the option of multitasking and easily explaining the issue by sharing pictures and documents. Compared to voice support, chat provides access to the conversation for future reference.

With chat growing, building a chat system tailored to our support needs and integrated with internal data seemed to be the next natural move.

In our previous articles, we covered the tech challenges of building the chat platform for the web, our workforce routing system and improving agent efficiency with machine learning. In this article, we will explain our approach and key learnings when building our in-house chat for support from a Product and Design angle.

A glimpse at agent and user experience

Why Reinvent the Wheel

We wanted to deliver a product that would fully delight our users. That’s why we decided to build an in-house chat tool that can:

  1. Prevent chat disconnections and ensure a consistent chat experience: Building a native chat experience allowed us to ensure a stable chat session, even when users leave the app. Besides, leveraging the existing Grab chat infrastructure helped us achieve this quickly and ensure the chat experience is consistent throughout the app. You can read more about the chat architecture here.
  2. Improve productivity and provide faster support turnarounds: By building the agent experience in the CRM tool, we could reduce the number of tools the support team uses and build features tailored to our internal processes. This helped to provide faster help for our users.
  3. Allow integration with internal systems and services: Chat can be easily integrated with in-house AI models or chatbot, which helps us personalise the user experience and improve agent productivity.
  4. Route our users to the best support specialist available: Our newly built routing system accounts for all the use cases we were wishing for such as prioritising certain requests, better distribution of the chat load during peak hours, making changes at scale and ensuring each chat is routed to the best support specialist available.

Fail Fast with an MVP

Before building a full-fledged solution, we needed to prove the concept, an MVP that would have the key features and yet, would not take too much effort if it fails. To kick start our experiment, we established the success criteria for our MVP; how do we measure its success or failure?

Defining What Success Looks Like

Any experiment requires a hypothesis, something you’re trying to prove or disprove, and it should relate to your final product. To tailor the final product around the success criteria, we needed to understand how success is measured in our situation. In our case, disconnections during chat support were one of the key challenges we faced, so our hypothesis was:

Starting with Design Sprint

Our design sprint aimed to find solutions to a series of problem statements and generate a prototype to validate our hypothesis. To spark ideation, we ran sketching exercises such as Crazy 8 and Solution Sketch, and ended with sharing and voting.


Some of the prototypes built during the Design sprint

Defining MVP Scope to Run the Experiment

To test our hypothesis quickly, we had to cut the scope by focusing on the basic functionality of allowing chat message exchanges with one agent.

Here is the main flow and a sneak peek of the design:

Accepting chats
Handling concurrent chats

What We Learnt from the Experiment

During the experiment, we had to constantly put ourselves in our users’ shoes as ‘we are not our users’. We decided to shadow our chat support agents and get a sense of the potential issues our users actually face. By doing so, we learnt a lot about how the tool was used and spotted several problems to address in the next iterations.

In the end, the experiment confirmed our hypothesis that having a native in-app chat was more stable than the previous chat in use, resulting in a better user experience overall.

Starting with the End in Mind

Once the experiment was successful, we focused on scaling. We defined the most critical jobs to be done for our users so that we could scale the product further. When designing solutions to tackle each of them, we ensured that the product would be flexible enough to address future pain points. Would this work for more channels, more users, more products, more countries?

Before scaling, the problems to solve were:

  • Monitoring the performance of the system in real-time, so that swift operational changes can be made to ensure users receive fast support;
  • Routing each chat to the best agent available, considering skills, occupancy, as well as issue prioritisation. You can read more about our routing system design here;
  • Easily communicating with users and showing empathy, for which we built file-sharing capabilities for both users and agents, as well as emoji support, which creates a more personalised experience.

Scaling Efficiently

We broke down the chat support journey to determine what areas could be improved.

Reducing Waiting Time

When analysing the current wait time, we realised that when there was a surge in support requests, the average waiting time increased drastically. In these cases, most users would be unresponsive by the time an agent finally attended to them.

To solve this problem, the team worked on a dynamic queue limit concept based on Little’s law. The idea is that, considering the number of incoming chats and the agents’ capacity, we can forecast the number of users we can handle in a reasonable time and prevent the rest from initiating a chat. When this happens, we ensure there’s a backup support channel so that no user is left unattended.
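
As a back-of-the-envelope illustration of the idea (the formula and numbers below are ours, not Grab’s production logic), Little’s law states L = λ × W: the average number of users in the system equals throughput times the time each spends in it. Given the agents’ sustainable chat throughput and an acceptable wait, that bounds how many users should be admitted to the queue:

```typescript
// Estimate a dynamic queue limit from Little's law: L = λ × W.
// Throughput λ is how many chats the agents can finish per minute;
// W is the longest wait we are willing to impose on a queued user.
function dynamicQueueLimit(
  agentsOnline: number,
  chatsPerAgent: number,    // concurrent chats an agent can handle
  avgHandleTimeMin: number, // minutes to resolve one chat
  acceptableWaitMin: number,
): number {
  const throughputPerMin = (agentsOnline * chatsPerAgent) / avgHandleTimeMin;
  return Math.floor(throughputPerMin * acceptableWaitMin);
}

// e.g. 40 agents × 3 chats each, 12 minutes per chat, 6 minutes acceptable wait
dynamicQueueLimit(40, 3, 12, 6); // => 60 users allowed in the queue
```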

This allowed us to reduce chat waiting time by ~30% and reduce unresponsive users by ~7%.

Reducing Time to Reply

A big part of the chat time is spent typing the message to send to the user. Although the previous tool had templated messages, we observed that 85% of them were free-typed. This is because agents felt the templates were impersonal and wanted to add their personal style to the messages.

With this information in mind, we knew we could help by providing autocomplete suggestions while the agents are typing. We built a machine learning based feature that considers several factors such as user type, the entry point to support, and the last messages exchanged, to suggest how the agent should complete the sentence. When this feature was first launched, we reduced the average chat time by 12%!

Read this to find out more about how we built this machine learning feature, from defining the problem space to its implementation.


Reducing the Overall Chat Time

Looking at the average chat time, we realised that there was still room for improvement. How can we help our agents to manage their time better so that we can reduce the waiting time for users in the queue?

We needed to provide visibility of chat durations so that our agents could manage their time better. So, we added a timer at the top of each chat window to indicate how long the chat was taking.

Timer in the minimised chat

We also added nudges to remind agents that they had other users to attend to while they were in the chat.

Timer in the maximised chat

By providing visibility via prompts and colour-coded indicators to prevent exceeding the expected chat duration, we reduced the average chat time by 22%!

What We Learnt from this Project

  • Start with the end in mind. When you embark on a big project like this, have a clear vision of what the end state looks like and plan each step backwards. What does success look like and how are we going to measure it? How do we get there?
  • Data is king. Data helped us spot issues in real-time and guided us through all the iterations following the MVP. It helped us prioritise the most impactful problems and take the right design decisions. Instrumentation must be part of your MVP scope!
  • Remote user testing is better than no user testing at all. Ideally, you want to do user testing in the exact environment your users will be using the tool but a pandemic might make things a bit more complex. Don’t let this stop you! The qualitative feedback we received from real users, even with a prototype on a video call, helped us optimise the tool for their needs.
  • Address the root cause, not the symptoms. Whenever you are tasked with solving a big problem, break it down into its components by asking “Why?” until you find the root cause. In the first phases, we realised the tool had a longer chat time compared to third-party software. By iteratively splitting the problem into smaller ones, we were able to address the root causes instead of the symptoms.
  • Shadow your users whenever you can. By looking at the users in action, we learned a ton about their creative ways to go around the tool’s limitations. This allowed us to iterate further on the design and help them be more efficient.

Of course, this would not have been possible without the incredible work of several teams: CSE, CE, Comms platform, Driver and Merchant teams.


Join Us

Grab is the leading superapp platform in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across 428 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Architecting for Reliable Scalability

Post Syndicated from Marwan Al Shawi original https://aws.amazon.com/blogs/architecture/architecting-for-reliable-scalability/

Cloud solutions architects should ideally “build today with tomorrow in mind,” meaning their solutions need to cater to current scale requirements as well as the anticipated growth of the solution. This growth can be either the organic growth of a solution or it could be related to a merger and acquisition type of scenario, where its size is increased dramatically within a short period of time.

Still, when a solution scales, many architects experience added complexity to the overall architecture in terms of its manageability, performance, security, etc. By architecting your solution or application to scale reliably, you can avoid the introduction of additional complexity, degraded performance, or reduced security as a result of scaling.

Generally, a solution or service’s reliability is influenced by its uptime, performance, security, manageability, etc. In order to achieve reliability in the context of scale, take into consideration the following primary design principles.

Modularity

Modularity aims to break a complex component or solution into smaller parts that are less complicated and easier to scale, secure, and manage.

Figure 1: Monolithic architecture vs. modular architecture

Modular design is commonly used in modern application development, where an application’s software is constructed of multiple, loosely coupled building blocks (functions). These functions collectively integrate through pre-defined common interfaces or APIs to form the desired application functionality (commonly referred to as a microservices architecture).


Figure 2: Scalable modular applications

For more details about building highly scalable and reliable workloads using a microservices architecture, refer to Design Your Workload Service Architecture.

This design principle can also be applied to different components of the solution’s architecture. For example, when building a cloud solution on a single Amazon VPC, it may reach certain scaling limits and make it harder to introduce changes at scale due to the higher level of dependencies. This single complex VPC can be divided into multiple smaller and simpler VPCs. The architecture based on multiple VPCs can vary. For example, the VPCs can be divided based on a service or application building block, a specific function of the application, or on organizational functions like a VPC for various departments. This principle can also be leveraged at a regional level for very high scale global architectures. You can make the architecture modular at a global level by distributing the multiple VPCs across different AWS Regions to achieve global scale (facilitated by AWS Global Infrastructure).

In addition, modularity promotes separation of concerns by having well-defined boundaries among the different components of the architecture. As a result, each component can be managed, secured, and scaled independently. Also, it helps you avoid what is commonly known as “fate sharing,” where a vertically scaled server hosts a monolithic application, and any failure to this server will impact the entire application.

Horizontal scaling

Horizontal scaling, commonly referred to as scale-out, is the capability to automatically add systems/instances in a distributed manner in order to handle an increase in load. Examples of this increase in load could be the increase of number of sessions to a web application. With horizontal scaling, the load is distributed across multiple instances. By distributing these instances across Availability Zones, horizontal scaling not only increases performance, but also improves the overall reliability.

In order for the application to work seamlessly in a scale-out distributed manner, the application needs to be designed to support a stateless scaling model, where the application’s state information is stored and requested independently from the application’s instances. This makes the on-demand horizontal scaling easier to achieve and manage.
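
A minimal sketch of the stateless model, assuming a generic session store interface; in practice it could be backed by a managed service such as DynamoDB or ElastiCache:

```typescript
// Externalize session state so web instances stay stateless.
interface SessionStore {
  get(sessionId: string): Promise<Record<string, unknown> | null>;
  put(sessionId: string, data: Record<string, unknown>): Promise<void>;
}

// Any instance behind the load balancer can serve any request, because the
// state lives in the shared store rather than in instance memory.
async function handleRequest(store: SessionStore, sessionId: string) {
  const session = (await store.get(sessionId)) ?? { visits: 0 };
  session.visits = (session.visits as number) + 1;
  await store.put(sessionId, session);
  return session;
}
```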

This principle can be complemented with the modularity design principle, in which the scaling model can be applied to certain components or microservices of the application stack. For example, you can scale out only the Amazon Elastic Compute Cloud (EC2) front-end web instances that reside behind an Elastic Load Balancing (ELB) layer, using Auto Scaling groups. In contrast, this elastic horizontal scalability might be very difficult to achieve for a monolithic type of application.
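
As a sketch of that pattern using the AWS CDK in TypeScript (construct names and properties may vary between CDK versions, so treat this as illustrative rather than a drop-in template):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ScaleOutWebTier');

// Spread instances across Availability Zones for reliability.
const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 3 });

const asg = new autoscaling.AutoScalingGroup(stack, 'WebAsg', {
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  minCapacity: 2,
  maxCapacity: 10,
});

// Front the fleet with an Application Load Balancer and scale on CPU load.
const alb = new elbv2.ApplicationLoadBalancer(stack, 'Alb', { vpc, internetFacing: true });
alb.addListener('Http', { port: 80 }).addTargets('WebFleet', { port: 80, targets: [asg] });
asg.scaleOnCpuUtilization('CpuScaling', { targetUtilizationPercent: 50 });
```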

Leverage the content delivery network

Leveraging Amazon CloudFront and its edge locations as part of the solution architecture can enable your application or service to scale rapidly and reliably at a global level, without adding any complexity to the solution. The integration of a CDN can take different forms depending on the solution use case.

For example, CloudFront played an important role to enable the scale required throughout Amazon Prime Day 2020 by serving up web and streamed content to a worldwide audience, which handled over 280 million HTTP requests per minute.

Go serverless where possible

As discussed earlier in this post, modular architectures based on microservices reduce the complexity of the individual component or microservice. At scale, however, they may introduce a different type of complexity related to the number of these independent components (microservices). This is where serverless services can help to reduce such complexity reliably and at scale. With this design model you no longer have to provision, manually scale, or maintain servers, operating systems, or runtimes to run your applications.

For example, you may consider using a microservices architecture to modernize an application at the same time to simplify the architecture at scale using Amazon Elastic Kubernetes Service (EKS) with AWS Fargate.

Figure 3: Example of a serverless microservices architecture

In addition, an event-driven serverless capability like AWS Lambda is key in today’s modern scalable cloud solutions, as it handles running and scaling your code reliably and efficiently. See How to Design Your Serverless Apps for Massive Scale and 10 Things Serverless Architects Should Know for more information.
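
For completeness, a Lambda handler is just a function; the platform takes care of provisioning and scaling concurrency. The event shape below is an assumption for illustration and depends on the trigger you attach:

```typescript
// Lambda scales the number of concurrent executions for you; the handler only
// has to deal with a single event at a time.
export const handler = async (event: { orderId?: string }) => {
  // Real events come from API Gateway, SQS, EventBridge, and so on.
  console.log('processing order', event.orderId);
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```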

Secure by design

To avoid major changes at a later stage to accommodate security requirements, it’s essential that security is taken into consideration as part of the initial solution design. For example, if the cloud project is new or small and you don’t consider security properly at the initial stages, redesigning the entire project from scratch to accommodate security best practices once the solution starts to scale is usually not a simple option, and that may push you toward suboptimal security solutions that limit the scale you can achieve. By leveraging a CDN as part of the solution architecture (as discussed above), using Amazon CloudFront, you can minimize the impact of distributed denial of service (DDoS) attacks as well as perform application layer filtering at the edge. Also, when considering serverless services and the Shared Responsibility Model, from a security lens you can delegate a considerable part of the application stack to AWS so that you can focus on building applications. See The Shared Responsibility Model for AWS Lambda.

Design with security in mind by incorporating the necessary security services as part of the initial cloud solution. This will allow you to add more security capabilities and features as the solution grows, without the need to make major changes to the design.

Design for failure

The reliability of a service or solution in the cloud depends on multiple factors, the primary of which is resiliency. This design principle becomes even more critical at scale because the failure impact magnitude typically will be higher. Therefore, to achieve reliable scalability, it is essential to design a resilient solution, capable of recovering from infrastructure or service disruptions. This principle involves designing the overall solution in such a way that even if one or more of its components fail, the solution is still capable of providing an acceptable level of its expected function(s). See AWS Well-Architected Framework – Reliability Pillar for more information.

Conclusion

Designing for scale alone is not enough. Reliable scalability should always be the targeted architectural attribute. The design principles discussed in this blog act as the foundational pillars to support it, and ideally should be combined with adopting a DevOps model.

Introducing Secrets and Environment Variables to Cloudflare Workers

Post Syndicated from John Donmoyer original https://blog.cloudflare.com/workers-secrets-environment/

The Workers team here at Cloudflare has been hard at work shipping a bunch of new features in the last year and we’ve seen some amazing things built with the tools we’ve provided. However, as my uncle once said, with great serverless platform growth comes great responsibility.

One of the ways we can help is by ensuring that deploying and maintaining your Workers scripts is a low risk endeavor. Rotating a set of API keys shouldn’t require risking downtime through code edits and redeployments and in some cases it may not make sense for the developer writing the script to know the actual API key value at all. To help tackle this problem, we’re releasing Secrets and Environment Variables to the Wrangler CLI and Workers Dashboard.

Supporting secrets

As we started to design support for secrets in Workers we had a sense that this was already a big concern for a lot of our users, but we wanted to learn about all of the use cases to ensure we were building the right thing. We headed to the community forums, Twitter, and the inbox of Louis Grace, business development representative extraordinaire, for some anecdotes about secrets usage. We also sent out a survey to our existing users to learn about use cases and pain points.

We learned that even though there was already a way to store secrets without exposing them via Workers KV, the solution was not very intuitive, nor did it meet all the needs of our users. Many users didn’t even know we had an interim solution in place. Recognizing that we were not the first platform to encounter this problem, we surveyed the existing landscape of Platform as a Service offerings to get a better sense for what our users would expect of us.

Deciding on a solution

One of the first things we found was that not all environment variables are created equal. While the simplest use case for having a defined environment variable may be storing a piece of text that can be updated no matter where it is referenced in a script, sometimes those variables may have higher stakes associated with them. If you’re storing an API key that controls access to an important system, you may not want to allow anyone with dashboard access to see it, maybe not even the developers themselves.

With this in mind, we had to ensure the feature covered two different use cases: one for storing variables in plain text where you could see the variable being referenced and make edits to it and another where the variable would be encrypted as soon as you save it, never to be seen again. This way, we were able to serve both needs of our users, side by side, without one compromising for the other.

Testing our prototypes

Once we had a fairly good idea of what we wanted to build, we built some prototypes and rough implementations in staging environments so we would be able to perform some usability testing. We wrangled up some developers and observed them as they performed a series of tasks where they were asked to add some secrets and plain-text environment variables, reference them in one of their Workers, and bind their Worker to a Worker KV namespace.

Along the way we also asked questions to understand the developer’s professional background, familiarity with the product, and the use cases they’ve had for using Workers in the past, along with any pain points they experienced.

While we were testing the new dashboard interface we also began testing the usability of the Wrangler CLI. We had Wrangler users perform the same tasks as the Workers dashboard users to help us find out if users are expecting different things out of their command-line tooling.

Findings and fixes

Through our testing we were able to make a number of changes before the final release. Some of the smaller changes included things like adjusting the behavior of form fields to ensure users knew which variable would be associated with each value. We also made larger changes like electing to separate the KV namespace bindings from the other environment variables as a way to emphasize that KV namespace bindings are not the keys and values themselves but a reference to a namespace where those keys are stored.

Cina, one of our engineers, put together a proposal to align some of our terminology with the terms that our developers were naturally using to describe their workflow. In Wrangler users were accustomed to referencing their KV namespaces by adding a KV namespace binding so when they came to the Workers dashboard interface and saw a field called “KV Variables” they were often confused, thinking they were adding keys and values to the namespace itself instead of establishing a variable that could be used to reference the namespace. As a fix, we decided to call it a “KV namespace binding” throughout the experience.

Try it out

Environment variables are available now with the Wrangler CLI and in the Workers Dashboard so go ahead and give them a shot today!
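
As a quick sketch of what this looks like in practice, the binding names below (API_HOST, API_KEY) are examples: the variable would live under [vars] in wrangler.toml, the secret would be set with `wrangler secret put API_KEY`, and the Worker reads both from its environment. This uses the newer module syntax; with the service-worker syntax current at the time of this post, the same bindings appear as globals instead.

```typescript
export interface Env {
  API_HOST: string; // plain-text environment variable from [vars] in wrangler.toml
  API_KEY: string;  // encrypted secret, set with `wrangler secret put API_KEY`
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // The secret never appears in the script source; it is injected at runtime.
    const upstream = await fetch(`https://${env.API_HOST}/v1/status`, {
      headers: { Authorization: `Bearer ${env.API_KEY}` },
    });
    return new Response(upstream.body, { status: upstream.status });
  },
};
```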

Adding a secret with Wrangler
Managing environment variables and KV bindings in the Workers Dashboard

As we continue to build out the Workers platform, we’d love to hear from you. Let us know if you’re interested in participating in user research or if you just have something to say.

Thinking about color

Post Syndicated from Sam Mason de Caires original https://blog.cloudflare.com/thinking-about-color/

Color is my day-long obsession, joy and torment – Claude Monet

Over the last two years we’ve tried to improve our usage of color at Cloudflare. There were a number of forcing functions that made this work a priority. As a small team of designers and engineers, we had inherited a bunch of design work that was a mix of values built by multiple teams. As a result, it was difficult and unnecessarily time consuming to add new colors when building new components.

We also wanted to improve our accessibility. While we were doing pretty well, we had room for improvement, largely around how we used green. As our UI is increasingly centered around visualizations of large data sets we wanted to push the boundaries of making our analytics as visually accessible as possible.

Cloudflare had also undergone a rebrand around 2016. While our marketing site had rolled out an updated set of visuals, our product UI as well as a number of existing web properties were still using various versions of our old palette.

Our product palette wasn’t well balanced by itself. Many colors had been chosen one or two at a time. You can see how we chose blueberry, ice, and water at a different point in time than marine and thunder.

The color section of our theme file was partially ordered chronologically

Lacking visual cohesion within our own product, we definitely weren’t providing a cohesive visual experience between our marketing site and our product. The transition from the nice blues and purples to our green CTAs wasn’t the streamlined experience we wanted to afford our users.

Our app dashboard in 2017

Reworking our Palette

Our first step was to audit what we already had. Cloudflare has been around long enough to have more than one website. Beyond cloudflare.com, we have dozens of publicly accessible web properties: our community forums, support docs, blog, status page, and numerous micro-sites.

All-in-all we have dozens of front-end codebases that each represent one more chance to introduce entropy to our visual language. So we were curious to answer the question – what colors were we currently using? Were there consistent patterns we could document for further reuse? Could we build a living style guide that didn’t cover just one site, but all of them?

Screenshots of pages from cloudflare.com contrasted with screenshots from our product in 2017

Our curiosity got the best of us and we went about exploring ways we could visualize our design language across all of our sites.

Above – our product palette. Below – our marketing palette.

A time machine for color

As we first started to identify the scale of our color problems, we tried to think outside the box on how we might explore the problem space. After an initial brainstorming session, we combined the Internet Archive’s Wayback Machine with the CSS Stats API to build an audit tool that shows how the visual properties of our various websites change over time. We can dynamically select which sites we want to compare and scrub through time to see changes.

Below is a visualization of palettes from 9 different websites changing over a period of 6 years. Above the palettes is a component that spits out the common colors across all of these sites. The only two common colors across all properties (appearing for only a brief flash) were #ffffff (white) and transparent. Over time we haven’t been very consistent with ourselves.
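We can’t reproduce the original audit tool here, but a rough sketch of the idea might look like the following, assuming the Wayback Machine’s CDX API for listing snapshots and a CSS Stats-style endpoint for extracting colors; the CSS Stats URL and response shape below are assumptions, not a documented contract.

// Rough sketch only: list yearly Wayback Machine snapshots for a site, then ask a
// CSS Stats-style API for the colors used in each snapshot. The CSS Stats endpoint
// and response shape are assumptions; the CDX API format is documented by the Internet Archive.
const CDX = 'https://web.archive.org/cdx/search/cdx';
const CSS_STATS = 'https://cssstats.com/api/stats'; // assumed endpoint

async function snapshotsFor(site) {
  const url = `${CDX}?url=${encodeURIComponent(site)}&output=json&collapse=timestamp:4&filter=statuscode:200`;
  const rows = await (await fetch(url)).json();
  // First row is the header; each remaining row is [urlkey, timestamp, original, ...].
  return rows.slice(1).map(([, timestamp, original]) =>
    `https://web.archive.org/web/${timestamp}/${original}`);
}

async function colorsOverTime(site) {
  const palettes = [];
  for (const snapshot of await snapshotsFor(site)) {
    const stats = await (await fetch(`${CSS_STATS}?url=${encodeURIComponent(snapshot)}`)).json();
    // Assumed shape: the list of color declarations found in the snapshot's CSS.
    palettes.push({ snapshot, colors: stats?.stats?.declarations?.properties?.color ?? [] });
  }
  return palettes;
}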


If we drill in to look at our marketing site compared to our dashboard app – it looks like the video below. We see a bit more overlap at first and then a significant divergence at the 16 second mark when our product palette grew significantly. At the 22 second mark you can see the marketing palette completely change as a result of the rebrand while our product palette stays the same. As time goes on you can see us becoming more and more inconsistent across the two code bases.


As a product team we had some catching up to do to improve our usage of color and align ourselves with the company brand. The good news was, there was nowhere to go but up.

This style of historical audit gives us a visual indication backed by real data. We can show stakeholders how consistent our usage of color is across products and whether we are getting better or worse over time. Having this type of feedback loop was invaluable for us, as auditing this manually is incredibly time-consuming and so often doesn’t get done. Hopefully, just as it is now standard to track various performance metrics over time at a company, it will become standard to visualize your current levels of design entropy.

Picking colors

After our initial audit revealed there wasn’t a lot of consistency across sites, we went to work to try and construct a color palette that could potentially be used for sites the product team owned. It was time to get our hands dirty and start “picking colors.”

Hindsight of course is always 20/20. We didn’t start out on day one trying to generate scales based on our brand palette. No, our first bright idea was to generate the entire palette from a single color.

Our logo is made up of two oranges. Both of these seemed like prime candidates to generate a palette from.


We played around with a number of algorithms that took a single color and created a palette. From the initial color we generated an array of scales for each hue. Initial attempts found us applying the exact same luminosity curves to each hue, but as visual perception of hues is so different, this resulted in wildly different contrasts at each step of the scale.
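As a rough illustration of that naive approach (not the code we actually shipped), spinning scales out from a single base color with one shared lightness curve might look like this; the curve values and hue count are placeholders.

// Illustrative sketch of the naive approach described above: take one base color,
// spin its hue around the wheel, and apply the exact same lightness curve to every hue.
// Because perceived brightness differs wildly between hues in HSL, the resulting
// steps don't have consistent contrast from scale to scale.
const LIGHTNESS_CURVE = [97, 90, 80, 70, 60, 50, 42, 34, 26, 18]; // placeholder values
const HUE_COUNT = 12;

function generatePalette(baseHue, saturation = 70) {
  const palette = {};
  for (let i = 0; i < HUE_COUNT; i++) {
    const hue = (baseHue + i * (360 / HUE_COUNT)) % 360;
    palette[`hue-${Math.round(hue)}`] = LIGHTNESS_CURVE.map(
      (lightness) => `hsl(${Math.round(hue)}, ${saturation}%, ${lightness}%)`
    );
  }
  return palette;
}

// e.g. generatePalette(28) spins 12 scales out from an orange base hue.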

Below are a few of our initial attempts at palette generation. Jeeyoung Jung did a brilliant writeup around designing palettes last year.

Visualizing peaks of intensity across hues

We can see the intensity of the colors change across hues in peaks, with yellow and green being the most dominant. One of the downsides of this is that when you are rapidly iterating through theming options, the inconsistent relationships between steps across hues can make it time-consuming or impossible to keep visual harmony in your interface.

The video below is another way to visualize this phenomenon. The dividing line in the color picker indicates which part of the palette will be accessible with black and white. Notice how drastically the line changes around green and yellow. And then look back at the charts above.

Demo of https://kevingutowski.github.io/color.html

After fiddling with a few different generative algorithms (we made a lot of ugly palettes…) we decided to try a more manual approach. We pursued creating custom curves for each hue in an effort to keep the contrast scales as optically balanced as possible.

Heavily muted palette


Generating different color palettes makes you confront a basic question. How do you tell if a palette is good? Are some palettes better than others? In an effort to answer this question we constructed various feedback loops to help us evaluate palettes as quickly as possible. We tried a few methods to stress test a palette. At first we attempted to grab the “nearest color” for a bunch of our common UI colors. This wasn’t always helpful as sometimes you actually want the step above or below the closest existing color. But it was helpful to visualize for a few reasons.

Generated palette above a set of components previewing the old and new palette for comparison
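The “nearest color” check can be approximated with a simple distance measure. Here is a minimal sketch using Euclidean distance in RGB; a perceptual metric such as CIEDE2000 would track human judgement more closely, and the hex values in the usage comment are made up.

// Minimal nearest-color lookup using Euclidean distance in RGB space.
// A perceptual metric (e.g. CIEDE2000 or distance in a Lab-like space) would match
// human judgement better, but this is enough to map old UI colors onto a new palette.
function hexToRgb(hex) {
  const value = hex.replace('#', '');
  return [0, 2, 4].map((i) => parseInt(value.slice(i, i + 2), 16));
}

function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

function nearestColor(target, palette) {
  const targetRgb = hexToRgb(target);
  return palette.reduce((best, candidate) =>
    distance(hexToRgb(candidate), targetRgb) < distance(hexToRgb(best), targetRgb)
      ? candidate
      : best
  );
}

// nearestColor('#2f7bbf', ['#0b4f8a', '#3b82c4', '#7fb3e0']) -> '#3b82c4'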

Sometime during our exploration in this space, we stumbled across this tweet thread about building a palette for pixel art. There are a lot of places web and product designers can draw inspiration from game designers.

Two color palettes visualized to create 3d objects

A color palette applied in a few different contexts

Here we see a similar concept where a number of different palettes are applied to the same component. This view shows us two things, the different ways a single palette can be applied to a sphere, and also the different aesthetics across color palettes.

Different color palettes previewed against a common component

It’s almost surprising that the default way to construct a color palette for apps and sites isn’t to build it while previewing its application against the most common UI patterns. As designers, there are a lot of consistent uses of color we could have baselines for. Many enterprise apps are centered around white background with blue as the primary color with mixtures of grays to add depth around cards and page sections. Red is often used for destructive actions like deleting some type of record. Gray for secondary actions. Maybe it’s an outline button with the primary color for secondary actions. Either way – the margins between the patterns aren’t that large in the grand scheme of things.

Consider the use case of designing UI before the palette or usage of color has been established. Given a single palette, you might want to experiment with applying that palette in a variety of ways that output a wide variety of aesthetics. Alternatively, you may need to test out several different palettes. These are two different modes of exploration that can be extremely time-consuming to work through. It can be non-trivial to keep an in-progress design synced with several different options for color application, even with the best use of layer comps or symbols.

How do we visualize the various ways a palette will look when applied to an interface? Here are examples of how palettes are shown on a palette list for pixel artists.


https://lospec.com/palette-list/vines-flexible-linear-ramps

One method of visualization is to define a common set of primitive UI elements and show each one of them with a single set of colors applied. In isolation this can be helpful. This mode would make it easy to vet a single combination of colors and which UI elements it might be best applied to.

Alternatively, we might want to see a composed interface with the closest colors from the palette applied. Consider a set of buttons that includes red, green, blue, and gray button styles. Seeing all of these together can help us visualize the relative nature of these buttons side by side. Given a baseline palette for common UI, we could swap to a new palette and replace each color with the “closest” color. This isn’t always a foolproof solution, as there are many edge cases to cover, e.g. what happens when replacing a palette of 134 colors with a palette of 24 colors? Even still, this could allow us to quickly take a stab at automating how existing interfaces would change their appearance given a change to the underlying system. Whether locally or against a live site, this mode of working would allow designers to view a color in multiple contexts to truly assess its quality.


After moving on from the idea of generating a palette from a single color, we attempted to use our logo colors as well as our primary brand colors to drive the construction of modular scales. Our goal was to create a palette that would improve contrast for accessibility, stay true to our visual brand, work predictably for developers, work for data visualizations, and provide the ability to design visually balanced and attractive interfaces. No sweat.

Brand colors showing Hue and Saturation level

While we knew going in we might not use every step in every hue, we wanted full coverage across the spectrum so that each hue had a consistent optical difference between each step. We also had no idea which steps across which hues we were going to need just yet. As they would just be variables in a theme file it didn’t add any significant code footprint to expose the full generated palette either.

One of the more difficult parts was deciding on the number of steps for the scales. This would allow us to edit the palette in the future to a variety of aesthetics and swap the palette out at the theme level without needing to update anything else.

In the future, if we did need to augment the available colors, we could edit the entire palette instead of adding a one-off addition, as we had found that was a difficult way to work over time. In addition to our primary brand colors we also explored adding scales for yellow / gold, violet, and teal, as well as a gray scale.

The first interface we built for this work was to output all of the scales vertically, with their contrast scores with both white and black on the right hand side. To aid scannability we bolded the values that were above the 4.5 threshold. As we edited the curves, we could see how the contrast ratios were affected at each step. Below you can see an early starting point before the scales were balanced. Red has 6 accessible combos with white, while yellow only has 1. We initially explored having the gray scale be larger than the others.

Early iteration of palette preview during development
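The contrast scores themselves come from the WCAG 2 formula for relative luminance and contrast ratio. A small sketch of that check, scoring a palette step against pure white and pure black, might look like this (the sample color is illustrative):

// WCAG 2 relative luminance and contrast ratio, used to score each palette step
// against pure white and pure black (bolding anything at or above 4.5:1).
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(rgbA, rgbB) {
  const [lighter, darker] = [relativeLuminance(rgbA), relativeLuminance(rgbB)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Score a single palette step against both ends of the scale.
function scoreStep(rgb) {
  return {
    vsWhite: contrastRatio(rgb, [255, 255, 255]).toFixed(2),
    vsBlack: contrastRatio(rgb, [0, 0, 0]).toFixed(2),
  };
}

// scoreStep([178, 34, 52]) -> roughly { vsWhite: '6.62', vsBlack: '3.17' }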

As both screen luminosity and ambient light can affect perception of color we developed on two monitors, one set to maximum and one set to minimum brightness levels. We also replicated the color scales with a grayscale filter immediately below to help illustrate visual contrast between steps AND across hues. Bouncing back and forth between the grayscale and saturated version of the scale serves as a great baseline reference. We found that going beyond 10 steps made it difficult to keep enough contrast between each step to keep them distinguishable from one another.

Taking a page from our game design friends – as we were balancing the scales and exploring how many steps we wanted in the scales, we were also stress testing the generated colors against various analytics components from our component library.

Our slightly random collection of grays had been a particular pain point as they appeared muddy in a number of places within our interface. For our new palette we used the slightest hint of blue to keep our grays consistent and just a bit off from being purely neutral.

Optically balanced scales

With a palette consisting of 90 colors, the amount of combinations and permutations that can be applied to data visualizations is vast and can result in a wide variety of aesthetic directions. The same palette applied to both line and bar charts with different data sets can look substantially different, enough that they might not be distinguishable as being the exact same palette. Working with some of our engineering counterparts, we built a pipeline that would put up the same components rendered against different data sets, to simulate the various shapes and sizes the graph elements would appear in. This allowed us to rapidly test the appearance of different palettes. This workflow gave us amazing insights into how a palette would look in our interface. No matter how many hours we spent staring at a palette, we couldn’t get an accurate sense of how the colors would look when composed within an interface.

Analytics charts with blues and oranges. Telling the colors of the lines apart is a different visual experience than separating out the dots in sequential order as they appear in the legend.

We experimented with a number of ideas on visualizing different sizes and shapes of colors and how they affected our perception of how much a color was changing element to element. In the first frame it is most difficult to tell the values at 2% and 6% apart given the size and shape of the elements.

Stress testing the application of a palette to many shapes and sizes

We’ve begun to package up some of this work into a web app others can use to create or import a palette and preview multiple depths of accessible combinations against a set of UI elements.

The goal is to make it easier for anyone to work seamlessly with color and build beautiful interfaces with accessible color contrasts.

Color by Cloudflare Design

In an effort to make sure everything we are building will be visually accessible, we built a React component that previews how a design would look if you were colorblind. The component overlays SVG filters to simulate alternate ways someone can perceive color.

Analytics component previewed against 8 different types of color blindness

While this is previewing an analytics component, really any component or page can be previewed with this method.

import React from "react"

// Each entry corresponds to a color-matrix <filter> id defined in /filters.svg.
const filters = [
  'achromatopsia',
  'protanomaly',
  'protanopia',
  'deuteranomaly',
  'deuteranopia',
  'tritanomaly',
  'tritanopia',
  'achromatomaly',
]

// Renders its children once per filter so any component can be previewed against
// eight simulated types of color blindness side by side. Assumes /filters.svg is
// served alongside the app and contains a <filter> element for each id above.
// itemWidth and itemPadding accept any CSS size value (e.g. '25%', '8px').
const ColorBlindFilter = ({ itemPadding, itemWidth, children, style, ...props }) => (
  <div
    style={{
      display: 'flex',
      justifyContent: 'space-around',
      flexWrap: 'wrap',
      width: '100%',
      ...style,
    }}
    {...props}
  >
    {filters.map((filter) => (
      <div
        key={filter}
        style={{
          filter: `url(/filters.svg#${filter})`,
          width: itemWidth,
          padding: itemPadding,
        }}
      >
        {children}
      </div>
    ))}
  </div>
)

ColorBlindFilter.defaultProps = {
  itemWidth: '25%',
  itemPadding: 0,
}

export default ColorBlindFilter
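A minimal, hypothetical usage example: wrap whatever component you want to audit, and make sure /filters.svg defines a color-matrix filter for each id in the list above. AnalyticsChart here is a stand-in for any component.

import React from "react"
import ColorBlindFilter from './ColorBlindFilter'
import AnalyticsChart from './AnalyticsChart' // hypothetical component to preview

const ColorBlindPreview = () => (
  <ColorBlindFilter itemWidth="25%" itemPadding="8px">
    <AnalyticsChart />
  </ColorBlindFilter>
)

export default ColorBlindPreview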

We’ve also released a Figma plugin that simulates this visualization for a component.

After quite a few iterations, we had finally come up with a color palette. Each scale was optically aligned with our brand colors. The 5th step in each scale is the closest to the original brand color, but adjusted slightly so it’s accessible with both black and white.

Our preview panel for palette development, showing a fully desaturated version of the palette for reference

Lyft’s writeup “Re-approaching color” and Jeeyoung Jung’s “Designing Systematic Colors” are some of the best write-ups you can find on how to work with color at scale.

Color migrations

A visual representation of how the legacy palette colors would translate to the new scales.

Getting a team of people to agree on a new color palette is a journey in and of itself. By the time you get everyone to consensus it’s tempting to just collapse into a heap and never think about colors ever again. Unfortunately the work doesn’t stop at this point. Now that we’ve picked our palette, it’s time to get it implemented so this bike shed is painted once and for all.

If you are porting an old legacy part of your app to be updated to the new style guide like we were, even the best color documentation can fall short in helping someone make the necessary changes.

We found it was more common than expected that engineers and designers wanted to know the new version of a color they were already familiar with. During the transition between palettes we had an interface where people could input any color and get the closest color within our palette.

When migrating colors, there are times when the closest color isn’t actually what you want. Given the scenario where your brand color has changed from blue to purple, you might want to port all blues to the closest purple within the palette, not to the closest blues, which might still exist in your palette. To help visualize migrations as well as get suggestions on how to consolidate values within the old scale, we of course built a little tool. Here we can define those translations and import a color palette from a URL. As we still have a number of web properties to update to our palette, this simple tool has continued to prove useful.


We wanted to be as gentle as possible in transitioning usage to the new palette. While the developers found string names for colors brittle and unpredictable, it was still a more familiar system for some than this new one. We first just added our new palette to the existing theme for usage moving forward. Then we started to port colors for existing components and pages.

For our colleagues, we wrote out desired translations and offered warnings in the console that a color was deprecated, with a reference to the new theme value to use.

Example of console warning when using deprecated color

Example of how to check for usage of deprecated values
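The real implementation isn’t shown here, but a minimal sketch of the idea is to wrap the theme’s color object in a Proxy so that reading a deprecated name logs a pointer to its replacement; the mappings and hex values below are illustrative, not our actual theme.

// Sketch: wrap the theme's colors in a Proxy so that any read of a deprecated
// name logs a console warning pointing at the replacement value to use instead.
// Mappings and hex values here are illustrative.
const deprecations = {
  blueberry: 'blue.6',
  marine: 'blue.8',
  thunder: 'gray.9',
}

const colors = {
  blueberry: '#2f7bbf', // legacy values kept temporarily so nothing breaks
  marine: '#1d3f6e',
  thunder: '#1c1c22',
  blue: ['#f0f6fc', /* ... */ '#0b4f8a'],
  gray: ['#fafafa', /* ... */ '#17171b'],
}

const themeColors = new Proxy(colors, {
  get(target, name) {
    if (typeof name === 'string' && deprecations[name]) {
      console.warn(`color "${name}" is deprecated - use theme.colors.${deprecations[name]} instead`)
    }
    return target[name]
  },
})

// themeColors.blueberry -> logs: color "blueberry" is deprecated - use theme.colors.blue.6 instead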

While we had a few bugs along the way, the team was supportive and helped us fix bugs almost as quickly as we could find them.

We’re still in the process of updating our web properties with our new palette, largely prioritizing accessibility first while trying to create a more consistent visual brand as a nice by-product of the work. A small example of this is our system status page. In the first image, the blue links in the header, the green status bar, and the about copy were all inaccessible against their backgrounds.

A lot of the changes have been subtle. Most notably, the green we use in the dashboard is a lot more in line with our brand colors than before. In addition, we’ve also been able to add visual balance by not just using straight black text on background colors. Here we added one of the darker steps from the corresponding scale to give it a bit more visual balance.

Example page within our Dashboard in 2017 vs 2019

While we aren’t perfect yet, we’re making progress towards more visual cohesion across our marketing materials and products.

2017

Our app dashboard in 2017

2019

Next steps

Trying to keep dozens of sites all using the same palette in a consistent manner across time is a task that you can never complete. It’s an ongoing maintenance problem. Engineers familiar with the color system leave, new engineers join and need to learn how the system works. People still launch sites using a different palette that doesn’t meet accessibility standards. Our work continues to be cut out for us. As they say, a garden doesn’t tend itself.

If we do ever revisit our brand colors, we’re excited to have infrastructure in place to update our apps and several of our satellite sites with significantly less effort than our first time around.


Driving Southeast Asia forward through people-focused design

Post Syndicated from Grab Tech original https://engineering.grab.com/driving-sea-forward-through-people-focused-design

Southeast Asia is home to around 650 million people from diverse and comparatively different economic, political and social backgrounds. Many people in the region today rely on super apps like Grab to earn a daily living or get from A to B more efficiently and safely. This means that decisions made have real impact on people’s lives – so how do you know when your decisions are right or wrong?

In this post, I’ll share key customer insights that have guided my decisions and informed my design thinking over the last year whilst working as a product designer for Grab in Singapore. I’ve broken my learnings down into three areas for thinking about product development, and how each one addresses our customers’ needs.

  1. Relevance – does the design solve the customer problem? For example, loss of connectivity which is common in Southeast Asia should not completely prevent a customer from accessing the content on our app.
  2. Inclusivity – does the design consider the full range of customer diversity? For example, a driver waiting in the hot sun for his passenger can still use the product. Inclusive design covers people with a range of perspectives, disabilities and environments.
  3. Engagement – does the design invoke a feeling of satisfaction? For example, building a compelling narrative around your product that solicits a higher engagement.

Under each of these areas, I’ll elaborate on how we’ve built empathy from customer insights and applied these to our design thinking.

But before jumping in, think about the lens which frames any customer experience – the mobile device. In Southeast Asia, the commonly used devices are inexpensive low-end devices made by OPPO, Xiaomi, and Samsung. Knowing which devices customers use helps us understand potential performance constraints, different screen resolutions, and custom Android UIs.

Designing for relevance  

Shopping mall in Medan, Indonesia
Shopping mall in Medan, Indonesia

Connectivity

In Southeast Asia, it’s not too hard to find public WiFi. However, the main challenge is finding a reliable network. Take this shopping mall in Medan, Indonesia. The WiFi connectivity didn’t live up to the modern infrastructure of the building. The locals knew this and used mobile data over spotty and congested connectivity. Mobile data is the norm for most people and 4G reach is high, but the power of the connections is relatively low.

Building empathy

To genuinely design for customers’ needs, designers at Grab regularly get out of the office to understand what people are doing in the real world. But how do we integrate empathy and compassion into the design process? Throughout this article, I’ll explain how the insights we gathered from around Southeast Asia can inform your decision-making process.

For simulating a loss of connectivity, switch to airplane mode to observe current UI states and limitations. If you have the resources, create a 2G network to compare how bandwidth constraints affect page loading speeds. Network Link Conditioner for Mac and iOS, or Lighthouse in Chrome DevTools, can replicate a slower network.
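If you’d rather script the throttling than toggle it by hand, Chrome’s DevTools Protocol exposes network emulation. A rough sketch via Puppeteer might look like the following; the latency and throughput numbers approximate a weak 2G-ish connection and are assumptions, not a standard profile.

// Rough sketch: throttle the network via the Chrome DevTools Protocol so a page
// can be exercised under 2G-like conditions. Latency and throughput values are
// illustrative assumptions, not a standard profile.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const client = await page.target().createCDPSession();
  await client.send('Network.enable');
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 800,                         // ms of added round-trip latency
    downloadThroughput: (250 * 1024) / 8, // ~250 kbit/s down, in bytes per second
    uploadThroughput: (50 * 1024) / 8,    // ~50 kbit/s up
  });

  const start = Date.now();
  await page.goto('https://example.com', { waitUntil: 'load', timeout: 120000 });
  console.log(`Loaded in ${Date.now() - start}ms under throttled conditions`);

  await browser.close();
})();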

Design implications

This diagram is from Scott Hurff’s book, Designing Products People Love. The book is amazing, but if you don’t have the time to read it, this article offers a quick overview.

Scott Hurff’s UI Stack
Scott Hurff’s UI Stack

An ideal state (the fully loaded experience) is primarily what a lot of designers think about when problem-solving. However, when connectivity is a common customer pain point, designers at Grab have to design for the less desirable: Blank, Loading, Partial, and Error states in tandem with all the happy paths. Latency can make or break the user experience, so buffer wait times with visual progress to cushion each millisecond. The loading skeletons you see when you open Grab act as momentary placeholders for content and reduce the perceived latency of loading the full experience.

A loss of connectivity shouldn’t mean the end of your product’s experience. Prepare for connectivity issues by keeping screens alive through intuitive visual cues, messaging, and cached content.

Device type and condition

In Southeast Asia, people tend to opt for low-end or hand-me-down devices that can sometimes have cracked screens or depleting batteries. These devices are usually in circulation much longer than in developed markets, and the device’s OS might not be the latest version because of the perceived effort or risk to update.  

A driver’s device taken during research in Indonesia
A driver’s device taken during research in Indonesia

Building empathy

At Grab, we often use a range of popular, in-market devices to understand compatibility during the design process. Installing mainstream apps to a device with a small screen size, 512MB internal memory, low resolution and depleting battery life will provide insights into performance.  If these apps have lite versions or Progressive Web Apps (PWA), try to understand the trade-offs in user experience compared to the parent app.

Grab’s passenger app on the left versus the driver app
Grab’s passenger app on the left versus the driver app

Design implications

Design for small screens first to reduce the chances of design debt later in the development lifecycle. For interactive elements, it’s important to think about all types of customers that will use the product and in what circumstances. For Grab’s driver-partners who may have their devices mounted to the dashboard, tap targets need to be larger and more explicit.  

Similarly, color contrast will vary depending on screen resolution and time of the day. Practical tests involve dimming the screen and standing near a window in bright sunshine (our HQ is in Singapore which helps!). To further improve accessibility, use a tool like Sketch’s Stark plugin to understand if contrast ratios are accessible to visually impaired customers. A general rule is to aim for higher contrast between essential UI components, text and interactive affordances.

Fancy transitions can look great on high-end devices but can appear choppy or delayed on older and less performant phones. Aim for simple animations to offer a more seamless experience.

Passenger verification to improve safety
Passenger verification to improve safety

Day-to-day budgeting

Many people in Southeast Asia earn a daily income, so it’s no surprise that prepaid mobile is more common than a monthly contract. This mindset of rationing on a day-to-day basis also extends to other essentials like washing powder and nappies. Data can be an expensive necessity, and customers are selective about the types of content that will consume a daily or weekly budget. Some customers might turn off data after getting a ride, and not turn it back on until another Grab service is required.

Building empathy

Rationing data consumption daily can be achieved through not connecting to WiFi, or a more granular way is to turn off WiFi and use an app like Google’s Datally on Android to cap data usage. Starting low, around 50MB per day will increase your understanding around the data trade-offs you make and highlight the apps that require more data to perform certain actions.

Design implications

Where possible, avoid using video when SVG animations can be just as effective, scalable and lightweight. For Grab’s passenger verification flow, we decided to move away from a video tutorial and keep data consumption to a minimum through utilising SVG animations. When a video experience is required, like Grab’s feed on the home screen, disabling autoplay and clearly distinguishing the media as video allowed customers to decide on committing data.

Design for inclusivity  

Mobile-only

The expression “mobile-first” has been bounced around for the last decade, but in Southeast Asia, “mobile-only” is probably more accurate. Most customers have never owned a tablet or laptop, and a mobile number is more synonymous with registration than an email address. In the region, people rely more on social media and chat apps to keep up with broadcast or published news reports, events and recommendations. Customers who sign up for a new Grab account prefer phone number and OTP (one-time password) registration over providing an email address and password. And anecdotally, from interviews conducted at Grab, customers didn’t feel the need for email when communication could take place via SMS, WhatsApp, or other messaging apps.

Building empathy

At Grab, we apply design thinking from a mobile-only perspective for our passenger, merchant,  and driver-partner experiences by understanding our customers’ journeys online and off.  These journeys are synthesized back in the office and sometimes recreated with video and physical artifacts to simulate the customer experience. It’s always helpful to remove smartwatches, put away laptops and use an in-market device that offers a similar experience to your customers.

Design implications

When onboarding new customers, offer a relevant sign-in method for a mobile-only customer, like phone number and social account registration. Grab’s passenger sign-up experience addresses these priorities with phone number first, social accounts second.  

Grab’s sign-in screen
Grab’s sign-in screen

PC-era icons are also widely misunderstood by mobile-only customers, so avoid floppy disks to imply Save, or a folder to Change Directory as these offer little symbolic meaning. When icons are paired with text, this can often reinforce meaning and quicken recognition.  For example, a pencil icon alone can be confusing, so adding the word “Edit” will provide more clarity.  

Nightfall in Yogyakarta, Indonesia
Nightfall in Yogyakarta, Indonesia

Diversity and safety

This photo was taken in Yogyakarta, Indonesia. In the evening, women often formed groups to improve personal safety. In an online environment, women often face discrimination, harassment, blackmail, cyberstalking, and more.  Minorities in emerging markets are further marginalised due to employment, literacy, and financial issues.  

Building empathy

Southeast Asia has a very diverse population, and it’s important to understand gender, ethnic, and class demographics before you plan any research. Research recruitment at Grab involves working with local vendors to recruit diverse groups of customers for interviews and focus groups. When spending time with customers, we try to understand how diversity and safety factors contribute to the experience of the product.

If you don’t have the time and resources to arrange face-to-face interviews, I’d recommend this article for creating a survey: Respectful Collection of Demographic Data

Design for inclusivity

Allow people to control how they represent their identities through pseudonyms and avatars. But does this undermine trust on the platform? No, not really. Credit card registration or, more recently, Grab’s passenger and driver selfie verification feature has closed the loop on suspect accounts whilst maintaining everyone’s privacy and safety.

On the visual design side, our illustration and content guide incorporates diverse representations of ethnic backgrounds, clothing, physical ability, and social class. You can see examples in the app or through our Dribbble page. For user-generated content, allow people to report and flag abusive material. While data and algorithms can do so much, facts and ethics cannot be policed by machine learning.

Language

In Southeast Asia and other emerging markets, customers may set their phone to a language which they aspire to learn but may not fully comprehend. Swipe, tap, drag, pinch, and other specific terms relating to interactions might not easily translate into the local language, and English might be the preferred language regardless of comprehension. It’s surprisingly common to attend an interview with a translator, only to find that the device’s UI is set to English.

A Grab pick-up point taken in Medan, Indonesia
A Grab pick-up point taken in Medan, Indonesia

Building empathy

If your app supports multiple languages, try setting your phone to a different language but know how to change it back again!  At Grab, we test design robustness by incorporating translated text strings into our mocks. Look for visual cues to infer meaning since some customers might be illiterate or not fully comprehend English.

Grab’s Safety Centre in different languages
Grab’s Safety Centre in different languages

Design for different languages, formats and visual cues

To reduce design debt later on, it’s a good idea to start with the smallest screen size and test the most vulnerable parts of the UI with translated text strings. Keep in mind, dates, times, addresses, and phone numbers may have different formats and require special attention. You can apply multiple visual cues to represent important UI states, such as a change in colour, shape and imagery.
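As a small, hedged example of handling those formats programmatically on the web, the built-in Intl APIs cover most date, time, and currency cases; the locales and sample values below are illustrative rather than an official list.

// Illustrative use of the built-in Intl APIs for locale-aware formats.
// Locales and sample values are placeholders, not a list Grab officially supports.
const pickupTime = new Date('2019-08-14T18:30:00+07:00');

const formatForLocale = (locale) => ({
  date: new Intl.DateTimeFormat(locale, { dateStyle: 'medium', timeZone: 'Asia/Jakarta' }).format(pickupTime),
  time: new Intl.DateTimeFormat(locale, { timeStyle: 'short', timeZone: 'Asia/Jakarta' }).format(pickupTime),
  fare: new Intl.NumberFormat(locale, { style: 'currency', currency: 'IDR' }).format(52000),
});

console.log(formatForLocale('id-ID')); // e.g. { date: '14 Agu 2019', time: '18.30', fare: 'Rp 52.000' }
console.log(formatForLocale('en-SG'));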

Design for engagement

Sharing

From our research studies, word-of-mouth communication and consuming viral content via Instagram or Facebook was more popular than trawling through search page results. The social aspect is extended to the physical environment where devices can sometimes be shared with more than one person, or in some cases, one mobile is used concurrently with more than one user at a time. In numerous interviews, customers talk about not using biometric authentication so that family members can access their devices.

Building empathy

To understand the layers of personalisation, privacy and security on a device, it’s worth borrowing a device from your research team or just a friend’s phone (if they let you!). How far do you get before you require biometric authentication or a PIN to proceed further? If you decide to wipe a personal device, what steps can you miss out from the setup, and how does that affect your experience post-setup?

Offline to Online: GrabNow connecting with driver
Offline to Online: GrabNow connecting with driver

Design for sharing

If necessary, facilitate device sharing through easy switching of accounts, and enable people to remove or hide private content after use. Allow content to be easily shared for both online and offline in-person situations. Using this approach, GrabNow allows passengers to find and connect with a nearby driver without having to pre-book and wait for a driver to arrive. This offline to online interaction also saves data and battery for the customer.

Support and tutoring

In Southeast Asia, people find troubleshooting issues through a help page troublesome and generally prefer human assistance, like speaking to someone through a call centre. The opportunity for face-to-face tutoring on how something works is often highly desired and is much more effective than the standard onboarding flows that many apps use. At the many physical phone stores, it’s not uncommon for people to go and ask for help or get apps manually installed onto their device.

Building empathy

Apart from speaking with your customers regularly, always look through the Play and App Store reviews for common issues. Understand your customers’ problems and the jargon they use to describe what happened. If you have a customer support team, the tickets created will be a key indicator of where your customers need the most support.

Help and feedback design implications

Make support accessible through a variety of methods: online forms, email, and if possible, allow customers to call in. With in-app or online forms, try to use drop-downs or pre-populated quick responses to reduce typing, triage the type of support, and decrease misunderstanding when a request comes in.  When a customer makes a Grab transport booking for the first time, we assist the customer through step-by-step contextual call-outs.

Local aesthetics

This photo was taken in Medan, Indonesia, on the day of an important wedding. It was impressive to see hundreds of handcrafted, colourful placards lining the streets for miles, but maybe more admirable that such an occasion was shared with the community and passers-by, and not only for the wedding guests.  

A wedding celebration flower board in Medan, Indonesia
A wedding celebration flower board in Medan, Indonesia

These types of public displays are not exclusive to weddings in Southeast Asia, vibrant colours and decorative patterns are woven into the fabric of everyday life, indicative of a jovial spirit that many people in the region possess.

Building empathy

What are some of the immediate patterns and surfaces that exist in your workspace? Looking around your immediate environment can provide a quick assessment of the visual stimuli that can influence your decisions on a day-to-day basis.

Wall space can be incredibly valuable when you can display photos from your research trip, or find online inspiration to recreate some of the visual imagery from your target markets.  When speaking with your customers, ask to see mobile wallpapers, and think about how fashion could also play a role in determining an aesthetic choice. Lastly, take time out when on a research trip to explore the streets, museums, and absorb some of the local cultures.

Design to delight and surprise customers

Capture local inspiration on research trips to incorporate into visual collections that can be a source of inspiration for colour, imagery, and textures. Find opportunities in your product to delight and engage customers through appropriate images and visuals. Grab’s marketing consent experience leverages illustrative visuals to help customers understand the different categories that require their consent.

For all our markets, we work with local teams around culturally sensitive visuals and imagery to ensure our content is not offensive or portrays the wrong connotations.

My top 5 for guerrilla field research

If you don’t have enough time, stakeholder buy-in or budget to do research, getting out of the office to do your own is sometimes the only answer. Here are my top 5 things to keep in mind.

  1. Don’t jump in. Always start with observation to capture customers’ natural behaviour.
  2. Sanity check scripts. Your time and customers’ time is valuable; streamline your script and prepare for u-turns and potential Facebook and Instagram friend requests at the end!  
  3. Ask the right people. It’s difficult to know who wants to or has time for your 10-minute intercept. Look for individuals sitting around and not groups if possible (group feedback can be influenced by the most vocal person).
  4. Focus on the user. Never multitask when speaking to the user. Jotting notes on an answer sheet is less distracting than using your mobile or laptop (and less dangerous in some places!). Ask permission to record audio if you want to avoid notetaking altogether, but this does create more work later on.
  5. Use insights to enrich understanding. Insights are not trends and should be used in conjunction with quantitative data to validate decision making.

Feel inspired by this article and want to learn more? Grab is hiring across Southeast Asia and Seattle. Connect with me on LinkedIn or Twitter @PhilipMadeley to learn more about design at Grab.

Connecting the Invisibles to Design Seamless Experiences

Post Syndicated from Grab Tech original https://engineering.grab.com/connecting-the-invisibles-to-design-seamless-experiences

Leonardo Da Vinci's Vitruvian Man
Leonardo Da Vinci’s Vitruvian Man (Source: Public Domain @Wikicommons)

Before we begin, what is service design anyway?

In the world of design jargon, meet “service design”. Unlike other design objectives that aim to simplify and clarify, service design is not about building singular touchpoints. Rather, it is about bringing ease and harmony into large and often complex ecosystems.

Think of the human body. There are organ systems such as the cardiovascular, respiratory, musculoskeletal, and nervous systems. These systems perform key functions that we see and feel everyday, like breathing, moving, and feeling.

Service design serves as the connective tissue that brings the amazing systems together to work in harmony. Much of the work done by the service design team at Grab revolves around connecting online experiences to the offline world, connecting challenges across a complex ecosystem, and enabling effective collaboration across cross-functional teams.

Connecting online experiences to the offline world

We explore holistic experiences by visualizing the connections across features, both through the online-offline as well as internal-external interactions. At Grab, we have a collection of (very cool!) features that many teams have worked hard to build. However, equally important is how a person arrives from feature to feature seamlessly, from the app to their physical experiences, as well as how our internal teams at Grab support and execute behind-the-scenes throughout our various systems.

For example, placing an order on GrabFood requires much more work than sending information to the merchant through the Grab app. How might Grab

  • allocate drivers effectively,
  • support unhappy paths with our customer support team,
  • resolve discrepancies in our operations teams, and
  • store this data in a system that can continue to expand for future uses to come?
Connecting online experiences to the offline world

Connecting challenges across a complex ecosystem

Sometimes, as designers, we might get too caught up in solving problems through a singular lens, and overlook how it affects the rest of the system. Meanwhile, many problems are part of a connected network. Changing one part of the problem can potentially affect other parts of the network.

Considering those connections, or the “stuff in between”, makes service design a holistic practice – crossing boundaries between teams in search of a root cause, and considering how treating one problem might affect other parts of the network.

  • If this happens, then what?
  • Which point in the system is easiest to fix and has the greatest impact?

For example, if we want to introduce a feature for drivers to report restaurant closings, how might Grab

  • Ensure the report is accurate?
  • Deal with accidental closings or fraud?
  • Use that data for our operations team to make decisions?
  • Let drivers know when their report has led to a successful action?
  • Last but not least, is this the easiest point in the system to fix restaurant opening inaccuracies, or should this be tackled through an operational fix?
Connecting challenges across a complex ecosystem

Facilitating effective collaborations in cross-functional teams

Finally, we believe in the power of a participatory design process to unlock meaningful, customer-centric solutions. Working on the “stuff in between” often puts the service design team in the thick of alignment of priorities, creation of a common vision, and coherent action plans. Achieving this requires solid facilitation and processes for cross-team collaboration.

  • Who are the right stakeholders and how do we engage?
  • How does an initiative affect stakeholders, and how can they contribute?
  • How can we create visual processes that allow diverse stakeholders to have a shared understanding and co-create solutions?
Facilitating effective collaborations in cross-functional teams

What’s the ultimate goal? A Harmonious Backstage for a Delightful Customer Experience

By facilitating cross-functional collaborations and espousing a whole-of-Grab approach, the service design team at Grab helps to connect the dots in an interconnected ‘super-app’ service ecosystem. By empathising with our users, and having a deep understanding of how different parts of the Grab ecosystem affect one another, we hope to unleash the full power of Grab to deliver maximum value and delight to serve our users.

Why you should organise an immersion trip for your next project

Post Syndicated from Grab Tech original https://engineering.grab.com/why-you-should-organise-an-immersion-trip-for-your-next-project

Sherizan Sheikh is a Design Lead at Grab Ventures, an incubation arm that looks at experiences beyond ride-hailing, for example, groceries, healthcare and autonomous deliveries.

Grab Ventures is where exciting initiatives are birthed in Grab. From strategic partnerships like GrabFresh, Grab’s first on-demand grocery delivery service, to exploratory concepts such as on-demand e-scooter rentals, there has never been a more exciting time to be in this unique space.

Cover GrabFresh

 

In my role as Design Lead for Grab Ventures, I juggle between both sides of the coin and whether it’s a partnership or exploratory concept, I ask myself:

“How do I know who my customers are, and what are their pain points?”

So I like to answer that question by starting with traditional research methods like desktop research and surveys, just to name a few. At Grab, that’s usually not enough to answer those questions.

That said, I find that some of the best insights are formed from immersion trips.

In one sentence, an immersion trip is getting deeply involved in a user’s life by understanding him or her through observation and conversation.

Our CEO, Anthony Tan, picking items for a customer, on an immersion trip.
Our CEO, Anthony Tan, picking items for a customer, on an immersion trip.

 

For designers and researchers in Singapore, it plucks you out of your everyday reality and drops you into someone else’s, somewhere else, where 99.9% of the time, everything you expect and anticipate gets thrown out in a matter of minutes. I’ve trained myself to switch my mindset, go back to basics, and learn (or relearn) everything I need to know about the country I’d be visiting even if I’ve been there countless times.

Fun fact: In 2018, I spent about 100 days in Indonesia. That means roughly 30% of 2018 was spent on the ground, observing, shadowing, interviewing people (and getting stuck in traffic) and loving it.

Why immersions?

Understanding a country, its culture and its people is something that gets me excited, as I continuously build empathy visit upon visit, interview after interview.

I remember one time during an immersion trip when we interviewed locals at different supermarkets to learn and understand their motivations: why they prefer to visit the supermarket versus purchasing groceries online. One of our hypotheses was that the key motivator for Indonesians to buy groceries online must be down to convenience. We were wrong.

It boiled down to 2 key factors.

1) Freshness: We found out that many of the locals still felt the need to touch and feel the products before they buy. There were many instances where they felt the need to touch the fresh produce on the shelves, cutting off a piece of fruit or even poking the eyes of the fish to check its freshness.

Oranges

 

2) Price: This is the deciding factor for most locals, as they are price-sensitive. Every decision is made with the price tag in mind. They are willing to travel far and spend time in traffic just to get to the wet market or supermarket that offers the lowest prices and best value for money. Through observation while shadowing at a local wet market, we also found something interesting. Most of the wet market vendors get WhatsApp messages from their regular customers seeking fresh produce and making orders. The transactions are mostly via e-wallets or bank transfers. The vendors then pack the orders and get bike drivers to help with the delivery. I couldn’t have gotten this valuable information if I was just sitting at my desk.

An immersion trip is an excellent opportunity to learn about our customers and the meaning behind their behaviours. There is only so much we can learn from white papers and reports. As soon as you are in the same environment as your users, seeing your users do everyday errands or acts, like grocery shopping or hopping on a bike, feeling their frustrations and experiencing them yourself, you’ll get so much more fruitful and valuable insights to help shape your next product. (Or even, improve an existing one!)

My colleagues trying to blend in.
My colleagues trying to blend in.

 

Now that I’ve sold you on this idea, here are some tips on how to plan and execute effective immersion trips, share your findings and turn them into actionable insights for your team and stakeholders.

Pro tip #1 – Generate a hypothesis

Generating a hypothesis is a valuable exercise. It enables you to focus on the “wants vs. needs” and to validate your assumptions beyond desktop research. Be sure to get your core team members together, including Business, Ops and Tech, to generate a hypothesis. I’ll give an example below.

Pro tip #2 – Have short immersion days with a debrief at the end for everyone

Scheduling really depends on your project. I have planned for trips that are from a few hours to up to fourteen days long. Be sure not to have too many locations in a single day and spread them out evenly in case there are unexpected roadblocks such as traffic jams that might contribute to rushed research.

Do include Brief and Debrief sessions in your schedule. I’d recommend shorter immersion days so that you have enough energy left for the critical Debrief session at the end of the day. The structure should be kept very simple, with a focus on collating ALL observations from the contextual inquiries into writing. It’s up to you how you structure your document.

Be prepared for the unexpected.
Be prepared for the unexpected.

 

Pro tip #3 – Recce locations beforehand

Once you’ve nailed down the locations, it is essential for you to get a local resident to recce the places first. In Southeast Asia, more often than not you’ll realise that information found online is unreliable or misleading, so doing a physical recce will save you a lot of time.

We experienced a few time-wasting incidents when specific locations turned out not to be what we intended. For example, while on our grocery run, we wanted to visit a local wet market that opens only very early in the morning. We got up at 5 am and drove about 1.5 hours, only to realise the wet market was not open to the public, and we eventually got chased out by the security guards.

Pro tip #4 – Never assume a customer’s journey

(even though you’ve experienced it before as a customer)

One of the most important processes throughout a product life cycle is to understand a customer’s journey. It’s particularly important to understand the journey if we are not familiar with the actual environment. Take our GrabFresh service as an example. It’s a complex journey that happens behind the scenes. Desktop research might not be enough to fully validate the journey; an immersion trip that puts you in the field will ensure you go through the lifecycle of the entire process and observe and note all the phases that happen in the real environment.

GrabFresh user journey

 

Pro tip #5 – Be 100% sure of your open-ended, non-leading questions that will validate your hypothesis.

This part is essential to the quality of your immersion outcome. Not spending enough time crafting or vetting the questions thoroughly might leave you with skewed insights and could jeopardise your entire immersion. Be sure your questions link up with your hypothesis, and provide backup questions to support your assumptions.

For example, don’t suggest answers in questions.

Bad: “Why do you like this supermarket? Cheap? Convenient?”

Good: “Tell me why you chose this particular supermarket?”

Pro tip #6 – Break into smaller groups of 2 to 3. Dress comfortably and like a local. Keep your expensive belongings out of sight.

During my recent trip, I visited a lot of places that unknowingly had very tight security. One of the mistakes I made was going as a group of 6 (foreign-looking, and – okay –  maybe a little touristy with our appearances and expensive gadgets).

Out of nowhere, once we started approaching customers for interviews, and snapping photos with our cameras and phones, we could see the security teams walking towards us. Unfortunately, we were asked to leave the premises when we could not provide a permit.

As luck would have it, we eyed a few customers and approached them when they were further away from the original location. Success!

Pro tip #7 – Find translators with people skills and interview experience.

Most of my immersion trips are overseas, where English is not the main language. I get annoyed at myself for not being able to interview non-English-speaking customers. Having seasoned, outgoing translators does help a lot! If you feel awkward standing around waiting for a translated answer, feel free to step away and let the translator interview the customer without feeling pressured. Be sure it’s all recorded for transcription later.

Insights + Action plan= Strategy

Findings are significant; they’re the basis of everything you do while on immersion. But what’s more important is the ability to connect those dots and extract value from them. It’s similar to how we can amass tons of raw data, which is entirely pointless if nothing is done with it.

A good strategy usually comes from good insights that are actionable.

For example, we found out that a percentage of the customers we interviewed did not know that GrabFresh has a pool of professional shoppers who pick grocery items for customers. Their impression was that a driver would receive their order, drive to the location, get out of their vehicle and go into the store to do the picking. That’s not right, and it hinders customers from making their first purchase through the app.

Observing a personal shopper interacting with Grab driver-partner.
Observing a personal shopper interacting with Grab driver-partner.

 

So, in this case, our hypothesis was: if customers are aware of personal shoppers, the number of orders will increase.

This perception was shared by several customers and may have had an impact on our business. So we needed to take it back to the team, look at the data, brainstorm, and come up with a great strategy to improve the perception and its impact on our business (whether good or bad).

Wrapping Up

After a full immersion, it is always important to ask each and every member of some of these questions:

“What went well? What did you learn?”

“What can be improved? If you could change one thing, what would it be?”

I’d usually document them and have a reflection for myself so that I can pick up what worked, what didn’t and continue to improve for my next immersion trip.

Following the Double Diamond framework, immersion trips are part of the “Discover” phase, where we gather customer insights. Typically, I follow up with a Design sprint workshop where we start framing the problems. In that workshop, experts and researchers share their domain knowledge and the research insights uncovered from various methodologies, including immersions.

Then, hopefully, we will have some actionable changes that we can execute confidently.

So, good luck, bring some sunblock and see you on the ground!

If you’d like to connect with Sherizan, you can find him on LinkedIn.

Recipe for building a widget: How we helped to “peak-shift” demand by helping passengers understand travel trends

Post Syndicated from Grab Tech original https://engineering.grab.com/peak-shift-demand-travel-trends

Credits: Photo by rawpixel on Unsplash


Stuck in traffic in a Grab ride? Pass the time by opening your Grab app and checking out the Feed – just scroll down! You’ll find widgets for games, polls, videos, news and even food recommendations!

Beyond serving your everyday needs, we want to provide our users with information that is interesting, useful and relevant. That’s why we’re always coming up with new widgets.

Building each widget takes close collaboration across multiple teams – from Product Management to Design, Engineering, Behavioural Science, and Data Science and Analytics. Sounds like a lot of people, doesn’t it? But you’d be surprised to hear that this behind-the-scenes collaboration moves rapidly, usually in the span of one month – which means we often go from ideation to product release in just a few weeks.

Travel Trends Widget

This fast-and-furious process is anchored on one word – “customer-centric”. And that’s how it all began with our “Travel Trends Widget” – a widget that provides passengers with an overview of historical supply and demand trends for their current location and nearby time periods.

Because we had so much fun developing this widget, we wanted to write a blog post to share with you what we did and how we did it!

Inspiration: Where it all started

Transport demand can be rather lumpy. Owing to organic patterns (e.g. office hours), a lot of passengers tend to request cars around the same time. In periods like this, the increase in demand can outpace the arrival of driver supply, increasing the waiting time for passengers.

Our goal at Grab is to make sure people get a ride when they want it and at the price they want, so we got to thinking about how we can ease this friction by leveraging our treasure trove – Big Data! – to help our passengers better plan their trips.

As we were looking at the data, we noticed that there is a seasonality to demand and supply: at certain times and days, imbalances appear, peak and disappear, and the process repeats itself. Studies say that humans in general, unless shown a compelling reason or benefit for change, are habitual beings subject to inertia. So we set out to achieve exactly that: To create a widget to surface information to our passengers that may help them alter their decisions on when they choose to book a ride, thereby redistributing some of the present peak demands to periods just before and after peak – also known as “peak shifting the demand”!

While this widget is the first of its kind in the ride-hailing industry, “peak-shifting” was actually coined and introduced long ago!

London Transport Museum Trends

As you can see from this post from the London Transport Museum (Source: Transport for London), the London tube tried peak-shifting long before anyone else: the original ad from 1928 is displayed on the left, and a 2015 ad comparing the trends to 1928 is displayed on the right.

Trends from a hotel in Beijing

You may also have seen something similar at the last hotel you stayed at. Notice here a poster in an elevator at a Beijing hotel, announcing the best times to eat breakfast in comfort and avoid the crowd. (Photo credits to Prashant, our Product Manager, who saw this on holiday.)

To apply “peak-shifting” and help our users better plan their trips, we decided to dig in and leverage our data. It was way more complex than we had initially thought, as market conditions could be different on different days. This meant that generic statements like “5PM-8PM are peak hours and prices will be high” would not hold true. Contrary to general perception, we observed that even during peak hours, there are buckets of time with no surge or low surge.

For instance, plots 1 and 2 below show what a typical Monday and Tuesday surge looks like in a given month, respectively. One of the key insights is that the surge trend during peak hours differs between Monday and Tuesday, reinforcing our initial hypothesis that every day is unique.

So we used machine learning techniques to build a forecasting widget that helps our users plan their trips in advance by showing pricing trends for the next two hours. With a bit of flexibility, riders can ride the tide!

Grab trends

So how exactly does this widget work?!

Historical trends for Monday

It pulls together historically observed imbalances between supply and demand for the consumer’s current location and nearby time periods. Aggregated data is displayed to consumers in easily interpreted visualisations, so that they can plan to leave at times when there is more supply, and potentially save on fares.
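
To make this concrete, here is a minimal, illustrative sketch in Python (not Grab’s production pipeline; the data format and bucket granularity are assumptions on our part) of how historical surge observations could be bucketed by day of week and hour, then read back for the next couple of hours:

```python
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import mean

def build_trend_index(observations):
    """Group historical surge observations into (weekday, hour) buckets.

    `observations` is an iterable of (timestamp, surge_level) pairs,
    e.g. (datetime(2019, 1, 7, 18, 30), 1.4).
    """
    buckets = defaultdict(list)
    for ts, surge in observations:
        buckets[(ts.weekday(), ts.hour)].append(surge)
    # Average each bucket so the widget only has to look up a small table.
    return {key: mean(values) for key, values in buckets.items()}

def upcoming_trend(trend_index, now, hours_ahead=2):
    """Return the historical average surge for the next `hours_ahead` hours."""
    trend = []
    for offset in range(hours_ahead + 1):
        slot = now + timedelta(hours=offset)
        trend.append(trend_index.get((slot.weekday(), slot.hour), 1.0))
    return trend

# Example: feed in past observations, then ask for the next two hours.
history = [(datetime(2019, 1, 7, 18, 5), 1.6), (datetime(2019, 1, 14, 18, 40), 1.2)]
index = build_trend_index(history)
print(upcoming_trend(index, datetime(2019, 1, 21, 17, 0)))
```

The real system is far more granular (per city and area) and, as mentioned above, uses machine learning rather than plain averages, but the widget-facing idea is the same: precompute a compact table that is cheap to look up.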

How did we build the widget? Loop, agile working process, POC & workstream

Widget-building is an agile, collaborative, and simultaneous process. First, we started with analysis from the Product Analytics team, pulling out data on traffic trends, surge patterns, and behavioural insights for both passengers and drivers in Singapore.

When we noticed the existence of seasonality for each day of the week, we came up with more precise analytical and business questions to dig deeper into the data. Once our hypotheses were verified, we decided to build a widget.

The Behavioural Science, UX (User Experience) Design, and Product Management teams then joined in and started giving shape to the problem we were solving. Our Behavioural Scientists shared their expertise on how information, suggestions, and choices should be presented to enable easy assimilation and beneficial action. Daily whiteboarding breakouts, endless back-and-forth conversations, and a healthy amount of challenge-and-accept culture ensured that we distilled the idea down to its core. We then presented the relevant information with just the right level of detail and messaging to allow users to take the intended action, i.e. shift their demand outside of peak periods if possible.

Our amazing regional Copywriting team then swung into action to put our intent into words in 7 different languages for our users across South-East Asia. Simultaneously, our UX designers and Full-stack Engineers started exploring the best visual components to communicate data on time trends to users. More on this later, but suffice it to say that plenty of ideas were explored and discarded in a collaborative process, which aimed to create something that’s intuitive and engaging while being robust and scalable enough to work across all types of devices.

While these designs made their way up to engineering, the Data Science team worked on finding the most rigorous method to deduce the historical trend of surge across all our cities and areas, and time periods within them. There were discussions on how to best store and update this data reliably so that the widget itself can access it with great performance.

Soon after, we went into the development process, and voila! We had the first iteration of the widget ready on our staging (internal testing) servers in just 2 weeks! This prototype was opened up to the core team for an influx of feedback.

And just two weeks later, the widget made its way to our Singapore and Jakarta Feeds, accessible to the world at large! Feedback from our users started pouring in almost immediately (thanks to the rich feedback functionality that comes with each widget), ranging from great to sometimes not-so-great, and we listened to all of it with a keen ear! And thus began a new cycle of iterations and continuous improvement, more of which we will share in a subsequent post.

In the trenches with the creators: How multiple teams got together to make this come true

Various disciplines within our cross-functional team came together to whip up this widget, each contributing their expertise to the end product.

Using Behavioural Science to simplify choices and design good outcomes

Behavioural Science helped us explore many facets of consumer behaviour in order to plan and design the widget: understanding how consumers think, and conceptualizing a widget that can be easily understood and used by consumers.

While fares are governed entirely by market conditions, it’s important for us to explain the economics to customers. As a customer-centric company, we aim to make consumers feel that they own their decisions, which they can make based on full information. And this is the role of Behavioural Scientists at Grab!

In guiding customers through the information, the Behavioural Science team had the following three objectives in mind while building the Travel Trends widget:

  1. Offer transparency on the fares: By exposing our historic surge levels for a 4-hour period, we wanted to ensure that the passenger is aware of the surge levels and does not treat the fare as a nasty shock.
  2. Give information that helps them plan: By showing them surge levels for the next two hours, we wanted to help customers who have the flexibility to plan for a better time, giving them the power to decide based on transparent information.
  3. Provide helpful tips: Every bar gives users tips on the conditions at that time and in the immediate future. For instance, a low surge bar followed by a high surge bar gives the tip “Psst… Leave now, it might get busy later!”, helping people understand the graph and nudging them to take action. If you are interested in saving on fares, may we suggest tapping around all the bars to reveal the secret pro-tips? (A toy sketch of how such tips might be selected follows this list.)
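
As a toy illustration only – the thresholds and copy below are invented for this sketch, not Grab’s actual rules or wording – tip selection based on the shape of neighbouring bars could look something like this:

```python
# Hypothetical thresholds and copy; the real rules and wording are
# maintained by the Copywriting and Behavioural Science teams.
LOW, HIGH = 1.2, 1.8

def tip_for_bar(current_surge, next_surge):
    """Pick a nudge based on the selected bar and the one after it."""
    if current_surge < LOW and next_surge >= HIGH:
        return "Psst... Leave now, it might get busy later!"
    if current_surge >= HIGH and next_surge < LOW:
        return "Fares may drop soon. A short wait could save you money."
    if current_surge >= HIGH:
        return "It's a busy period right now."
    return "Demand looks calm around this time."

print(tip_for_bar(1.0, 2.1))  # -> "Psst... Leave now, it might get busy later!"
```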

Designing interfaces that lead to consumer success by abstracting complexity

The Design team is behind the colors and shapes that make up the widget you see and interact with! The team took inspiration from Google’s Popular Times.

Source/Credits: Google Live Popular Times
Source/Credits: Google Live Popular Times


Right from the outset, our content and product teams were keen to surface additional information and actions with each bar to keep the widget interactive and useful. One of the early challenges was to arrive at a gesture that invites the user to interact with and intuitively navigate the bars on the widget, but that does not conflict with other gestures (e.g. scrolling and scrubbing) that the user is already trained to perform on the feed. We found that tapping was both unused and intuitive, so we chose it for interaction with the bars.

We then went into rounds of iteration on the visual design of the widget. In this process, multiple stakeholders were involved, ranging from Product to Content to Engineering. We had to overcome a number of constraints, i.e. the limited canvas of a widget and the context of a user exploring the feed. By re-using existing libraries and components, we managed to keep the development light and ship something fast.

GrabCar trends near you

Dozens of revisions and four iterations later, we landed on a design that we felt equipped the feature for its user-facing goal, and did so in a manner that was aesthetically appealing!

And finally, we managed to deliver on the feature’s goal by surfacing just the right level of detail in a manner that is intuitive yet effective at peak-shifting demand.

Bringing all of this to fruition through high performance engineering

Our Development Engineering team was in charge of developing the widget and making it available to our users in just a few weeks’ time – materialising the work of the other teams.

One of their challenges was to find the best way to process the vast amount of data (millions of database entries) so it can be visualized simply as bar charts. Grab’s engineers had to achieve this while making sure performance is as resilient as possible.

There were two options for doing this:

a) Fetch the data directly from the DB for each API call; or

b) Store the data in an in-memory data structure that is refreshed on a schedule, so that API calls no longer have to hit the DB.

Considering that this feature would likely attract a lot of traffic, and thus a high QPS, we decided that the former option would be too costly. Ultimately, we chose the latter option, since it is more performant and more scalable.
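
A minimal sketch of that second approach might look like the following (illustrative only: the class, refresh interval, and function names are our assumptions, not Grab’s implementation). The idea is that a background job periodically loads the aggregated trends into memory, and API handlers only ever read from that in-memory copy:

```python
import threading
import time

class TrendCache:
    """Keep the aggregated trend table in memory and refresh it on a schedule."""

    def __init__(self, load_from_db, refresh_seconds=300):
        self._load_from_db = load_from_db   # callable that queries the DB
        self._refresh_seconds = refresh_seconds
        self._lock = threading.Lock()
        self._data = {}

    def start(self):
        self._refresh()                      # warm the cache before serving
        thread = threading.Thread(target=self._loop, daemon=True)
        thread.start()

    def _loop(self):
        while True:
            time.sleep(self._refresh_seconds)
            self._refresh()

    def _refresh(self):
        fresh = self._load_from_db()         # the only place the DB is hit
        with self._lock:
            self._data = fresh

    def get(self, city_id, area_id):
        with self._lock:
            return self._data.get((city_id, area_id), [])

# The API handler then reads from memory only, e.g.:
# cache = TrendCache(load_from_db=query_aggregated_trends); cache.start()
# def handle_request(city_id, area_id): return cache.get(city_id, area_id)
```

The trade-off is slightly stale data between refreshes, which is acceptable here because the widget shows historical trends rather than live prices.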

On the frontend, the challenge was to cater to the intricate requests from our designers. We use chart libraries to increase our development speed, but not all of the requirements were readily supported by these libraries.

For instance, a library might make visualising charts easy, but not customising them. When our designers wanted an average line in dotted form, the library did not support it out of the box. The moving arrow pointers as you move between bars, and the bars changing color when tapped – all of this required countless CSS tweaks.

CSS tweak on trends widget
CSS tweak on trends widget

Closing the product loop with user feedback and data driven insights

One of the most crucial parts of launching any product is to ensure that customers are engaging with the widget and finding it useful.

To understand what customers think about the widget – whether they find it useful and whether it is helping them plan better – we delved into the huge mine of clickstream data.

User feedback on the trends widget

We found that 1 in 3 users who make a booking every day interact with the widget, and more than 70% of these users have given the widget a positive rating. This validates our initial hypothesis that, given the option, our customers appreciate the freedom to plan their trips and a more transparent ecosystem.

These users also indicated what they like most about the widget: 61% gave a positive rating for usefulness, 20% were impressed by the design (kudos to our fantastic designer Ajmal!), and 13% praised its usability.

Tweet about the widget

Beyond internal data, our widget also made the rounds on social media channels. For example, here is a screenshot of what our users have to say on Twitter.

We closely track these user engagement and feedback metrics to ensure that we keep improving and coming up with new iterations that help us serve our customers better.

Conclusion

We hope you enjoyed reading about how we went from ideation, through iterations, to a finished widget in the hands of users – all in one month! Many hands helped along the way. If you are interested in joining this hyper-proactive problem-solving team, please check out Grab’s career site!

And if you have feedback for us, we are here to listen! While we could not be happier to see the positive reaction from the public, we are also thrilled to hear your suggestions and advice. Please leave us a note using the widget’s comment function!

Epilogue

We just released an upgrade to this widget that allows users to set reminders and be notified about the availability of good fares in a time period of their choosing. We will keep watch and come knocking! Go ahead, find the widget on your Grab feed, set a reminder, and save on fares on your next ride!

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Build Your Own weather station kit assembled

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools around the world that had applied to be part of our weather station programme. In the original kit was a special HAT that allows the Pi to collect weather data with a set of sensors.

The original Raspberry Pi Oracle Weather Station HAT – Build Your Own Raspberry Pi weather station

The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, but we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Build Your Own Raspberry Pi weather station

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.
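
To give a flavour of what that looks like (a simplified sketch, not the guide’s exact code; `read_temperature()` here is a stand-in for whichever sensor library you end up using), a reading loop that stores values in a SQLite database can be as small as this:

```python
import random
import sqlite3
import time

def read_temperature():
    # Stand-in for a real sensor read (e.g. via a BME280 library);
    # here we just return a plausible fake value.
    return 20.0 + random.uniform(-2.0, 2.0)

conn = sqlite3.connect("weather.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (taken_at TEXT, temperature REAL)"
)

while True:
    temperature = read_temperature()
    conn.execute(
        "INSERT INTO readings VALUES (datetime('now'), ?)", (temperature,)
    )
    conn.commit()
    time.sleep(60)  # one reading per minute
```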

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced both in terms of electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can build a functioning weather station without soldering by following our guide, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.

Randomly generated, thermal-printed comics

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/random-comic-strip-generation-vomit-comic-robot/

Python code creates curious, wordless comic strips at random, spewing them from the thermal printer mouth of a laser-cut body reminiscent of Disney Pixar’s WALL-E: meet the Vomit Comic Robot!

The age of the thermal printer!

Thermal printers allow you to instantly print photos, data, and text using a few lines of code, with no need for ink. More and more makers are using this handy, low-maintenance bit of kit for truly creative projects, from Pierre Muth’s tiny PolaPi-Zero camera to the sound-printing Waves project by Eunice Lee, Matthew Zhang, and Bomani McClendon (and our own Secret Santa Babbage).
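
To give you an idea of just how few lines that can be, here is a rough sketch assuming a USB ESC/POS printer driven by the python-escpos library; the vendor/product IDs and file name below are placeholders for your own device and image:

```python
from escpos.printer import Usb

# Placeholder USB vendor/product IDs; replace with your printer's own.
printer = Usb(0x0416, 0x5011)

printer.text("Hello from a thermal printer!\n")
printer.image("snapshot.png")  # print a bitmap; keep it narrower than the paper
printer.cut()
```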

Vomiting robots

Interaction designer and developer Cadin Batrack, whose background is in game design and interactivity, has built the Vomit Comic Robot, which creates “one-of-a-kind comics on demand by processing hand-drawn images through a custom software algorithm.”

The robot is made up of a Raspberry Pi 3, a USB thermal printer, and a handful of LEDs.

Comic Vomit Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

At the press of a button, Processing code selects one of a set of Cadin’s hand-drawn empty comic grids and then randomly picks images from a library to fill in the gaps.

Vomit Comic Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

Each image is associated with data that allows the code to fit it correctly into the available panels. Cadin says this about the concept behind his build:

Although images are selected and placed randomly, the comic panel format suggests relationships between elements. Our minds create a story where there is none in an attempt to explain visuals created by a non-intelligent machine.

The Raspberry Pi saves the final image as a high-resolution PNG file (so that Cadin can sell prints on thick paper via Etsy), and a Python script sends it to be vomited up by the thermal printer.
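
A much-simplified sketch of that flow might look like this in Python with Pillow (illustrative only: the file names and the JSON panel format are made up, and Cadin’s actual composition step runs in Processing):

```python
import json
import random
from PIL import Image

# Hypothetical inputs: a blank grid PNG plus a JSON file describing its panels,
# e.g. [{"x": 20, "y": 20, "width": 300, "height": 300}, ...]
grid = Image.open("grids/grid_01.png").convert("RGB")
panels = json.load(open("grids/grid_01_panels.json"))
library = ["art/cat.png", "art/cloud.png", "art/robot.png", "art/door.png"]

for panel in panels:
    # Pick a random drawing and scale it to fit the panel.
    drawing = Image.open(random.choice(library)).convert("RGB")
    drawing = drawing.resize((panel["width"], panel["height"]))
    grid.paste(drawing, (panel["x"], panel["y"]))

# Save a high-resolution PNG; a separate script would send it to the printer.
grid.save("output/comic.png")
```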

Comic Vomit Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

For more about the Vomit Comic Robot, check out Cadin’s blog. If you want to recreate it, you can find the info you need in the Imgur album he has put together.

We ❤ cute robots

We have a soft spot for cute robots here at Pi Towers, and of course we make no exception for the Vomit Comic Robot. If, like us, you’re a fan of adorable bots, check out Mira, the tiny interactive robot by Alonso Martinez, and Peeqo, the GIF bot by Abhishek Singh.

Mira Alfonso Martinez Raspberry Pi

The post Randomly generated, thermal-printed comics appeared first on Raspberry Pi.

Recording lost seconds with the Augenblick blink camera

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/augenblick-camera/

Warning: a GIF used in today’s blog contains flashing images.

Students at the University of Bremen, Germany, have built a wearable camera that records the seconds of vision lost when you blink. Augenblick uses a Raspberry Pi Zero and Camera Module alongside muscle sensors to record footage whenever you close your eyes, producing a rather disjointed film of the sights you miss out on.

Augenblick blink camera recording using a Raspberry Pi Zero

Blink and you’ll miss it

The average person blinks up to five times a minute, with each blink lasting 0.5 to 0.8 seconds. These half-seconds add up to about 30 minutes a day. What sights are we losing during these minutes? That is the question asked by students Manasse Pinsuwan and René Henrich when they set out to design Augenblick.

Blinking is a highly invasive mechanism for our eyesight. Every day we close our eyes thousands of times without noticing it. Our mind manages to never let us wonder what exactly happens in the moments that we miss.

Capturing lost moments

For Augenblick, the wearer sticks MyoWare Muscle Sensor pads to their face, and these detect the electrical impulses that trigger blinking.

Augenblick blink camera recording using a Raspberry Pi Zero

Two pads are applied over the orbicularis oculi muscle that forms a ring around the eye socket, while the third pad is attached to the cheek as a neutral point.

Biology fact: there are two muscles responsible for blinking. The orbicularis oculi muscle closes the eye, while the levator palpebrae superioris muscle opens it — and yes, they both sound like the names of Harry Potter spells.

The sensor is read 25 times a second. Whenever it detects that the orbicularis oculi is active, the Camera Module records video footage.

Augenblick blink recording using a Raspberry Pi Zero

Pressing a button on the side of the Augenblick glasses sets the code running. An LED lights up whenever the camera is recording and also serves to confirm the correct placement of the sensor pads.
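
A stripped-down sketch of that loop (our assumptions, not the students’ actual code: the MyoWare signal is read through an MCP3008 ADC via gpiozero, video is captured with picamera, and the threshold is a placeholder you would tune) might look like this:

```python
import time
from gpiozero import MCP3008, Button, LED
from picamera import PiCamera

muscle = MCP3008(channel=0)    # MyoWare output wired to the ADC
status_led = LED(17)
start_button = Button(2)
camera = PiCamera()

BLINK_THRESHOLD = 0.6          # placeholder; tune for your own sensor placement
recording = False
clip = 0

start_button.wait_for_press()  # arm the device

while True:
    if muscle.value > BLINK_THRESHOLD and not recording:
        camera.start_recording(f"/home/pi/blink_{clip:04d}.h264")
        status_led.on()
        recording = True
    elif muscle.value <= BLINK_THRESHOLD and recording:
        camera.stop_recording()
        status_led.off()
        recording = False
        clip += 1
    time.sleep(1 / 25)         # poll the sensor 25 times a second
```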

Augenblick blink camera recording using a Raspberry Pi Zero

The Pi Zero saves the footage so that it can be stitched together later to form a continuous, if disjointed, film.

Learn more about the Augenblick blink camera

You can find more information on the conception, design, and build process of Augenblick here in German, with a shorter explanation including lots of photos here in English.

And if you’re keen to recreate this project, our free project resource for a wearable Pi Zero time-lapse camera will come in handy as a starting point.

The post Recording lost seconds with the Augenblick blink camera appeared first on Raspberry Pi.

Security and Human Behavior (SHB 2018)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/security_and_hu_7.html

I’m at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I’ll be hosting the event at Harvard.