Design AI applications that instill trust

June 13, 2018 | By Tram Tran

Trust is both abstract and complex, which makes it difficult to pin down. Familiarity and reputation are key factors in establishing trust between the user and the solution, but when a new platform is introduced, neither exists. As PwC points out, the adoption of AI is always met with scepticism from a variety of stakeholders.

Since joining HIVERY as a UX designer, I have had the opportunity to work with various customers in retail. These companies face an immense challenge in mining large volumes of multidimensional data about product performance, store traffic, consumer behaviours and loyalties. While we can help solve their problems by using AI to automate the process and arrive at better decisions, customers tend to remain sceptical unless we put effort into growing their confidence in the application’s recommendations. For that reason, it has become apparent to us that the foundation of AI application design rests on trust building.

The onus of trust building, however, is on the whole company (rather than on a single department) and is woven through the whole process (rather than confined to a single phase). Trust starts at the algorithm level, where we work tirelessly to build reliable models that produce consistent results, and extends to the interface level, where we employ different methods to present the outputs to end users in a digestible manner.

Approaching the monumental task of cultivating trust, our team at HIVERY has come to agree on the following design principles, which guide our decisions when conceiving, developing and managing AI-based products.

Design principles for building trust:

Transparency

Interacting with an AI black box can be a very unpleasant experience when we don’t understand the underlying process of how it works.

Transparency in design helps users understand why a particular recommendation is made and sheds light on the process an AI system follows to arrive at its conclusions and recommendations.

A considerable body of research supports the fact that transparency in design has a significant impact on the user’s experience: it makes the system more reasonable and comprehensible.

As part of HIVERY’s process, we always ensure that our customers understand how the algorithm works and what range of outputs they can expect from the tool. From an interface standpoint, there are also many tactics that can be used to implement transparency in design. One example is displaying data in ways that allow users to quickly validate the outputs against their own working experience, which helps them identify any problems that may exist. In Vending Analytics (an AI application that optimises the assortment, space and price of vending machines), for instance, users can see why a product is added to or removed from a machine by looking at its performance against all other available products.

In another project we implemented a progress indicator that shows, step by step, what our algorithm is doing in the back end when users choose to optimise the performance of their supermarket stores; the steps were described in business terms rather than technical language.
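
To make the idea concrete, here is a minimal TypeScript sketch of how such a progress indicator might be wired up. The stage names, labels and percentages are hypothetical, invented for illustration rather than taken from our actual pipeline.

```typescript
// Hypothetical mapping of back-end optimisation stages to the
// business-language labels shown in the progress indicator.
type OptimisationStage =
  | "load_sales_history"
  | "estimate_demand"
  | "apply_space_constraints"
  | "rank_assortment_candidates";

const stageLabels: Record<OptimisationStage, string> = {
  load_sales_history: "Reviewing recent sales for this store",
  estimate_demand: "Estimating demand for each product",
  apply_space_constraints: "Checking space and placement rules",
  rank_assortment_candidates: "Ranking products for the final assortment",
};

// Called whenever the back end reports progress; the user only ever
// sees the business-friendly description, never the technical stage name.
function renderProgress(stage: OptimisationStage, percent: number): string {
  return `${stageLabels[stage]} (${Math.round(percent)}%)`;
}

console.log(renderProgress("estimate_demand", 42)); // "Estimating demand for each product (42%)"
```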

While promoting the need for transparency, we are well aware that there are many advanced algorithms whose creators cannot fully understand their “thought process” or explain their behaviour. A case in point is the mystery behind every move DeepMind’s AlphaGo makes when it plays the ancient board game Go.

Simplicity

Simplicity is a relative concept: what seems obvious to me may not seem so to you. It can therefore only be achieved by thoroughly understanding the user’s mental model, their pattern of enquiry and their decision-making process. Within the company, it’s also vital to ensure that all stakeholders are given an opportunity to join design critique sessions and discuss the level of simplicity the application requires.

At HIVERY, simplicity is about ensuring that our team designs a data-driven platform that empowers users without a steep learning curve, while retaining all the layers of sophistication their work demands. It’s quite a challenge, especially with high-end technology, but it’s nonetheless an important goal: it doesn’t matter how capable a platform is if end users can’t harness that power.

Though it varies from one application to another, there are a few things that we always need to consider when striving for simplicity:

  • Don’t stick to the old approach

When designing an AI application interface, it’s important to bear in mind that we’re building a gateway for users to leverage the power of the algorithms behind it. The interface is therefore not only bound by the users’ mental model but also constrained by how the back-end algorithms work. We need to strike the right balance between what users expect to see and how their process currently works, and what they should see and how their process will evolve once the application is introduced into the workflow.

  • Emphasise Information Architecture

The point of AI is to offload as much information and processing from the user as possible. There’s no need to bog end users down with redundant data, too many menu options or a mega navigation with deeply nested pages.
It’s always a good idea to apply the principles of good information architecture so that users can navigate the application with ease, quickly find the information required to make the right decision and see clarity in every piece of presented data. For those unfamiliar with the term, information architecture concerns the visual hierarchy and the labelling, navigation and search systems.

  • Present a clean layout

While we can all agree that sci-fi interfaces are brilliant pieces of work, it’s not necessary to bring that level of complexity into the interface. Based on our experience, the layout of an AI application should be minimalistic, with just a few well-defined sections and plenty of white space. An achromatic colour scheme is preferred so that key information in signal colours stands out better. Reusing interface components is another promising means of reducing perceived complexity.

  • Create flexibility for customisation

Designing an application that accommodates the needs of many user groups is a tricky balancing act to pull off. In most organisations, different stakeholders will view the data in different ways and use the AI recommendations to perform different functions. Providing simplicity for them means allowing everyone to customise the platform to fit their needs.

Even when there is only one group of users, they still need a good amount of autonomy in choosing the data displays and methods that best suit the questions they need to answer.
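
As an illustration, here is a small TypeScript sketch of what per-user view preferences could look like. The field names, metrics and defaults are invented for the example, not taken from any HIVERY product.

```typescript
// Hypothetical per-user view configuration: each stakeholder keeps their
// own choice of metrics, chart type and default filters.
interface ViewPreferences {
  visibleMetrics: string[];
  chartType: "bar" | "line" | "table";
  defaultFilters: Record<string, string>;
}

// Sensible defaults so the platform is usable before any customisation happens.
const defaultView: ViewPreferences = {
  visibleMetrics: ["revenue", "units_sold"],
  chartType: "table",
  defaultFilters: {},
};

// Merge whatever the user has saved on top of the defaults.
function resolvePreferences(saved: Partial<ViewPreferences>): ViewPreferences {
  return { ...defaultView, ...saved };
}

// A merchandiser who mostly cares about one category and prefers charts.
const merchandiserView = resolvePreferences({
  chartType: "bar",
  defaultFilters: { category: "beverages" },
});
```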

Control

Mediating between two diverging concepts such as control and autonomy (and the trust on which the autonomy is based) is a problem in the design of human-computer interfaces.

In every AI application that we design, not only do we provide recommendations for business decisions, but we also empower users to create different scenarios, test various hypotheses, and compare their own intuition-based decision with the machine’s recommendation.

The reason is that, in the real world, business constraints and circumstances will sometimes require users to create overrides, manual adjustments and other deviations from the ideal use case.

The power of making the final decision remains in the users’ hands, yet the interface must always inform them of the possible consequences of their actions.
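
A minimal TypeScript sketch of this pattern might look like the following. The data shapes, the revenue figure and the confirmation callback are hypothetical; the point is only that the interface surfaces the estimated consequences of an override and then lets the user decide.

```typescript
// Hypothetical sketch: the user keeps the final say, but the interface
// shows the estimated consequences of an override before applying it.
interface Recommendation {
  productId: string;
  action: "add" | "remove";
  expectedWeeklyRevenue: number; // estimated impact of following the recommendation
}

interface OverrideResult {
  applied: boolean;
  warnings: string[];
}

function applyOverride(
  recommendation: Recommendation,
  userAction: "add" | "remove",
  confirm: (warnings: string[]) => boolean // e.g. a confirmation dialog
): OverrideResult {
  const warnings: string[] = [];

  if (userAction !== recommendation.action) {
    warnings.push(
      `Going against the recommendation may change weekly revenue by ` +
        `roughly $${recommendation.expectedWeeklyRevenue.toFixed(2)}.`
    );
  }

  // No deviation: apply silently. Otherwise ask the user to confirm after
  // showing the warnings: the application informs, the user decides.
  const applied = warnings.length === 0 || confirm(warnings);
  return { applied, warnings };
}

// Example: the model recommends removing a slow seller, the operator insists on keeping it.
const result = applyOverride(
  { productId: "SKU-123", action: "remove", expectedWeeklyRevenue: 12.5 },
  "add",
  (w) => { console.warn(w.join("\n")); return true; }
);
```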

Vulnerability

Image Credit: nadia_bormotova

To build trust, AI needs to communicate its confidence or, even better, express its fear of failure. While consistently demonstrating its superior intelligence in handling strategic analysis is of utmost importance, AI applications still need to be explicit about their limitations.

In Vending Analytics, there are many built-in controls that help to catch issues before they become a problem. For example, the application seeks the user’s confirmation when it detects anomalous data. It also expresses uncertainty when there are not sufficient data to predict whether a product can be added to a particular type of machine or location.

At a minimum, we need to include thresholds that make users aware of the boundaries within which the application’s recommendations are accurate. An example is setting a minimum and maximum price for a certain product.
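
Sketched in TypeScript, such guard rails could be as simple as the check below. The sample-size cut-off and price bounds are made-up values standing in for whatever thresholds the business agrees on, and the function names are hypothetical.

```typescript
// Hypothetical guard rails: flag low-confidence suggestions and keep
// prices inside bounds the business has signed off on.
interface PriceSuggestion {
  productId: string;
  suggestedPrice: number;
  sampleSize: number; // how many observations the model saw for this product
}

const MIN_SAMPLE_SIZE = 30; // below this, ask the user to confirm (made-up cut-off)
const PRICE_BOUNDS = { min: 1.0, max: 5.0 }; // agreed per-product limits (made-up values)

// Returns human-readable notes the interface can show next to the suggestion.
function reviewSuggestion(s: PriceSuggestion): string[] {
  const notes: string[] = [];

  if (s.sampleSize < MIN_SAMPLE_SIZE) {
    notes.push(
      `Only ${s.sampleSize} observations for ${s.productId}; ` +
        `please confirm this suggestion before applying it.`
    );
  }
  if (s.suggestedPrice < PRICE_BOUNDS.min || s.suggestedPrice > PRICE_BOUNDS.max) {
    notes.push(
      `Suggested price $${s.suggestedPrice.toFixed(2)} falls outside the agreed range ` +
        `of $${PRICE_BOUNDS.min.toFixed(2)} to $${PRICE_BOUNDS.max.toFixed(2)}.`
    );
  }
  return notes;
}

console.log(reviewSuggestion({ productId: "SKU-123", suggestedPrice: 6.2, sampleSize: 12 }));
```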

---

As technology continues to evolve, it’s hard to foresee how it will transform the nature of design work. At the moment, we believe a successful adoption of AI requires designers to demonstrate empathy not only towards users but also towards machines and, more than that, to find a way to bridge the gap of understanding between the two and so foster their collaborative relationship, because “it takes two to do the trust tango”.

---

Sources:
Building Trust
How to build trust and confidence in AI
AI Can Be A Troublesome Teammate
Tuning the Agent Autonomy: the Relationships between Trust and Control
Computer as Partner: A Synergistic Approach to Interaction Design
