Understanding friction in the context of using AI to support human decision-makers

Bogdana Rakova
4 min read · Nov 18, 2022


Beyond improving the AI systems themselves, we point to the need to think critically about the affordances of the interfaces through which we interact with them. Recent work in the field of fairness, accountability, and transparency of AI also points to the need for improved feedback loops and an integrative, participatory approach to their design and evaluation.

There are many kinds of friction between people and technology. We are asked to make decisions regarding cookie preferences and terms-of-service without having a clear understanding of what we’re agreeing to. Ad blockers have become ubiquitous in helping users turn off ad pop-ups. In the context of AI systems, we conducted a study to better understand the kinds of friction that arise among practitioners building AI — taking into account their organization’s structure and culture. The work was partly inspired by computer scientist Melvin E. Conway’s observation that “any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

Inspired by psychologist Daniel Kahneman’s “Thinking, Fast and Slow” theory of human reasoning and decision-making, interdisciplinary scholars are pioneering an emergent area of research that brings together neural and symbolic approaches to building AI systems. According to Kahneman’s theory, human decision-making is guided by the cooperation of two systems: System 1 is intuitive, fast, and takes little effort, while System 2 is employed in more complex decision-making involving logical and rational thinking.
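For intuition only, here is a minimal sketch of what such a division of labor might look like in code. It is not from the talk or from any particular published system; the system1 and system2 callables are hypothetical stand-ins for a fast, learned model and a slower, symbolic reasoner.

```python
# Illustrative only: a fast-path/slow-path arbitration loop loosely inspired by
# Kahneman's two-system account. Nothing here reflects a specific published system.
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class Decision:
    answer: Any
    source: str        # "system1" (fast, intuitive) or "system2" (slow, deliberate)
    confidence: float

def decide(query: Any,
           system1: Callable[[Any], Tuple[Any, float]],  # hypothetical fast, learned model
           system2: Callable[[Any], Any],                 # hypothetical slower, symbolic reasoner
           threshold: float = 0.9) -> Decision:
    """Answer with the fast model when it is confident; otherwise defer to the slow one."""
    answer, confidence = system1(query)
    if confidence >= threshold:
        return Decision(answer, "system1", confidence)
    return Decision(system2(query), "system2", 1.0)  # simplification: trust the deliberate path
```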

While the field of AI unquestionably strives for frictionless technology, what if we could design specific kinds of friction back in, in order to enable slowing down, self-reflection, conflict resolution, open collaboration, learning, and care?

For example, Twitter might nudge users to read a news article before retweeting it or allow them to write “notes” critiquing or explaining a tweet. Apple gives you a weekly Screen Time report pop-up showing the total amount of time you spent in apps. A web browser extension made by Mozilla allows you to report harmful YouTube recommendations.

Daniel Kahneman investigates friction in the context of conflicts of values, conflicts between the interests of the experiencing and the remembering selves, and conflicts between experienced utility and decision utility. For example, he explains that one of the tasks of System 2 is to overcome the impulses of System 1, which involves self-control and conflict resolution. Discussing the work of psychologist Paul Slovic, he draws attention to situations in which such disagreements reflect a genuine conflict of values.

The question of disentangling conflicts of values is central to the field of Speculative and Critical Design (SCD). Anthony Dunne and Fiona Raby describe SCD as a type of design practice that aims to challenge norms, values, and incentives, and in this way has the potential to become a catalyst for change. In their A/B table, they juxtapose design as it is usually understood (the A side) with the practice of SCD (the B side), highlighting that the two are complementary and that the goal is to facilitate a discussion.

SCD (the B side of the A/B comparison above) is not about providing answers but about asking questions, enabling debate, using design not as a solution but as a medium in the service of society, not about science fiction but about creating functional and social fictions. I wonder how the world would be different if we were to leverage speculative and critical design in the design of multi-agent AI systems.

Through that interdisciplinary lens, the main question in my research concerns experimenting with a symbolic approach to documenting AI systems in order to improve their robustness and reliability.

In particular, we ground the study of conflicts of values between people and AI in a taxonomy of sociotechnical harms, where we define sociotechnical harms as “the adverse lived experiences resulting from a system’s deployment and operation in the world — occurring through the ‘co-productive’ interplay of technical system components and societal power dynamics” (Shelby et al.).

The taxonomy, introduced in a recent paper, distinguishes between representational, allocative, quality-of-service, interpersonal, and social system/societal harms. For example, social stereotyping is a type of representational harm.
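As a rough illustration of how such a taxonomy might be encoded for use in an interface, the top-level categories could look like the sketch below. The category names follow the taxonomy above; the Python encoding and the short examples in the comments are my own sketch, not part of Shelby et al.

```python
from enum import Enum

class HarmCategory(Enum):
    """Top-level sociotechnical harm categories; the encoding itself is illustrative."""
    REPRESENTATIONAL = "representational"        # e.g. social stereotyping
    ALLOCATIVE = "allocative"                    # e.g. unfair allocation of opportunities or resources
    QUALITY_OF_SERVICE = "quality_of_service"    # e.g. a system working worse for some groups
    INTERPERSONAL = "interpersonal"              # e.g. harms arising between people through the system
    SOCIAL_SYSTEM = "social_system_societal"     # e.g. harms at the level of social systems and institutions
```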

AI safety researchers point out that human objectives and their associated values are often too complex to capture and express. However, recent research in the field of Cognitive Science has begun to reveal that human values have a systematic and predictable structure. Of course, values vary across cultures and sometimes even the same individual can hold conflicting values or make contradictory judgements.

To better understand the friction that may arise between conflicting human values and AI systems, we’re interested in building an ontology that enables decision-makers to formally specify a perceived experience of values misalignment that may lead to sociotechnical harms, in terms of (a rough sketch follows the list):

  • The specific AI task inputs and outputs (i.e., the outcome decision)
  • The human’s perception of harm, expressed against a taxonomy of harms
  • The internal state of the multi-agent system, including its model of the world, model of self, and model of others
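To make that concrete, a single report in such an ontology might look roughly like the following. The field names are placeholders of mine for illustration, not a finalized schema from our work.

```python
from dataclasses import dataclass, field

@dataclass
class MisalignmentReport:
    """One person's reported experience of values misalignment (illustrative schema)."""
    # 1. The specific AI task inputs and outputs, i.e. the outcome decision being contested
    task_input: str
    task_output: str
    # 2. The person's perception of harm, expressed against a taxonomy of harms
    perceived_harm: str                 # e.g. "representational/social_stereotyping"
    description: str = ""
    # 3. The internal state of the multi-agent system at decision time
    model_of_world: dict = field(default_factory=dict)
    model_of_self: dict = field(default_factory=dict)
    model_of_others: dict = field(default_factory=dict)
```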

The ontology would then be operationalized through nudges and choice architecture as part of the interface between people and AI. We hope that adding this kind of design friction could strengthen human agency and transparency by creating an entirely new kind of feedback loop between users and AI builders.
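Again as a sketch rather than a design, the friction could sit in the interaction loop itself: answer the person, invite an optional structured report, and route it back to the builders. Every function below is a hypothetical stand-in, not an existing API.

```python
def generate_response(prompt: str) -> str:
    # Hypothetical stand-in for a call to a large language model.
    return f"[model output for: {prompt}]"

def send_to_builders(report: dict) -> None:
    # Hypothetical stand-in for a feedback channel back to the AI builders.
    print("report submitted:", report)

def respond_with_friction(prompt: str) -> str:
    """Answer the user, then add a small moment of friction:
    an optional, structured report of perceived values misalignment."""
    output = generate_response(prompt)
    print(output)
    if input("Does this response conflict with your values? [y/N] ").strip().lower() == "y":
        send_to_builders({
            "task_input": prompt,
            "task_output": output,
            "perceived_harm": input("Which kind of harm best describes it? "),
        })
    return output
```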

We are currently building a prototype in the context of the interactions between people and large language models and would love to hear from you. How is your work related to these research questions?

This blog post is a summary of my slow lightning talk at the Thinking Fast and Slow and other Cognitive Theories in AI symposium, part of the AAAI Fall Symposia. See all the other talks and papers here.


Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency