A New Framework for Coming to Terms with Algorithms

Reflections on terms of service, gender equity, and chatbots

Bogdana Rakova
Data & Society: Points

--

“I agree to the terms of service” is perhaps the most falsely given form of consent. The small print and legalese in these contractual agreements often fail to give people meaningful consent and contestability in cases of AI functionality failures or the nonconsensual use of their content as training data for generative AI, exemplifying the friction between people and the technology companies leveraging AI.

But it doesn’t have to be this way.

The Terms-we-Serve-with (TwSw) is a socio-technical framework for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system. Resisting the one-time take-it-or-leave-it terms-of-service agreements we currently have, the TwSw presents a new social imaginary for coming to relational terms with algorithmic systems.

The TwSw is a feminist framework: it sets out to help people recognize, acknowledge, challenge, and transform existing power relations, social norms, and mental models in AI by offering an alternative model. Its five dimensions, or principles, are meant to enable communities, practitioners, builders, and policymakers to foster transparency, trust, and engagement in AI. In conversation with prior work in the field of feminist science and technology studies, these principles serve as entry points for technologists and policymakers to build more meaningful forms of consent and accountability through:

  • Co-constitution of user agreements, defining system boundaries and how interactions across those boundaries are constructed and constituted;
  • Addressing friction, engaging in dialogue in the production and resolution of conflict in the context of coercion and dark design patterns, leveraging the fields of design justice and critical design;
  • Generative informed refusal mechanisms, reflecting the need for a sufficient level of human oversight and agency, including opting out;
  • Contestability mechanisms for people to disagree, challenge, complain, dispute, or otherwise contest AI decisions and outcomes on the individual or collective level; and
  • Disclosure-centered mediation, to acknowledge and take responsibility for algorithmic harm, drawing on the field of medical law and engaging with forms and forums that facilitate reparative alternative dispute resolution.

How does the TwSw work in practice? The South African startup Kwanele offers one example. Kwanele is building a chatbot that helps women and children report and prosecute cases of gender-based violence, with an emphasis on helping people understand their legal rights as well as the processes for seeking recourse. To realize these goals, the chatbot needs to play several roles at once: a legal analyst, making the legalese within policy and government regulation easier to understand; a crisis-response social worker, helping people report gender-based violence; and a mental health therapist, with whom people interact while in a very vulnerable state.

In light of these challenges, Temi Popo, Megan Ma, Renee Shelby, and I worked with Kwanele to disentangle potential algorithmic risks and harms. Using the TwSw design framework, we facilitated a participatory workshop during which we discussed the institutional barriers that prevent AI builders from meaningfully “hearing” complaints, including siloed organizational structures and a lack of proper communication channels between product and user-facing support teams. Participants reflected on how a chatbot’s inability to understand a user’s messages might create a feeling of alienation for Kwanele’s users, who are already in vulnerable situations. Together, we worked on designing mechanisms through which users could voice feedback about their experiences of algorithmic harm, on both the individual and collective level. We also considered what needs to be disclosed, to whom, how, and when, and how a disclosure might change over time: in the context of gender-based violence reporting, for example, it is important to disclose to users that the chatbot they are interacting with is an automated technology that offers a connection to local social workers.
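
To make the disclosure and refusal pieces concrete, here is a minimal sketch, in Python, of how a chatbot session might open with an automation disclosure and honor a user’s request to opt out of the automated flow. The function names and message text are hypothetical illustrations, not Kwanele’s implementation.

    # Illustrative sketch only: the names and message text below are hypothetical,
    # not Kwanele's implementation.
    AUTOMATION_DISCLOSURE = (
        "You are chatting with an automated assistant, not a person. "
        "Type 'human' at any time to be connected with a local social worker."
    )

    def start_session(send_message):
        """Open the conversation by disclosing that the system is automated."""
        send_message(AUTOMATION_DISCLOSURE)

    def handle_turn(user_text, send_message, connect_to_social_worker):
        """Route one user turn, honoring an informed refusal of the automated flow."""
        if user_text.strip().lower() in {"human", "talk to a person"}:
            send_message("Connecting you with a local social worker now.")
            connect_to_social_worker()
            return "handed_off"
        # Otherwise, continue with the automated conversation.
        return "automated"

The point of the sketch is the ordering: disclosure happens before any substantive exchange, and refusal is available on every turn rather than buried in a settings page.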

Through these efforts, builders saw that effective disclosure could contribute to a transformative justice approach to the mitigation of algorithmic bias. As a result, Kwanele’s team is developing a protocol for small group user studies and user experience research that will help them understand potential failure modes of large language models (LLMs). They are also engaging with communities in co-designing human-centered user agreements that improve the transparency of such failure modes. Their development team is prototyping an interface that enables users to provide continuous feedback about potential risks and harms as they interact with the chatbot. Ultimately, all of these interventions will help them better serve their users. (Read more about how Kwanele is leveraging the Terms-we-Serve-with framework in this blog post.)
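
As one rough illustration of what such a continuous feedback interface could capture, the sketch below records which chatbot response a user is flagging, a category, and the user’s own description, so that reports can be reviewed both individually and in aggregate. The field names and categories are assumptions for the sake of illustration, not Kwanele’s design.

    # Illustrative sketch of continuous, in-conversation feedback capture.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class HarmReport:
        conversation_id: str
        turn_index: int   # which chatbot response is being flagged
        category: str     # e.g., "did not understand me", "incorrect legal information"
        description: str  # the user's own words about what went wrong
        reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class FeedbackLog:
        """Collects reports so patterns can also be reviewed at the collective level."""

        def __init__(self):
            self.reports = []

        def flag(self, report: HarmReport) -> None:
            self.reports.append(report)

        def by_category(self) -> dict:
            counts = {}
            for report in self.reports:
                counts[report.category] = counts.get(report.category, 0) + 1
            return counts

A report raised this way can feed the disclosure-centered mediation process discussed above, rather than disappearing into a generic support queue.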

The critical feminist interventions that emerged during this workshop are a step toward centering work around the lived experiences of members of communities affected by algorithmic chatbot systems. Currently, we’re working to evolve a domain-specific taxonomy of harms and risks of LLMs in the context of gender equity, which could be leveraged in user agreements (such as those related to cookies and content policies) and inform a more granular level of user feedback about how LLMs are built. By evolving the TwSw as a multipronged approach, we hope to equip transdisciplinary practitioners and policymakers with tools and generative questions to reorient their work toward a reparative approach centered on the needs of those most impacted by algorithmic harms.
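
As a sketch of how a taxonomy like this might be expressed so that user agreements and feedback interfaces can reference the same categories, the structure below uses a few broad groupings common in the algorithmic harms literature. The specific categories and examples are illustrative assumptions, not the taxonomy described above, which is still evolving.

    # Illustrative structure only; categories and examples are placeholders.
    LLM_HARM_TAXONOMY = {
        "quality_of_service": [
            "fails to understand a report written in the user's own language or dialect",
            "gives incorrect information about legal rights or reporting procedures",
        ],
        "representational": [
            "reproduces stereotypes about survivors of gender-based violence",
        ],
        "privacy_and_consent": [
            "retains or reuses sensitive disclosures without meaningful consent",
        ],
        "interactional": [
            "responds in ways that feel dismissive or alienating to people in crisis",
        ],
    }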

In centering the need to acknowledge, understand, investigate, mitigate, and mediate algorithmic harms and risks, we seek to empower new kinds of solidarities. The feminist-inspired approach behind this collaborative project has allowed us to see the frictions among existing stakeholders as a force for positive systems change. Resisting the illusion of frictionless technology, we ask: What if we could design specific kinds of friction back in, in order to enable slowing down, self-reflection, conflict resolution, open collaboration, learning, and care? We hope the Terms-we-Serve-with intervention will lead to real-world systems change by empowering meaningful participation and radically reimagining how we come to terms with algorithms.

--

Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency