Reimagining consent and contestability in AI

Bogdana Rakova
5 min read · Feb 8, 2023


Have you ever shared something on social media only to find that your content was taken down by an automated content moderation system? Or worried that your work was used to train generative AI algorithms without your consent? Have you contested, that is, challenged, the outcome of an algorithmic model, or otherwise provided feedback, reported a concern, or contacted a customer service team? These are only a few examples of friction between people and algorithmic systems. Currently, the people building these systems leverage many kinds of user feedback in ways aligned with their business objectives. Unfortunately, what counts as user feedback and how it is used is often not transparent to everyday users. For example, we don't know how our clicks are used, and we don't have the time to read boilerplate terms-of-service, privacy, and other agreements and policies, which not only fail to provide the information we care about but create even more friction in our user experience.

This is the starting point for my research and prototyping work exploring alternatives to how we come to terms with AI systems, taking into account a growing number of investigations illuminating real-world experiences of algorithmic harms and injustice.

These are not easy challenges to solve, as we have seen over and over again through the work of academic institutes, investigative journalists, civil society, and research teams within technology companies working across responsible AI, trustworthy AI, AI ethics, explainable AI, human-centered AI, and other converging themes and fields at the intersection of technology and the humanities.

I argue that we need to question good intentions in AI through socio-technical frameworks that evolve actionable social, computational, and legal agreements. That is the goal behind the Terms-we-Serve-with (TwSw), a feminist-inspired multi-stakeholder engagement framework and set of tools.

The TwSw is a feminist framework in the sense that it sets out to enable people to recognize, acknowledge, challenge, and transform existing power asymmetries in AI by offering an alternative model. It is meant to help practitioners, builders, and policymakers foster transparency, accountability, and engagement in AI, empowering individuals and communities navigating cases of algorithmic harms and injustice to transform them by aligning AI tools with a psychology of care and service. We do that through five dimensions, or principles, that are in conversation with prior work in the field of Feminist Science and Technology Studies.

The TwSw framework offers five entry points for technologists and policymakers to foster more meaningful forms of consent and enable more transformative models of algorithmic accountability:

1. Co-constitution of user agreements, centering the voices and leadership of disadvantaged communities.
2. Addressing friction, leveraging the fields of design justice and critical design in the production and resolution of conflict.
3. Enabling refusal mechanisms, reflecting the need for a sufficient level of human oversight and agency, including opting out.
4. Complaint, approached through a feminist studies lens and open-sourced computational tools.
5. Disclosure-centered mediation, to disclose, acknowledge, and take responsibility for harm, drawing on the field of medical law.
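To make these dimensions a bit more tangible for builders, here is a minimal sketch, in Python, of how a team might carry them into an internal review practice as a simple checklist. The class, field names, and guiding questions are my own illustration and assumptions, not part of the published TwSw materials.

```python
# A hypothetical sketch: encoding the five TwSw dimensions as a review
# checklist that a product team could walk through before and after
# deploying an AI system. Not an artifact of the published framework.
from dataclasses import dataclass, field

@dataclass
class TwswDimension:
    name: str
    guiding_question: str
    evidence: list[str] = field(default_factory=list)  # notes, links, decisions

TWSW_CHECKLIST = [
    TwswDimension("co-constitution",
                  "Who co-authored the user agreement, and were disadvantaged communities in the lead?"),
    TwswDimension("friction",
                  "Where does the design deliberately slow people down, and who bears the cost?"),
    TwswDimension("refusal",
                  "Can people meaningfully opt out or escalate to a human, and how?"),
    TwswDimension("complaint",
                  "How are complaints collected, triaged, and fed back into the system?"),
    TwswDimension("disclosure-centered mediation",
                  "When harm occurs, how is it disclosed, acknowledged, and repaired?"),
]

def unresolved(checklist: list[TwswDimension]) -> list[str]:
    """Return the dimensions that still have no documented evidence."""
    return [d.name for d in checklist if not d.evidence]

if __name__ == "__main__":
    # Prints all five dimension names until the team documents its answers.
    print(unresolved(TWSW_CHECKLIST))
```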

There is a lot of complexity to how AI products and services enter society — we need to better understand and acknowledge existing power structures, information asymmetries, and regulatory challenges.

There is often friction not only between users and AI-driven technology but also between how different stakeholders frame the risks and harms in the process of building and auditing AI systems.

Friction doesn't equal bad design; that insight is also at the core of the fields of design justice, critical design, design friction, human factors, and others. In my view, a frictionless experience is both impossible and undesirable in cases where a plurality of stakeholders hold different views. Therefore, we need language to articulate and navigate trade-offs from different perspectives: friction for whom, in what cases, at what cost, with what consequences?

Better understanding the conflicts between everyday users' experiences, AI products, and existing policy and regulatory frameworks could help us transform AI incidents into improved transparency and human agency.

Qualitative frameworks are an approach in social science research that helps us map how such conflicts might lead to AI incidents, i.e., algorithmic harms, injustice, controversies, or what AI developers may call outliers or exceptions to the common use cases. For example, more than 2,000 reports of harm have been submitted to the AI Incident Database project, and researchers across industry and academia have recently proposed a taxonomy of algorithmic harms.

What if we could leverage this moment in time to resist and refuse the drive to optimize AI to work for everyone, and instead welcome the plurality of diverse human experiences that people have when interacting with AI systems?

The Terms-we-Serve-with is a socio-technical framework for collaborative projects to define consent and contestability in the context of an AI system they are building or using. It takes the form of a social, computational, and legal agreement, leveraging a participatory qualitative design framework, open-source tools, and innovation drawing on the field of Computational Law.
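As one illustration of what the computational layer of such an agreement could look like, the sketch below expresses consent and contestability terms in a machine-readable form that software can check before acting on someone's content. Every name in it (ConsentTerms, is_permitted, the individual fields) is a hypothetical example for explanation only, not an output of the TwSw collaborations or an existing Computational Law standard.

```python
# Hypothetical sketch of machine-readable consent terms that an application
# checks before using someone's data or content. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentTerms:
    allow_model_training: bool        # may content be used to train models?
    allow_automated_moderation: bool  # may moderation decisions be fully automated?
    contest_contact: str              # where a person can contest an outcome

def is_permitted(terms: ConsentTerms, purpose: str) -> bool:
    """Check a proposed use of user content against the agreed terms."""
    if purpose == "model_training":
        return terms.allow_model_training
    if purpose == "automated_moderation":
        return terms.allow_automated_moderation
    return False  # purposes the agreement never covered default to refusal

# Example: an agreement co-constituted to refuse training use.
terms = ConsentTerms(allow_model_training=False,
                     allow_automated_moderation=True,
                     contest_contact="appeals@example.org")
assert not is_permitted(terms, "model_training")
```

The design choice worth noting is the default: any purpose the agreement never covered is refused rather than allowed, which mirrors the refusal dimension described above.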

For example, imagine an AI chatbot that is a legal analyst, helping make the legalese within policy and government regulation easier to understand; a crisis-response social worker, helping people report gender-based violence; and a mental health therapist, all in one. This is what the South African startup Kwanele is doing, and we are grateful to be collaborating with them to make sure that the AI chatbot they are building is better aligned with their mission and values of serving women and children. Read more about how we are leveraging the principles of the Terms-we-Serve-with framework in the final part of this blog series.

Our intention with this work is to explore alternative models for meaningful consent and contestability in AI. The goal is that the outputs from initial pilot projects and collaborations would directly translate into technical decisions and tools, organizational practices, or legal safeguards. Please reach out if you’re interested in learning more or contributing!

The illustrations above are screens from our Terms-we-Serve-with Zine created by Yan Li.


Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency