Reimagining consent and contestability in AI
A question I’ve found myself thinking a lot about over the past three-plus years, working across Responsible AI teams in industry and as a senior Trustworthy AI fellow at Mozilla, is centered on the concept of friction. This is also the topic of a three-part blog series, which I summarize below.
What do consent and contestability even mean in the context of AI? Have you ever shared something on social media only to find out that your content was taken down by an automated content moderation system? Or worried that the platform might be suppressing the visibility of your content, or that your work was used by generative AI algorithms without your consent? These are only a few examples of friction between people and algorithmic systems. Currently, the people building these systems leverage many kinds of user feedback in ways aligned with their business objectives. Unfortunately, what counts as user feedback and how it is used is often not transparent to everyday users. For example, we don’t know how our clicks are used, and we don’t have the time to read boilerplate terms-of-service, privacy, and other agreements and policies, which not only fail to provide the information we care about but create even more friction in our user experience.
This is the starting point for my research and prototyping work exploring alternatives to how we come to terms with AI systems, taking into account a growing number of investigations illuminating real-world experiences of algorithmic harms and injustice.
These are not easy challenges to solve, and we’ve seen that over and over again through the work of academic institutes, investigative journalists, civil society organizations, and research teams within technology companies working across responsible AI, trustworthy AI, AI ethics, explainable AI, human-centered AI, and other converging themes and fields at the intersection of technology and the humanities.
There is often friction not only between users and AI-driven technology but also between the ways different stakeholders frame risks and harms in the process of building and auditing AI systems.
Friction doesn’t equal “bad design.” That insight is also at the core of the fields of design justice, critical design, design friction, human factors, and others. In my view, “frictionless” is both impossible and undesirable in cases where a plurality of stakeholders hold different views. Therefore, we need language to articulate and navigate trade-offs from different perspectives: friction for whom, in what cases, at what cost, with what consequences?
Better understanding the conflicts between everyday users’ experiences, AI products, and existing policy and regulatory frameworks could help us transform AI incidents into improved transparency and human agency.
Qualitative frameworks are an approach in social science research that helps us map how such conflicts might lead to AI incidents, i.e. algorithmic harms, injustice, and controversies, or what AI developers may call outliers or exceptions from the common use cases. For example, more than 2,000 reports of harm have been submitted to the AI Incident Database project, and researchers across industry and academia have recently proposed a taxonomy of algorithmic harms.
What if we could leverage this moment in time to resist and refuse optimizing AI for the common use cases and instead welcome the plurality of diverse human experiences that people have when interacting with AI systems?
This is also at the heart of our provocation for a Terms-we-Serve-with agreement: a socio-technical framework for collaborative projects to define consent and contestability in the context of the AI system they are building. It is a social, computational, and legal agreement, leveraging a participatory qualitative framework, open-source tools, and innovation at the intersection with the field of computational law.
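To make the computational side of such an agreement a bit more concrete, here is a minimal, hypothetical sketch of what machine-readable consent and contestability records could look like. The field names and categories below are my own illustrative assumptions, not part of the Terms-we-Serve-with framework or of any pilot project.

```python
# Hypothetical sketch: machine-readable consent and contestability records.
# Field names and categories are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """What a person has agreed to, in terms they can inspect and revoke."""
    user_id: str
    data_uses: list[str]          # e.g. ["analytics", "model_training"]
    granted_at: datetime
    revocable: bool = True


@dataclass
class Contestation:
    """A person's challenge to an algorithmic decision, with an auditable trail."""
    user_id: str
    decision_id: str              # identifier of the contested decision
    reason: str                   # the person's own account of the harm
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"          # "open" | "under_review" | "resolved"


# Example: a person consents to analytics only, then contests a content takedown.
consent = ConsentRecord(
    user_id="user-123",
    data_uses=["analytics"],
    granted_at=datetime.now(timezone.utc),
)
dispute = Contestation(
    user_id="user-123",
    decision_id="moderation-decision-456",
    reason="My post was removed although it did not violate the guidelines.",
)
print(consent)
print(dispute)
```

Even a simple structure like this makes explicit two things that boilerplate agreements usually obscure: what a person actually consented to, and where their challenge to a decision lives once they raise it.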
For example, imagine an AI chatbot that is a legal analyst helping make the legalese of policy and government regulation easier to understand, a crisis-response social worker helping people report gender-based violence, and a mental health therapist, all in one. This is what the South African startup Kwanele is building, and we are grateful to be collaborating with them on making sure that the AI chatbot is better aligned with their mission and values of serving women and children. Read more about how we are leveraging the principles of the Terms-we-Serve-with framework in the final part of the blog series.
Our intention with this work is to explore alternative models for meaningful consent and contestability in AI. The goal is for the outputs of initial pilot projects and collaborations to translate directly into technical decisions and tools, organizational practices, or legal safeguards. Please reach out if you’re interested in learning more or contributing!
The illustrations above are screens from our Terms-we-Serve-with Zine created by Yan Li.