Reflecting on the “Sociotechnical Approaches to Governing AI” workshop at the Technical University of Munich

Bogdana Rakova
Jun 5, 2022

Recently I had the amazing opportunity to contribute to an interdisciplinary workshop on Sociotechnical Approaches to Governing AI, organized by Jacob Metcalf and Ruth Müller at the Technical University of Munich, which brought together leading scholars and policymakers.

Regulators and AI justice advocates are increasingly turning towards assessments of algorithmic systems’ impacts as a method for governing what role automated decisions should have in sensitive areas of our lives. In the opening of the workshop, Jacob Metcalf invited us to question the frame:

What is the ‘unit’ that new regulatory frameworks should address in order to improve public trust in AI systems? — Technical parameters? Assessment reports? Data privacy? Development and review procedures? The agency and autonomy of data subjects? Economic incentives and business models? — each choice implies a different model of the accountability relationships between developer, regulator and data subjects, and offers a different construction of what constitutes harm and redress.

Emanuel Moss started with a provocation: in the context of AI, “personal information is neither personal nor information.” We need to question and pay attention to how legal categories are contextualized through technology. Furthermore, in a recent article he explores the “spectacular capabilities” of data science and machine learning and how modes of myth-making affect the success of technology companies.

Jenny Brennan presented the Ada Lovelace Institute’s recent case study with the National Health Service in the UK, exploring the use of AI impact assessments in healthcare data governance.

Lastly, within the session I was part of, I shared my work on regulating the downstream use of AI through computational contracts, which I’m exploring during my fellowship at the Mozilla Foundation.

Computational contracts (not to be confused with blockchain-based smart contracts) could be a promising approach to the sociotechnical assessment and regulation of algorithmic systems. A computable contract is a natural-language contract expressed as computer-processable rules that correspond to specific contractual terms and conditions. The approach has been proposed by scholars and practitioners in the field of Computational Law and has gained traction in human-centered healthcare insurance contracting.
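
To make the idea more concrete, here is a minimal, hypothetical sketch of what a computer-processable contractual term could look like. The clause names, fields, and evaluation logic are illustrative assumptions on my part, not an existing standard or our actual implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ContractTerm:
    """A single contractual term expressed as a machine-checkable rule."""
    term_id: str
    description: str  # the natural-language clause this rule encodes
    applies_to: str   # e.g. "content_moderation", "recommendation"
    check: Callable[[Dict[str, Any]], bool]  # True if the interaction complies

# Hypothetical clause: a content removal must cite a specific policy section
# and offer the affected person a route to appeal.
takedown_transparency = ContractTerm(
    term_id="moderation-001",
    description="Content removal must reference a policy section and include an appeal link.",
    applies_to="content_moderation",
    check=lambda interaction: bool(interaction.get("policy_section"))
    and bool(interaction.get("appeal_url")),
)

def evaluate(term: ContractTerm, interaction: Dict[str, Any]) -> Dict[str, Any]:
    """Evaluate one interaction against a term and emit audit metadata."""
    return {
        "term_id": term.term_id,
        "complies": term.check(interaction),
        "interaction_id": interaction.get("id"),
    }

if __name__ == "__main__":
    interaction = {"id": "123", "policy_section": None, "appeal_url": "https://example.org/appeal"}
    print(evaluate(takedown_transparency, interaction))
    # -> {'term_id': 'moderation-001', 'complies': False, 'interaction_id': '123'}
```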

Together with my collaborators Megan Ma and Renee Shelby, I am evaluating the use of computational contracts as a transparent sociotechnical intervention that builds improved feedback loops among consumers, civil society, policymakers, and technology companies.

At a recent conference organized by the Oxford Internet Institute, we presented a paper challenging the status quo of technology companies’ terms-of-service agreements. Our proposal is for a Terms-we-Serve-with (TwSw) agreement: a feminist-inspired social, computational, and legal contract for restructuring power asymmetries and center-periphery dynamics to enable improved human agency in individual and collective experiences of algorithmic harm.

The TwSw is a provocation and a speculative imaginary centered on five dimensions:

  • co-constitution, through participatory mechanisms;
  • accountability, through reparation, apology, and forgiveness;
  • positive friction, through enabling meaningful dialogue in the production and resolution of conflict;
  • verification, through open-sourced computational tools;
  • veto power, reflecting the temporal dynamics of how individual and collective experiences of algorithmic harm unfold.

Critically, my conversations with scholars, industry practitioners, and policymakers globally have given rise to the following research questions:

  • How do the ex-post enforcement mechanisms in the proposed EU Artificial Intelligence Act compare with regulation proposals emerging in the US?
  • How can improved outcomes be enabled by challenging how agency is distributed among stakeholders?
  • How do we avoid overwhelming consumers with participation that turns into free labor?
  • How do we create meaningful models of participation that empower diverse and often marginalized communities?

I think that computable contracts could be a sociotechnical intervention that enables new kinds of feedback loops among diverse actors: individuals using an algorithmic system, civil society, regulators, builders of AI, and others. In turn, this could empower improved collaboration in addressing the critical questions about enforcement mechanisms raised above.

Consider the case of content takedown, misinformation, or bullying and harassment online. A lack of transparency about consumer tech companies’ content policies and community guidelines has led to a growing number of harmful experiences. A computational contract could be executed automatically at the level of the interaction between people and an algorithmic system, producing new kinds of metadata about the potential presence of bias or injustice more broadly. It is a form of verification or justification not with regard to a specific algorithmic decision but with regard to the interaction: Is what I’m experiencing on the platform aligned with the contractual agreements that govern that interaction? What can I do to seek recourse after a harmful experience? Who can help me navigate such incidents or controversies?
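
Building on the sketch above, and still purely as an illustration under the same assumptions, those interaction-level evaluations could be aggregated so that civil society actors or regulators can spot systematic non-compliance rather than relying on individual complaints. The function and field names here are hypothetical.

```python
from collections import Counter
from typing import Any, Dict, Iterable

def summarize_compliance(evaluations: Iterable[Dict[str, Any]]) -> Dict[str, Any]:
    """Aggregate per-interaction evaluations (as produced by evaluate() above)
    into a simple compliance summary that could support audits or collective recourse."""
    counts = Counter()
    flagged = []
    for ev in evaluations:
        counts["total"] += 1
        if not ev["complies"]:
            counts["non_compliant"] += 1
            flagged.append(ev["interaction_id"])
    rate = counts["non_compliant"] / counts["total"] if counts["total"] else 0.0
    return {"non_compliance_rate": rate, "flagged_interactions": flagged}

if __name__ == "__main__":
    sample = [
        {"term_id": "moderation-001", "complies": True, "interaction_id": "a1"},
        {"term_id": "moderation-001", "complies": False, "interaction_id": "a2"},
    ]
    print(summarize_compliance(sample))
    # -> {'non_compliance_rate': 0.5, 'flagged_interactions': ['a2']}
```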

Ultimately, I hope that this project will enable improved transparency and new forms of community organizing around particular kinds of algorithmic harms. As someone who spent a number of years building AI systems within some of the same companies my research now investigates, I believe that technology and sociotechnical interventions can meaningfully empower civil society actors to help individuals and communities experiencing algorithmic harms and injustices.

More soon!

📢 I’d love to hear from you. Let’s co-create a new social contract that empowers improved transparency and human agency in the complex interactions between people and algorithmic systems.


Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency