Trustworthy AI futures: reflections from being a Mozilla Fellow in 2023

Bogdana Rakova
Jan 10, 2024

2023 was a milestone year for my work and transformed how I make sense of the rapidly evolving AI space. Our brains constantly make models of the world. Trained as a computer scientist, I am wired to think in engineering terms, and I have consciously sought out opportunities to engage with interdisciplinary fields and to understand engineering concepts from the perspective of people with entirely different academic backgrounds and livelihoods from mine. In what follows, I share my learnings from being a Senior Trustworthy AI Fellow at the Mozilla Foundation in 2023: the global Mozilla ecosystem, rapid socio-technical prototyping, building alternatives to the status quo, building incentive structures, and multi-stakeholder engagement.

Global ecosystem and scales of impact

A lot has happened in the world in the past couple of years. I vividly remember my interview with Mozilla in late 2021. “We work to amplify the stories of people who care” was roughly what Ashley Boyd shared at the time in between interview questions. That genuinely stuck with me. I had recently finished working on a +1 project titled Where responsible AI meets reality: practitioner perspectives on enablers for shifting organizational practice, and for the first time in my career I felt like I belonged to something more than the tech company I was working at. I felt a deep sense of belonging to a movement of technologists who cared. Yet, as Stanford researcher Sanna J. Ali and her collaborators have written more recently, walking the walk of AI ethics is not straightforward. Nor is it easy to define what caring means, or how to sustain it and translate it into impact across time and organizations.

Working at the intersection of open-source and trustworthy AI constantly teaches me about the relationship between local and global scales. The only way to be global is to be hyper-local and build capacity and impact at a regional level. Local, regional, and global scales of strategy and impact are in complex dynamic relationships and depend on social, technological, and ecological infrastructure.

Mozilla’s global Trustworthy AI initiatives took me to five countries in 2023 and taught me that movement building means sustained action. It is more than collaboration: it doesn’t just happen, it needs conscious effort, and, as with anything, it needs practice. Movement building is about working in a way that builds collective power and a diverse ecosystem able to shift existing power structures and information asymmetries on a specific issue. With its roots in the open internet movement, Mozilla refers to movement building as “growing the number of people and organizations committed to creating a healthier digital world,” and, with respect to trustworthy AI, as working “to encourage a growing number of civil society actors to promote trustworthy AI as a key part of their work” (see the Trustworthy AI theory-of-change here and the Mozilla manifesto here).

In my work as a fellow, I evolved a theory-of-change for every research project I worked on and every collaboration and partnership I started. It helped me track my progress and reflect on how my actions align with a bigger vision. A theory-of-change is a model that explains how a project is expected to lead to a specific change, drawing on available evidence. It is a common approach to impact evaluation in policy and social innovation, and it relies on systems thinking, recognizing that our actions as individuals and institutions are embedded in and interact with larger systems.
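For readers who like to see the shape of such a model, here is a minimal Python sketch of how one might write down a theory-of-change and revisit it over time. The field names and the chain they encode (activities, outputs, outcomes, long-term impact, assumptions) are my own illustration, not Mozilla’s template or any specific project of mine.

```python
from dataclasses import dataclass, field


@dataclass
class TheoryOfChange:
    """A minimal, illustrative theory-of-change record for one project.

    The field names are hypothetical: they mirror the common
    activities -> outputs -> outcomes -> long-term impact chain used in
    impact evaluation, not any organization's actual template.
    """
    project: str
    long_term_impact: str                                 # the change we ultimately want to see
    activities: list[str] = field(default_factory=list)   # what we actually do
    outputs: list[str] = field(default_factory=list)      # direct, countable results
    outcomes: list[str] = field(default_factory=list)     # shifts in behavior or capacity
    assumptions: list[str] = field(default_factory=list)  # evidence and beliefs the chain relies on

    def progress_questions(self) -> list[str]:
        """Prompts for periodically checking whether actions still align with the bigger vision."""
        return (
            [f"Which recent activities produced outputs that move us toward: {o}?" for o in self.outcomes]
            + [f"What evidence currently supports or challenges the assumption: {a}?" for a in self.assumptions]
        )
```

Even a rough structure like this makes the assumptions explicit, which in my experience is the part of a theory-of-change that needs the most revisiting.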

Rapid socio-technical prototyping

Rapid prototyping is a valuable skill when building a technical project, effectively communicating your ideas to others, or simply learning something new, such as getting up to speed with generative AI. Social scientists have argued that AI systems are socio-technical systems. They are even social-ecological-technological systems, as Roel Dobbe and I proposed in our paper (see a summary blog post I wrote for the Montreal AI Ethics Institute here). Therefore, there is an emergent need to consider how to practically incorporate social and ecological considerations within a predominantly engineering prototyping paradigm.

Rapid prototyping is at the core of what happens within innovation teams at technology companies or during AI hackathons. A transformative question to ask is how we could expand the kinds of values that are taken into account during the early prototyping stages. To do that in my fellowship, I worked on centering qualitative methods, community-driven and domain-specific methodologies, interdisciplinary metrics frameworks, and multi-stakeholder engagement.

This was a key topic during the workshop on Operationalizing the Measure Function of the NIST AI Risk Management Framework, held in partnership with the Northwestern University CASMI research institute. The focus group I was part of developed experimental protocols for testing AI models and systems using socio-technical methods. Read a blog post about it here. I first collaborated with CASMI early last year, organizing a session on Algorithmic Contestability during their Toward a Safety Science of AI workshop. Contestability is the ability for people to disagree with, challenge, appeal, or dispute harmful algorithmic outcomes, and I propose that it is a critical part of what safety means for AI.
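To make that definition a bit more concrete, here is a hedged Python sketch of what a minimal contestability record could look like. The fields, statuses, and escalation step are hypothetical, meant only to illustrate the idea; they are not a protocol from the workshop.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ContestStatus(Enum):
    SUBMITTED = "submitted"
    UNDER_REVIEW = "under_review"
    UPHELD = "upheld"          # the original outcome stands
    OVERTURNED = "overturned"  # the outcome is revised in the person's favor


@dataclass
class Contest:
    """One person's challenge to an algorithmic outcome (illustrative fields only)."""
    system_id: str             # which model or system produced the outcome
    decision_id: str           # identifier of the contested output
    grounds: str               # why the person disagrees, in their own words
    requested_remedy: str      # e.g. human re-review, correction, or an explanation
    status: ContestStatus = ContestStatus.SUBMITTED
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    review_notes: list[str] = field(default_factory=list)

    def escalate(self, note: str) -> None:
        """Move the contest to human review and log the reviewer's note."""
        self.status = ContestStatus.UNDER_REVIEW
        self.review_notes.append(note)
```

Even a toy version like this surfaces the harder design questions: who reviews a contest, on what timeline, and what counts as a remedy.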

I got the opportunity to lead a workshop on Prototyping Social Norms and Agreements in Responsible AI during the Mozilla Responsible AI Challenge hackathon in San Francisco. It allowed me to engage about 40 people in prototyping social, computational, and legal mechanisms, exploring questions such as: What do we mean when we speak of recognizing the risks of AI and evolving safeguards? What does that mean in terms of privacy, security, fairness, human autonomy, and digital sovereignty? I also mentored the winning hackathon team, which developed anti-AI watermarks for images and took the top prize of $50,000.

Prototyping Social Norms and Agreements in Responsible AI workshop during the Mozilla Responsible AI Challenge hackathon in San Francisco

The team that won second place — Kwanele — develops a chatbot used in the context of gender-based violence prevention. Ranjit Singh was their hackathon mentor, and I have also been a close mentor to the team since 2022; I feel grateful for our ongoing partnership. We’re currently prototyping a domain-specific language model evaluation protocol for the use of chatbots in navigating gender-based violence prevention.

Leonora Tima from Kwanele during the Mozilla Responsible AI Challenge hackathon in San Francisco. I’ve been a close advisor for their work since 2022 and collaborated with her team on a Terms-we-Serve-with pilot project.

Build alternatives to the status quo — three provocations

Mozilla has given me a tremendous opportunity to explore alternatives to the status quo. I got to prototype three practical provocations — terms-we-serve-with, ecosystemic AI, and speculative friction. These are all ongoing projects that aim to create new space for engaging with questions of meaningful and contextual forms of trust, transparency, safety, and governance of AI systems.

Terms-we-serve-with is a framework — a social, computational, and legal contract — and an approach to anticipating and mitigating algorithmic harms. We wrote about it in our paper with Dr. Renee Shelby and Dr. Megan Ma, Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm, which I presented at the conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO).

At the ACM FAccT conference, I got to share this work during a panel session on Language Models and Society: Bridging Research and Policy, together with Mihaela Vorvoreanu, Irene Solaiman, Gretchen Krueger, Ioana Baldini, and Alex C. Engler. I spoke about the risk of overreliance on AI and the role of AI literacy, akin to digital literacy education. After developing a responsible AI training program during my work as a responsible AI research manager at a global consulting firm, I have been acutely aware of the need to improve literacy about potential risks, both internally within organizations and externally for end users of AI. Mihaela Vorvoreanu’s work on a Responsible AI Maturity Model at Microsoft provides steps for organizations to improve their readiness for addressing responsible AI concerns. However, more is needed to improve awareness of the risks among end users and ultimately build more meaningful trust in downstream adoption.

Language Models and Society: Bridging Research and Policy session at the FAccT conference

Algorithms as Social-Ecological-Technological Systems: an Environmental Justice Lens on Algorithmic Audits is a new paper Roel Dobbe and I worked on, which I got to present at the ACM FAccT conference. In this work we propose going beyond carbon footprint in how we conceptualize and measure the sustainability dimensions of building AI products and services. We bring in perspectives from the field of just sustainabilities and build a qualitative assessment framework that practitioners can operationalize on the ground. Building on this initial work, Kiito Shilongo and I started an online working group — Ecosystemic AI — and facilitated multiple follow-up workshops. I was also part of the review committee of two grant programs at the intersection of digital technology, AI, and environmental and climate justice — the Mozilla Technology Fund and the Green Screen Coalition: Catalyst Fund — which collectively supported projects with a total of $1M in funding.
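To illustrate the general idea, and only the idea (the sketch below is not the framework from the paper), here is one way a practitioner might record qualitative sustainability observations alongside a quantitative metric such as carbon footprint during an algorithmic audit. The dimensions and fields are placeholders I chose for this post.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AuditObservation:
    """One qualitative observation recorded during an algorithmic audit (placeholder fields)."""
    dimension: str                                         # e.g. water use, siting, community impact
    evidence: str                                          # interview notes, documents, site visits
    affected_groups: list[str] = field(default_factory=list)


@dataclass
class SustainabilityAudit:
    """Pairs a familiar quantitative metric with qualitative observations for one AI system."""
    system_name: str
    carbon_kg_co2e: Optional[float] = None                 # the usual quantitative measure, if available
    observations: list[AuditObservation] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """Dimensions where evidence exists but affected groups have not yet been identified."""
        return [o.dimension for o in self.observations if not o.affected_groups]
```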

What if fiction were an approach to engaging with friction? Tensions and frictions between people, as well as between people and the planet, are often at the core of the complexity behind some of the biggest challenges we’re wrestling with. In the Speculative F(r)iction experiment, I propose that there’s a harmful status quo in AI innovation: friction is bad, friction means more work and effort, friction results in slowing down. Instead, I’m exploring ways to prototype design fictions that empower constructive friction in AI — contributing to improved human agency, contextual transparency, safety, conflict resolution, open collaboration, learning, and care in how people engage with generative AI. Join me on January 19th, when I’ll facilitate a panel and workshop together with Gemma Petrie, Sophia Bazile, Richmond Y. Wong, and Tyler (T) Munyua. Learn more about the initiative and RSVP here.

Build incentive structures

Having alternatives helps bring to light underlying incentive structures. That can help us see new possibilities and explore entirely new kinds of incentive structures, just like negative space coming into the foreground.

Rubin’s vase (in the image below) is an optical illusion and a famous example of exploring positive vs. negative space with tone reversal. What you don’t include (negative space) is as important as what you do include (positive space). Positive and negative spaces create tension in an art piece — the interaction between them is what directs your eye. In the context of AI, we could consider the lack of specific incentives (negative space) in interaction with existing incentives (positive space). Again, it is the interaction between them that directs your eye (and decisions).

Rubin’s vase (1915)

The reality is that the social structures we rely on are not evolving at the speed of AI innovation. Putting the burden on consumers does not deliver meaningful transparency and human agency. Tech companies’ good intentions on safety and alignment, recommendation guidelines, and best practices do not translate into operationalizing responsible AI in practice. I got to talk about building incentive structures during the AI Symposium: Collaborating Toward a Safer and More Responsible Future, co-organized by Google and the Software Engineering Institute at Carnegie Mellon University. The event brought together participants from industry, universities, and government to discuss AI’s impact on knowledge, information, and national security. I shared my work on consent and contestability mechanisms, human-centered contextual data and AI literacy, third-party oversight of incident reporting, and active co-design of community norms and user agreements. Read about some of this work in a recent Stanford blog post, Engaging on responsible AI terms: rewriting the small print of everyday AI systems.

AI Symposium: Collaborating Toward a Safer and More Responsible Future co-organized by Google and the Software Engineering Institute at Carnegie Mellon University

Generative AI models are going to fail, hallucinate, and be weaponized in misinformation campaigns. How companies understand, communicate, and navigate these risks is what will strengthen their relationships with clients and the downstream end users of their products and services. Future-oriented business models need a human-centered user experience that empowers trust. For example, during the Community-collaborative visions for computing research session at the ACM FAccT 2023 conference, Lauren Wilcox talked about how broadening participation in AI research, development, and ongoing evaluation could contribute to identifying and mitigating potential failures and risks for end users. In their paper AI consent futures: a case study on voice data collection with clinicians, they look at AI-assisted healthcare applications and distill eight classes of potential risks that clinicians are concerned about with regard to voice data collection during health consultations.

An emerging project I’m working on has to do with the fiction of consent in the context of generative AI models and the data pipelines and infrastructure they rely on. It investigates the ability of decision-makers to negotiate critical aspects of the AI system lifecycle. To do that, I’m conducting semi-structured interviews with AI practitioners, researchers, and policymakers operating within civil society. The interview questions explore their perspectives on what meaningful consent and contestability mean in the context of data and AI; how the two relate to each other and to concepts such as transparency and explainability; key requirements for consent and contestability mechanisms; individual versus collective or community-driven forms of consent and contestability; how to incorporate social context into such mechanisms; questions of scale; and barriers to adoption in practice.

Lastly, I learned that incentive structures at the organizational and institutional level can shift when our own incentives shift at a personal level. For example, if you work at a company and believe in its mission and vision, you can and should hold the executive team accountable to it. How do you measure progress toward that? What are the signals to watch for to know when you’re getting off track at local, regional, and global scales? One way is through your organization’s theory-of-change. Does your organization have one? Do you ask about it, and all of these questions, when you interview at a company?

Multi-stakeholder engagement

Learning from Mozilla’s decades of movement building work globally, my time as a senior fellow helped me think both about practical multi-stakeholder engagement in AI design and governance and about how we can creatively leverage generative AI within participatory mechanism design.

Multi-stakeholder engagement has always been key to participatory research and mechanism design, and has been extensively studied across a large number of academic disciplines. One way Mozilla enables such engagement is through the Mozilla Festival — MozFest — a global collaborative event and community. At the core of what makes MozFest unique is its shared-by-design federated organizing process, grounded in the principles of co-design and co-ownership of the participatory mechanism. At its heart are the wranglers who design and facilitate virtual and physical spaces, fostering participation and growth.

In 2023 I organized two virtual and two in-person MozFest multi-stakeholder engagement workshops: Prototyping algorithmic contestability for large language models (together with Nick Vincent); Sustainability, justice, and socio-ecological dimensions of AI transparency (together with Tamara Kneese, Becky Kazansky, and Melissa Pineda Pinto); Critical feminist interventions in trustworthy AI and technology policy; and Centering trustworthy AI in the just sustainabilities of social and environmental ecosystems (together with Kiito Shilongo and Roel Dobbe).

A few key themes emerged from these workshops. There’s a need to focus on operationalization and enforcement: how do we evolve key performance indicators for guidelines and principles? There’s a need to connect individual cases of algorithmic harm or risk to opportunities for action and response mechanisms. And it is important to recognize epistemic injustice — to pay attention to the language we use and acknowledge that diverse and non-Western perspectives are often missing.

Building on these workshops, I’m currently leading a project co-designing a domain-specific large language model evaluation protocol and tools in collaboration with Kwanele and the Data & Society research institute. Multi-stakeholder engagement efforts are needed to prototype and experiment with alternatives that could build trust in generative AI systems and tools. Such efforts need to center underrepresented voices, in close collaboration between practitioners from civil society, policy experts, researchers, and industry.
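As a rough illustration of what such a protocol could look like in code, here is a minimal evaluation-harness sketch. The structure, the criteria, and the scoring function are all placeholders I made up for this post; they are not the protocol being co-designed with the partners mentioned above.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    """One scenario in a domain-specific evaluation set (illustrative only)."""
    prompt: str               # e.g. a support-seeking question the chatbot must handle well
    criteria: dict[str, str]  # criterion name mapped to a description of what a good answer does


def evaluate(respond: Callable[[str], str],
             cases: list[EvalCase],
             score: Callable[[str, str], float]) -> dict[str, float]:
    """Run every case through the chatbot under test and average per-criterion scores.

    `respond` wraps the model being evaluated; `score(answer, criterion_description)`
    is a placeholder for whatever rubric a protocol settles on, whether expert review,
    structured annotation, or an automated check.
    """
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    for case in cases:
        answer = respond(case.prompt)
        for name, description in case.criteria.items():
            totals[name] = totals.get(name, 0.0) + score(answer, description)
            counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}
```

The interesting work is in the parts this sketch leaves blank: who writes the cases, who does the scoring, and whose definition of a good answer counts.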

Looking forward

In the last couple of years, I’ve had a front-row seat to the foundational and applied technologies supporting the current AI revolution, to interdisciplinary research, and to global policy. And yet the road ahead in maturing AI safety guardrails and organizational practice is a long one. Technology companies have a vested interest in trustworthy AI, data security, and everything I have shared in this article, and, given their global reach, an imperative to get AI right. In 2024 I’m looking forward to helping organizations grow their capacity to respond to the challenges of building trustworthy AI products at scale.

Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency