The AI Social License

Bogdana Rakova
4 min read · Jan 12, 2024

A short story about an artifact from a possible future world, exploring a science fiction approach to interrogating the present state of governing AI systems.

“In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.” — Douglas Hofstadter, “I Am a Strange Loop”

Hello, World.

The year is 2028. You’ve just been awakened from a time chamber you entered four years ago. At the time that you entered the chamber, you found it very difficult to make sense of the world around you: a collapsing global economy, climate disasters, conflicts, and personal mental health breakdowns. Eventually, you signed a contract with a hibernation service that would keep you alive in a capsule and would only wake you up when the world was safe for you and your community.

As part of your reentry into the world, you are given a medical checkup. As you walk into the clinician’s office, you realize that the doctor has an AI assistant they are talking to. You are immediately concerned about the AI agent and ask the clinician to explain. They present you with the card below titled “AI Social License.” It has a QR code, which you’re instructed to scan with your phone.

Generated with OpenAI's GPT-4 and DALL-E

The AI Social License

2028 is a world where every AI agent must have a social license to operate. The AI Social License (ASL) is a digital document and a conversational interface that anyone can interact with: you can ask questions, decide whether to trust the agent, and report issues or concerns. Specifically, the ASL is designed to enable a more meaningful form of consent and contestability with regard to data and its use.

An AI agent’s social license gives you critical information about the agent, including its origin. Just like a driver’s license, the ASL has a “date of birth” and an “address”: specific details about when, how, and by whom the AI agent was built and deployed. Unlike a driver’s license, it provides much more information, including provenance, certification data, a privacy policy, and a data stewardship policy. You can interact with all of this information through a human-centered conversational interface online.

The social license is a living socio-technical contract between the AI agent and society at large. Unlike the kinds of contracts everyday people are asked to “agree” to when interacting with software products and services, the social license redistributes power and rebalances information asymmetries.

  • You can add contextual metadata to the ASL about your experience with the AI agent. Other people interacting with the agent can then see that metadata.
  • There is third-party validation, i.e. “AI Consumer Reports”: experts continuously test the AI agent and document what can go wrong. All of this data is available to you when you interact with the ASL.
  • If there was a miscommunication between you and the AI agent, you can use the ASL to ask questions and better understand how the AI agent made a decision. You could also choose to get assistance from a person if you need further help in understanding the decision.
  • Consent is a co-evolving practice that transforms over time. At any point in your interaction with the AI agent, you can ask the ASL questions to better understand the scope of consent: how your data could be used, with whom it is shared, and for what purpose.
  • Individual consent isn’t enough. It is essential that we also embrace a culture of consent on a collective level. When multiple people interact with the ASL of an agent, they are engaging in collective sensemaking about it.
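The properties above can be imagined as a public, queryable record attached to each agent. The sketch below is purely illustrative: the class name, fields, and methods are hypothetical inventions for this fiction, not any real specification.

```python
from dataclasses import dataclass, field

@dataclass
class AISocialLicense:
    # The "date of birth" and "address": when, how, and by whom
    # the AI agent was built and deployed.
    agent_id: str
    deployed_on: str
    operator: str
    provenance: str
    # Contextual metadata contributed by people who interacted with
    # the agent, visible to everyone who later consults the license.
    community_reports: list = field(default_factory=list)
    # Findings from third-party "AI Consumer Reports"-style testing.
    third_party_findings: list = field(default_factory=list)
    # The current scope of consent for data use, queryable at any time.
    consent_scope: dict = field(default_factory=dict)

    def add_report(self, author: str, note: str) -> None:
        """Anyone can add contextual metadata about their experience."""
        self.community_reports.append({"author": author, "note": note})

    def explain_consent(self, topic: str) -> str:
        """Answer a question about how data on this topic is used;
        fall back to human assistance when the scope is unclear."""
        return self.consent_scope.get(
            topic, "Not covered; escalate to a human reviewer."
        )
```

In this sketch, collective sensemaking is simply the accumulation of `community_reports` that later visitors can read alongside the expert findings.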

The learner’s permit

The first milestone on the road to an ASL is a provisional instruction permit, sometimes called a “learner’s permit.” This is for AI agents that are learning to operate “in the wild” while working toward the requirements of a full ASL.

To apply for a learner’s permit for an AI agent, the company building it must register the agent in a public database and provide sufficient information about its training data, optimization algorithm, intended use, benchmarks, and evaluation testing.

During the trial period that follows:

  • Domain experts are part of an evaluation process that examines the real-world operations of the agent in particular use cases.
  • Members of the public are brought in during consultations to co-design constraints on the real-world operations of the agent.

The outcomes are documented and added as parameters to the agent’s ASL.

The history of the future

You find that the ASL grants you new kinds of agency and control over your interactions with an AI agent. You wonder what else has to exist in a world where every AI agent has a social license to operate. How does it contribute to more meaningful consent mechanisms and to responsible, safe AI systems? You are struck by how much the world has changed since you decided to hibernate.

A design fiction approach

The concept of the AI social license described here is a provocation through which we aim to discuss consequences, raise design considerations, and, hopefully, shape decision making. Join me for an online event on January 19th to explore this further: to co-create possible futures in community, and to experience their consequences and implications together, rather than just debate them.

RSVP here.

Resisting the status quo of friction in AI innovation, we intend to open and join new discursive spaces grounded in a speculative-everything approach to the blurry boundaries between fact, fiction, and friction in AI. Learn more in this blog post.



Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency