Designing frameworks that allow for Intentions, Commitments and Exuberance in AI

Bogdana Rakova
Jul 2, 2018

People often talk about Artificial Intelligence without ever defining what it is they call AI. Narrow and general intelligence, transformative AI, provably beneficial AI, extended intelligence, and infinitely many other concepts circulate. The goal of this essay is to give a working definition of the AI we're hereby concerned with and to embark on an exploration of how we could learn from history and employ intentionality, commitments and exuberance as first principles in the design of an AI system. All of this is work in progress, and it is only through co-creation that we can build algorithms that we all enjoy. Please reach out, comment, and share your thoughts.

Feb 3, 2018, Cambridge, Massachusetts, USA

You find yourself sitting in a room at the MIT Media Lab together with some of the best researchers and practitioners in the newly emerged field of Ethics and Governance of AI. You see Prof. Jonathan Zittrain as he starts his talk by acknowledging that nobody really knows what the ethics and governance of AI should look like, or even what AI itself looks like. His definition strikes you with its simple accuracy:

Arcane
Pervasive
Tightly coupled
Adaptive

Autonomish systems

Autonomish decision-making systems are being employed in all layers of society, predicting risk scores from biased datasets and oversimplified linear regression models. If the underlying data is racist, is it the machine's job to make society more fair? You look around and are relieved to see the diversity of people in the room; the broad range of perspectives and understandings is perhaps the best guide in the discussion. This is the opening day of a class taught for the first time. It is called Ethics and Governance of AI and is a collaboration between Harvard's Berkman Klein Center and the MIT Media Lab. Students from both institutions competed for a spot in the class, led by Jonathan Zittrain and Joi Ito. The classroom is shared with this year's Assembly cohort, a four-month, high-density workshop laser-focused on this field.

Is it possible for Arcane, Pervasive, Tightly coupled, Adaptive, Autonomish systems to have intentions, commitments and exuberance? Let's start backwards! (Upside down, blindfolded, sitting on a thought made of the lavender smells coming from the field outside...)

Exuberance

the quality of being full of energy, excitement, and cheerfulness; ebullience.

There has been a supernova of exuberance at the intersection of Art and AI. DeepDream and AlphaGo are only a couple of recent examples from the broad field of Computational Creativity. If AlphaGo was our AI leap towards developing intuition and creativity, what are the corresponding examples for commitments and intentionality?

Commitments

1. the state or quality of being dedicated to a cause, activity, etc.
2. an engagement or obligation that restricts freedom of action.

What better way to think about commitments in the context of AI than to learn from the ways we share commitments with one another? I have been so grateful to discover and connect with a friend who communicates that concept through her artwork.

Crystal Jean Baranyk and Adam Phelps created an art piece called Temple of Commitments where different pollinators form the shape of a flower.

“I chose to depict mutualistic symbiosis, the kind of symbiosis in which all species involved benefit from one another. Why is this important in terms of commitment between human beings? When we make commitments to one another it is a reminder of the deep interconnectedness we wish to share. Even in the roughest patches we commit to attempting to follow nature’s better examples by being vulnerable, being present, and knowing how much we mean to each other.”
— Crystal Jean Baranyk

The art piece was part of a heartwarming commitment to what truly matters in the world of Intentions, Commitments and Exuberance between humans. Read more about it here.

The Machine Learning research community has done fascinating work on the concept of an objective function, which measures an AI system or agent's "happiness". Yann LeCun, in his talk "A Path to AI" at the Beneficial AI conference last year, described two ways to think about designing objective functions:

  • Hardwire a safeguard objective function, a built-in instinct that would get the machine to be social by default or exhibit some other intrinsic behavior. Still, it is very difficult to design an objective and be sure it has no side effects.
  • Train the system's own objective function, through something called adversarial training in the field of Inverse Reinforcement Learning. The goal there is to train the objective to emulate the (unknown) objectives of the human trainers. (A minimal sketch of both ideas follows this list.)
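To make the two designs concrete, here is a minimal, hypothetical sketch in Python. It is not LeCun's implementation: the safeguard_penalty term and the tiny linear reward model are illustrative assumptions of my own, standing in for a hardwired safeguard and an objective trained from human feedback.

```python
import numpy as np

# Design 1: a hardwired safeguard term (an illustrative assumption,
# not LeCun's actual formulation).
def safeguard_penalty(state):
    """Built-in 'instinct': heavily penalize states flagged as unsafe.

    The `unsafe` flag is hypothetical; a real system would need a far
    richer notion of harm, which is exactly why hardwiring is hard.
    """
    return -100.0 if state.get("unsafe") else 0.0

# Design 2: a trainable objective, in the spirit of inverse
# reinforcement learning (here reduced to a toy linear reward model).
class LearnedReward:
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def score(self, features):
        return float(self.w @ features)

    def update(self, features, human_rating, lr=0.01):
        # Nudge the model toward the rating a human trainer gave,
        # emulating their (unknown) objective from feedback.
        error = human_rating - self.score(features)
        self.w += lr * error * features

def objective(state, features, learned_reward):
    """A mix of an intrinsic safeguard and a trainable term."""
    return safeguard_penalty(state) + learned_reward.score(features)

# One round of human feedback shifts the mixed objective.
reward = LearnedReward(n_features=3)
features = np.array([1.0, 0.0, 0.5])
reward.update(features, human_rating=0.8)
print(objective({"unsafe": False}, features, reward))
```

The final line shows the mixed objective shifting after a single human rating; a real inverse-RL setup would learn from many demonstrations rather than scalar ratings, but the division of labor between the two terms is the same.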

Are objective functions commitments?

I think that a mix of intrinsic and trainable objectives is the closest we have come to implementing the concept of commitments in AI systems. Ultimately, when thinking about the ethical implications of AI-driven systems, we have a responsibility to take action and collectively challenge the kinds of objectives that AIs should comply with in order to operate in the products and services we use.

In the amazingly complex and fast-paced world we live in, everyone is involved; everyone is a policy maker. You are a policy maker, and you can influence the way AI systems behave in the world. At the end of the day, we are all six steps away from each other. The world looked very different back in 1929, when the Hungarian writer Frigyes Karinthy wrote a short story called Chains. "Now we live in fairyland," says Karinthy, telling a story about how you can keep playing the Six Degrees of Separation game not only with people but with events and things that happen around you.

The 19th-century vision of the year 2000, perhaps the fairyland that Karinthy and his contemporaries imagined for us. Published by Isaac Asimov in his book "Futuredays: A Nineteenth Century Vision of the Year 2000".

Waiting for things to somehow figure themselves out is not a promising route, and we have already learnt that the hard way. "Other worlds are possible, and we are going to be living in them," says José Luis de Vicente, curator of the "After the End of the World" art exhibition in Barcelona, in a recent Long Now interview about the role of art in addressing climate change.

We should be thinking about designing systems that allow for open participation, encourage and protect whistleblowers and activists, demand diversity, and create ways for everyone to influence the commitments of the AI employed in the products and services we use.

Intentions

A thing intended; an aim or plan.

Is it possible for a machine or a complex AI system to possess intentionality? This question relates to the concept of strong AI and the field of Artificial General Intelligence (AGI). The term "strong AI" was coined by John Searle in his Chinese room argument. Searle's main claim is that "instantiating a computer program is never by itself a sufficient condition of intentionality." The Chinese room argument is a thought experiment in which Searle interrogates the relationships between:

  • a language,
  • a script,
  • the background story defining the context of the script,
  • a batch of questions about the script,
  • a batch of answers to those questions.

He locks himself in a room and becomes an instantiation of a computer program that is capable of accurately answering questions about a script in Chinese. However, he knows no Chinese; to him, Chinese writing is just so many meaningless squiggles. If we could program an AI whose answers to a set of questions are absolutely indistinguishable from those of native Chinese speakers, what does this mean for the nature of human understanding?
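As a toy illustration of the argument (my own, not Searle's), imagine the room's rulebook as a lookup table: the program matches question symbols to answer symbols without any notion of what either side means.

```python
# A toy "Chinese room": pure symbol manipulation, no understanding.
# The rulebook below is a made-up illustration, not Searle's text.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def room(question: str) -> str:
    # The rule follower matches squiggles to squiggles; nothing here
    # requires knowing what either side of the table means.
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # -> 我很好，谢谢。
```

However fluent the outputs seem, nothing in following the table amounts to understanding Chinese, which is precisely the distinction Searle is after.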

How is understanding related to intentionality? "Our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them," Searle writes in his paper Minds, Brains, and Programs, where he argues that:

The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.

Many leading AI researchers, scientists and philosophers have offered their answers to the question of achieving computational human understanding. You have probably seen all the work happening at some of the information technology giants such as Google. You may be concerned.

Shane Legg, co-founder and now a leading scientist at DeepMind, responds to Searle’s argument in his PhD work on Machine Superintelligence. He says that whether or not an agent understands what it is doing is only important to the extent that it affects the measurable performance of the agent. If the performance is identical, as Searle suggests, then whether or not the room with Searle inside understands the meaning of what is going on is of no practical concern.

To try to tie all of this together I want to tell you a story about Symbiosis.

Symbiosis

In 1991, after a lifetime of biological research, the scientist Lynn Margulis published Symbiosis as a Source of Evolutionary Innovation. Rather than individual organisms, her work focused on the interactions in emergent mutualistic systems. The first system she examined was the coral/zooxanthellae symbiosis, wherein the zooxanthellae live in coral cells and provide nutrients to the coral as it lives. Take away one of them and the other will die. Margulis recognized that we needed a new word and a new framework to understand and describe organisms as systems rather than individuals: the holobiont.

Symbiotic relationships scale up quickly to forests, cities, the notion of culture and much else, creating a worldview that not only removes the individual from the center but removes the very idea of a center. Photo credits: Tiffany Lin, @TiffClin

Consider the symbiotic relationship we have with technology. How should we think about the ethical implications of AI-powered products and services? What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits? Perhaps the design of systems that give us hooks to better define and debug AI’s intentions, commitments and exuberance is the only way to put us on a trajectory of ever being able to answer these questions.


Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency