Borrowing frames from other fields to think about algorithmic response-ability

This article is inspired by a talk given at the All Tech Is Human event in San Francisco, a fascinating community I have been part of for a year. The goal here is to introduce a way of thinking about algorithmic fairness, accountability, and transparency in which dualities are not seen as contradicting each other but as reflecting a single totality in which nothing is left out.

Data has always been used as a way to oppress. You don't have to take my word for it; watch this great talk on the past, present, and future of Data Governance. My first experience with this involved a data point I could not change: my birth date. Beyond the data point itself, the stories we tell about that data make a huge impact. Growing up, I often had the feeling that something was wrong. For example, I remember that people who owned cars usually set aside a monthly budget from their salary to use when the police nudged them into paying bribes under the table. Corrupt behaviors happened so often that they soon became part of the culture: there were jokes, songs, movies, and books in which these behaviors were treated as normal. I think that once those behaviors became part of the culture, they became immensely difficult to change.

Similarly, as explored by the anthropologist Nick Seaver, we need to consider the notion of algorithms as culture. Algorithms are not singular technical objects that enter into many different cultural interactions; rather, they are unstable objects, culturally enacted by the practices people use to engage with them [1]. The thing is, though, that cultures are often circular [2].

A circle with a center and four quadrants is a symbol used in many traditions around the world. In Sanskrit, a language with a 3,500-year history, the word for circle is "mandala". The Tibetan Buddhist tradition has developed the so-called mandala principle, which could bring new perspectives to many of the social and technological challenges we face. How did the understanding of the mandala come to the West? It was, in part, through the work of Carl Jung, who writes that:

“The mandala serves a conservative purpose, namely to restore a previously existing order. What restores the old order simultaneously involves some elements of new creation. In the new order the old pattern returns to a higher level; the process is that of an ascending spiral which grows upward while simultaneously returning again and again to the same point.” [3]

In his work, Jung found that drawing mandalas reflects a process of ordering taking place within the psyche. This ordering effect on the human psyche is not the result of conscious reflection or cultural effort; it is a pre-existing condition of consciousness that such patterns help bring into focus. This is why Jung found the mandala present in so many cultures and mythologies spanning the globe and the history of humanity itself.

To me, the mandala principle of deconstructing the self relates to a crucial step in the pipeline of almost any AI system today: PCA, or Principal Component Analysis, a statistical algorithm through which an engineer finds a lower-dimensional basis for representing high-dimensional data. This is how datasets with millions of columns are transformed into a smaller set of linearly uncorrelated variables called principal components, which can then help train better machine learning models faster.
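To make this concrete, here is a minimal sketch of PCA on toy data (the data and dimensions are illustrative, not from the article): center the data, compute its covariance matrix, and project onto the eigenvectors with the largest eigenvalues.

```python
import numpy as np

# Illustrative PCA sketch: project 5-dimensional toy data onto its
# 2 leading principal components (the directions of greatest variance).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # 200 samples, 5 features
X[:, 1] = 2.0 * X[:, 0] + 0.1 * X[:, 1]   # make two features correlated

Xc = X - X.mean(axis=0)                   # center the data
cov = (Xc.T @ Xc) / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigendecomposition (ascending)
order = np.argsort(eigvals)[::-1]         # sort by explained variance
components = eigvecs[:, order[:2]]        # keep the top 2 components

Z = Xc @ components                       # lower-dimensional representation
print(Z.shape)                            # (200, 2)
```

The projected columns of `Z` are uncorrelated with each other, which is exactly the "linearly uncorrelated variables" property described above.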

According to the mandala principle of Tibetan Buddhism, we can put dualities to use by seeing that they are not actually contradictions of each other. Instead, they are interrelated and reflect a single totality in which nothing is left out. Beyond PCA as a preprocessing step for analyzing data, I believe this means that we need to consciously consider that totality of perspectives when designing algorithmic systems. Similarly, there is a need for all-encompassing evaluation frameworks that incorporate long-term and interdisciplinary thinking.

We learn from Robert Sapolsky's lectures on Human Behavioral Biology about the fundamental problems with categorical thinking, yet this is precisely what AI systems are best at. Within-category distinctions are often ignored, while between-category distinctions are exaggerated. For example, say you are taking a test and the score needed to pass is 66. Suddenly there is a world of difference between a 65 and a 66: the two scores are barely different, but because they sit on either side of the boundary, the between-category distinction looms large. In this light, the core nature of classification algorithms is problematic in itself.
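The test-score example above can be sketched in a few lines (the cutoff and scores are hypothetical, chosen only to mirror the example):

```python
# Hypothetical pass/fail cutoff: 65 and 66 are nearly identical scores,
# yet a hard threshold places them in different categories.
PASS_MARK = 66

def classify(score: int) -> str:
    """Categorical thinking: collapse a continuous score into two labels."""
    return "pass" if score >= PASS_MARK else "fail"

scores = [10, 65, 66, 99]
labels = [classify(s) for s in scores]
print(labels)  # ['fail', 'fail', 'pass', 'pass']

# Within-category distinctions vanish: 10 and 65 share a label despite
# differing by 55 points, while 65 and 66 differ by 1 point yet are split.
```

The threshold erases a 55-point gap inside one category while amplifying a 1-point gap at the boundary, which is the distortion the paragraph describes.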

Much less attention has been paid to algorithms that could find the common ground between categories instead of pushing their subjects toward boundary conditions. This has been explored in the field of Common Sense Reasoning, which currently comprises work on teaching AI agents Intuitive Physics, Intuitive Psychology, and Common Facts. It is related to what we know from Developmental Psychology about what infants understand about the world. Common sense is immensely intricate and very often uncommon. One of the founders of the field of AI, Marvin Minsky, has a chapter in his fascinating book The Society of Mind titled "Uncommon Sense". In it, he explores the mental agents cooperating in the mind of a toddler as he or she builds a tower of blocks for the enjoyment of seeing it collapse in random directions. In reality, the individual behavior of a single block tells us little about the behavior of the tower. This is what Buckminster Fuller called Synergy: behavior of whole systems unpredicted by the behavior of their parts taken separately. He wrote about the need for a deeper understanding of Synergetics alongside Cybernetics [4].

Synergetics was developed as a field by the theoretical physicist Hermann Haken and is related to the concepts of self-organization and emergence [5]. It is an interdisciplinary science explaining the formation and self-organization of patterns and structures in open systems far from thermodynamic equilibrium. Self-organization occurs in many physical, chemical, biological, robotic, and cognitive systems, with examples ranging from animal swarm behavior to artificial and biological neural networks.
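A classic minimal illustration of self-organization, not from the article but in the same spirit, is the Kuramoto model: oscillators with different natural frequencies, each nudged toward the others' phases, spontaneously synchronize once the coupling is strong enough. All parameters below are illustrative.

```python
import numpy as np

# Kuramoto model sketch: global order emerges from pairwise interactions.
rng = np.random.default_rng(1)
N, K, dt, steps = 100, 2.0, 0.05, 2000
omega = rng.normal(0.0, 0.5, N)           # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)    # random initial phases

def order_parameter(phases):
    """r in [0, 1]: 0 = fully incoherent, 1 = fully synchronized."""
    return abs(np.exp(1j * phases).mean())

r_start = order_parameter(theta)
for _ in range(steps):
    # dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i): each oscillator is
    # pulled toward the phases of all the others.
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)
r_end = order_parameter(theta)

print(r_start < r_end)  # coherence emerges from local interactions
```

No single oscillator "contains" the synchrony; it is a property of the whole system, which is exactly Fuller's point about behavior unpredicted by the parts taken separately.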


More will follow in future posts further exploring how these ideas relate to the design and development of algorithmic systems. Immense gratitude to Laura Musikanski, Robert Harris, and Bonnitta Roy for their contributions in helping me work on these ideas.

[1] Seaver, N. 2017. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society.
[2] Rothman, J. 2014. The battle hymn of the tiger family.
[3] Jung, C.G., 1964. Man and His Symbols.
[4] Fuller, R.B., 2008. Operating manual for spaceship earth. Estate of R. Buckminster Fuller.
[5] Haken, H. 1984. The Science of Structure: Synergetics.

