A ghost story about human self-determination within algorithmic sociotechnical systems

Bogdana Rakova
2 min read · Nov 5, 2019

Do you ask yourself what happens to the ghostly presence that remains after you interact with a digital system?

The very moment you open a web page, scroll down, scroll up, and down again, thumbs up, thumbs down … you leave digital traces, a kind of fingerprint that could be used to identify your mood, the likelihood you’ll buy a product, the likelihood you’d like the next song or movie, and infinitely many other tiny choices. These traces are anything but obvious. Web crawlers, apps, advertisers: so many entities are after them. More than a Halloween costume idea, I find it interesting to make visible the ghostly presence left behind by these interactions. I’ll call my ghostly remains Casper_307 for the purposes of this short essay.

Increasingly, interactions in the physical world are also part of Casper_307, the friendly ghost. For example, the swipe of a credit card, a trip on public transport (San Francisco plans to stop selling paper BART tickets; soon you can only travel with a chip card), the picture TSA just took of you at the airport, and the tokens you needed to buy in order to shop at a farmers market all become part of the traces that other parties might be after. Casper_307 is not only an online presence but a very physical one as well.

How is agency distributed between myself and (my) Casper_307?

First of all, could I even call it “mine”? I have no way to know, but my intuition tells me that an organization whose mission is scale, often through turning interactions into product, couldn’t care less about the unique mystery of my personal friendly ghost. Instead, my Casper_307 is likely blurred into a ghostly complex made of many other “similar” ghosts. That is all OK, I think, as long as I have some agency over making sense of what Casper_307 is and how it has been put to work by all these other parties.

Together with my team lead and collaborator Rumman Chowdhury, I set out to explore a day in Casper_307’s life, bringing critical considerations from the field of Human Factors and Supervisory Control to bear on the relationship between a user, their friendly ghost, and an algorithmic model. Specifically, we asked: who takes the supervisory role in the repeated interactions between a human and an algorithmic system?

In the paper, we identify a gap in the evaluation metrics widely used to measure the performance of these systems, and propose a human-algorithmic interaction metric we call barrier-to-exit. It aims to be a proxy for quantifying the ability of an AI model to recognize and allow for a change in user preference. See the full paper here: https://arxiv.org/abs/1909.06713
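The formal definition lives in the paper; purely for intuition, here is a minimal Python sketch of the underlying idea. Every name in it, and the moving-average recommender itself, is my own illustrative assumption rather than the paper’s method. The point it makes: the “stickier” a system’s model of you is, the longer it takes to notice that your preference has changed, and that lag is one way to think about a barrier to exit.

```python
# A toy, illustrative sketch only, NOT the formal barrier-to-exit
# metric from the paper. It simulates a recommender that tracks a
# user's preference with an exponential moving average (EMA) and
# measures how many interactions pass, after the user's preference
# flips, before the recommendations follow.

def steps_until_adaptation(alpha: float, switch_step: int, horizon: int = 200) -> int:
    """How many steps after `switch_step` until the recommender
    starts reflecting the user's new preference.

    The user likes category 1 until `switch_step`, then category 0.
    The recommender's belief is an EMA of observed preferences; it
    recommends category 1 whenever its belief exceeds 0.5.
    """
    belief = 1.0  # the system starts fully confident in the old preference
    for step in range(horizon):
        user_pref = 1.0 if step < switch_step else 0.0
        belief = (1 - alpha) * belief + alpha * user_pref  # EMA update
        if step >= switch_step and belief <= 0.5:
            return step - switch_step  # lag before the ghost catches up
    return horizon - switch_step  # never adapted within the horizon

if __name__ == "__main__":
    # A sticky system (small alpha) imposes a higher barrier to exit.
    for alpha in (0.5, 0.1, 0.02):
        lag = steps_until_adaptation(alpha, switch_step=50)
        print(f"alpha={alpha}: recommendations adapt after {lag} steps")
```

Running it, the lag grows as alpha shrinks: a system that updates its belief about you slowly keeps recommending for the ghost of who you were.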

What do you think? What ghostly thoughts come to mind?

