The Map is not the Territory, and yet we are all mappers

Bogdana Rakova
Jan 22, 2020


Good maps allow us to extend our senses. They bring awareness and agency by enabling action based on more informed decisions. This is part of what has driven us to explore the intersection between organizational structure and AI: not AI work in general, but specifically the work on the fairness, accountability, and explainability of machine learning (fair-ML). Here there is a need for practical, pragmatic map-making that can empower fair-ML practitioners to do their work while keeping it aligned with society at large. That has been our vision over the past seven months, as we built on prior work [1,2,3,4] and conducted a study using ethnographic techniques such as structured and semi-structured interviews with fair-ML practitioners.

Ethnography allows us to build rich context around the people within organizations who build AI-enabled technology. Leveraging decades of work by organizational theorists such as Wanda Orlikowski, we need to consider "the duality of technology, that is, technology is physically constructed by actors working in a given social context, and technology is socially constructed by actors through the different meanings they attach to it" [5]. In other words, all technology is physically and socially constructed and enacted by people, and can therefore be changed by people. To explore this field further, we're hosting a session:

An interactive tutorial session at the upcoming ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), exploring the intersection of organizational structure and fair-ML work.

We invite you to join us at the conference or virtually through our tutorial microsite: https://organizingandai.github.io/.

References:

[1] Anna Lauren Hoffmann. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22, 7 (2019), 900–915.

[2] Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 59–68.

[3] Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miroslav Dudík, and Hanna Wallach. 2019. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Paper 600.

[4] Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Paper 440.

[5] Wanda J. Orlikowski. 1992. The duality of technology: Rethinking the concept of technology in organizations. Organization Science 3, 3 (1992), 398–427. www.jstor.org/stable/2635280

