Integrity beyond the Terms of Service Agreement in a Human + AI world

Bogdana Rakova
Nov 25, 2020 · 8 min read

Our interactions with algorithmic systems lead us to develop relationships that extend beyond the digital realm. Inevitably, they have first- and second-order implications for our innate sense of agency, autonomy, and identity. Disentangling these implications leads us to question the existing legal and other frameworks that operate at the level of human-algorithmic interactions. This essay is a short summary of reflections from attending and participating in the 2020 AAAI Spring and Fall Symposia tracks on Towards Responsible AI in Surveillance, Media, and Security through licensing and on Conceptual Abstraction and Analogy in Natural and Artificial Intelligence. It covers the concepts of generalization and generativity, some limitations of Terms of Service (ToS) agreements in the context of human-algorithmic interactions, and the opportunities for new organizing frameworks such as behavioral use licensing and dynamic algorithmic service agreements.

Generalization is the central problem in AI. It refers to a system’s ability to handle situations (or tasks) that differ from previously encountered ones, and it involves navigating uncertainty, novelty, and autonomy. Google Brain researcher François Chollet describes two categories of generalization: system-centric and developer-aware. System-centric generalization is an AI model’s ability to adapt to situations it has not previously encountered, while developer-aware generalization is its ability to adapt to situations that could not be anticipated by the creators of the system (unknown unknowns).

Slide from François Chollet’s talk “What Deep Learning Can Do, What It Can’t, and What We Can Try Next”, see his paper: On the Measure of Intelligence
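To make the distinction a bit more concrete, here is a minimal, hypothetical sketch (not taken from Chollet’s paper) of how a team might probe system-centric generalization: train on one distribution and compare held-out accuracy against accuracy on a shifted distribution the model never saw. Developer-aware generalization, by definition, resists this kind of fixed benchmark. The function and dataset-split names are illustrative placeholders.

```python
# Illustrative sketch only: a crude proxy for system-centric generalization,
# measured as the gap between accuracy on held-out i.i.d. data and accuracy
# on a shifted distribution the model was never trained on.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def generalization_gap(model, X_train, y_train, X_iid, y_iid, X_shift, y_shift):
    """Train on one distribution, then compare in-distribution accuracy
    against accuracy under distribution shift. A larger gap suggests weaker
    (system-centric) generalization; developer-aware generalization
    (unknown unknowns) cannot be captured by a fixed benchmark like this."""
    model.fit(X_train, y_train)
    iid_acc = accuracy_score(y_iid, model.predict(X_iid))
    shift_acc = accuracy_score(y_shift, model.predict(X_shift))
    return iid_acc - shift_acc

# Usage (with hypothetical dataset splits):
# gap = generalization_gap(LogisticRegression(), X_tr, y_tr, X_te, y_te, X_ood, y_ood)
```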

Generalization vs. Generativity

In his work The Generative Internet, Prof. Jonathan Zittrain from Harvard Law School describes generativity as “a function of a technology’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility.” While the leverage and adaptability dimensions of this definition roughly correspond to what AI researchers discuss as generalization, ease of mastery and accessibility bring forth other kinds of considerations. For example, “ease of mastery reflects how easy it is for broad audiences both to adopt and to adapt [technology]: how much skill is necessary to make use of its leverage for tasks they care about, regardless of whether the technology was designed with those tasks in mind.” Furthermore, in Zittrain’s view of generativity, we need an accessibility dimension, where accessibility reflects how easily people can both use and control a technology.

Generativity has also been discussed by sociologists in the context of human identity. Philip Sheldrake and the Internet Identity Workshop project describe generative identity as “approaching digital identity for psychological, sociological, and ecological health.” Building on work by organizational theorists Wheatley and Kellner-Rogers, Sheldrake describes identity, relationships, and information as interrelated. Aligned with the work of these scholars, we see the need to discuss the generalization concept in AI across the dimensions of identity, relationship, and information. In the realm of regulatory frameworks for AI, Sheldrake brings to our attention that the General Data Protection Regulation’s definition of personal data sits squarely in the information domain. Beyond the informational domain, his research explores what he and others call interpersonal data, a concept related to what sociologists have framed as warm data, a term for information about relationships that originates in the work of anthropologist and cyberneticist Gregory Bateson.

In summary, this short essay invites us to expand the traditional concept of generalization in AI towards generativity:

Measuring generativity in AI relates to investigating the flows of information between a model and the actors involved in the sociotechnical context within which the model exists. Two conceptual frameworks that help us characterize generativity could be (1) the model’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility and (2) the interdependencies between identity, relationship, and information.


Coming to terms

One of the artefacts that exist in the interaction layer between people and an AI model (or, more generally, many software systems) is the Terms of Service (ToS) agreement. ToS agreements were created in the mid-1990s to protect software providers and can include accountability, liability, and opt-out provisions in addition to privacy policies.

Casey Fiesler et al. studied the ToS agreements of 116 social media platforms in order to understand the landscape of ethical and regulatory considerations around data collection. They categorize ToS provisions into four types: (1) prohibition on automated data collection; (2) prohibition on manual data collection; (3) prohibition on any data collection; and (4) a requirement to obtain permission for data collection. Their work shows that ToS provisions are ambiguous, inconsistent, and lack context. The gray area of ToS violations has in some cases been treated as a violation of the Computer Fraud and Abuse Act (CFAA) (18 U.S.C. § 1030), and such prosecutions have led to users of online platforms committing suicide. In conclusion, Fiesler et al. propose that “ethical decision-making for data collection should extend beyond ToS and consider contextual factors of the data source and research.”

Here we seek to expand on this critical look at ToS specifically in the context of human-algorithmic interactions, by discussing five kinds of sociotechnical concerns: accommodating and enabling change, co-constitution, reflective directionality, friction, and generativity. Building on the work exploring the concept of generative human identity, we find these concepts to be helpful in characterizing the gaps in the interface layer between people and AI.

Many of the most common ML modeling approaches used in industry today depend on people not changing their patterns of behaviour. Even without years of Greek philosophy classes, many computer scientists would agree with Heraclitus’s notion that change is the only constant. Furthermore, we seek inspiration from the concept of mutability in social science. Mutability is closely connected to our ability to change our dynamic and multifaceted preferences, as well as to our multiplicity of identities. AI system creators need to be cognizant of the mutability of the data variables used in algorithmic decision-making and allow for change to happen. Similarly, the evaluation frameworks employed by AI system creators, as well as the licensing frameworks, need to be adaptable to the dynamic nature of human identities.
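As a thought experiment, here is a minimal, hypothetical sketch of how mutability could be made explicit in the data feeding an algorithmic decision, so that stale values about a person are refreshed rather than silently reused. The field names and thresholds are illustrative assumptions, not an existing standard or API.

```python
# Hypothetical sketch: annotate the mutability of variables feeding an
# algorithmic decision, so downstream processes can allow for change
# (e.g., re-collection or expiry of stale attributes).
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Feature:
    name: str
    mutable: bool           # can the person change this about themselves?
    collected_at: datetime  # when the value was last asserted or observed
    max_age: timedelta      # how long before a mutable value should be refreshed

def stale_features(features: list[Feature], now: datetime) -> list[str]:
    """Return names of mutable features whose values are too old to keep
    using in decisions without asking the person again."""
    return [f.name for f in features
            if f.mutable and now - f.collected_at > f.max_age]
```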

“Identity, relationships, and information are reciprocally defined and co-constitutive,” writes organizational scientist Margaret Wheatley. In other words, we cannot easily separate information from our identity and our relationships. Therefore, the way we measure progress in AI, as well as the AI licensing frameworks we decide to use, needs to encompass all of these dimensions. A co-constitutive human-algorithmic interaction layer could also be a way for interdisciplinary stakeholders to cooperate on identifying and addressing the ethical challenges of AI systems. For example, it could provide a means to practically operationalize the AI well-being impact assessment process proposed by the IEEE 7010 standard, where concrete well-being metrics are defined in the domains of: (1) Affect, (2) Community, (3) Culture, (4) Education, (5) Economy, (6) Environment, (7) Human Settlements, (8) Health, (9) Government, (10) Psychological/Mental Well-Being, (11) Satisfaction with Life, and (12) Work.
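As an illustration, here is a hedged sketch of how those well-being domains could anchor such an assessment in code, with each domain mapped to concrete indicators chosen together with stakeholders. The example indicators below are placeholders assumed for illustration, not metrics prescribed by the standard.

```python
# Hypothetical sketch: carrying the IEEE 7010 well-being domains through a
# human-algorithmic interaction layer. Indicators are illustrative placeholders.
WELLBEING_DOMAINS = [
    "Affect", "Community", "Culture", "Education", "Economy", "Environment",
    "Human Settlements", "Health", "Government",
    "Psychological/Mental Well-Being", "Satisfaction with Life", "Work",
]

# One list of stakeholder-chosen indicators per domain.
impact_assessment = {domain: [] for domain in WELLBEING_DOMAINS}
impact_assessment["Affect"].append("self-reported mood after recommended sessions")
impact_assessment["Work"].append("change in time spent on the platform during work hours")
```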

As someone who has done applied and research work on AI recommender systems, the question I kept asking while working in that domain was: what if online platforms allowed us to separate the claims we make about ourselves from the claims other human and algorithmic actors have made about us? This question brings us to the dimension of friction. As with mutability, many current AI systems and ToS agreements do not allow for friction. Users of a platform are not usually notified when a change in the algorithm has happened. Even when users receive notifications of changes in the ToS, they often have no opportunity to express their preferences about the ToS at a more granular level. They have no way to internalize what the impact of such a change could be, and they often lack the autonomy to act upon it other than by completely opting out of the platform.
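One way to picture this separation is as a data structure that keeps self-asserted claims apart from inferred ones and records who made each claim, so that a person can inspect and contest inferences about them. This is a hypothetical sketch; the class and field names are my own assumptions, not an existing platform API.

```python
# Hypothetical sketch: a profile that separates claims people make about
# themselves from claims inferred by algorithmic or other actors.
from dataclasses import dataclass, field

@dataclass
class Claim:
    attribute: str          # e.g. "interested_in_topic:privacy"
    value: str
    source: str             # "self", "platform_model_v3", "advertiser_x", ...
    contestable: bool = True

@dataclass
class Profile:
    self_asserted: list[Claim] = field(default_factory=list)
    inferred: list[Claim] = field(default_factory=list)

    def contest(self, attribute: str) -> None:
        """Remove inferred claims about an attribute the person disputes."""
        self.inferred = [c for c in self.inferred
                         if not (c.attribute == attribute and c.contestable)]
```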

Regarding generativity, ToS agreements may currently differ from one country to another, but they often fail to address the unique needs of cities, collectives, and individual communities of place or practice. New kinds of frameworks operating in the human-algorithmic interaction layer could allow for a greater level of generativity, at the intersection of people’s generative human identity and the potential for generative technology.

Behavioral Use Licensing for Responsible AI

The organizers of the AAAI Towards Responsible AI in Surveillance, Media, and Security through licensing symposium are an interdisciplinary group that advocates the use of licensing to enable legally enforceable behavioral use conditions on software and data. In their recent paper, Contractor et al. argue that “licenses serve as a useful tool for enforcement in situations where it is difficult or time-consuming to legislate AI usage.” The goal is to enable and incentivize AI developers to control for the responsible downstream use of their technology. They could do that by explicitly specifying permitted and restricted secondary uses through licensing clauses. As one of the promising steps in that direction, the symposium organizers have started a broader conversation in the AI research community: “Peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative.” Ultimately, small changes to the academic process could have a ripple effect in the AI industry by empowering the voices of marginalized stakeholders.
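To illustrate how such clauses might become actionable in practice, here is a hedged sketch of a machine-readable manifest listing permitted and restricted behavioral uses, alongside a naive pre-deployment check a downstream integrator might run. The clause wording and manifest fields are illustrative assumptions, not the license text proposed by Contractor et al.

```python
# Hypothetical sketch: a machine-readable manifest accompanying a behavioral
# use license. Clause names and fields are illustrative, not an actual license.
BEHAVIORAL_USE_MANIFEST = {
    "artifact": "example-model-v1",
    "license": "behavioral-use-license (illustrative)",
    "permitted_uses": [
        "academic research",
        "accessibility tooling",
    ],
    "restricted_uses": [
        "surveillance of individuals without consent",
        "automated decision-making about access to credit or housing",
    ],
}

def use_is_permitted(manifest: dict, intended_use: str) -> bool:
    """Naive check a downstream integrator might run before deployment."""
    return (intended_use in manifest["permitted_uses"]
            and intended_use not in manifest["restricted_uses"])
```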

Specification is inextricably linked to the challenge of generalization in building AI systems. Most recently, a group of 40 researchers across seven teams at Google identified underspecification as one of the biggest concerns when deploying AI in any industry domain. Licenses could provide a framework for practitioners to pay more attention to the downstream use of their work and, in doing so, to expand testing of systems before they are put into production. For example, in the context of natural language processing, Ribeiro et al. discuss behavioral testing of AI language models (a minimal sketch follows the figure below). More broadly, robustness is an active area of AI research, as discussed by Ortega et al. and the DeepMind safety team:

Source: Building safe artificial intelligence: specification, robustness, and assurance. “Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.”
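Following up on the behavioral testing mentioned above, here is a minimal sketch in the spirit of Ribeiro et al.’s work (not their CheckList API): an invariance test that checks whether a sentiment model’s prediction stays the same when only a person’s name is swapped. The `predict_sentiment` argument is a hypothetical stand-in for the model under test.

```python
# Minimal sketch of a behavioral invariance test: the prediction should not
# change when only the name in the input text changes.
def invariance_under_name_swap(predict_sentiment, template, names):
    predictions = [predict_sentiment(template.format(name=n)) for n in names]
    return len(set(predictions)) == 1  # passes only if all predictions agree

# Example usage with a trivial placeholder model:
passed = invariance_under_name_swap(
    lambda text: "positive" if "great" in text else "negative",
    "{name} had a great experience with the service.",
    ["Maria", "Ahmed", "Keisha", "John"],
)
```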

The challenges of specification, robustness, and assurance were also at the core of the discussions during the AAAI Conceptual Abstraction and Analogy in Natural and Artificial Intelligence conference. Specifically, top researchers investigating the challenges of AI generalization expressed the need for (1) designing new kinds of benchmarks as well as (2) a critical look at what conclusions are drawn from those benchmarks. Similarly, there is a need for a “developmental approach” to AI inspired by the field of developmental psychology. Almost every research talk at the conference made a reference to Douglas Hofstadter’s work on disentangling the relationships between analogy, concepts, and cognition.

Reinforcing the need to go beyond accuracy metrics, academic researchers and practitioners have a responsibility to investigate and spread awareness about the (un)intended consequences of the AI algorithms and systems to which they contribute. Ultimately, new kinds of metrics frameworks, behavioral licensing, or ToS agreements could empower participation and inclusion in the responsible development and use of AI.


Bogdana Rakova

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency