Integrity beyond the Terms of Service Agreement in a Human + AI world

Slide from François Chollet’s talk “What Deep Learning Can Do, What It Can’t, and What We Can Try Next”; see also his paper On the Measure of Intelligence.

Generalization vs. Generativity

In his work The Generative Internet, Prof. Jonathan Zittrain of Harvard Law School describes generativity as “a function of a technology’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility.” While the leverage and adaptability dimensions of this definition roughly correspond to what AI researchers discuss as generalization, ease of mastery and accessibility raise other kinds of considerations. For example, “ease of mastery reflects how easy it is for broad audiences both to adopt and to adapt [technology]: how much skill is necessary to make use of its leverage for tasks they care about, regardless of whether the technology was designed with those tasks in mind.” Furthermore, in Zittrain’s view of generativity, we also need an accessibility dimension, where accessibility reflects how easily people can both use and control a technology.

Measuring generativity in AI means investigating the flows of information between a model and the actors involved in the sociotechnical context within which the model exists. Two conceptual framings that could help us characterize generativity are (1) the model’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility, and (2) the interdependencies between identity, relationship, and information.
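To make the first framing concrete, here is a minimal sketch of how the four dimensions might be recorded and aggregated for a given model. Everything in it (the class name, the 0–1 scales, the unweighted average) is an illustrative assumption, not an established metric.

```python
from dataclasses import dataclass

@dataclass
class GenerativityProfile:
    """Illustrative container for Zittrain's four dimensions,
    each scored on an arbitrary 0-1 scale (the names and scales
    here are assumptions, not an established measure)."""
    leverage: float         # capacity to amplify effort across tasks
    adaptability: float     # range of different tasks it can be bent to
    ease_of_mastery: float  # how easily broad audiences adopt and adapt it
    accessibility: float    # how easily people can use and control it

    def overall(self) -> float:
        """Naive unweighted average; a real assessment would weigh
        the dimensions against the sociotechnical context."""
        return (self.leverage + self.adaptability
                + self.ease_of_mastery + self.accessibility) / 4

# Example: a hypothetical general-purpose model that is powerful
# but hard for non-experts to use and control.
profile = GenerativityProfile(leverage=0.9, adaptability=0.8,
                              ease_of_mastery=0.4, accessibility=0.3)
print(f"overall generativity: {profile.overall():.2f}")
```

In practice, the weights, and even whether these dimensions can be meaningfully scored on a single scale, would depend on the sociotechnical context in which the model is deployed.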


Coming to terms

One of the artefacts that exist in the interaction layer between people and an AI model (and, more generally, many software systems) is the Terms of Service (ToS) agreement. ToS agreements were created in the mid-1990s to protect software providers; beyond privacy policies, they can include accountability, liability, and opt-out provisions.

Behavioral Use Licensing for Responsible AI

The organizers of the AAAI Towards Responsible AI in Surveillance, Media, and Security through Licensing symposia are an interdisciplinary group that advocates the use of licensing to enable legally enforceable behavioral use conditions on software and data. In their recent paper, Contractor et al. argue that “licenses serve as a useful tool for enforcement in situations where it is difficult or time-consuming to legislate AI usage.” The goal is to enable and incentivize AI developers to control for the responsible downstream use of their technology by explicitly specifying permitted and restricted secondary uses through licensing clauses. As one promising step in that direction, the symposia organizers have started a broader conversation in the AI research community: “Peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative.” Ultimately, small changes to the academic process could have a ripple effect in the AI industry by empowering the voices of marginalized stakeholders.
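To illustrate what behavioral use conditions might look like in practice, here is a minimal sketch of a machine-readable set of restricted-use clauses and a screening helper. The clause texts, the dictionary keys, and the check_intended_use function are all hypothetical assumptions, loosely inspired by behavioral-use licenses such as the RAIL family rather than taken from any actual license.

```python
# Hypothetical machine-readable restrictions; the clause texts below
# are illustrative assumptions, not any real license's terms.
RESTRICTED_USES = {
    "surveillance": "No use for tracking or identifying individuals without consent.",
    "disinformation": "No use to generate or spread deliberately misleading content.",
    "biometric_inference": "No inference of sensitive attributes from biometric data.",
}

def check_intended_use(declared_uses: list[str]) -> list[str]:
    """Return the license clauses a declared downstream use would violate."""
    return [clause for use, clause in RESTRICTED_USES.items()
            if use in declared_uses]

# Example: screening a downstream deployment plan against the clauses.
violations = check_intended_use(["content_moderation", "surveillance"])
for clause in violations:
    print("Restricted:", clause)
```

A real licensing workflow would of course rest on the legal text itself; a machine-readable mirror like this would mainly help tooling, such as model hubs, surface restrictions to downstream users early.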

Source: Building safe artificial intelligence: specification, robustness, and assurance — “Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.”
