What The Zoom Controversy Teaches Us About AI and Consent

Bobbi Rakova
7 min read · Aug 10, 2023


A blog post about the challenges to informed consent in AI, the risks of dark design patterns, and what to do about them.

If you’ve been following the recent news about changes to Zoom’s Terms of Service (ToS) agreement, you might agree that we only know about them thanks to the work of the few people who actually read the ToS in its entirety. I first learned about the changes through a post on Hacker News and this article by Alex Ivanovs.

In fact, according to the Internet Archive’s Wayback Machine, Zoom’s new stance on AI has been in effect since at least March 31st. See the archived version of the terms here. It took weeks for the internet to notice.

In summary, two of the ToS clauses discuss the use of data for machine learning and artificial intelligence training and testing. Zoom distinguishes between two kinds of data: customer content and service generated data.

  • Customer content includes any data that you may upload to a Zoom chat, as well as the data generated through using the product — visual displays, transcripts, analytics, and more.
  • Service generated data includes “any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services.”

According to clauses 10.4 and 10.2, Zoom is allowed to use both of these kinds of data for machine learning and artificial intelligence training and testing.

Source: https://explore.zoom.us/en/terms/

On August 8th, Zoom made an adjustment to clause 10.4, which now states: “For AI, we do not use audio, video, or chat content for training our models without customer consent.” However, privacy experts point out that this adjustment covers only customer content — service generated data can still be used for training AI without customer consent. And service generated data includes “content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services” (clause 10.2).
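
To make that asymmetry concrete, here is a minimal sketch in Python of how the two data categories map to consent requirements under the adjusted terms. It is my own illustration — the type names and the consent-gating function are hypothetical, not anything from Zoom’s systems:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DataKind(Enum):
    CUSTOMER_CONTENT = auto()   # uploads, visual displays, transcripts, analytics
    SERVICE_GENERATED = auto()  # telemetry, product usage, diagnostic data

@dataclass
class MeetingData:
    kind: DataKind
    description: str

def usable_for_ai_training(data: MeetingData, customer_consented: bool) -> bool:
    """Models the post-August-8 asymmetry described above: customer content
    is gated on consent, while service generated data is not."""
    if data.kind is DataKind.CUSTOMER_CONTENT:
        return customer_consented
    return True  # no consent gate for service generated data

# A chat transcript is gated on consent; telemetry about the same call is not.
transcript = MeetingData(DataKind.CUSTOMER_CONTENT, "meeting transcript")
telemetry = MeetingData(DataKind.SERVICE_GENERATED, "telemetry about the call")
assert usable_for_ai_training(transcript, customer_consented=False) is False
assert usable_for_ai_training(telemetry, customer_consented=False) is True
```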

So how do we know what kinds of data are being generated? Is AI used to generate such service data? (For further reading, see this comment by Sean Hogle, an attorney specializing in tech and intellectual property.)

AI-enabled value-added services

In response to the public backlash, Zoom published a blog post about the ToS changes. Again, note that the blog post appeared on August 7th, while the changes had been in place since at least March 31st.

In the blog post, Zoom describes a value-added service in which a user decides to livestream a meeting on YouTube. Still, a number of questions remain: What other value-added services are in development, and how will users learn about them? Value for whom, and at what cost? Who is accountable when failures cause downstream sociotechnical harms at the individual or collective level?

A recent Zoom patent application (published on August 3rd, 2023) provides some insight. It describes “sentiment scoring for remote communication sessions,” i.e., producing sentiment scores for participants in a Zoom meeting, or across a number of meetings.

For example, they write:

“Such a sentiment analysis can provide a sentiment score for the customer representing their feeling or sentiment during the sales meeting, based on a positive sentiment, negative sentiment, or a neutral sentiment. It would be highly valuable for sales representatives within a sales team, for example, to learn about the sentiment of a prospective customer overall for a conversation, or during specific segments focused on certain topics, in order to understand customer sentiment and behavior better overall or for specific topics, and to formulate strategies for improving prospective customer sentiment in areas where it is negative.”

Here is how they imagine an analytics dashboard showing the sentiment analysis of participants in a sales meeting:

Source: Patent application number US20230244874A1
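
To give a rough sense of what per-participant sentiment scoring over a meeting transcript could look like, here is a toy sketch. The word lists stand in for a real sentiment model, and everything in it — function names, the scoring rule, the labels — is my own illustration, not Zoom’s implementation:

```python
from collections import defaultdict
from statistics import mean

# Toy lexicon standing in for a real sentiment model; purely illustrative.
POSITIVE = {"great", "love", "yes", "helpful"}
NEGATIVE = {"no", "expensive", "confusing", "worried"}

def score_utterance(text: str) -> float:
    """Score one utterance in [-1, 1]: positive, negative, or neutral (0)."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def sentiment_by_participant(transcript: list[tuple[str, str]]) -> dict[str, float]:
    """Aggregate utterance scores into one score per participant — roughly
    the kind of per-meeting roll-up the patent application describes."""
    scores = defaultdict(list)
    for speaker, text in transcript:
        scores[speaker].append(score_utterance(text))
    return {speaker: mean(vals) for speaker, vals in scores.items()}

transcript = [
    ("prospect", "This looks great and very helpful"),
    ("prospect", "But I am worried it is too expensive"),
    ("sales_rep", "We love working with teams like yours"),
]
print(sentiment_by_participant(transcript))  # {'prospect': 0.0, 'sales_rep': 1.0}
```

Even this toy version makes the stakes vivid: a handful of words tips a participant’s score, and that score could feed a dashboard that a manager acts on.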

In the public blog post, they also describe a generative AI meeting summary feature. They show a screen where account owners and administrators can enable the feature, which includes data sharing. Participants in the Zoom meeting are notified when the feature is enabled. The options presented to users consist of two buttons: “Leave Meeting” and “Got it.”

Source: https://blog.zoom.us/zooms-term-service-ai/

The risks of dark design patterns

Notifying users that AI is being used to process their data does not equal meaningful consent. For example, what happens when a functionality failure in the sentiment analysis or the summarization feature impacts my performance evaluation as a sales representative? As Deborah Raji et al. write, AI functionality failures lead to real-world harm. Furthermore, Renee Shelby et al. have developed a taxonomy of sociotechnical harms of algorithmic systems, which provides an all-encompassing view of the risks and implications.

While there is a request for user consent for the AI service, how would users know the details of that consent — what exactly are they giving up when they click “Got it”? What might be other uses of their data, now or in the future? What specific permissions do the data usage rights include for content used in delivering the value-added service? While Zoom stresses that the user owns their content, Zoom now holds a license to use that content in generative AI features.

Increasingly, users need to be aware of the risk of deceptive design, or dark design patterns, which may mislead them in harmful ways. These are tricks used by websites and apps to get you to do things you might not otherwise do, like buy products, sign up for services, or change your settings. There are also critical justice and equity implications: advocacy experts and researchers have shown that deceptive design patterns harm some people more than others.
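
The notification screen described above is a useful illustration. Here is a schematic sketch — hypothetical on my part, not a description of any shipping product — contrasting the options in Zoom’s dialog with what a more symmetric, consentful dialog could offer:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentDialog:
    message: str
    options: list[str] = field(default_factory=list)

def can_decline_without_penalty(dialog: ConsentDialog) -> bool:
    """Freely given consent requires a refusal option that does not cost
    the user the underlying service (here, the meeting itself)."""
    return any(opt.startswith("No") for opt in dialog.options)

# What the screenshot in Zoom's blog post shows: acknowledge, or leave.
# There is no way to stay in the meeting while declining data sharing.
zoom_style = ConsentDialog(
    message="Meeting Summary is enabled (includes data sharing).",
    options=["Got it", "Leave Meeting"],
)

# A consentful alternative would make declining as easy as accepting,
# and keep the choice reversible. (My sketch, not an existing feature.)
consentful = ConsentDialog(
    message="May we use this meeting's content to generate a summary?",
    options=["Yes, for this meeting", "No, but stay in the meeting",
             "Show me exactly what is shared", "Change my answer later"],
)

assert not can_decline_without_penalty(zoom_style)
assert can_decline_without_penalty(consentful)
```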

Better consent mechanisms

Human-centered design has been a dominant innovation methodology in AI products and services. What would a human-centered design approach to consent look like? Projects such as the Consentful Tech Zine and the CARE Principles for Indigenous Data Governance offer some examples.
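
The Consentful Tech zine, for instance, adapts the FRIES model of consent: Freely given, Reversible, Informed, Enthusiastic, and Specific. As a thought experiment, here is what encoding some of those properties in a consent record could look like — the field names and logic are my own sketch, not an existing standard:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    """A consent record shaped by the FRIES properties. "Enthusiastic"
    resists encoding; the closest proxy is opt-in rather than opt-out."""
    purpose: str                # Specific: one named use, not "any AI feature"
    data_covered: list[str]     # Informed: exactly which data is included
    granted_at: datetime
    revoked_at: datetime | None = None  # Reversible: withdrawal is first-class
    freely_given: bool = True   # Freely given: refusal carried no penalty

    def is_active(self) -> bool:
        return self.freely_given and self.revoked_at is None

    def revoke(self) -> None:
        self.revoked_at = datetime.now()

# Consent to summarization does not imply consent to sentiment scoring,
# and withdrawing it later is as easy as granting it.
summary_ok = ConsentRecord(
    purpose="generate meeting summary",
    data_covered=["transcript"],
    granted_at=datetime.now(),
)
summary_ok.revoke()
assert not summary_ok.is_active()
```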

Scholars have argued that consent needs to be informed. The practice of informed consent comes from the fields of medicine, clinical practices, and biomedical research — “A person gives an informed consent … if and only if the person, with substantial understanding and in substantial absence of control by others, intentionally authorizes a health professional to do something” (Beauchamp, 2011).

In order to exercise meaningful informed consent, people need a new kind of sociotechnical forum that empowers them to understand the terms of their choices and the potential downstream implications. This is our goal with the Terms-we-Serve-with framework. Together with Megan Ma and Renee Shelby, we propose that technology companies could more meaningfully engage with the communities they serve through a feminist-inspired approach to the human-centered design of user agreements.

Such a consent-ability forum could contribute to a relational and reparative approach to algorithmic accountability. It would establish a process and tools that unmask and undo algorithmic harm.

What to do

We’ve lost the benefit of the bargain that contract law once promised, and the road to redress of potential AI harms (when it exists at all) has been described as close to impossible (see this blog post on Mozilla’s comments on the EU’s proposed AI liability directive). Consumers are left with no real choice and no ability to track how companies are changing their agreements to cover the use of AI —

15.1 General Changes. Zoom may make modifications, deletions, and additions to this Agreement (“Changes”) from time to time in accordance with this Section 15.1. Changes to these Terms of Service will be posted here or in our Service Description located here, which you should regularly check for the most recent version. … Changes to this Agreement do not create a renewed opportunity to opt out of arbitration (if applicable). If you continue to use the Services after the effective date of the Changes, then you agree to the revised terms and conditions.

Still, what we can do is speak up and voice our concerns within circles of friends and family, networks of communities, institutions, and organizations. I’m reminded of my early fascination with the writer and civil rights activist Maya Angelou. In an interview with Bill Moyers, she shares:

MAYA ANGELOU: You only are free when you realize you belong no place — you belong every place — no place at all. The price is high. The reward is great…
BILL MOYERS: Do you belong anywhere?
MAYA ANGELOU: I haven’t yet.
BILL MOYERS: Do you belong to anyone?
MAYA ANGELOU: More and more… I belong to myself. I’m very proud of that. I am very concerned about how I look at Maya. I like Maya very much.

A tech company that sees me as “a product” may consider that I belong to them … I’ve given my consent. However, more and more, I choose to belong to myself and to the communities of like-minded people who are critical of what giving and taking consent means.

Thank You!

UPDATED (15/08/23): As of August 11th, the latest updates to the terms of service, privacy policy, and public blog post from the company repeatedly state that Zoom doesn’t use any call content, attachments, or screen sharing to train Zoom’s, or any other company’s, AI models. Read more about the updates from Mozilla’s advocacy standpoint here.

Special Thanks To Xavier Harding and Kevin Zawacki!

Written by Bobbi Rakova

Senior Data Scientist, Responsible AI at DLA Piper, ex-Senior Trustworthy AI Fellow at Mozilla, working on improving AI contestability and human agency
