COMMENTARY

Not so private room: Zoom’s AI privacy fiasco exposes how vulnerable we are to Big Tech's whims

The video conference company's latest affront to data protections epitomizes why few trust Zoom in the first place

By Rae Hodge

Staff Reporter

Published August 11, 2023 3:02PM (EDT)

Zoom Meetings logo is seen displayed on a smartphone. (Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images)

If there's been one topline tech story this week that feels synecdochal of the United States' entire tech-politik zeitgeist, it's got to be Zoom's latest privacy fiasco. The video conference software-maker caught heat this week over a Terms of Service (ToS) change allowing it to harvest private user meeting data for artificial intelligence training. And, yes, once again it was the internet's privacy vanguard which sounded the alarm and unleashed a deafening roar of pushback.

In the midst of this public-relations nightmare, Zoom's response to its estimated 800 million customers wasn't an apology or course correction, but an unctuous "clarification" doubling down on the new ToS. The move drew only more ire as calls for federal investigation grew.

 

An already fed-up remote workforce began realizing something disturbing. From now on, whenever our bosses hold a mandatory meeting using Zoom — or our professors, physicians, attorneys, accountants, government agencies or therapists require us to use it — there's a real chance that we could be cornered into giving Zoom the legal right to record our every word and deed in those meetings. And then to use, manipulate and distribute those recordings and associated metadata however Zoom pleases within the widest interpretation of the law. Forever.

By Wednesday, Zoom's stock had fallen from a prior Friday high of $71 to $65 a share.

The entire eruption was sparked by Zoom's new AI-powered product features. Combining tech from OpenAI and Anthropic, Zoom IQ was soft-launched in March with a quiet ToS change and is currently offered on a free trial basis. Basically, Zoom IQ spits out AI-generated summaries of your meeting video and chat logs. The Zoom account owner can turn these features on or off. If turned on, they'll need to manually deselect Zoom's on-by-default data sharing option after solving a pop-up maze of sub-menus. 

If administrators or account owners turn the features on, Zoom says they will "be presented with a transparent consent process for training our AI models using your customer content," according to the company's latest blog post.

As a mere participant, you'll be notified of the meeting host's choices via pop-up. But that's all you get from a company with an embarrassingly long rap-sheet of security issues. If your boss, for instance, insists on using the AI features as a matter of convenience during one of those mandatory "cameras on!" meetings — and can't be bothered to navigate Zoom's sub-menus to deselect data sharing — then your only choice is to either agree to let Zoom surveil and forever-use your face and voice, or leave the meeting and tell your employer to kick rocks.

Some choice, huh? 

"For AI, we do not use audio, video, or chat content for training our models without customer consent," wrote Chief Product Officer Smita Hashim in the blog post. "These terms of service work together with our privacy statement and in-product privacy notices, which in turn aim to ensure that usage of customer content is based on your consent."

Notably, though, Zoom has a history of using AI, and many of its AI-driven features were already publicly known by early 2021: pornography and fraud detection, virtual backgrounds, video compression, background audio muffling, live transcription, Zoom Rooms with auto-tracking, integration with AI assistants and letting Google read your Zoom traffic so it can prioritize that part of your internet connection for faster speeds.

A Tuesday LinkedIn post from Zoom CEO Eric Yuan echoed the language of Hashim's blog post.

"It is my fundamental belief that any company that leverages customer content to train its AI without customer consent will be out of business overnight," Yuan wrote. "Given Zoom's value of care and transparency, we would absolutely never train AI models with customers' content without getting their explicit consent."

If you find yourself doubting whether a more duplicitous and insulting use of the word "consent" could be possible in consumer tech, come sit by me. I've been to frat parties that show more respect for my consent and recorded image than this. Zoom might as well say it won't use recordings of you to train AI "unless you use Zoom" since, in order to use the product at all, you have to consent to Zoom's ToS.

As Damien Williams opined for Wired, "coerced consent is no consent at all." That point is meticulously explicated by TechCrunch's Natasha Lomas in her surgical analysis. As she details at length, this ham-fisted attempt to redefine consent, from something given freely by the individual user to a choice made on the user's behalf by another person (the account holder), is poised to become a significant legal liability for Zoom under Europe's General Data Protection Regulation (GDPR).


It's also important to notice Zoom's specific word choice preceding the list of data types: "do not use" is meaningless and completely impotent in this context. It is a description of the company's current policies or state of operation, not a promise. "Do not use" specifically avoids any language that would allow the company to be held accountable if it changes its mind tomorrow, unlike the future-tense "will not use," the phrasing that appeared in both the blog post above and the ToS only after public outcry.

But that's also basically meaningless and no one should be impressed, since the Zoom ToS states in section 15.2 that "you agree that Zoom may modify, delete, and make additions to its guides, statements, policies, and notices, with or without notice to you."

That's a rather polite rider compared to 15.1 which strips you of even more rights by letting you know that "changes to this Agreement do not create a renewed opportunity to opt out of arbitration (if applicable). If you continue to use the Services after the effective date of the Changes, then you agree to the revised terms and conditions."

Just because most of us are so beat down by tech-surveillance bootheels on our necks that we no longer have the energy to be pissed off about companies which demand you waive constitutional rights for the privilege of even looking at their stupid products — it doesn't mean forced arbitration clauses like those in Zoom's sections 27.1 through 27.12 are any less dystopically banal and insulting. It just means I've got to recommend that you — after consulting with your lawyer, of course — follow the instructions of 27.11 and email opt-out@zoom.us within 30 days of creating a new account so you can reject the arbitration agreement that Zoom never should have included in the first place.

It's particularly galling to see Zoom's anti-judiciary nose-thumbing in these clauses given the company's history of being called to court over users' privacy and security concerns.

By late 2020, the Federal Trade Commission had announced a settlement with Zoom which cornered the company into beefing up its security practices and being clearer with users about its privacy practices. The FTC alleged that Zoom's misrepresentations included its claim that meetings were end-to-end encrypted while the company held cryptographic keys allowing it to access the full contents of any Zoom meeting, and its claim that recorded meetings stored in the company's cloud storage would be encrypted immediately. According to the FTC, however, the files were stored unencrypted on Zoom's servers for up to 60 days before being transferred to secure cloud storage.

Then there was the $86 million class-action lawsuit that Zoom settled in August of 2021 over privacy invasion allegations after it shared users' personal data with Facebook, Google and LinkedIn. The suit included another privacy-security dovetail, calling the company to the floor for falsely saying it offered end-to-end encryption (which it introduced in April 2020 after the suit was filed) while allegedly not doing enough to prevent those once-notorious Zoom-bombing intrusions by hackers.

If Zoom wanted to be taken seriously about the integrity of its corporate motives regarding its use of your data — whether that's the actual videos it keeps of you, or the telemetry-like "Service Generated Data" on your use of the software — it would first need to take the proverbial knives out of our backs. It would need to drop these wretched arbitration clauses and the "we can change our minds any time" clauses out of its ToS.

They're the equivalent of crossing your fingers behind your back while shaking hands on a deal.

Zoom could instead go the opposite direction if it wants to prove its self-proclaimed "commitment to transparency and user control." It could answer the questions it left hanging when it invited Hacker News readers to engage, then dodged their critical concerns. Clunky lay-writer though I may be, even I can see that Zoom could not only remove the offending clauses but add further language saying it "does not and will not retain the right to use, now or in the future" any of the ridiculous list of data types in the new changes.

"This is another example of why we need strong federal data privacy legislation that will actually protect us from corporate surveillance and stop companies like Zoom from harvesting and abusing our data."

Sections 10.1, 10.2, 10.4 and 10.6, the specific bits at the center of the uproar, are an odious and self-evident grab for rights to control users' images and data. The company's handling of your Customer Content, as Zoom calls your uploaded video, text and files, appears in 10.4.

The section reads more like the plan for a bank heist than the ToS of some (let's be honest here) annoyingly bloated corporate software:

"You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content."

In bold-faced type, Zoom has now grafted on a single line: "Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent."

In a petition, digital rights advocates Fight for the Future have taken up the push for more change.

"Zoom is trying to mislead users, hiding significant privacy concerns within confusing policy language and conflicting statements. This is another example of why we need strong federal data privacy legislation that will actually protect us from corporate surveillance and stop companies like Zoom from harvesting and abusing our data for profit. But we also need Zoom to address these issues, and publicly state that it will not use our data to train its AI," the group writes.

I'm not a doomer. I don't think you can have any sort of life in journalism without the belief that good change can be won as much in the margins as it can in the main. But I am a realist who sees no evidence that we have ever obtained any victory for US digital rights by politely asking for them to be benevolently bestowed upon us by tech companies, billionaires and the congress members who cash their lobbying checks.

 

Rather, all evidence points to the contrary: The Big Tech privacy grab is only going to get bigger as AI use spreads. It's not going to stop until federal privacy and digital rights protections are in place. As generative AI grows, the shoggoth has to be fed on user data, which companies can squeeze more cheaply from you via commandeering your consent than they can by paying you for your data.

Until we have rights enshrined in law, all we can do is cross our own fingers and hope Zoom doesn't have any of our saucy skin vids laid aside from the days of COVID-safe virtual dates, and that some future breach of its cloud storage doesn't turn the company into a de facto dark web porn dealer.

In the meantime, whither Bellingcat goeth, so go I. And luckily, there are still some admirable virtual meeting rooms open that provide their hosting company an excellent view of exactly nothing.


By Rae Hodge

Rae Hodge is a science reporter for Salon. Her data-driven, investigative coverage spans more than a decade, including prior roles with CNET, the AP, NPR, the BBC and others. She can be found on Mastodon at @raehodge@newsie.social. 
