Was Zoom's ToS Controversy Really a "Failure"?
According to Zoom it was, but that's pretty hard to believe. It was more likely a calculated risk they hoped would fly under the radar.
I make strange connections between seemingly unrelated things. It's a strength of mine, as I can see similarities and patterns repeated in different places. But for people who aren't me, it can be a non-sequitur. That's what happened recently when I was listening to Rihanna and immediately thought of Zoom and its relationship to its customers. The song is "Take a Bow", and this line jumped out at me:
Don't tell me you're sorry 'cause you're not
Baby when I know you're only sorry you got caught
Upon hearing that, I said to my husband, "Oh, it's just like Zoom! They are only sorry they got caught!" To which he replied, "What the hell you talking about?" After 13 years together, at least he's grown to accept my random statements without context, even if he still doesn't understand them most of the time.
Zoom recently did its best to embody that lyric when it quietly updated its terms of service, hoping no one would notice, and then 'made amends' by giving users a choice that isn't a choice. They claimed it was an internal failure, apologized, and then showed up with proverbial flowers they ripped out of Ethel's flowerbed down the street.
Alright, so Rihanna might not have been the best segue into an article about irresponsible AI practices, but it was fun to write at least.
Do I think the Zoom ToS change was an “internal failure”? Of course not. I think it was a calculated risk on Zoom's part, and that they are really only sorry they got caught.
Zoom does a bad thing, gets caught
Back in March, Zoom changed its Terms of Service in a way that gave it free rein to use the data generated by your use of the service to train its AI models. The change flew under the radar for several months until one meticulous user actually read the terms and noticed it.
The updated terms change what Zoom can do with two types of data: service-generated data and content data. Service-generated data covers things like telemetry, product usage data, behavior data, location data, which features you use, and so on. Content data is your video, audio, and anything else you create while in a Zoom meeting. The March update to Zoom's ToS originally did not allow users to opt out of Zoom using this data to train its AI models. In effect, that meant Zoom had the legal right to use your face, voice, mannerisms, way of speaking, and innumerable other aspects of your person to train its AI models if you opted to use their service.
Then this week (August 7), Zoom put out a garbled, contradictory response to the controversy. Their CPO came out to address the "misconception," saying that they would not use personal data in the exact way their ToS said they could, which is a weird disconnect. Later in the week, they put out another statement calling the situation a "failure." They amended the policy to include an opt-out for content data, but not for service-generated data. The opt-out is more of a token measure than actual consent: it's "you can consent, or you can leave the meeting." For many Zoom users, "just leaving the meeting" isn't an option.
“Don’t tell me you’re sorry 'cause you’re not…”
We need to start with the obvious: this wasn't a mistake or an "internal failure," as the CEO says. It was a calculated risk by the leadership at Zoom. The entire industry around the privacy, compliance, and legal risks of generative AI has been talking about these issues almost non-stop since ChatGPT went live in late 2022. Early drafts of the EU AI Act, dating back to 2021, have had consent as a major component.
If we look even more recently, Google generated a lot of concern and raised eyebrows over the updates it made to its privacy policy, giving it the right to use pretty much anything publicly available online to train its AI models.
For a company as big and as risk-averse as Zoom, there's no reasonable explanation for why it didn't identify the reputational and potential legal risk of slurping up data without consent or notification. That leaves the most logical conclusion: it was an intentional risk.
So why try to pull a fast one?
The easiest answer is that generative AI is contentious right now. Folks object to using personal data to train models. They are concerned about trade secrets, copyright, unintended disclosure, environmental impact, and of course the "annihilation of the human race" (/eye-roll). There is a lot for people to object to, and a lot of negative press that comes with those objections. So Zoom wanted to make as small a splash as possible while still getting its hands on that training data.
This brings us to the next point: any AI is only as good as the data used to train it. For Zoom to offer AI solutions that folks want to use, it needs the data to train that AI. Zoom is sitting on a mountain of data that it desperately needs to remain competitive. I've touched on this before in a different context: an AI Pause was not realistic because it would put those who complied with it at an extreme disadvantage against those who didn't.
The situation with Zoom is the same. If they don't keep barrelling ahead with AI, someone else will, and that threatens their position in the market. So Zoom needed the data but knew that people would be upset about the collection. They brushed it under the rug and hoped no one would notice. Or at least, they hoped enough time would pass before anyone noticed the change that they could get some data out of it. Which is more or less what happened.
Their consent consolation is a token gesture at best. Given that the majority of folks use Zoom for work, consent isn't something most individuals can freely withhold. If your organization or your boss uses the AI features and Zoom is your primary tool for video conferences, well ... your consent is immaterial here, because you can't just stop joining meetings at your company.
Zoom knows this too. The opt-out is a wonderful way to make it seem like they are backtracking, but the number of folks who can realistically opt out is likely small. Zoom heals the reputational damage without losing out on much data to train its models. It's a win-win for Zoom.
Users, on the other hand, don't get the same glowing outcome. In theory, users will get better AI tools to use in Zoom, but not everyone will get to use those tools. Most of the AI tools are targeted at larger, enterprise companies, with price tags to match. If you aren't part of one of those companies, Zoom is using you to improve features it will hand to folks with more money than you.
A portent of things to come
Zoom is just a small part of the larger picture. Google has already changed its privacy policy to say it can use pretty much anything you post online to train its models. Microsoft's ToS provisions around AI are narrower in scope but still set a worrying precedent.
To me, this is the most concerning part of the situation. Zoom, Google, and Microsoft updating their ToS with AI provisions that codify the exploitation of their users is like thunder in the distance. We know this situation is going to get much more complicated and much worse before it gets better. We've already seen businesses like Reddit take steps to further exploit their users in favor of creating additional revenue streams, and we can expect more of the same.
So keep an eye out. It's more important than ever to read the terms, to read the privacy policy. Take the time to understand what you are agreeing to. Zoom is just the beginning, and others will follow and hope you don't notice.
What do you think? Was this actually an “oops” by Zoom, or a calculated risk? Let’s chat about it in the comments!