
Is Zoom using your meetings to train its AI?

Zoom returns to the office — and to its problematic privacy ways.

A vaguely worded terms of service update made a lot of people think their private meetings were being used to feed Zoom’s AI machine. | David Espejo/Getty Images

The week isn’t even half over and it’s already been a bad one for Zoom, the videoconferencing service that boomed during the pandemic. It’s facing yet another privacy scandal, this time over its use of customer data to train artificial intelligence models. And its recent demand that its employees return to the office is a bad sign for the completely remote work life that Zoom’s eponymous product tried to help make possible.

Yes, the company that became synonymous with videoconferencing at a time when seemingly everyone was remote is now saying that maybe not everything can be done apart. It’s not just Zoom that’s doing this — there is a larger trend of companies calling their employees back to the office after months or years of working from home — but it seems particularly ironic in this case.

Now, Zoom’s not making everyone come back all the time. Its recent memo to employees says that everyone who lives within 50 miles of a Zoom office will have to work out of it at least twice a week. This “structured hybrid approach,” the company said in a statement to Vox, “is most effective for Zoom.”

“We’ll continue to leverage the entire Zoom platform to keep our employees and dispersed teams connected and working efficiently,” the company added.

It’s not the best look when a company that relies on people doing as many things remotely as possible wants its employees to do some things together. If even Zoom, the company that helped Make Remote Work Possible, doesn’t want its employees to work remotely all the time, it might be time to Zoom wave away your dreams of working from home every day.

Lots of people are still using Zoom, of course. But the company has fallen back down to Earth as people have gone back outside and need it less. Its stock price is back to roughly where it was before the pandemic, and it expressed concern in its most recent annual report that it won’t be able to convert enough of its large free user base into paid subscribers to stay profitable. Like many tech companies, Zoom had a round of layoffs, cutting 1,300 jobs — 15 percent of its workforce — in February. It also faces more competition from Google Meet, Microsoft Teams, and even Slack, all of which would surely love to lure Zoom’s considerable user base away for good. But it remains profitable. Just not as profitable as it was, and for understandable and predictable reasons.

Even so, you’d think it wouldn’t want to risk upsetting a user base that now has plenty of other options by sneaking a line into its terms of service that taps into a widespread fear: that generative AI will replace us, very much helped along by the data we’ve unknowingly provided. And yet, that’s exactly what Zoom did.

The company released an updated and greatly expanded TOS at the end of March. Companies do this all the time, and almost no one takes the time to read them. But Alex Ivanovs, of Stack Diary, did. On Sunday, he wrote about how Zoom had used the TOS update to give itself what appeared to be far-reaching rights over customers’ data, including the right to train its machine learning and artificial intelligence services on it. That, Ivanovs believed, could include training AI on Zoom meetings — and there was no way to opt out of it.

Here’s what the TOS says:

You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software. You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement.

You can see why Ivanovs thought that Zoom wanted to use customer data and content to train its AI models, as that’s exactly what it seems to be telling us. His article was picked up and tweeted out, which caused an understandable panic and backlash from people who feared that Zoom would be training its generative AI offerings on private company meetings, telehealth visits, classes, and voice-over or podcast recordings. The idea of Zoom watching and ingesting therapy sessions to create AI-generated images is a privacy violation in more ways than one.

That’s probably, however, not what Zoom is actually doing. The company responded with a small update to its TOS, adding: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.” It also put up a blog post saying it was just trying to be more transparent with users about the fact that it collects “service generated data,” which it uses to improve its products, and it gave a few examples of that data that seem both innocuous and standard. The post also promoted Zoom’s new generative AI features, which the company does train on customer content, but only after obtaining consent from the meeting’s administrator.

But the fact remains that Zoom’s initial TOS wording left it open to be interpreted in the creepiest way possible, and, after a series of privacy and security missteps over the years, there’s little reason to give Zoom the benefit of the doubt.

Quick summary: Zoom was dinged by the FTC in 2020 for claiming that it offered end-to-end encryption, which it didn’t, and for secretly installing software that bypassed Safari’s security measures and made it hard for users to delete Zoom from their computers. It’s under a consent order for the next 20 years for that. Zoom also paid out $85 million to settle a class action lawsuit over Zoombombing, where trolls join unsecured meetings and usually show sexually explicit, racist, or even illegal imagery to an unsuspecting audience. It was caught sending user data to Meta and LinkedIn. Oh, and it played fast and loose with its user numbers, too.

There’s also still a question, even after Zoom tried to clear things up, of what counts as Customer Content and what counts as service generated data, the latter of which Zoom has given itself broad permission to use.

“By its terms, it’s not immediately clear to me what is included or excluded,” Chris Hart, co-chair of the privacy and data security practice at law firm Foley Hoag, said. “For example, if a video call is not included in Customer Content that will be used for AI training, is the derivative transcript still fair game? The whiteboard used during the meeting? The polls? Documents uploaded and shared with a team?” (Zoom did not respond to a request for comment on those questions.)

Ivanovs, the author of the blog post that brought all of this to light, wasn’t satisfied with Zoom’s explanation either, noting in an update to his post that “those adjustments ... [don’t] do much in terms of privacy.”

So, yeah, not a great few days for Zoom, although it remains to be seen just how damaging this will be to the company in the long run. The fact is, Zoom isn’t the only company whose use of AI, and how it trains its models, has people worried. OpenAI’s ChatGPT, which is trying to insert itself into as many business offerings as possible, was trained on customer data obtained through its API until, OpenAI said, it realized that customers really don’t like that. There are still concerns over what OpenAI does with what people put directly into ChatGPT, and many companies have warned employees not to share sensitive data with the service for that reason. And Google recently had its own brush with social media backlash over how it collects training data; you might have read about that in this very newsletter just a few weeks ago.

“I do think the reaction to Zoom’s terms changes reflects the concerns that people are generally having over the potential dangers to individual privacy given the increasing ubiquity of AI,” Hart said. “But the changes to the terms themselves signal the increasing and likely universal business need to organically grow AI technologies.”

He added: “To do that, though, you need a lot of data.”

A version of this story was also published in the Vox technology newsletter. Sign up here so you don’t miss the next one!
