Published on 22/08/2023 | Written by Heather Wright
Lust for data meets (immovable) public outrage…
Zoom has updated its terms and conditions again, saying it won’t use customer data to train its AI models, following a barrage of criticism and concern over invasion of privacy.
The issue flared when the company discreetly updated its terms of service to note that it could use customer and service-generated data – including data from videoconferences, audio and shared content – to train both Zoom and third-party AI models.
Cue a furore on social media, with concerns over customer data – which could include sensitive company meetings, online doctor visits and the like – being used to train the models, with no option to opt out.
“Privacy protection laws and access to quality data must be carefully balanced.”
The terms of service enabled Zoom to train algorithms on information including uploaded customer content, files, documents, transcripts, analytics and visual displays.
In the face of public ire, Zoom quickly spoke up, saying it wouldn’t use audio, video or chat customer content to train its AI models without user consent. In an accompanying blog post, Zoom chief product officer Smita Hashim said the intention behind the changed customer data terms was to ensure the company could provide value-added services, such as meeting recordings, ‘without questions of usage rights’.
But the ‘without user consent’ caveat did little to calm concerns, with many pointing out that it essentially meant you either accept the possible use of your Zoom sessions for AI training, or simply don’t use Zoom. (Something that could be deemed illegal under the European Union’s GDPR, where ‘forced consent’ is not permissible.)
Within days Zoom updated its terms again. It now says it will use customer content for legal, security and safety purposes, but states categorically that ‘Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like customer content, such as poll results, whiteboard and reactions, to train Zoom or third-party AI models’.
Zoom, which rode high during the pandemic but has since fallen back to earth – cutting 15 percent of its workforce and flagging concerns about its lack of success in converting free users to paid subscriptions – has already introduced several generative AI features.
Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, which create automated meeting summaries and compose chat messages respectively, use OpenAI technology.
Zoom’s situation provides a cautionary tale for organisations on both sides of the data fence. On one side, there’s the issue of data use – with companies racing to develop lucrative AI tools, the idea of data as the new oil has stepped up another gear as organisations seek the data required to feed ever-hungry algorithms. ChatGPT and its ilk have been trained on copious amounts of data – much of it copyrighted and scraped from the internet.
A number of lawsuits have already been filed against generative AI companies, including Microsoft, GitHub and OpenAI, for the use of content for training models.
OpenAI has recently enabled non-API ChatGPT users to switch off training, so new conversations aren’t used to train the company’s models.
It’s not just the tech companies either: Organisations are increasingly looking to use the data they own, including valuable customer data, to build or train generative AI models.
And on the flip side, there’s the question of where your organisation’s data might resurface if entered into a generative AI tool – something Amazon learned to its detriment earlier this year when the company spotted ChatGPT responses that mimicked internal Amazon data. Its staff had been feeding internal data into ChatGPT.
There’s also the potential impact of exposing personally identifiable information, which is covered under privacy laws.
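One practical takeaway from incidents like Amazon’s is to scrub obvious identifiers from text before it ever reaches an external AI service. Below is a minimal sketch of that idea – the `redact` helper and its regex patterns are purely illustrative assumptions, not part of any vendor’s tooling, and simple pattern matching will miss names, addresses and much other PII.

```python
import re

# Illustrative only: naive regex patterns for a few obvious identifier types.
# Real PII detection needs far more than this (names, addresses, context).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: scrub a meeting transcript snippet before sending it anywhere.
snippet = "Contact Jane at jane.doe@example.com or +64 21 555 0199 re: the merger."
print(redact(snippet))
# Contact Jane at [EMAIL] or [PHONE] re: the merger.
```

A sketch like this is a starting point rather than a safeguard in itself; dedicated PII-detection tooling and human review remain essential for anything genuinely sensitive.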
While regulations such as the privacy acts in both Australia and New Zealand and the European Union’s GDPR provide some protections around data privacy, there are no enforceable AI-specific regulations across Australia or New Zealand preventing data from being used in the ways Zoom was planning. Some countries, such as the United States, have proposals on the table for data protection in the age of AI.
In Australia, the government released a discussion paper in June examining whether the existing regulatory environment is adequate to ensure the safe and responsible development and use of AI.
It notes the requirement for large, rich, high-quality data sets to allow algorithms to be designed, tested and improved.
“However, access to and application of these datasets have the potential for individuals’ data to be used in ways that raise privacy concerns. Privacy protection laws and access to quality data must be carefully balanced to enable fair and accurate results and minimise unwanted bias from AI systems,” the discussion document notes.
Copyright and intellectual property laws, too, could be called on.
For now though, it’s a case of treading warily with your data use – and ensuring transparency – to avoid the public backlash Zoom experienced. That means reading those terms of service and directly asking vendors the tough questions, including who owns the data, how it’s being used, who it’s being shared with, and how long it’s being stored.