Facebook struggles to stem Covid-19 misinformation flow

Published on 20/05/2020 | Written by Heather Wright



AI falters with Covid, but sees success with hate speech…

Facebook says 89 percent of hate speech it removed in the three months to March was detected by its machine learning and AI systems, but it admits Covid misinformation is proving to be more challenging.

The company’s latest Community Standards Enforcement Report shows that a year on from the Christchurch Mosque shootings, 9.6 million pieces of hate speech on Facebook were acted on (up from 5.7 million the previous quarter), with improvements to its machine learning systems enabling it to remove 89 percent of hate posts before users reported them. That’s up from 80 percent in Q4 2019 (and doesn’t include organised hate and terrorism posts).

The improvements come on the back of several updates to the company’s systems, enabling it to train on datasets that don’t have to be manually curated, to build classifiers that understand the same concept in multiple languages, and to analyse ‘multimodal content’ – content, such as memes, that combines images and text.
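To give a flavour of what multimodal analysis involves, here is a minimal sketch of a classifier that fuses a pre-computed image embedding with a text embedding before making a prediction. It is not Facebook’s actual architecture; the layer sizes, fusion approach and two-class output are illustrative assumptions only.

```python
# Minimal sketch of a multimodal (image + text) post classifier.
# All dimensions and the fusion design are assumptions for illustration.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, hidden_dim=512, num_classes=2):
        super().__init__()
        # Project each modality's pre-computed embedding into a shared space
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Fuse the projections and classify (e.g. violating vs. benign)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, image_emb, text_emb):
        fused = torch.cat([self.image_proj(image_emb), self.text_proj(text_emb)], dim=-1)
        return self.classifier(fused)

# Example: classify a batch of four meme-style posts from pre-computed embeddings
model = MultimodalClassifier()
image_emb = torch.randn(4, 2048)   # e.g. from an image backbone
text_emb = torch.randn(4, 768)     # e.g. from a multilingual text encoder
logits = model(image_emb, text_emb)
print(logits.shape)  # torch.Size([4, 2])
```

The point of a design like this is that the meaning of a meme often lives in the combination of picture and caption, so the two signals are judged together rather than separately.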

“These are difficult challenges and our tools are far from perfect.”

When it comes to Covid misinformation and profiteering, Facebook CEO Mark Zuckerberg says the social media platform has been removing any harmful misinformation ‘that could put people in imminent physical danger’ and working to limit the spread of ‘broader misinformation’. That includes putting warning labels – some 50 million in April alone – on dubious content, based on work by independent fact-checking partners.

“We have a good sense that these warning labels work, because 95 percent of the time that someone sees content with a label, they don’t click through to view that content,” Zuckerberg says.

Since the beginning of March the company has also removed more than 2.5 million pieces of content trying to exploit Covid for financial gain, such as the sale of masks, hand sanitiser and Covid test kits.

“But these are difficult challenges, and our tools are far from perfect. Furthermore, the adversarial nature of these challenges means the work will never be done,” the company says.

Other comments from the company suggest that AI isn’t playing quite such a big role when it comes to detecting Covid misinformation, such as posts directing readers to the Plandemic video, which recently spread like wildfire.

In a telling statement, indicating the reliance on human fact checkers as the first line of defence, the company says it’s using its AI systems, including newly deployed ones, to ‘take Covid-19-related material our fact-checking partners have flagged as misinformation and then detect copies when someone tries to share them’.

Mike Schroepfer, Facebook chief technology officer, says one big challenge is the adversarial nature of the work, with people uploading slightly different variations of images designed to evade Facebook’s systems.

The company launched SimSearchNet – a ‘convolutional neural net-based model’ built specifically to detect near-exact duplicates of images – to address the issue. The system enables Facebook to catch the ‘thousands or millions of copies’ of misinformation already flagged by human fact checkers, leaving those fact checkers free to find new instances of misinformation rather than near-identical variations of content they’ve already seen.

“This system runs on every image uploaded to Instagram and Facebook and checks against task-specific human-curated databases,” Facebook says. “This accounts for billions of images being checked per day, including against databases set up to detect Covid-19 misinformation.”
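As a rough illustration of that flag-then-match pipeline, the sketch below checks each new upload against a database of embeddings for images fact checkers have already flagged. The `embed` function and the similarity threshold are placeholder assumptions, not SimSearchNet itself, which is a trained convolutional model.

```python
# Sketch of near-duplicate matching against a curated database of flagged images.
# The embedding function and threshold are stand-ins for illustration only.
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Placeholder embedding: flatten, truncate/pad to 256 values, L2-normalise."""
    vec = image_pixels.astype(np.float32).ravel()[:256]
    vec = np.resize(vec, 256)
    return vec / (np.linalg.norm(vec) + 1e-8)

def is_near_duplicate(query: np.ndarray, database: np.ndarray, threshold: float = 0.98) -> bool:
    """Flag the query if it is very close to any embedding already in the database."""
    sims = database @ query          # cosine similarity (rows are unit vectors)
    return bool(sims.max() >= threshold)

# Example: database of embeddings for images fact checkers have already flagged
flagged_db = np.stack([embed(np.random.rand(64, 64, 3)) for _ in range(1000)])
new_upload = np.random.rand(64, 64, 3)
print(is_near_duplicate(embed(new_upload), flagged_db))
```

The design choice matters for the adversarial problem Schroepfer describes: because matching happens in an embedding space rather than on raw pixels, slightly cropped, recoloured or re-encoded copies still land close to the original and can be caught automatically.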

For Facebook and its aligned businesses, reducing reliance on human fact checkers is a critical factor, with Zuckerberg admitting that the move to working from home has cut the amount of human review available and dented Facebook’s effectiveness. Moderators, who recently settled a US$52 million class action compensating them for mental health issues, particularly PTSD from viewing distressing content, aren’t allowed to access potentially sensitive data from home computers, leaving Facebook relying on its AI systems more than ever.

“We do, unfortunately, expect to make more mistakes until we’re able to ramp everything back up,” he says.
