Facebook’s new ‘independent’ oversight board

Published on 02/07/2019 | Written by Jonathan Cotton


Why is content moderation being driven by Facebook? (Where’s the UN, the G20?)…

Last November Facebook announced it would be conducting ‘consultations’ into its content review process with the intention of creating an external, independent body responsible for deciding what content is and is not appropriate for the social networking site.

“What should be the limits to what people can express?” mused Facebook autocrat Mark Zuckerberg in a blog post at the time. “What content should be distributed and what should be blocked? Who should decide these policies and make enforcement decisions? Who should hold those people accountable?

“I’ve increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own.”

That’s a sensible position when it’s your job to manage the expectations of 2.4 billion users – hence the consultation process, which drew input from more than 2,000 people across 88 countries.

Facebook has now released a summary of the findings of that project in a document titled Global Feedback & Input on the Facebook Oversight Board for Content Decisions.

The document describes the methodology and consultation process in detail and tries hard to summarise a lot of information and conflicting viewpoints.

According to Facebook, three key ‘themes’ emerged from the consultation, and these are reflected in the summary.

“First and foremost, people want a board that exercises independent judgment – not judgment influenced by Facebook management, governments or third parties,” says the summary.

“The board will need a strong foundation for its decision-making, a set of higher-order principles – informed by free expression and international human rights law – that it can refer to when prioritising values like safety and voice, privacy and equality.”

Secondly, Facebook says there is a strong public desire for more transparency around the content review process.

Thirdly, “people want a board that’s as diverse as the many people on Facebook and Instagram.

“These [board] members should be experts who come from different backgrounds, different disciplines and different viewpoints, but who can all represent the interests of a global community.”

It’s all very broad stuff – one of the hazards of attempting something as elusive as ‘global consensus’. That’s a conclusion the report itself reaches.

“The Oversight Board is the work of years, not months,” says the report. “The issues raised during this initial consultation period and discussed in this report are not exhaustive, nor could they be.

“The design of the Board is still only a first step. How it works in practice, and how it is improved when it doesn’t work, will determine the real value of this or any similar enterprise,” reads the report’s conclusion.

“To be clear: the Oversight Board will not solve all of Facebook’s problems,” says Nick Clegg, VP of global affairs and communications.

So what’s next? Those hoping for a clear indication won’t be satisfied with what’s been released so far, and will instead have to wait for the board’s charter, currently scheduled for publication in early August.

Over the past year Facebook has stepped up its investment in content moderation: it says 30,000 employees have been onboarded over the last 12 months for safety, security and content monitoring roles, and the company reviews around two million pieces of content every day.

The social media giant has been working with AI systems to detect certain objectionable content automatically. An example of it in action: 99% of terrorist content – a priority target for the project – is flagged by artificial intelligence before it is reported by a human.

Less egregious behaviour is being addressed as well. Sensationalist or provocative material is significantly more likely to be engaged with. To keep this type of material from dominating news feeds, Facebook’s AI systems are being trained to penalise ‘borderline’ content – content that doesn’t specifically violate Facebook’s terms but is undesirable nonetheless.

“Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average,” says Zuckerberg. “Even when they tell us afterwards they don’t like the content.”
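
To make that mechanism concrete, the sketch below shows one way a ranking penalty on ‘borderline’ content could work in principle. It is a minimal, hypothetical illustration: the function names, the classifier-style borderline score and the quadratic penalty are assumptions made here for clarity, not a description of Facebook’s actual systems.

```python
# Illustrative only: a toy feed-ranking penalty for 'borderline' content.
# Nothing here reflects Facebook's real implementation.

def rank_score(predicted_engagement: float, borderline_score: float) -> float:
    """Down-weight a post the closer its content sits to the policy line.

    predicted_engagement: baseline ranking signal (e.g. expected interactions).
    borderline_score: assumed classifier output, where 0.0 is clearly benign
                      and 1.0 sits right at the removal threshold.
    """
    penalty = borderline_score ** 2  # penalty grows sharply near the line
    return predicted_engagement * (1.0 - penalty)


if __name__ == "__main__":
    # The closer a post gets to the line, the less distribution it earns,
    # inverting the natural engagement curve Zuckerberg describes.
    for score in (0.0, 0.5, 0.9):
        print(f"borderline={score:.1f} -> rank={rank_score(10.0, score):.2f}")
```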

Some will be disappointed at both the pace and scope of Facebook’s content moderation initiative. But if nothing else, the consultations will likely drive further public engagement with Facebook’s plan for managing user content, so making the results of the consultation public – limited though they are – seems worthwhile.

Nominations for the new 40-member oversight panel will be open soon, says Facebook.
