Facebook maths: fake news + privacy breaches = $110b gone

Published on the 30/07/2018 | Written by Pat Pilcher

Does Facebook’s dramatic share value slump simply reflect the numbers or are deeper trust and confidence issues at play?…

After announcing marginally slower growth than analysts expected, Facebook’s share value slumped by around 20 percent, wiping a whopping US$110 billion (NZ$161.8 billion) off its market cap and recording the largest ever single-day loss by any company on the US sharemarket.

And it wasn’t that the results were bad. Quite the opposite. Q2 revenue was up 42 percent on the same quarter last year to hit US$13.2 billion, and user numbers were up 11 percent to 1.47 billion daily active users. Analysts reacted to forecasts of revenue growth slowing to single-digit quarterly growth, but this still equates to around 25 percent annual growth. Facebook remains a gigantic and rapidly growing cash machine.

Some argue that it was all due to the impact of the Cambridge Analytica brouhaha. Others attribute last Thursday’s brutal trading to the EU’s data privacy regulations. A few even say the damage may have come from poor progress selling adverts in its Stories offering.

Speculation aside, one thing is sure: Facebook’s management team has been unwilling to discuss its own theories. A rainforest’s worth of media output has been printed as financial journalists, sensing blood in the water, zeroed in on Facebook’s numbers.

Their analysis makes for sobering reading. Over the last few years Facebook’s margins have typically sat at around the 50 percent mark, but more recently they have dropped to the high 30s. According to Facebook CFO David Wehner, this is because costs are rising faster than revenues. No doubt a major factor is the jump in staff numbers: employee count increased a whopping 47 percent to 30,275.

The rate of user growth is slowing too. Forty percent year-on-year user growth has long been typical for Facebook, but more recently this has dropped. The outlook for user growth in some markets looks decidedly lacklustre. In Europe, Facebook user numbers declined, and in the US market, user growth was almost flat.

Facebook’s statement on Thursday was a profit warning. When these happen with other corporates, it is not unusual for investors to ask if these first few raindrops are signs of an imminent downpour.

The trouble is that Facebook’s investors are already jittery. Facebook’s reputation is coming under intense public scrutiny. The inability of CEO Mark Zuckerberg and his team to deliver anything approaching reassurance around data privacy and fake news has been a significant source of investor nervousness, with some speculating that these two issues could undermine the Facebook business model.

As a result of the Cambridge Analytica scandal, several investigations were launched in the US, UK and India. While Facebook said users willingly handed over data, the reality is significantly more complicated. Facebook’s policy allowed app creators and academics to collect data on users’ friends; selling that data to third parties or using it to target advertising, however, was not permitted.

While the data breach scandal is now dying down, the fake news issue remains a thorn in Facebook’s side. As with everything Facebook related, the numbers are enormous. Of the 1.5 billion daily Facebook users globally, an estimated 81 million accounts are said to be fake. Many of these accounts are used to spread false news.

Dealing with this mountain of data involves sifting through the 300 petabytes held in Hive (Facebook’s data warehouse). Making matters worse, Facebook generates an additional four petabytes of data daily.
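Some rough back-of-the-envelope arithmetic, using only the figures quoted above, shows why the sifting problem keeps getting harder (the doubling-time framing is an illustrative calculation, not a figure from Facebook):

```python
# Rough arithmetic on the data volumes quoted above (sizes in petabytes).
warehouse_pb = 300    # Hive data warehouse size, per the article
daily_growth_pb = 4   # new data generated per day, per the article

# How long until Facebook adds another warehouse's worth of data?
days_to_double = warehouse_pb / daily_growth_pb
yearly_growth_pb = daily_growth_pb * 365

print(days_to_double)    # 75.0 days to add another 300 PB
print(yearly_growth_pb)  # 1460 PB (roughly 1.4 exabytes) per year
```

In other words, at the stated rate the warehouse doubles in well under three months, so any moderation system has to scale with the data, not just with today’s backlog.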

Initially, Facebook relied on algorithms to combat fake news. The company is notoriously secretive about its algorithms, and with good reason: they drive the business. In Facebook’s news feed, an algorithm weighs up thousands of factors to determine whether you see cat pictures or grizzles about the latest idiocy out of parliament.

The news feed is also the most difficult part of the platform for Facebook to control. It has been tuned to grab users’ attention, stimulate their engagement and generally feed their sense of outrage. These same optimisations also work to promote false news.

Facebook declared war on clickbait and false news in 2016, partnering with fact-checkers and tweaking the monetisation of known fake news sites, limiting their ability to generate cash by plying their trade. First and foremost, Facebook has been working on machine-learning systems to deal with clickbait and fake news, but by its own admission these are only about 34 percent successful at identifying it.

While banning purveyors of false news from Facebook had some effect, success ultimately proved limited. Users spreading false news responded by setting up fake accounts and continued to flout Facebook’s community standards. Facebook says it deleted 583 million fake accounts in the first quarter of 2018 alone, and estimates that 3-4 percent of user accounts are fake. With fake accounts producing a daily tsunami of fake news to sift through, the challenge is enormous.

To help with the task, Facebook has integrated machine-learning systems used by its Instagram subsidiary for combatting cyberbullying. The company also keeps substantial human-curated datasets and is using a machine-learning product called DeepText.

A large and rapidly growing number of employees routinely go through hundreds of thousands of posts, both random and suspect. Their job is to identify and classify clickbait. The theory is that the algorithms learn which word combinations are considered clickbait. An extra layer of assurance comes from the machine-learning tools analysing the social connections of the accounts posting suspect material. With enough data and learning, the hope is that this system will be as accurate as a human but vastly faster and more scalable.

While Facebook has a massive database of stories flagged by partnered fact-checkers and user comments, it is still massively reliant on humans to sort the digital wheat from the false-news chaff. Limited success in using machine learning to combat false news has seen Zuckerberg admit to investor analysts that Facebook is ramping up FTEs to fight security and privacy issues, despite warnings that revenue growth is slowing.

Late last year, Zuckerberg stated that Facebook was adding 3,000 more content moderators in addition to the 4,500 moderators already monitoring content on Facebook.

The inescapable fact is that Facebook has suffered a degradation in trust, and trust is a very important ingredient when it comes to investor confidence.
