After much debate, in June, national lawmakers passed a controversial law that would see social media platforms fined over US$50 million for failing to remove content flagged as misleading or hate speech within 24 hours — despite warnings from experts that the measures would likely be entirely unenforceable.
While the legislation applies to any company identifiable as a social network, it is Facebook that has drawn the bulk of political ire on the issue of “fake news” — perhaps understandably, as it is far and away the world’s largest.
Such is the pressure being applied to the company that, in April, Facebook published a report detailing its plans to tackle “fake news” — a phenomenon it defined as a key plank of “information operations” carried out by both government and non-state actors, which seek to use the social media platform “to distort domestic or foreign political sentiment” in order to achieve “a strategic and/or geopolitical outcome.”
The report claimed the company had identified three main components of “information operations” — targeted data collection, content creation and false amplification — a triumvirate used to steal and expose private information, spread false stories to third parties via fake accounts, and manipulate political discussion.
Nonetheless, some critics suggested the publication was a mere public relations exercise, designed to offer the facade of being proactive about an issue widely touted as a problem.
Facebook Cracks Down on Fake News Epidemic… But Admits Problem Non-Existent
In a little-acknowledged section of the report, the authors themselves admitted the reach of both “false amplifiers” and “fake news” was minuscule, with such content accounting for one tenth of one percent of overall civic engagement on Facebook.
The paper also made no mention of whether Facebook would be cracking down on articles about astrology, climate change denial, conspiracy theories, homeopathy, intelligent design, the paranormal and other such content of questionable veracity and value that routinely filters through its network. Nor did it address misleading or stealth advertising, which Facebook sanctions for display across its network and which keeps the company’s lights on.
Moreover, the next month, Zuckerberg himself seemed to row back somewhat on the company’s determined commitment to preventing the proliferation of misleading content in a 6,000-word open letter.
“We know there is misinformation on Facebook, [but] there is not always a clear line between hoaxes, satire and opinion. In a free society, it’s important people have the power to share their opinion, even if others think they’re wrong. Our approach will focus less on banning misinformation, and more on surfacing additional perspectives and information,” he wrote.
Even if only indirectly, the Facebook founder’s missive acknowledged the contradiction at the core of the “fake news” debate — who decides what’s “real” news and what’s not, and how and why, is just as important as the question of which content is “fake” and which is not. Yet politicians, journalists and social media platforms themselves have failed to consider this inconsistency, much less offer a satisfactory remedy — the universal solution is said to be the introduction of fact-checking units.
These teams are charged with identifying potentially dubious content and investigating its reliability. The issue of who fact-checks the fact checkers has gone unremarked upon, much less explored.