If Facebook Became a Digital Censor

Niko Efstathiou
Pro Journo Davos 2017
Feb 2, 2017 · 7 min read

--

December 21, 2017

BEIJING — Amid a mass of red and blue banners — with Facebook’s mission (“Making the world more open and connected”) written across in big white letters — Mark Zuckerberg and Ren Xianliang, China’s deputy head of cyberspace administration, shook hands in front of a packed crowd of excited students at Tsinghua University. They were there to celebrate the company’s official launch in the country. Eight years after China banned Facebook, the company was re-entering the market after authorities deemed Facebook’s content-suppression software to be in full compliance with the country’s censorship laws.

“We are very happy that Facebook and China are restoring the world’s faith in the digital space by making sure that all online issues are related to reality,” Ren told the news conference. Zuckerberg said he was excited to see where 750 million Chinese users would take Facebook’s community. He ended with a salutation in Mandarin, earning an ovation from the audience.

With all the smiles at Tsinghua, it was easy to forget how resistant China had been to Facebook’s attempts to enter its booming online landscape. The ban had come swiftly in 2009, after independence activists in the Xinjiang region used the platform for internal communications; the government blocked all access to the network. In the years that followed, and despite numerous visits by Zuckerberg, China remained unyielding.

But that changed drastically in 2016, when Facebook began developing news-filtering software to potentially satisfy China’s censorship demands. The project initially provoked controversy, but Donald Trump’s election and the furor around fake news upended the context. The story of how Facebook re-entered China is also the story of how the backlash against fake news has reshaped both the technology of censorship and attitudes toward it in the 21st century, in the U.S. and abroad. In fact, what brought the tech titan and the world power together was the global rise of fake news.

Fake News’ Ascent

Following Trump’s surprise victory, fake news quickly became the phrase of the moment, part of a broader turn toward falsehood that had already prompted Oxford Dictionaries to select “post-truth” as its word of the year for 2016. Stories ranging from Pope Francis endorsing Trump for the presidency to Hillary Clinton selling weapons to ISIS sprang up on people’s news feeds in the weeks before the election, dominating the digital public discourse.

View full interactive infographic here: https://public.tableau.com/views/FakeNews/Dashboard1?:embed=y&:display_count=yes

Policymakers, media representatives and tech companies alike expressed concern about the risks that fabricated stories could pose for democracy. Supporters of Trump’s defeated opponent, Hillary Clinton, blamed the phenomenon for her loss. Trump, meanwhile, called media-reported criticisms of him fake. At this year’s World Economic Forum in Davos, Switzerland, a special session gathered senior Facebook and media figures to discuss how fabricated stories can be combated.

“I think anybody who is engaged in media has to figure out how to deal with fake news, and they have to do it quickly,” Joel Benenson, a chief strategist for Clinton’s and Barack Obama’s campaigns, told Pro Journo after the Davos meeting. Benenson acknowledged the dangers in regulating media but made it clear that the unrestrained flood of intentionally false stories would make real democracy impossible.

“There are risks: You don’t want government interfering, and you need a vibrant free press,” Benenson said. “But if you leave these platforms completely open, that is not a sustainable proposition for democracy.”

Initially, Facebook had strongly resisted the idea that it should accept editorial responsibility for the content published on its platform, fearful of the costs and the extra regulation this could invite. But under intense public pressure, and facing a German bill that proposed fines of up to 500,000 euros per fake article, the company began taking steps to demonstrate it was acting against fabricated stories. In January, after a series of false reports targeting German Chancellor Angela Merkel surfaced on social media, the company released a fake-news filtering mechanism that allowed users to report articles as fake.

The system sent articles flagged by a sufficient number of users to Correctiv, a third-party fact-checking nonprofit staffed by investigative journalists who would work to determine the content’s validity. Stories found to be false were then marked as “disputed” and placed under a restriction by Facebook’s news feed algorithm, with a warning sent to users who chose to share them.
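To make the mechanics concrete, here is a minimal sketch of that flag-and-review flow in Python. The threshold, the demotion factor and the class names are assumptions invented for illustration; Facebook has not published the details of its actual system.

```python
# Hypothetical sketch of a flag -> third-party review -> "disputed" flow.
# Threshold, demotion factor and names are illustrative assumptions, not
# Facebook's real implementation.

FLAG_THRESHOLD = 100   # assumed number of user reports before an article is reviewed
DEMOTION_FACTOR = 0.2  # assumed down-weighting applied to disputed stories


class FactChecker:
    """Stand-in for a third-party reviewer such as Correctiv."""

    def review(self, article: dict) -> str:
        # A human investigator would make this call; hard-coded for illustration.
        return "false" if "miracle cure" in article["headline"].lower() else "true"


def handle_flags(article: dict, flag_count: int, checker: FactChecker) -> dict:
    """Send heavily flagged articles to review and demote the ones found false."""
    if flag_count >= FLAG_THRESHOLD and checker.review(article) == "false":
        article["disputed"] = True                 # label shown to users
        article["rank_weight"] *= DEMOTION_FACTOR  # restriction in the feed ranking
    return article


story = {"headline": "Miracle cure endorsed by doctors", "rank_weight": 1.0}
print(handle_flags(story, flag_count=250, checker=FactChecker()))
# -> the story is labeled disputed and demoted, but never deleted
```

The important design choice in the flow described above is that disputed stories are only labeled and demoted; nothing is removed outright.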

The U.S. version of the system arrived in March, with the fact checking delegated to major American news outlets as well as nonprofits. The Associated Press, ABC News, PolitiFact and Snopes signed on. The initiative earned early praise, with some saying it seemed as if Facebook had developed a promising response to counter the post-truth age.

But the enthusiasm was short-lived, and the reception for the new system quickly soured. Several of the news outlets that had signed on to fact-check began to lose interest: the value of experienced reporters spending their time debunking often patently absurd pieces quickly came into question. More important, the sheer number of fake posts being flagged, exacerbated by troll armies, demanded far more staff to police than the half-dozen each organization had assigned.

At the same time, users and media commentators asked why Facebook wasn’t simply removing most of the posts automatically. That demand grew far louder when people realized the tool that would allow this already existed: in China.

The content-suppression algorithm that Facebook developed for China was first reported in 2016. Unlike the U.S. mechanism, it did away with flaggers and fact checkers and simply deleted posts preemptively based on keyword analysis. Facebook itself was not responsible for choosing which posts were removed; that task was handed to a third-party company, which set the keywords.
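What such keyword-based preemptive suppression can look like is sketched below, again in Python and again with invented names and keywords; the workings of the reported software were never made public. The essential difference from the flag-and-review flow above is that matching posts are dropped before anyone sees them, and the keyword list comes from a third party rather than the platform.

```python
# Hypothetical sketch of keyword-based pre-emptive suppression. Function names
# and the sample keyword list are made up for illustration only.

def load_third_party_keywords() -> set[str]:
    # In the arrangement described, a third-party company, not the platform,
    # decides what goes on this list.
    return {"banned topic", "forbidden slogan"}


def should_suppress(post_text: str, keywords: set[str]) -> bool:
    """Return True if a post matches any banned keyword."""
    text = post_text.lower()
    return any(keyword in text for keyword in keywords)


def publish(post_text: str) -> None:
    if should_suppress(post_text, load_third_party_keywords()):
        return          # the post is silently dropped before reaching any feed
    print(post_text)    # otherwise it is published normally


publish("An ordinary status update")    # shown
publish("A post about a banned topic")  # silently suppressed
```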

The algorithm developed for China’s social media networks could be applied just as effectively to fake news reports, and there was growing outcry that the tool existed for China but not for the U.S. In response, Facebook agreed to begin using it in collaboration with some of its fact-checking partners.

Yet as Facebook launched its countermeasures against false stories, Chinese authorities saw the frenzy over fake news as an opportunity to expand censorship over their country’s online activity. Chinese officials quickly began labeling articles critical of their rule “fake news.”

From Propaganda to Censorship

The blurring of debunking and censorship happened quickly. Facebook already generally obeyed foreign governments’ requests to block access to content: between July and December 2015, the company blocked as many as 55,000 pieces of content in about 20 countries. But the new content-suppression mechanism became a faster and far more effective tool for authoritarian governments. Whereas previously governments had to make formal requests, their flaggers and supporters could now do the job for them.

This past March, Turkey became the first country to use the content-suppression software to not just restrict circulation and flag stories but also to remove content altogether. Articles deemed “insulting to the president” or to “Turkish identity” often evaporated from Facebook. Zeynep Gülec, a self-exiled former reporter for the newspaper Hürriyet, published an open letter in which she said “fighting for media freedom in Turkey was always hard, but with Facebook’s new tool it has become almost impossible.”

China, for its part, opened its market to Facebook after the fact-checking responsibility was delegated to Zhenli, a newly created company based in Beijing. Zhenli is nominally independent, but its ownership is murky, tied to businessmen with links to senior Communist Party officials, and a number of its staffers previously worked at state-run news agencies.

Back at Home

Back in the U.S., the effects of Facebook’s content filtering have been less monolithic. The system’s rollout met with a fierce public backlash and a polarized conventional media. Some of Facebook’s fact-checking partners pulled out, warning that the arrangement risked giving the impression they had the right to delete competitors’ content. Regulators warned that the system risked breaching antitrust laws.

Facebook, which had hired an editorial staff to help manage the filtering, has become the focus of intense partisan criticism, particularly from Republicans who say the mechanism is rigged. President Trump has attacked Facebook’s and others’ news-filtering services, accusing the “crooked fact-checkers” of bias and of backing an agenda critical of his administration. “Journalists are among the most dishonest human beings on earth,” Trump has said repeatedly.

Democrats have also attacked the mechanism after a series of embarrassing deletions of ironic posts and those examining distressing issues, such as gun violence and pedophilia. Civil liberties groups have questioned the desirability of controlling speech so strongly, even when it is false. Meanwhile, alternative social media networks have become increasingly popular among right-wing users.

It’s been a year since fake news was the expression of the moment at Davos, and concerns about government abuse around the world have proved well founded. Facebook’s news-filtering tool has opened a Pandora’s box of digital censorship, with authoritarian governments across the globe requesting similar, and increasingly harsher, content-removal policies, but without the accountability guaranteed by Western liberal democracies. And even in those democracies, the tool has not restored the healthy democratic debate many had hoped for.

It seems that fake news has trapped Facebook between Scylla and Charybdis. If it leaves this content unchecked, it allows for propaganda to dominate its news feeds. But if it filters the fake reports, it effectively facilitates censorship.
