Facebook Missed 10.1 Billion Opportunities to Stop Juicing Hoaxes and Propaganda in 2020, Study Finds

Photo: Lionel Bonaventure (Getty Images)

If Facebook had moved sooner to restrict hoax, toxic, misleading, and other content attempting to interfere with the 2020 elections, it could have cut that content’s reach by an estimated 10.1 billion views, according to a report by the advocacy group Avaaz.

Researchers for Avaaz identified the 100 highest-performing pages on Facebook that had shared content classified as misinformation by the company’s own third-party fact-checkers at least twice within a 90-day period, per Time. Those pages received significantly more engagement than others throughout the summer of 2020, amid the run-up to the election, the novel coronavirus pandemic, and nationwide protests against police racism and brutality triggered by the Minneapolis police killing of Black resident George Floyd.


Avaaz found that “had Facebook tackled misinformation more aggressively when the pandemic first hit in March 2020 (rather than waiting until October), the platform could have stopped 10.1 billion estimated views of content on the top-performing pages that repeatedly shared misinformation ahead of Election Day.” That delay allowed some of the disreputable pages to rival major networks with far more followers, such as CNN, in social media interactions in July and August, at the height of Black Lives Matter protests, according to the report.


While the researchers took pains to note that disinformation targets individuals of pretty much every ideology on Facebook, the majority of it came from right-leaning pages, dovetailing with prior research showing that far-right misinformation spreads farther and wider on the site than other types of content:

[...] It is important to note that the pages we identified spanned the political spectrum, ranging from far-right pages to libertarian and far-left pages. Of the 100 pages, 60 leaned right, 32 leaned left, and eight had no clear political affiliation. Digging further into the data, we were able to analyse the breakdown of the misinformation posts that were shared by these pages and we found that 61% were from right-leaning pages. This misinformation content from right-leaning pages also secured the majority of views on the misinformation we analysed - securing 62% of the total interactions on that type of content.


In fact, Facebook did worse at policing content it knew was fake or misleading than it had the year before. Avaaz found that the top 100 stories that had been “debunked by fact-checkers working in partnership with Facebook” still racked up 162 million views, “showing a much higher rate of engagement on top misinformation stories than the year before, despite all of Facebook’s promises to act effectively against misinformation that is fact-checked by independent fact-checkers.”

The report argues that even when Facebook identified misinformation, its automated systems regularly failed to detect copycat accounts reposting the same material. Before the runoff elections in Georgia, Avaaz found, more than 100 political ads containing claims debunked by Facebook fact-checkers were allowed to rack up 52 million views. Nearly half came from Senate candidates, who, like other political candidates, are exempt from Facebook’s rules on misinformation in ads and thus free to lie in them.


“The scary thing is that this is just for the top 100 pages—this is not the whole universe of misinformation,” Fadi Quran, an Avaaz campaign director and one of the authors of the report, told Time. “This doesn’t even include Facebook Groups, so the number is likely much bigger. We took a very, very conservative estimate in this case.”

Avaaz also found that Facebook did little to interfere with the explosive growth of Pages and Groups calling for violence during and after the 2020 elections, which eventually played a significant role in the pro-Donald Trump riot at the Capitol on Jan. 6, where five people died.


The group’s researchers identified a subset of Pages and Groups that had “adopted terms or expressions in their names and/or ‘About’ sections commonly associated with violent extremist movements in the USA (e.g. QAnon, Three Percenters, Oathkeepers, Boogaloo and variants such as ‘Big Igloo,’ etc).” To that list, they added Pages and Groups “glorifying violence or that call for, praise, or make light of the death or maiming of individuals for their political beliefs, ethnicity, sexual orientation, as well as tropes and imagery commonly associated with extremist actors.”

The resulting 267 Pages and Groups had a combined following of roughly 32 million. Over two-thirds of them (68.7 percent) had names referencing the QAnon conspiracy theory, Boogaloo (a loosely organized anti-government movement popular on the far right), or various militias. Of those, 118 were “still active on the platform and have a following of just under 27 million [as of Feb. 24]—of which 59 are Boogaloo, QAnon or militia-aligned.”


Facebook, which has long insisted it is rising to the challenge of keeping its platform free of hate speech and calls for violence, said the report wasn’t accurate.

Company spokesperson Andy Stone told Mother Jones that the “report distorts the serious work we’ve been doing to fight violent extremism and misinformation on our platform” and relies on “a flawed methodology to make people think that just because a Page shares a piece of fact-checked content, all the content on that Page is problematic.”


According to the Associated Press, Facebook said on Monday that of the 118 Pages and Groups identified by Avaaz, it had already taken down four and would remove another 14, saying they “actually violated” the company’s policies.

Compelling Facebook to do more may be easier said than done. Avaaz’s recommendations in the report include regulation requiring social media firms to issue “comprehensive reports on disinformation and misinformation, measures taken against it, and the design and operation of their curation algorithms,” to downrank “hateful, misleading, and toxic content from the top of people’s feeds,” and to notify users who have seen or interacted with misinformation. Each of these solutions raises obvious First Amendment issues, given that they would require the government to define certain categories of speech and compel private companies to act on that basis.


Another solution proposed by Avaaz is rewriting Section 230 of the Communications Decency Act, the law that shields websites from most liability for user-generated content and is one of the building blocks of the modern web. The group recommended that the law be altered to “eliminate any barriers to regulation requiring platforms to address disinformation and misinformation,” but did not specify which changes it had in mind.

While pressure has been building on both sides of the aisle to change Section 230—with congressional Dems generally citing misinformation and hate speech, and their Republican colleagues citing baseless conspiracy theories about liberal bias—experts on internet law have generally shot down those efforts, saying they could have massive unintended consequences across the internet. Avaaz acknowledged as much, with its team writing in the report that Joe Biden’s “administration should not pursue the wholesale repeal of Section 230.”
