
    Following Musk’s lead, YouTube and Facebook are giving up on policing conspiracies

    Facebook and YouTube are receding from their role as watchdogs against conspiracy theories ahead of the 2024 presidential election

    Illustration: Three small pools with social media icons for YouTube, Facebook, and Twitter (now X), behind an empty lifeguard chair bearing the sign "swim at your own risk." (Washington Post illustration; iStock)

    Social media companies are receding from their role as watchdogs against political misinformation, abandoning their most aggressive efforts to police online falsehoods in a trend expected to profoundly affect the 2024 presidential election.

    An array of circumstances is fueling the retreat: Mass layoffs at Meta and other major tech companies have gutted teams dedicated to promoting accurate information online. An aggressive legal battle over claims that the Biden administration pressured social media platforms to silence certain speech has blocked a key path to detecting election interference.

    And X CEO Elon Musk has reset industry standards, rolling back strict rules against misinformation on the site formerly known as Twitter. In a sign of Musk’s influence, Meta briefly considered a plan last year to ban all political advertising on Facebook. The company shelved it after Musk announced plans to transform rival Twitter into a haven for free speech, according to two people familiar with the plans who spoke on the condition of anonymity to describe sensitive matters.

    The retrenchment comes just months ahead of the 2024 primaries, as GOP front-runner Donald Trump continues to rally supporters with false claims that election fraud drove his 2020 loss to Joe Biden. Multiple investigations into the election have revealed no evidence of fraud, and Trump now faces federal criminal charges connected to his efforts to overturn the election. Still, YouTube, X and Meta have stopped labeling or removing posts that repeat Trump’s claims, even as voters increasingly get their news on social media.

    Trump capitalized on those relaxed standards in his recent interview with former Fox News host Tucker Carlson, hosted by X. The former president punctuated the conversation, which streamed Wednesday night during the first Republican primary debate of the 2024 campaign, with false claims that the 2020 election was “rigged” and that the Democrats had “cheated” to elect Biden.

    On Thursday night, Trump posted on X for the first time since he was kicked off the site, then known as Twitter, following the Jan. 6, 2021, assault on the U.S. Capitol. Musk reinstated his account in November. The former president posted his mug shot from Fulton County, Ga., where he was booked Thursday on charges connected to his efforts to overturn the 2020 election. “NEVER SURRENDER!” read the caption.

    The evolution of the companies’ practices was described by more than a dozen current and former employees, many of them speaking on the condition of anonymity to offer sensitive details. The new approach marks a sharp shift from the 2020 election, when social media companies expanded their efforts to police disinformation. The companies feared a repeat of 2016, when Russian trolls attempted to interfere in the U.S. presidential campaign, turning the platforms into tools of political manipulation and division.

    These pared-down commitments emerge as covert influence campaigns from Russia and China have grown more aggressive, and advances in generative artificial intelligence have created new tools for misleading voters.

    Experts in disinformation say the dynamics heading into 2024 call for stepping up efforts to combat falsehoods, not scaling them back.

    “Musk has taken the bar and put it on the floor,” said Emily Bell, a professor at the Tow Center for Digital Journalism at Columbia University, where she studies the relationship between tech platforms and news publishers. For the 2024 presidential election, misinformation around races is “going to be even worse,” she added.

    The social media platforms say they still have tools to prevent the spread of misinformation.

    “We remove content that misleads voters on how to vote or encourages interference in the democratic process,” YouTube spokesperson Ivy Choi said in a statement. “Additionally, we connect people to authoritative election news and information through recommendations and information panels.”

    Meta spokeswoman Erin McPike said in a statement that “protecting the U.S. 2024 elections is one of our top priorities, and our integrity efforts continue to lead the industry.”

    Yet the retreat is already changing what some users see online. Earlier this month, the founder of a musical cruise company posted a screenshot on Facebook that appeared to show Illinois Gov. J.B. Pritzker (D) signing a bill, falsely claiming the measure would allow undocumented immigrants to become police officers and sheriff’s deputies. “In Illinois American citizens will be arrested by illegals,” reads the post, which has been shared more than 260 times.

    Fact-checkers at USA Today, one of dozens of media organizations Meta pays to debunk viral conspiracies, deemed the post false, and the company labeled it on Facebook as “false information.” But Meta has quietly begun offering users new controls to opt out of the fact-checking program, allowing debunked posts such as the one about Pritzker to spread in those users’ news feeds with a warning label. Conservatives have long criticized Meta’s fact-checking system, arguing it is biased against them.

    Meta Global Affairs President Nick Clegg said the ability to opt out represents a new direction that empowers users and eases scrutiny over the company. “We feel we’ve moved quite dramatically in favor of giving users greater control over even quite controversial sensitive content,” Clegg said. McPike added that the new fact-checking policy comes “in response to users telling us that they want a greater ability to decide what they see.”

    YouTube has also backed away from policing misleading claims, announcing in June it would no longer remove videos falsely saying the 2020 presidential election was stolen from Trump. Continuing to enforce the ban would curtail political speech without “meaningfully reducing the risk of violence or other real-world harm,” the company argued in a blog post.

    The shifts are a reaction by social media executives who, battered by contentious battles over content, have concluded there is “no winning,” said Katie Harbath, former director of public policy at Facebook, where she managed elections strategy across the company.

    “For Democrats, we weren’t taking down enough, and for Republicans we were taking down too much,” she said. The result was an overall sense that “after doing all this, we’re still getting yelled at … It’s just not worth it anymore.”

    For years, many of Meta’s trust and safety teams operated like a university. Driven by curiosity, employees were encouraged to seek out the thorniest problems on the platform — issues such as fraud, abuse, bias and attempts at voter suppression — and develop systems to help.

    But in the last year and a half, some workers say, there has been a shift away from that proactive stance. Instead, they are now asked to spend more of their time figuring out how to minimally comply with a growing list of global regulations, according to four current and former employees.

    That’s a departure from the approach tech companies took after Russia manipulated social media to attempt to swing the 2016 election to Trump. The incident transformed Mark Zuckerberg into a symbol of corporate recklessness. So the Meta CEO vowed to do better.

    He embarked on a public contrition tour and vowed to devote the company’s seemingly infinite resources to protecting democracy. “The most important thing I care about right now is making sure no one interferes with the various … elections around the world,” Zuckerberg told two Senate committees in 2018, the same year a Wired cover depicted him with a bruised and bloody face.

    In the run-up to the 2020 presidential election, social media companies ramped up investigative teams to quash foreign influence campaigns and paid thousands of content moderators to debunk viral conspiracies. Ahead of the 2018 midterms, Meta gave reporters tours of its so-called war room, where employees monitored violent threats in real time.

    Civil rights groups pressured the platforms — including in meetings with Zuckerberg and Meta COO Sheryl Sandberg — to bolster their election policies, arguing the pandemic and popularity of mail-in ballots created an opening for bad actors to confuse voters about the electoral process.

    “These platforms were making all sorts of commitments to content moderation and to racial justice and civil rights in general,” said Color of Change President Rashad Robinson, whose racial justice group helped organize an advertising boycott by more than 1,000 companies including Coca-Cola, The North Face and Verizon following the police murder of George Floyd.

    The platforms instituted strict rules against posts that might lead to voter suppression. As Trump questioned the validity of mail-in ballots in 2020, Facebook and Twitter took the unprecedented step of attaching labels such as “This claim about election fraud is disputed” to scores of misleading posts. Google restricted election-related ads and touted its work with government agencies, including the FBI’s Foreign Influence Task Force, to prevent election interference campaigns.

    In early January 2021, rioters incited by Trump assaulted the U.S. Capitol after organizing themselves, in part, on Facebook and Twitter. In response, Meta, Twitter, Google and other tech companies suspended Trump, forcibly removing the president from their platforms.

    The moment was the peak of social media companies’ confrontation with political misinformation.

    But as the tech giants grappled with narrowing profits, this proactive stance began to dissolve.

    In the summer of 2021, Meta’s Clegg embarked on a campaign to convince Zuckerberg and the company’s board members to end all political advertising on its social media networks — a policy already in place at Twitter. Meta’s decision not to fact-check politicians’ speech had triggered years of controversy, with activists accusing the company of profiting off the misinformation contained in some campaign ads. Clegg argued the ads caused Meta more political trouble than they were worth.

    While Zuckerberg and other board members were skeptical, the company eventually warmed to the idea. Meta even planned to announce the new policy, according to two people.

    By July 2022, the proposal had been shelved indefinitely. Internal momentum to impose the new rule seemed to plummet after Musk boasted of his plans to turn Twitter into a safe haven for “free speech,” a principle Zuckerberg and some board members had always lauded, one of the people said.

    After Musk’s official takeover later that fall, Twitter would eventually rescind its own ban against political ads.

    “Elon’s position on that stuff definitely shifted the way the board and industry thought about [policy],” said one person who was briefed on the board discussions about the ad ban at Meta. “He came in and kinda blew it all up.”

    Almost immediately, Musk’s reign at Twitter forced his peers to rethink other industry standards.

    On his first night as owner, Musk fired Trust and Safety head Vijaya Gadde, whose job it was to guard the company’s users against fraud, harassment and offensive content. Soon after, just days before the midterms, the company laid off more than half of its 7,500 workers, crippling the teams responsible for making high-stakes decisions about falsehoods.

    The cuts and the evolving approach to moderating toxic content prompted advertisers to flee. But while advertisers were leaving, other tech companies were paying close attention to Musk’s moves.

    In a June interview with the right-leaning tech podcast host Lex Fridman, Zuckerberg said Musk’s decision to make drastic cuts to Twitter’s workforce — including by cutting non-engineers who worked on things such as public policy but didn’t build products — encouraged other tech leaders such as himself to consider making similar changes.

    “It was probably good for the industry that he made those changes,” Zuckerberg said. (Meta has since laid off more than 20,000 workers, part of an industry-wide trend.)

    Musk reinstated high-profile conservative Twitter accounts, including Jordan Peterson, a professor who was banned from Twitter for misgendering a trans person, and the Babylon Bee, a conservative media company. Musk also brought back Republican politicians including Trump and Rep. Marjorie Taylor Greene (Ga.), whose personal account was banned for violating the platform’s covid-19 misinformation policies. He simultaneously suspended the accounts of journalists including Washington Post reporter Drew Harwell, CNN reporter Donie O’Sullivan and others who reported on Musk.

    A spike in hate speech on the site followed as users tested boundaries.

    The political winds facing Silicon Valley were shifting, too. Trump’s 2020 election rigging claims had inspired a slew of Republican candidates to echo his rhetoric, cementing election denialism as a core Republican talking point. In a May poll by CNN, 6 in 10 Republican voters said they believed Trump’s falsehoods that the 2020 election was rigged.

    Soon after Musk’s Twitter acquisition, scores of Republican candidates and right-wing influencers tested Meta, Twitter and other social media platforms’ resolve to fight election misinformation. In the months leading up to the midterms, far-right personalities and GOP candidates continued to spread election denialism on social media virtually unchecked.

    Mark Finchem, the Republican candidate seeking to oversee Arizona’s election system as the state’s secretary of state, made a fundraising pitch on the eve of the 2022 election, falsely arguing on Facebook and Twitter that his Democratic opponent, Adrian Fontes, was a member of the Chinese Communist Party and a “cartel criminal” who had “rigged elections” before.

    When Twitter, seemingly in response to journalists’ questions, appeared to restrict his account, Musk declared he was “looking into” complaints that Finchem was being censored. Later that evening, Finchem was back to tweeting his message. He thanked Musk “for stopping the commie who suspended me from Twitter a week before the election.”

    Last year, Meta dissolved the responsible innovation team, a small group that evaluated the potential risks of some of Meta’s products, according to a person familiar with the matter, and simultaneously shuttered the much-touted Facebook Journalism Project, which was designed to promote quality information on the platform.

    “What was once promoted as part of an essential component of Meta’s role in helping secure democracy, election integrity and a healthy information ecosystem, appears now to have been expendable,” said Jim Friedlich, executive director of the Lenfest Institute for Journalism, which served for two years as a lead partner in helping execute Facebook’s journalism grantmaking.

    Now, Meta is eyeing ways to cut down on having to referee controversial political content on its new Twitter-like social media app, Threads. Instagram head Adam Mosseri, who led efforts to build Threads, said earlier this year that the platform would not actively “encourage” politics and “hard news,” because the extra user engagement is not worth the scrutiny.

    But even as it tries to retreat from the political culture wars, there’s no hiding from the coming election.

    Soon after the company launched Threads, Meta started warning users who tried to follow Donald Trump Jr. on the new social network that his account had repeatedly posted false information reviewed by independent fact-checkers. Trump Jr. posted a screenshot of the message on rival Twitter, complaining that “Threads not exactly off to a great start.”

    A Meta spokesperson responded by saying, “This was an error and shouldn’t have happened. It’s been fixed.”

    After the incident, Clegg told The Post he hopes such politically fraught debates will eventually disappear.

    “I hope over time we’ll have less of a discussion about what our big, crude algorithmic choices are and more about whether you guys feel that the individual controls we’re giving you on Threads feel meaningful to you,” he said.
