Technology

Social media platforms’ ‘flawed policies’ amplify election fraud claims: report 

Social media companies have weak policies on misinformation and have failed to enforce them consistently ahead of the 2022 midterms, according to a new report released Monday.

The report, from New York University’s Stern Center for Business and Human Rights, faults Meta, Twitter, YouTube and TikTok for not taking a proactive approach to address misinformation, including a growing trend of election denialism and false claims of fraud.

The authors say this lack of a proactive approach threatens the upcoming election.

Although social media companies have pledged to tackle election misinformation, the report said the companies’ “flawed policies and inconsistent enforcement result in the continued amplification of election denialism, especially in key battleground states.”

The report highlights Facebook’s continued exemption of politicians from its fact-checking program as a concern for amplifying the spread of election denialism. 

It also slams Twitter for “on-again/off-again enforcement” of its Civic Integrity Policy in a way that has allowed election denialism to “gain momentum since early 2021.”

The report also focuses on video content, stating that YouTube has allowed its platform to be “exploited by proponents of disinformation” and that TikTok is “increasingly plagued by political misinformation.” 

Paul Barrett, an author of the report, said the risk is heightened by the number of Republican candidates who have embraced election denialism. 

“In a sense, the problems are intensifying, even though it’s an off year, even though it’s not a presidential election. And I don’t think the platforms have appreciated the degree to which election denialism seems to have become sort of a permanent part of Republican politics,” he told The Hill.

“Rather than being extra vigilant, it seems like the platforms are kind of just going through the motions there. They’ve put out statements and said these are our policies — they’re basically the same policies they’ve had in the past,” he said. 

The platforms responded to the researchers by defending the policies they have in place.

Meta spokesperson Tom Reynolds said in a response cited in the report that the company’s systems are designed to “reduce misinformation, not to amplify it.”

“Any suggestion otherwise is wrong. We use a combination of artificial intelligence, human review, and input from partners — including fact-checkers — to address problematic content, which again is not aligned with our business interests,” Reynolds said.

The report says YouTube did not provide an on-the-record response, but the company earlier this month put out an announcement about its efforts to combat misinformation, including a pledge to enforce policies “consistently for everyone, regardless of the speaker’s public figure status.”

A TikTok spokesperson said in a statement cited in the report that the company prohibits and removes election misinformation and works with fact-checkers to assess content. 

Twitter, which has borne the brunt of scrutiny after a whistleblower recently came forward alleging widespread security deficiencies, told the report’s authors it triages resources to monitor election-related misinformation in the U.S. and globally.

To address the issue, the report recommends greater transparency around the platforms’ algorithms, whether disclosed voluntarily by the companies or compelled by legislation.

It also calls for independent audits to check on the platforms, enhanced fact-checking, removal of “demonstrably false content” and consistent policies. 

The report also says platforms need to focus more on the “next threat,” acting proactively rather than reactively.

“Being able to figure out what problems are on the horizon and what they want to do about them before they are disseminated widely on their platforms would be a huge improvement,” Barrett said.