Cybersecurity

Report highlights Instagram, deepfake videos as key disinformation threats in 2020 elections

Instagram will likely be the main social media platform used to disseminate disinformation during the 2020 election, while altered “deepfake” videos of candidates will pose a threat as well, according to a report released Wednesday.

The report on disinformation tactics during the 2020 election, put together by New York University’s (NYU) Stern Center for Business and Human Rights, also pinpointed China, Russia and Iran as countries likely to launch such attacks against the U.S. in the lead-up to the elections.

But foreign states will not be alone: NYU found that domestic sources of disinformation, meaning users within the U.S. who create and circulate false content, will be more prevalent than overseas sources.

Voter suppression will be the main goal of both streams of disinformation, with the report warning that “unwitting Americans” could also be manipulated into participating in rallies and protests.

While the NYU report acknowledged that “social media companies are playing better defense than they did in 2016,” it called on them “to step up their games in anticipation of 2020.”

Michael Posner, the director of NYU’s Stern Center, said in a statement that “taking steps to combat disinformation isn’t just the right thing to do, it’s in the social media companies’ best interests as well.”

“Fighting disinformation ultimately can help restore their damaged brand reputations and slows demands for governmental content regulation, which creates problems relating to free speech,” Posner added. “The platforms should seize this opportunity to implement sensible, necessary reforms that help build public trust and confidence in them while safeguarding our elections against improper interference.” 

The report comes after Russia carried out a disinformation campaign in 2016 through social media platforms such as Facebook.

The NYU report includes a list of recommendations that social media companies can consider to defend themselves against the spread of disinformation. 

These include improving the identification of deepfake videos, removing false content instead of opting not to promote it, stepping up efforts to patrol for disinformation on Instagram and WhatsApp, and hiring a senior content overseer to report to the CEO of each platform on efforts to guard against disinformation. 

Another step NYU recommends is the passage of the Honest Ads Act. The bipartisan bill, reintroduced earlier this year after it didn’t get a vote during the last Congress, is sponsored by Sens. Amy Klobuchar (D-Minn.), Lindsey Graham (R-S.C.), and Mark Warner (D-Va.), and aims to improve transparency around who purchases political ads.  

In an interview with The Hill, Paul Barrett, the report’s author and the deputy director of NYU’s Stern Center, said that he “doesn’t think the Honest Ads Act has much of a chance” of passage due to a lack of Republican support for election security bills in the Senate. 

Barrett noted that this is a difficult area to legislate, and argued that Congress should prioritize educating the public on the disinformation threat instead. 

“Congress would be making a huge contribution if there were hearings, particularly if there were bipartisan hearings … that educate people as to where we’ve been and what’s likely to come,” Barrett said. “We need more digital literacy, and Congress could use its position to provide that.”

Last month, Instagram rolled out a new tool that allows users to report misinformation on the platform. The flagged content is then reviewed by Instagram’s fact-checking program. If the content is deemed false, Instagram, which is owned by Facebook, will allow it to remain up but will not promote it.

According to Instagram, the company will soon update its policies around removing “extreme forms of misinformation.”

Instagram follows a “remove, reduce, inform” policy around misinformation, removing content that violates policies, reducing the spread of content that is “problematic” but does not violate policies, and informing users with additional information about that content. 

“We know that our adversaries are always changing their techniques, so we are constantly working to stay ahead,” said Facebook spokesman Tom Reynolds about Instagram’s disinformation policy.

“We’ve developed smarter tools, greater transparency, and stronger partnerships to better identify emerging threats, stop bad actors, and reduce the spread of misinformation. We also know that security is never finished and we can’t do this alone, so we are working with policymakers and outside experts to make sure we continue to improve,” he added.

Facebook also took steps to address disinformation threats by revamping its political advertisements policy last week, tightening its identification rules for those trying to buy ads. 

Google published a white paper in February detailing its approach to disinformation, which is based around “making quality count in our ranking systems, counteracting malicious actors, and giving users more context.”

Despite these strides in the fight against disinformation, Barrett cautioned against excessive optimism.

“I think this is a situation that is going to require tremendous effort on the part of these companies and law enforcement, and we have to be prepared for potentially a chaotic situation,” Barrett said. “The fact that we anticipate it is good, but we don’t know what we don’t know.”