
The only way to manage disinformation in the 2020 election: Question everything


The World Health Organization has said the world isn’t battling just a pandemic but also an “infodemic” — “an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”

Consider that nearly half of the Twitter accounts posting about COVID-19 are likely bots aimed at sowing discord in the United States, according to a Carnegie Mellon University study.

Social media meddling emerged after the death of George Floyd as well; U.S. government officials believe foreign adversaries have flooded social media "to amplify criticism of the United States."

Of course, we've seen this movie before. Fake news pumped out by Twitter bots may have played a decisive role in the 2016 Brexit vote and in Donald Trump's presidential victory, according to the private, nonprofit National Bureau of Economic Research.

As the 2020 election season nears, it is clear that many of the same tactics will be employed by those who seek to sow chaos and division in our electoral process. The threat may be even greater than in 2016 because social media usage has only increased during the pandemic.

Are we powerless to stop this malfeasance?

The answer may lie in the same “zero trust” approach that enterprise cybersecurity leaders have embraced to protect their networks from hackers.

What is ‘zero trust’?

As it pertains to information security, zero trust refers to the idea that everything connecting to a company's infrastructure, whether a device, a document or a person, should be regarded as untrustworthy until proven otherwise.

Zero trust is, on its face, a simple yet powerful concept: eliminate the assumption of trust within a network. Yet the idea runs counter to what makes the internet such a dynamic and useful tool: anyone can share anything at any time.

This is the fine line companies walk as they implement a zero trust approach: How do you provide the unfettered access users demand without introducing unnecessary risk to your network environment?
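To make the concept concrete, here is a minimal sketch of deny-by-default authorization. Everything in it, the user names, device IDs and lookup tables, is a hypothetical stand-in for real identity, device-posture and policy services; it illustrates the principle, not a production design.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for an identity provider, a device-posture
# service and a policy engine.
VERIFIED_USERS = {"alice"}
HEALTHY_DEVICES = {"laptop-042"}
PERMISSIONS = {("alice", "payroll-db")}

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    resource: str

def authorize(req: AccessRequest) -> bool:
    """Deny by default: trust is established per request, never assumed."""
    if req.user_id not in VERIFIED_USERS:
        return False                      # unknown or unverified user
    if req.device_id not in HEALTHY_DEVICES:
        return False                      # device fails posture checks
    if (req.user_id, req.resource) not in PERMISSIONS:
        return False                      # no explicit grant for this resource
    return True

print(authorize(AccessRequest("alice", "laptop-042", "payroll-db")))  # True
print(authorize(AccessRequest("alice", "old-phone", "payroll-db")))   # False
```

The design choice worth noting is that no single attribute confers trust: a verified user on an unknown device is refused just like an unknown user.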

In a similar vein, in a world in which more people are acquiring and sharing their news via the unfiltered channels of social media, how can you keep the disinformation trolls at bay and minimize their impact?

What the social media platforms can do

Social media platforms have a responsibility to do everything in their power to stop misinformation from spreading. They say they're trying. Facebook has been putting warning labels on some COVID-19-related content. Twitter has taken similar steps (even if the tweets are by Trump).

These steps suggest the platforms are shifting toward a fact-verification strategy. But do they go far enough? While Facebook and Twitter are doing more to monitor and flag misinformation after it appears, they could do more to stop the fakery before it happens.

A best practice that enterprises use to avoid attacks is threat modeling. As defined by the Open Web Application Security Project, “threat modeling works to identify, communicate, and understand threats and mitigations within the context of protecting something of value.”

This more holistic approach attempts to identify every exposure point that malicious actors could exploit and then to mitigate each threat. By doing a better job of identifying fake accounts as soon as they're created, for example, Twitter could nip misinformation in the bud rather than chasing it after posts appear.
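As an illustration only, a threat model in the OWASP spirit can start as a simple enumeration of assets, threats and mitigations. The entries below are hypothetical examples for a social platform, not any company's actual model.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str        # something of value being protected
    threat: str       # how a malicious actor could abuse it
    mitigation: str   # how the exposure is reduced

# Hypothetical entries; a real threat model would be far larger.
MODEL = [
    Threat("account creation", "bulk registration of bot accounts",
           "rate-limit signups and screen new accounts at creation time"),
    Threat("posting pipeline", "coordinated amplification of one message",
           "flag many accounts posting identical text in a short window"),
    Threat("trending topics", "manipulation by synchronized retweets",
           "weight trends by account age and participant diversity"),
]

for t in MODEL:
    print(f"{t.asset}: {t.threat} -> mitigated by {t.mitigation}")
```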

What we all can do

Social media platforms have an obligation to do more, but every one of us can be smarter about spotting perpetrators of misinformation.

The Black Lives Matter Global Network issued some excellent guidelines last fall, and we'd all be wise to heed them as the election approaches. They include examining an account's bio for anything suspicious, looking for accounts that post and retweet in unusual patterns, being wary of accounts that claim to be in the U.S. but almost always post from a non-U.S. time zone, and watching for multiple, apparently different accounts posting the same message.
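Those guidelines translate naturally into a crude scoring heuristic. The sketch below is an assumption-laden illustration: the fields, thresholds and weights are invented for this example, and real platforms rely on far richer signals.

```python
from dataclasses import dataclass

@dataclass
class Account:
    bio: str
    posts_per_day: float
    claims_us_location: bool
    us_hours_post_share: float   # share of posts during U.S. waking hours
    duplicate_post_share: float  # share of posts identical to other accounts'

def suspicion_score(a: Account) -> int:
    """Count how many of the guidelines' red flags an account trips (0-4)."""
    score = 0
    if not a.bio.strip():                                  # empty or placeholder bio
        score += 1
    if a.posts_per_day > 100:                              # inhumanly high posting rate
        score += 1
    if a.claims_us_location and a.us_hours_post_share < 0.2:
        score += 1                                         # U.S. claim, non-U.S. clock
    if a.duplicate_post_share > 0.5:                       # mostly copied messages
        score += 1
    return score

troll = Account(bio="", posts_per_day=240, claims_us_location=True,
                us_hours_post_share=0.05, duplicate_post_share=0.8)
print(suspicion_score(troll))  # 4: trips every red flag
```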

The promise of the internet was in its potential to democratize and decentralize the flow of ideas. In that respect, it has been an unabashed success. But as we’ve learned over the past four years, the free flow of ideas between individuals should not be conflated with the notion of trust.

A more aggressive approach presumes that untrusted agents exist both inside and outside the network, and therefore trust must be removed from the equation. Perhaps ironically, eliminating the assumption of trust might in fact be the best way to ensure trust in our most sacred of institutions.

Kunal Anand is Chief Technology Officer at cybersecurity company Imperva. He previously co-founded a company called Prevoty (which was acquired by Imperva) and served as director of technology at BBC Worldwide. He also previously held security/engineering roles at Gravity, Myspace and the NASA Jet Propulsion Lab.
