There’s always a palpable sense of uncertainty before the U.S. midterms: Which party will control Congress? And which policy agendas will be reinforced or derailed? But in recent years, there’s an added element of unease: How will tech platforms undermine civility and public trust in the election?
Despite boasting of election war rooms and armies of content moderators, social media platforms continue to foster an environment that often harms, rather than helps, democracy. For the past several years, my nonprofit, the Mozilla Foundation, has been monitoring elections online not just in the U.S., but also in Germany and Kenya. And while each region has its own context, we have identified a handful of universal problems: dark-money digital ads, misled voters and opaque algorithms that fuel lies just as often as truth.
For years, American voters — and residents of other democracies — have had to endure digital political ads of mysterious provenance. They’ve had to weather campaign pitches masquerading as organic content and online foreign influence operations. And they’ve had to confront fake accounts, botched fact-checking and hyper-targeted ads.
There’s no silver bullet for addressing these issues, but there is a necessary first step: transparency. Today’s social media platforms are incredibly opaque about their advertising, content moderation and algorithms. This prevents outsiders — from regulators to independent watchdogs — from diagnosing problems and thinking up solutions. Platforms face almost no oversight or accountability for their decisions — or mistakes. And while we’re lurching toward elections across the globe with little transparency, there are bright spots: proposed bills and potential product fixes that could make platforms more transparent.
One of the universal problems that plagues platforms during elections is poor, uneven enforcement. For example, TikTok, one of the most influential yet least scrutinized tech platforms, purportedly banned political advertising in 2019. But researchers have shown that the ban is ineffective and easy to evade. In 2021, Mozilla revealed that political advertising was still thriving on TikTok, merely masquerading as organic content. Partisan groups across the U.S. political spectrum could simply pay influencers on the platform to relay their political messaging.
Meanwhile, ahead of Germany’s Bundestag election last year, TikTok vowed to label all posts related to the election. But this automated approach to content moderation failed: many political posts slipped through the cracks, while plainly non-political posts received labels. At the core of both failures was a lack of transparency: no window into how enforcement is faring, and no public ad library to make research easier. And it’s happening all over again in 2022.
Social media newsfeeds and trending features present other problems that amplify election disinformation. In Kenya last year, when the country was debating reproductive health legislation, Twitter’s trending section was routinely manipulated by political actors, according to researchers. What Kenyans saw as “trending” wasn’t actually genuine conversation — it was partisan content that political groups paid Twitter users to tweet about and make trend. (Twitter later acknowledged this research, permanently suspending more than 240 related accounts.)
Kenyan TikTok experienced a similar problem ahead of this year’s August elections: Content on the platform spread hate speech, incitement and other political disinformation to millions of voters just weeks before ballots were cast, researchers found. (Like Twitter, TikTok acknowledged the findings, removing offending content.)
Problems like these aren’t confined to East Africa; American Twitter users are also misled by the trending feature. What appears there, whether a politician’s name or a culture-war issue, isn’t determined by volume alone. It’s determined by a mix of volume, newness and personalization. This means a relatively small number of people talking about a controversial issue can make it trend, surfacing it to millions more users. Once again, a lack of transparency is the problem: TikTok’s “For You” page and Twitter’s trending feature operate as black boxes.
But opacity doesn’t have to be the norm. Platform transparency is, thankfully, finally on the policy agenda. There are bills in Congress, and political candidates in the U.S., that recognize the importance of transparency. One example is the Platform Accountability and Transparency Act (PATA), a bipartisan effort by Sens. Chris Coons (D-Del.), Rob Portman (R-Ohio) and Amy Klobuchar (D-Minn.). PATA would require platforms to share data with independent researchers. The bill would also give those researchers safe harbor to conduct investigations, meaning platforms couldn’t arbitrarily sue them or shut them down, as they have in the past. In print and TV advertising, we don’t let publications and broadcasters define the rules on political ads; nor should we give that agency to digital platforms. Common-sense laws like these could also have a ripple effect, strengthening democratic elections outside the U.S., where platforms are often even more negligent.
Meanwhile, good-faith technologists at these platforms can deploy product updates that make a difference. Twitter could disable its trending feature before, during and after election periods. And TikTok should be held to the same standard as other platforms and release an ad library, which would allow researchers to more easily study who’s paying for influence on the app. My colleagues in civil society have proposed a range of other sound ideas, like a circuit-breaker mechanism that prevents content from going viral without human oversight.
November’s midterm elections are fast approaching, and Americans should brace for a degree of election disorder online; there’s no way to make systemic changes in a matter of weeks. But Americans can support lawmakers who are putting transparency on the agenda. And they can urge the platforms they use to make product updates for the public good, whether through petitions or simply by logging off.
Ashley Boyd is vice president of advocacy at Mozilla Foundation, where she leads research and campaigns around platform and artificial intelligence (AI) transparency.