
3 ways to increase social media platforms’ transparency


The rise of social media platforms is correlated with increased political polarization in the U.S. However much President-elect Biden promises to unify the nation, he faces an algorithmic headwind from social media platforms that have so far driven the country apart, immunized as they are from the consequences of their ad-driven business model.

In the upcoming November 17 hearing on “Censorship and Suppression of News Articles” featuring the CEOs of Twitter and Facebook, lawmakers should focus on crafting rules that achieve the degree of operating transparency that has made the U.S. financial system among the most trusted in the world. They should recall that banks misbehaved for decades when no one was looking; over time, increased transparency fixed the industry’s major problems.

At the moment, there is virtually no transparency into how social media platforms operate. Do they care about the purposes and goals of their users? Do we know what their recommendation and targeting algorithms do all day long, and why? Do they keep an audit trail of their interventions against users, advertisers or content, let alone of the consequences of those actions? If we wanted to examine the political content published and disseminated on a platform a few years ago, in order to determine whether a crime was committed, could we access that history?

The answer to each of these questions is “no.” We would never let banks operate like this: regulators can demand to see their operating history at any time, including all loans made or rejected, all ads and offers shown, and almost anything else that might link their actions to criminal or unethical activity. Before such oversight existed, banks manipulated markets, favored clients unethically, engaged in shady off-balance-sheet and soft-dollar arrangements, destroyed audit trails and much more. That changed, especially after the great financial crisis. Social media platforms are the new cowboys, and the risks to society this time around are much higher.

The question of whether platforms are publishers or distributors of content is a distraction. At the last hearings, Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey reassured us that they would moderate content using a combination of artificial intelligence and humans: the AI will filter the obvious cases, and humans will handle the rest. This is wishful thinking, fraught with First Amendment concerns, carrying a high cost of error, and ultimately subject to human inconsistency and value judgments on which there will never be agreement. Indeed, given the closer-than-expected election outcome, one might ask whether the result could have been different had Twitter not blocked the controversial New York Post story about Hunter Biden. True or not, it is a scary possibility that Jack Dorsey, Mark Zuckerberg and other operators of social media platforms can influence important political outcomes.

So, what transparency do we require, so that clear rules of law can be applied to the operation of social media platforms? There are three actions to take, related to a platform’s users, its algorithms, and audit trails of its important activity and actions.

First, platforms should moderate accounts, not posts. As long as an account doesn’t violate the law or platform rules (on pornography, for example), it should not be censored. But this requires “knowing your user” (KYU), much like the “know your customer” (KYC) laws in industries such as finance. Platforms must vet whom they are hosting and why. It is similar to opening a bank account: can you prove you are who you say you are, and what is your purpose in using the platform? You may use pseudonyms and multiple accounts, but you must be authenticated by the platform. The majority of accounts are low risk, sharing pictures and apple pie recipes; but if you are heavily connected and publish a lot, you could belong to a higher-risk category.
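To make the idea concrete, here is a minimal Python sketch of how account-level (rather than post-level) risk tiering might work. Everything here is hypothetical: the thresholds, field names and tiers are illustrative placeholders, not anything a platform has specified.

```python
from dataclasses import dataclass

@dataclass
class Account:
    verified_identity: bool  # has passed the platform's KYU authentication
    follower_count: int      # how widely connected the account is
    posts_per_day: float     # publishing volume

def risk_tier(account: Account) -> str:
    """Assign a coarse review tier to the account, not to its posts.

    Thresholds are hypothetical; a real platform would calibrate them
    from its own data and from regulatory guidance.
    """
    if not account.verified_identity:
        return "unverified"  # must authenticate before reaching a wide audience
    if account.follower_count > 100_000 or account.posts_per_day > 50:
        return "high"        # heavily connected and/or high volume: more scrutiny
    return "low"             # pictures and apple pie recipes

print(risk_tier(Account(verified_identity=True, follower_count=250, posts_per_day=1.0)))  # "low"
```

The point of the design is that moderation effort scales with an account’s reach and authenticity, not with the content of any single post.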

The second requirement is transparency of the platform’s recommendation algorithms, so that their larger impacts on society can be studied scientifically instead of being shrouded in secrecy. The advertising model in its current form encourages “data banditry” in pursuit of attention and profit maximization, regardless of the potential side effects. If connecting “similar” people leads to more overall engagement and hence more profit, algorithms will ruthlessly pursue dense connections among similar people, for instance by “closing triangles”: connecting two people who share a common link to a third person. Research now shows that densely connected networks can produce echo chambers that sow division and increase polarization. This collateral damage is an “externality” imposed on society as a side effect of pursuing engagement and profit. What if Facebook’s algorithms were causing high rates of depression among kids, or an increase in crime, and no one knew it?
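As an illustration of the mechanism, here is a short Python sketch of a triangle-closing recommender on a toy friendship graph. It is a simplified stand-in for what engagement-driven systems do at scale, not a description of any platform’s actual algorithm.

```python
from collections import defaultdict
from itertools import combinations

# A toy friendship graph as adjacency sets.
graph = {
    "ana": {"ben", "cai"},
    "ben": {"ana"},
    "cai": {"ana"},
}

def triangle_closing_suggestions(graph):
    """Suggest links that 'close triangles': pairs of users who are not
    yet connected but share a common neighbor. Recommenders favor such
    links because they densify the network and tend to raise engagement."""
    evidence = defaultdict(int)
    for user, friends in graph.items():
        for a, b in combinations(sorted(friends), 2):
            if b not in graph.get(a, set()):
                evidence[(a, b)] += 1  # each shared neighbor adds evidence
    # Rank candidate links by number of common neighbors, strongest first.
    return sorted(evidence.items(), key=lambda kv: -kv[1])

print(triangle_closing_suggestions(graph))  # [(('ben', 'cai'), 1)]: ana links them
```

Run over millions of users, exactly this kind of rule knits “similar” people into the dense clusters that the echo-chamber research describes.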

To address such questions, platform operators should be required to maintain verifiable audit trails of all of their interventions, including targeting, just as media companies and others are required to do. Otherwise we have no way of knowing whether and how algorithms are learning to manipulate us. They will certainly get better and better at what they do the more they observe our behavior, and we have no idea what other collateral damage lies around the corner beyond the echo chambers and polarization we are already witnessing. At the moment, Facebook has no incentive to look for such problems, let alone to keep or share the data required to identify them.
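A hash-chained, append-only log is one standard way to make such a trail verifiable. The sketch below is a minimal Python illustration under that assumption; the record fields are hypothetical, not a real platform schema.

```python
import hashlib
import json
import time

class AuditTrail:
    """An append-only, hash-chained log of platform interventions.

    Each record commits to the hash of the previous one, so any later
    tampering with the history breaks the chain and is detectable."""
    def __init__(self):
        self.records = []
        self.last_hash = "0" * 64  # genesis value

    def log(self, action: str, target: str, reason: str) -> dict:
        record = {
            "timestamp": time.time(),
            "action": action,      # e.g., "label", "downrank", "suspend"
            "target": target,      # an account, ad or post identifier
            "reason": reason,      # the rule or model that triggered it
            "prev_hash": self.last_hash,
        }
        self.last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self.last_hash
        self.records.append(record)
        return record

trail = AuditTrail()
trail.log("downrank", "post:12345", "hypothetical engagement-model flag")
```

An auditor who holds the final hash can replay the log and confirm that nothing was deleted or rewritten, which is precisely the property bank examiners rely on.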

Lawmakers should focus on the causes of the current problem instead of getting distracted by its symptoms, such as partisan accusations of platform bias and debates over how platforms should moderate content. Those debates are inherently intractable, except in the obvious cases such as pornography and violence.

What we require from platforms is much simpler: transparency about their users and their algorithms, and an auditable history of platform activity. During the November 17 hearings, lawmakers should focus on how to achieve these three kinds of transparency in a non-partisan manner. The financial services industry provides a productive template for the way forward.

Vasant Dhar is a professor at New York University’s Stern School of Business and the director of the PhD program at the Center for Data Science.

