
Elon Musk is right about Twitter and solving Section 230

After Elon Musk made headlines by announcing an offer to purchase Twitter, it was easy to characterize his behavior as self-promotional. But the move, which culminated surprisingly quickly in Twitter’s sale to Musk, creates an opportunity to amend Section 230 and provides a blueprint for governing social media platforms, which has become a pressing large-scale societal issue.

Musk proposes that Twitter’s algorithm should be open source and that “there should be no behind the scenes manipulation, either algorithmically or manually.”

This blueprint could help solve the most vexing problem confronting lawmakers, namely, how to regulate social media platforms in a way that strikes the right balance between free speech and societal harm. Musk’s answer is to start with a light touch, figure out the rules, make them visible and apply them uniformly. Amendments would be recorded in an immutable public ledger.
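To make the ledger idea concrete, here is a minimal sketch in Python of an append-only, hash-chained log of rule amendments; the class, field names, and hashing scheme are my assumptions for illustration, not part of Musk’s proposal. Because each entry embeds the hash of the previous one, anyone holding a copy can detect if the history is rewritten after the fact.

```python
import hashlib
import json
import time

class AmendmentLedger:
    """Append-only log of moderation-rule amendments (illustrative sketch).

    Each entry embeds the hash of the previous entry, so any
    after-the-fact tampering breaks the chain and is detectable.
    """

    def __init__(self):
        self.entries = []

    def record(self, rule_id: str, change: str, author: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "rule_id": rule_id,
            "change": change,
            "author": author,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; False means the history was altered."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

The design point is that verification requires no trust in the platform: any member of the public with a copy of the ledger can run the check themselves.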

For many years, I have similarly argued that algorithmic transparency is a prerequisite for trust in social media platforms. Regulation is needed because social media platforms have become the de facto public square. Former President Obama noted in 2017 that we needed to “have a conversation about their business model that recognizes they are a public good as well as a commercial enterprise.” He pointed to the challenges in a democracy when people have “entirely different realities that are being created, with not just different opinions but now different facts — different sources, different people who are considered authoritative.”

Musk criticizes Twitter’s opaque data governance policy, which carries risks of harmful suppression and amplification of content, hindering free speech. Social media platforms, including Twitter and Facebook, counter that they employ thousands of people for content moderation, and they commend themselves for how well they are doing. But the root of the problem is much deeper, and no amount of human power can solve it.

Content moderation suffers from what psychologist and economist Daniel Kahneman calls “noise” in human decision-making. Even when the rules are well specified, different judges reach opposite decisions on the same data, and individual judges are often inconsistent, swayed by irrelevant factors such as mood and weather. Algorithms can apply rules consistently, but they can cause unintended harm that may become evident only much later, if at all. My NYU Stern colleague Jonathan Haidt argues that there is sufficient evidence to link algorithms to increases in self-harm and suicide among teenage girls. This became apparent only over time, based on studies conducted in several countries. Without transparency, we are blind to such risks.

In arguing for algorithmic transparency as the solution, I have proposed two related policies that have worked in financial services:

1. Know your customer (KYC). Banks do this on behalf of the government, which needs the records to collect taxes and track financial activity.

2. Demonstrate compliance with the rules, such as acting as a fiduciary and not engaging in market manipulation, on demand from regulators. In practice, financial regulators audit only a small number of enterprises, but the credible threat of an audit is a major deterrent, since all activity is subject to audit at any time.

And yet, for social media, where democracy itself is at stake, we have let the platforms establish the rules under Section 230, which gives them blanket immunity for anything published on them. This is asking for trouble. Letting private enterprise design and operate the public square invites scandals like Cambridge Analytica and the questionable conduct revealed by whistleblower Frances Haugen. Societal consequences invariably take a back seat to shareholder interest.

Musk’s proposal would lead to governance of platform content and users based on policy specified in open-source code, which aligns with my proposal for algorithmic transparency. When Trump was banned by Facebook and Twitter, I opposed the move because it was based not on a transparent policy but on subjective judgment. I proposed that the “objective functions” underlying their algorithms be made visible to the public, and that a transparent policy for censoring content and users, including warnings and appeals, be implemented. Making the rules transparent would reduce both kinds of error: suppressing content that is acceptable, and amplifying or failing to ban content that is not.
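As a minimal sketch of what such a published policy might look like in open-source code, the Python below implements escalating warnings followed by suspension, with appeals logged for review. The thresholds, rule names, and structure are hypothetical, invented for this example rather than drawn from any platform’s actual policy.

```python
from dataclasses import dataclass, field

# Hypothetical published parameters: every threshold is visible,
# so users can see exactly when a warning or suspension triggers.
WARNINGS_BEFORE_SUSPENSION = 3   # assumed value, for illustration
APPEAL_WINDOW_DAYS = 14          # assumed value, for illustration

@dataclass
class Account:
    handle: str
    warnings: int = 0
    suspended: bool = False
    appeals: list = field(default_factory=list)

def apply_violation(account: Account, rule_id: str) -> str:
    """Apply the published policy uniformly: warn first, suspend only
    after the documented threshold, and always offer an appeal."""
    account.warnings += 1
    if account.warnings < WARNINGS_BEFORE_SUSPENSION:
        return (f"warning {account.warnings} of "
                f"{WARNINGS_BEFORE_SUSPENSION - 1} for rule {rule_id}")
    account.suspended = True
    return (f"suspended under rule {rule_id}; "
            f"appeal open for {APPEAL_WINDOW_DAYS} days")

def file_appeal(account: Account, statement: str) -> None:
    # Appeals are logged, so reviewers and auditors see the full record.
    account.appeals.append(statement)

acct = Account("@example_user")
for _ in range(3):
    print(apply_violation(acct, "spam-rule-4"))  # two warnings, then suspension
```

Because the code and its parameters are public, a ban is no longer a matter of opinion: anyone can trace the exact rule and threshold that triggered it.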

Finally, it is important that policy be informed by science, and initial strides have already been made here. Chris Bail demonstrates, for example, that our need for validation by peers amplifies extreme voices and dampens moderate ones, and the objective functions of platforms encourage such behavior. Perhaps different objective functions, more in the spirit of Stack Overflow, which rewards content that many people find “useful,” would lead to more civilized discourse than objectives that reward the content stoking the most emotion. Social media platforms are in the best position to do this science in a way that is visible to us.
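As a toy illustration of how the choice of objective function changes what gets amplified, consider the sketch below. The scoring weights, counters, and example posts are all invented for the example; real ranking systems are far more complex.

```python
# Two toy ranking objectives applied to the same feed.

def engagement_score(post: dict) -> float:
    # Rewards whatever provokes a reaction, including outrage.
    return post["replies"] + post["reshares"] + post["angry_reactions"]

def usefulness_score(post: dict) -> float:
    # Stack Overflow-style: rewards content readers mark as useful,
    # and penalizes content they flag.
    return post["useful_votes"] - post["flags"]

posts = [
    {"text": "inflammatory hot take", "replies": 900, "reshares": 400,
     "angry_reactions": 700, "useful_votes": 12, "flags": 150},
    {"text": "careful explainer", "replies": 80, "reshares": 60,
     "angry_reactions": 2, "useful_votes": 310, "flags": 1},
]

by_engagement = sorted(posts, key=engagement_score, reverse=True)
by_usefulness = sorted(posts, key=usefulness_score, reverse=True)
print([p["text"] for p in by_engagement])   # hot take ranked first
print([p["text"] for p in by_usefulness])   # explainer ranked first
```

The same two posts trade places depending on the objective, which is exactly why making the objective function visible matters: it is the single choice that most shapes the public square.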

Transparent governance of the digital public square would represent a major step forward for liberal democracy. Musk is onto something. As long as the rules are shaped by public input and allow for regulatory oversight, his proposal is a step in the right direction.

Vasant Dhar is a professor at the Stern School of Business and the Center for Data Science at NYU, and host of the podcast “Brave New World.”

