The Supreme Court could soon change the internet forever — here’s what you need to know 

In the midst of wars in the Middle East and Ukraine, social media companies are struggling to handle an onslaught of misinformation, which is often spread by bot and troll accounts. An upcoming decision by the Supreme Court might make this problem go from bad to worse.  

On Sept. 29, the Supreme Court announced it would rule on whether to allow recent laws passed in Florida and Texas that restrict social media sites from removing certain users and posts. These broad laws aim to prevent platforms from censoring political candidates or suppressing users' viewpoints. The tech trade association NetChoice said these laws, currently blocked by federal courts, would “transform speech on the internet as we know it today.” 

The share of Americans ages 18 and over who use social media has ballooned from 50 percent to 72 percent in the last decade, according to Pew Research. As global conflicts escalate and the 2024 U.S. presidential election approaches, the Supreme Court’s decision may have a profound impact on the future of online speech and on democracies worldwide. 

Lawmakers in Florida and Texas have argued their legislation is necessary to prevent social media platforms from unfairly censoring or suppressing conservative viewpoints. To their point, civil discourse is the backbone of well-functioning democracies. In fact, both sides of the political aisle have accused the social media giants of having biased moderation practices. A 2020 poll by Pew found that 75 percent of Americans — conservatives and liberals alike — believe Facebook and Twitter/X censor political views. 

Herein lies the rub. While these states’ legislation may be well-intentioned, its approach is fundamentally flawed. 

The First Amendment of the Constitution protects individuals from government censorship. The question the Supreme Court is grappling with is the extent to which the First Amendment provides such rights to private enterprise. 

There’s a history of case law protecting the rights of privately owned publishers and social networks to make their own editorial decisions, including how content is algorithmically sorted. The 11th Circuit Court of Appeals’ May 2022 ruling, which blocked Florida’s law, stated that “while the Constitution protects citizens from governmental efforts to restrict their access to social media…no one has a vested right to force a platform to allow [a citizen] to contribute to or consume social media content.” 

However, in a contradictory ruling issued around the same time, the 5th Circuit upheld the Texas law, rejecting the companies’ First Amendment claims. Of note, both circuit panels were composed of three Republican-appointed judges. 

If the Supreme Court sides with the 5th Circuit, thereby forcing social media sites to publish or refrain from publishing certain types of speech, the government would, ironically, be trampling on the First Amendment rights of the social media companies themselves. This would set a troubling precedent. Once the government gets involved in making moderation decisions for social platforms, it’s easy to imagine how future lawmakers could abuse such powers, for instance by forcing platforms to promote the government’s position in international conflicts, or to advocate for or against the validity of election results or the effectiveness of vaccines, depending on the party in power. 

Another problematic outcome in this scenario is that some social sites might feel compelled to take a “hands off” approach and cease moderation entirely, for fear of being fined by the government and sued by users. If history is any guide, they would then be overrun with spam, hate speech, pornography, bullying, doxing, incitement to violence, and exponentially more misinformation than we see today. Paradoxically, this would make social media unusable for most people, regardless of their politics. This was the user experience on “anything goes” sites like 8Chan, as well as Secret, whose founder shut the site down and refunded its investors. 

There are relevant cases we can look to for hints about the Supreme Court’s forthcoming decision. For instance, the court’s 2018 Masterpiece Cakeshop ruling determined that a Colorado baker had the right to refuse to make a wedding cake for a gay couple because doing so went against his religious beliefs. This set a strong precedent for the rights of business operators to allow or deny service at their discretion. Given the conservative majority on the high court, it’s conceivable it would be sympathetic to Florida and Texas on this matter. Yet by that same measure, the court’s history and conservative leanings suggest it is more likely to side with private enterprise over government intrusion. 

Taking all this into account, what are the best solutions to support free expression, mitigate political bias, and reduce misinformation on social media?  

One step is to upend the monopoly power of the current social media giants. Stronger antitrust enforcement would give social media upstarts a better chance to reach critical mass and compete effectively. The Federal Trade Commission’s (FTC) ongoing antitrust lawsuit against Facebook supports this premise. 

A second step is the passage of legislation supporting data and content interoperability. This would level the playing field by making it easy for all social media users to move their content, contacts, and fans/followers from site to site, helping break down the monopolistic barriers that currently keep users bound to a particular platform. Past initiatives attempting to legislate this have stalled, but recently, in a show of bipartisanship, five senators co-sponsored the reintroduction of the ACCESS Act (Augmenting Compatibility and Competition by Enabling Service Switching). It’s high time to get this done. 

A third step is renewed legislative protection of Section 230, which has come under fire recently from both political parties. In the 5th Circuit’s legal interpretation, as summarized by the Princeton Legal Journal, “Section 230 means that social media companies are not the publishers of content posted on their platforms, and thus are not afforded the same First Amendment protections as other publishers.” This is an unprecedented reading that contradicts the purpose of this cornerstone of the internet and the protections it affords. 

Enacted in 1996, Section 230 remains critically important for a functioning free market. It gives sites broad leeway to moderate at their discretion without liability for content posted by their users. This protects startups and small- and medium-sized companies without massive funds or armies of lawyers from being bankrupted by content liability lawsuits.  

Perhaps most importantly, a fourth step is narrowly targeted legislation requiring large social networks like Facebook, Instagram, Twitter/X and TikTok to implement mandatory user ID verification. The verification system wouldn’t collect any user data beyond the minimum necessary to confirm that users are real people. Russia, Iran, China and other nefarious actors routinely deploy bot and troll farms on social media to spread misinformation and disrupt democracies while hiding behind the cloak of anonymity. Mandatory user ID verification is the best option we have to mitigate this threat. Taking a page from the European Union’s Digital Services Act, which applies to all platforms operating in the EU, this new legislation could apply to all large social sites operating in the U.S. 

Instead of dictating content moderation policies for private companies, the Supreme Court can uphold the First Amendment rights of private enterprises, and lawmakers can focus on strengthening free market fundamentals and fair competition online, while supporting authentic civil discourse by minimizing anonymous bots and trolls. In this scenario, if users don’t like the moderation policies of Facebook, Instagram, Twitter/X, TikTok, etc., they can seamlessly migrate to viable alternatives. That’s the true spirit of capitalism that supports both private enterprise and civil discourse in democratic societies. 

Mark Weinstein is a world-renowned tech thought leader and privacy expert. He is the founder of the social network MeWe, which he left in July 2022, and is currently writing a book on the intersection of social media, mental health, privacy, civil discourse and democracy.