
How the Supreme Court could change social media as we know it

People gather as the Supreme Court begins its new term and prepares to hear the first arguments, on Capitol Hill in Washington, Monday, Oct. 3, 2022. (Associated Press/J. Scott Applewhite)

Social media companies are currently under attack from two directions in court. This legal pincer movement could cripple efforts by platforms like Facebook, YouTube, Twitter and TikTok to determine how they present content to billions of users. 

The first assault comes from critics seeking to blow a hole in the broad liability protection that social media platforms enjoy under a 1996 law known as Section 230 of the Communications Decency Act. Specifically, the U.S. Supreme Court announced on Monday that it would consider whether platforms can use Section 230 to shield themselves from being sued for recommending content related to Islamic State terrorism. A majority of justices may be prepared to punch a hole in that shield, ruling that social media companies can be held accountable in court for amplifying at least certain kinds of harmful content. 

Meanwhile, Republicans in Texas and Florida are engaged in a second hostile maneuver by enacting laws that facilitate lawsuits against platforms for removing content based on the viewpoint expressed. In September, a federal appeals court upheld the Texas statute as constitutional. The U.S. Court of Appeals for the Fifth Circuit ruled that rather than limiting the First Amendment right of social media companies to choose what expression appears on their platforms, the law discourages “censorship” by the companies. A different federal appeals court earlier had struck down most of the Florida statute, making it likely that the Supreme Court will soon step in to resolve the lower court conflict. 

If the social media industry loses on both fronts, it could face a legal environment in which it can be punished both for promoting some types of controversial material and for blocking such posts. To make this concrete: a platform could be sued simultaneously when its automated recommendation system erroneously promotes extremist posts inciting violence and when it tries to filter out dangerous extremism.  

It isn’t clear how the industry would respond to this outcome, but the legal confusion could severely complicate attempts to better protect users and society at large from hateful content and disinformation. 

The Section 230 dispute stems from a mid-90s law meant to protect message boards and other nascent internet businesses from potentially crushing litigation over content posted by others. Skeptics of Section 230 say that it has outlived its usefulness and now unfairly insulates giant social media corporations such as Meta, which owns Facebook and Instagram, and Google, which owns YouTube. The Supreme Court has agreed to review two cases in which Section 230 thwarted lawsuits brought against Google, Meta and Twitter by relatives of terrorism victims who are trying to hold the platforms accountable for allegedly amplifying the violent exhortations of the Islamic State. 

The plaintiffs say that the social media companies didn’t just passively host Islamic State videos. The lawsuits are based on the claim that the platforms actively recommended the material to specific users in response to their online behavior and other factors.  

A number of bills introduced in Congress would create a legislative exception to Section 230 to allow suits targeting the automated amplification of dangerous incitement and possibly other harmful content. But the bills have failed to gain momentum amid surging partisan animosity: Democrats want to see more hateful material removed; Republicans contend that too much of their content is already being taken down.  

The social media companies have warned that even seemingly narrow limitations on their liability shield inevitably would lead to a broader erosion of Section 230 and the collapse of a business model based on the near-instantaneous posting of vast amounts of user-generated content.  

As a logical matter, it is difficult to see how the Supreme Court could limit a Section 230 exception for amplification. That’s because platform recommendations are not the only form of amplification. The mere ranking of content in users’ feeds, apart from highlighted recommendations, has the effect of amplifying certain content over the billions of posts theoretically available on a platform.  
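
To see why, consider a toy sketch of feed ranking (an illustrative example only, not any platform's actual code; the names `rank_feed` and `predicted_engagement` are hypothetical). Out of an enormous pool of candidate posts, a feed can surface only a few, so whatever scoring rule a platform uses necessarily boosts some content over the rest:

```python
# Minimal sketch, assuming a hypothetical engagement-based scoring rule,
# of why ranking is itself a form of amplification: only a handful of
# candidates can appear in a feed, so any ordering promotes the chosen
# posts over everything that is never shown.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical relevance/engagement score


def rank_feed(candidates: list[Post], feed_size: int = 10) -> list[Post]:
    """Order candidates by score and keep only the top few.

    No post is explicitly "recommended," yet the selection still
    amplifies the winners relative to all the posts left out.
    """
    ranked = sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)
    return ranked[:feed_size]


# Example: three candidates compete for a single feed slot; the
# highest-scoring post is amplified simply by being ranked first.
posts = [Post("a", 0.2), Post("b", 0.9), Post("c", 0.5)]
print([p.post_id for p in rank_feed(posts, feed_size=1)])  # ['b']
```

The point of the sketch is that an amplification exception drawing a line at "recommendations" would be hard to administer, because ordinary ranking produces the same boosting effect without any highlighted recommendation at all.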

A better approach to Section 230 reform would be to permit clearly delimited exceptions for certain subject matters — such as terrorist recruitment and incitement — while explicitly preserving the liability shield against other claims. But this more nuanced approach would require bipartisan congressional action that has seemed impossible. 

The Texas and Florida laws ought to present a legal knot that’s less difficult for the Supreme Court to untangle. In upholding the Texas statute, the Fifth Circuit made a fundamental mistake in applying the First Amendment when it said that social media users have a free speech right to say what they want while the platforms are forbidden to censor them.  

That’s exactly backward. The First Amendment by its terms — “Congress shall make no law …” — applies only to government speech restrictions. In a number of precedents, the Supreme Court has extended the First Amendment and other constitutional provisions to cover state actions, too. But corporations like Meta or Google are not obliged to allow anyone to say anything on digital platforms they own. In fact, the companies themselves enjoy a First Amendment right to determine what speech they facilitate, just as news outlets do not practice illegal censorship when they decide which writers may contribute to their web or print editions and what they may say. 

As elementary as this constitutional analysis may seem, I worry about whether the Supreme Court’s conservative majority would follow the Fifth Circuit’s lead, possibly embracing the unfounded but widely held contention on the political right that social media platforms are part of a liberal conspiracy to “cancel” conservative views. Members of that majority — including three justices appointed by former President Donald Trump — did not hesitate to assert themselves aggressively during the last term in major cases involving abortion, guns and religion. This term, the high court has already said it will rule on affirmative action, gerrymandering and voting rights. 

Now, social media is on the docket as well, and once again, the results could be ominous. 

Paul M. Barrett is the deputy director and senior research scholar at the NYU Stern Center for Business and Human Rights, which recently published a report he wrote on how social media companies amplify false claims of election fraud. 


