
Why free speech advocates should be rooting for Google


President Joe Biden and his predecessor, Donald Trump, agree on at least one issue: Large social media companies need to be held more accountable for their decisions about what content appears on their platforms. Both have advocated curtailing or repealing Section 230 of the Communications Decency Act, which shields the platforms from liability related to content posted by third parties. 

Now, with oral argument before the Supreme Court scheduled for Feb. 21, the case of Gonzalez v. Google could redefine how free speech works on the internet. 

In November 2015, Nohemi Gonzalez was killed in an Islamic State terrorist attack in Paris in which 130 people lost their lives. Relatives of Gonzalez sued Google in U.S. federal court, arguing that the company should be held responsible for providing material support to Nohemi’s killers because its YouTube platform recommended videos uploaded by the Islamic State that helped spread the terrorist group’s message.  

YouTube bans incitement of violence, and Google maintains that the platform generally has been successful in enforcing its policy. But with 500 hours of video uploaded to the site every minute, some banned content inevitably gets through. 

Lower courts blocked the Gonzalez lawsuit under Section 230. Enacted in 1996, Section 230 states that providers of an “interactive computer service,” a category that today includes social media platforms like YouTube, may not be sued for harm linked to content posted by users of the platform.  

Section 230’s drafters recognized that if every potentially irresponsible post could become the basis of a lawsuit, online platforms would sharply constrict the content they allowed, muffling free speech. 

The Gonzalez family essentially is asking the Supreme Court to create an exception to Section 230 for “targeted recommendations,” meaning instances when a platform automatically recommends content to certain users. There are two problems with this request. First, the statute contains no reference to recommendations, and there is no evidence that Congress had such an exception in mind. Second, the proposed exception would unravel Section 230 in its entirety, because some form of “recommendation” is intrinsic to nearly every method by which social media platforms identify, sort and present content. While it may seem as if lawyers for the Gonzalezes are seeking a narrow exclusion, the exception would swallow the rule. 

That outcome might be fine with Justice Clarence Thomas, who noted in October 2020 that “many [lower] courts have construed [Section 230] broadly to confer sweeping immunity on some of the largest companies in the world.” It is not clear whether Thomas or other members of the six-justice conservative supermajority come to the issue bearing hostility toward the platforms, based on the view, widely held on the political right, that major social media companies improperly censor conservative views. 

The reach of Section 230 may seem like an esoteric legal question, but the consequences are quite real. If faced with the wave of litigation that Congress sought to prevent, social media companies almost certainly would restrict the third-party expression they allow on their platforms. An enormous volume of user-generated speech would disappear. And large platforms would not be the only ones affected. Litigation threats could shut down crowdsourced sites like Wikipedia, consumer review businesses like Yelp and all manner of websites and blogs that invite user debate. New startups might never get off the ground. 

I don’t come lightly to Google’s defense. The Center for Business and Human Rights at New York University’s Stern School of Business, where I serve as deputy director, has published extensively on social media’s damaging effects on human rights and democracy. I have argued that platforms like YouTube and Facebook have not done enough to prevent their platforms from being exploited for the spread of hateful, divisive and misleading content. Nevertheless, the center has filed a friend-of-the-court, or amicus, brief warning the Supreme Court that embracing the Gonzalez family’s position could eviscerate Section 230 and silence much of the free speech the law was designed to promote. 

Here’s why a supposedly narrow exception for recommendations would actually lead to a sea change: The plaintiffs try to make a distinction between content that platforms merely host and content circulated by “recommendation algorithms” using “artificial intelligence to determine what material to recommend to each user.” But drawing this line is a futile exercise. The presentation of third-party material by platforms almost always involves some form of recommendation by an automated system driven by artificial intelligence, which identifies and retrieves content based on users’ past online behavior and preferences.  

Recommendation, in short, is what social media platforms do. When a Facebook user logs into his account, he sees a vertically arranged series of posts that an algorithm has assembled from a far larger pool of content posted by the user’s online “friends,” among other material. Likewise, when a Twitter user opens her “timeline,” she sees a list of tweets by account holders she follows. Twitter will have algorithmically selected and ranked the tweets based on the user’s online behavior and the perceived likelihood she will engage with the content by, for example, “liking” or “retweeting” it. Most people would refer to what Facebook and Twitter are doing as “ranking” content. But as a practical matter, it could just as accurately be referred to as “recommending” it. 
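To make concrete how little daylight there is between ranking and recommending, here is a deliberately simplified sketch, in Python, of an engagement-based feed builder. Everything in it, the names, the signals and the weights, is a hypothetical illustration of the general technique, not a description of any actual platform’s system:

```python
# A toy, hypothetical feed builder. All names and weights are invented
# for illustration; this describes no real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    author_followed: bool  # whether this user follows the poster

def engagement_score(post: Post, topic_affinity: dict[str, float]) -> float:
    """Estimate how likely this user is to engage with the post,
    based on signals inferred from past behavior."""
    score = topic_affinity.get(post.topic, 0.0)
    if post.author_followed:
        score += 1.0  # posts from followed accounts score higher
    return score

def build_feed(candidates: list[Post],
               topic_affinity: dict[str, float],
               limit: int = 10) -> list[Post]:
    # "Ranking" the candidate pool and surfacing only the top slice is,
    # operationally, the same act as "recommending" those posts.
    ranked = sorted(candidates,
                    key=lambda p: engagement_score(p, topic_affinity),
                    reverse=True)
    return ranked[:limit]

feed = build_feed(
    [Post("a", "cooking", True), Post("b", "news", False), Post("c", "soccer", False)],
    topic_affinity={"soccer": 0.9, "cooking": 0.2},  # learned from past behavior
    limit=2,
)
print([p.post_id for p in feed])  # prints ['a', 'c'], the "recommended" posts
```

Whether the output is labeled a ranked feed or a set of recommendations, the mechanism is identical, which is why a liability rule keyed to “recommendation” would reach virtually everything these systems do. 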

In the absence of Section 230’s liability shield, even large, financially stable social media platforms would seek to avoid civil liability claims by sharply limiting the third-party content they allow on their sites. 

The internet speech most likely to disappear would be that of vulnerable individuals and members of groups that have historically suffered unlawful discrimination. Minority views are the most susceptible to demands that social media platforms remove supposedly harmful speech. Those silenced would include political dissenters and human rights advocates who might struggle to express themselves publicly through alternative avenues. In a twist on what has been called the “heckler’s veto,” cynical manipulation of litigation-wary platforms could lead to takedown demands that silence critics of large corporations and other influential institutions. 

It’s worth noting that Section 230 does not block lawsuits based solely on a digital platform’s own speech or conduct. Lower courts have identified a range of such claims, including one alleging that a site violated the Fair Credit Reporting Act by selling user data, another alleging that a short-term home rental service violated a local ordinance against unlicensed rentals, and a third alleging that a website violated anti-discrimination laws by soliciting the sex, family status and sexual orientation of users seeking roommates. 

To be sure, the Gonzalez suit raises troubling issues surrounding online incitement of terrorism. But weighing those dangers against the benefits of open platforms is a quintessentially legislative task, one for Congress, not the Supreme Court. And that’s why the justices should refrain from imposing a judicially crafted exception on a law that helped make the internet an engine of free speech.  

Paul M. Barrett is the deputy director and senior research scholar at the Center for Business and Human Rights at New York University’s Stern School of Business, where he writes about the effects of social media on democracy.