The Section 230 fight Congress should be having
On Oct. 28, the CEOs of Facebook, Google and Twitter are scheduled to receive another verbal beating on Capitol Hill.
After similar encounters in recent years, the ritual of lawmakers attacking senior executives from Silicon Valley has become familiar. The imminent hearing before the Senate Commerce Committee is striking, though, for two reasons: It comes just days before a bitterly contested national election, when one would expect senators’ attention to be elsewhere. And the legislators’ cudgel of choice will be an obscure statutory provision known as Section 230 of the Communications Decency Act of 1996.
Meanwhile, President Trump has been tweeting out demands for the repeal of Section 230, the law that protects internet companies from most lawsuits concerning user posts. And no fewer than 15 bills have been introduced in Congress that would curtail Section 230 in various ways.
All of which raises the question: What exactly is going on here?
A big part of the answer is that in the run-up to Election Day, Republicans are trying to work the referees; that is, to browbeat the major social media platforms into not removing or labeling conservative disinformation about election integrity or the coronavirus pandemic. Democrats have been working the refs, too, although less aggressively. Unfortunately, all of this partisan rancor obscures ways that Section 230 could be constructively amended.
The White House reportedly has been pushing congressional Republicans to introduce legislation and hold hearings such as the upcoming Commerce Committee session. Senators have gone on the record about Trump’s specific requests.
Enacted a quarter century ago to boost nascent online enterprises, Section 230 not only protects internet sites from litigation over user posts; it also gives sites wide latitude to remove content they deem “objectionable.” Libertarian groups like the Electronic Frontier Foundation call Section 230 “the law that makes a free Internet possible.” In contrast, skeptics see an unjustified form of protectionism now enjoyed by the most powerful and sometimes-irresponsible companies in the industry.
This debate over Section 230 is worth having, but it’s not what you’ll hear if you tune into the Commerce Committee proceedings. Instead, Republicans are going to threaten the CEOs — Mark Zuckerberg of Facebook, Sundar Pichai of Google and Jack Dorsey of Twitter — that they risk losing some or all of Section 230’s protection as punishment for their companies’ alleged censorship of conservative views. “For too long, social media platforms have hidden behind Section 230 protections to censor content that deviates from their beliefs,” Commerce Committee Chairman Roger Wicker (R-Miss.) said in September about a bill he’s cosponsoring — the Online Freedom and Viewpoint Diversity Act — which would drastically constrict the sort of content that platforms could remove.
Conservatives do sometimes see their posts or tweets labeled as suspect or taken down. On Oct. 6, Trump falsely asserted that COVID-19 is “far less lethal” than ordinary flu and nothing to be afraid of. Twitter tagged the statement as “misleading and potentially harmful,” and Facebook removed it altogether. A few hours later, Trump tweeted, “REPEAL SECTION 230!!!”
But this episode illustrates the major platforms actually doing their job of weeding out disinformation bearing on a public health crisis. There is no meaningful evidence of a systematic campaign to squelch conservative views online. Two analyses by The Economist and a third by a researcher at the conservative American Enterprise Institute support this conclusion. So does an audit of Facebook conducted for the company by a Republican former senator and an outside law firm.
To be sure, Democrats also have expressed hostility toward Section 230. Last December, presidential candidate Joe Biden said Section 230 should be “revoked.” More recently, his campaign has released open letters calling Facebook the “nation’s foremost propagator of disinformation about the voting process” because of its failure to remove blatant falsehoods spread by Trump about election integrity.
“What’s really happening is that both parties are beating up on tech,” Carl Szabo, vice president and general counsel of the digital trade association NetChoice, told me. “The Republicans want to see less content moderation, and the Democrats want to see more.” Szabo is not wrong, but overall, most of the energy behind the effort to bully the platforms comes from the GOP side of the aisle.
It’s unrealistic in the current frenzied political environment to expect substantive progress on this front, but in a hoped-for future period of calm deliberation, there are ways that Section 230 could be updated and improved.
A group of scholars brought together by the University of Chicago’s Booth School of Business recommended in a 2019 report that lawmakers restructure Section 230 so that internet companies would enjoy the provision’s immunity benefits only if they agree to new obligations. One such obligation could be ensuring that algorithms don’t promote sensationalistic, unreliable material to boost user engagement. Another could be disclosing more granular data on content moderation methods and advertising practices.
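For illustration only, here is one way the first obligation could be operationalized: a toy feed-ranking function that discounts an item’s engagement score by a reliability score, so sensational but unreliable material cannot rise on engagement alone. Everything in this sketch, the data model, field names and weighting, is hypothetical; it does not describe how any platform actually ranks content.

```python
# Hypothetical sketch: ranking that does not reward unreliable content.
# All names and scoring fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement_score: float   # e.g., predicted clicks/shares, 0..1
    reliability_score: float  # e.g., from fact-checking signals, 0..1

def rank(posts: list[Post]) -> list[Post]:
    # Multiply engagement by reliability so a lurid but unreliable item
    # cannot climb the feed on engagement alone.
    return sorted(posts,
                  key=lambda p: p.engagement_score * p.reliability_score,
                  reverse=True)

feed = rank([Post("calm report", 0.40, 0.90),
             Post("lurid rumor", 0.80, 0.10)])
print([p.text for p in feed])  # ['calm report', 'lurid rumor']
```

Under this toy scoring, the rumor’s higher engagement (0.80 x 0.10 = 0.08) loses to the report’s modest engagement backed by reliability (0.40 x 0.90 = 0.36), which is the behavior the proposed obligation would require platforms to demonstrate.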
Beyond improving Section 230, there are other ways to elevate the conduct of social media platforms, as I explain in a recently published report for New York University’s Stern Center for Business and Human Rights.
Congress could create a new Digital Regulatory Agency that would focus on making platforms more transparent and accountable. This would have to be done without empowering the agency to supervise specific content decisions because that would violate the First Amendment’s Free Speech Clause. But the new government body could avoid unconstitutional entanglements by, for example, auditing whether platform algorithms are promoting sensationalistic content to spur user engagement.
A Digital Regulatory Agency also could oversee the prevalence of harmful content — an idea suggested by Facebook earlier this year in a white paper. Facebook defines “prevalence” as the frequency with which users view deleterious content. The agency would establish prevalence standards, and if a company’s prevalence level exceeded a preset threshold, that company could be subject to greater oversight or fines.
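To make the arithmetic concrete, here is a minimal sketch, in Python, of what a prevalence calculation and threshold check might look like. The function names, the sample figures and the 0.05 percent threshold are invented for illustration; Facebook’s white paper proposes the concept, not this code or these numbers.

```python
# Hypothetical sketch of a "prevalence" check, per the white paper's
# definition: the frequency with which users view harmful content.
# All names and numbers here are illustrative, not from any real standard.

def prevalence(harmful_views: int, total_views: int) -> float:
    """Share of all content views that landed on harmful content."""
    if total_views == 0:
        return 0.0
    return harmful_views / total_views

def exceeds_threshold(harmful_views: int, total_views: int,
                      threshold: float = 0.0005) -> bool:
    """True if prevalence tops a regulator-set threshold (0.05% here,
    an invented figure), which could trigger oversight or fines."""
    return prevalence(harmful_views, total_views) > threshold

# Example: 120,000 harmful views out of 200 million total views
# gives a prevalence of 0.06%, above the hypothetical 0.05% line.
print(exceeds_threshold(120_000, 200_000_000))  # True
```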
Maybe someday Congress will seriously contemplate how to improve content moderation by social media platforms. But sadly, that is unlikely to happen when the Senate Commerce Committee convenes later this month.
Paul M. Barrett is deputy director of the NYU Stern Center for Business and Human Rights.