Taking a long view of internet content regulation: How did we get here?

This fall has been the most important for internet content regulation since 1996, when Congress approved Section 230 of the Communications Decency Act (part of that year’s Telecommunications Act), establishing an internet ground rule that internet platforms are not responsible for postings put on them by users. This immunity was soon copied in the EU’s declaration that internet platforms are “mere conduits,” which in turn was followed by dozens of countries worldwide.

Recently, things have changed for the largest internet platforms, creating a new model for regulation. The EU’s Digital Services Act (DSA), which makes the largest platforms responsible for the content they allow to be posted, moved toward final implementation; President Biden clarified that he no longer supports blanket immunity for platforms; a federal appeals court upheld a Texas law that holds platforms responsible for their content management; and the Supreme Court decided to consider a case that could end the 1996 immunity.

At the risk of ignoring mountains of significant legal details, we need to take a long view of what’s happening. 

Looking at the internet forest through the legal trees is important because this is the perspective used by the vast majority of the public, most jurists and most senior policymakers, who normally have little idea of, or interest in, how the internet or platforms actually work and instead deal in analogies they understand. (When the FTC began to explore internet regulation, I hosted its chairman and senior staff for day-long briefings at IBM Research on “What is the internet?” and later hosted similar Congressional Internet Caucus tutorials.) With few notable exceptions, most senior officials, jurists and the public take the long view of internet content regulation, so it’s very much worth bypassing the details and looking at the big picture.

By the time consumer-facing internet platforms emerged in the mid-1990s, three models for content regulation on electronic media existed: 1) telecommunications carriers, like the telephone, had no responsibility for whatever people electronically transmitted over them; 2) broadcasters, like TV, were fully responsible for anything they electronically distributed; and 3) computer networks were private, internal electronic networks used by large organizations to carry such things as internal e-mail. (A handful of geeks set up computer bulletin boards for other geeks, but few noticed.) So, when legislators, judges and the public began to consider consumer-facing computer networks like AOL and Prodigy, they had to figure out whether this new electronic medium was more like a telephone, a TV or a company’s internal computer network.

At first, some courts and policymakers concluded that any consumer-facing “interactive computer service” that actively controlled the content it distributed was similar to a broadcaster, and thus responsible for that content, while a platform that simply posted everything it received was similar to a telephone carrier. But as consumer platforms like AOL and Prodigy grew to serve large numbers of users, it became obvious that, while broadcasters can easily monitor their single broadcast stream, a large platform trying to monitor content posted by thousands of users could be inundated. So, early platforms would either have to let any user’s post go up (including, most importantly, pornography) or spend large amounts on legal services to defend themselves against charges like distributing obscenity or defamation.

And so, Congress came up with an unusual hybrid formula, politically justified by the need to limit pornography on the internet: for user-posted content, all consumer-facing platforms would have the content controls of a broadcaster (but none of its responsibilities) combined with the liability protections of a telephone carrier. To oversimplify, platforms could control as much or as little as they wished, like a broadcaster, but not be responsible for content, like a telephone carrier.

This made sense because platforms were comparatively small, and it was widely hoped that — if allowed to flourish — they would improve education, health, the arts, etc.

By the 2010s, the global growth of some large platforms had exceeded any 1990s expectations, and a growing array of critics of the “mere conduit” legal structure for the very largest internet platforms emerged. Critics included smaller competitors, copyright interests, computer and telecommunications interests, print media interests, political activists of many different stripes, national security interests and national governments.

While some national governments reacted with proposals to simply regulate internet platforms as if they were local broadcasters, in Europe and the U.S., an important new concept emerged: the creation of a new category of media that consisted only of very large internet platforms. This approach had the benefit of leaving the “mere conduit” character of small and medium-sized internet platforms largely untouched while subjecting only the largest platforms to content regulations and liabilities that somewhat resembled those of broadcasters.

Important milestones in this development included the U.S. Supreme Court’s unanimous 2017 Packingham decision, in which the Court left Section 230 intact but concluded that, because of Facebook’s sheer size, it had many of the characteristics of a “public square” and that “to foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.” That was followed by the EU’s 2020 decision to move forward on a Digital Services Act that fundamentally redefines the content obligations and liabilities of the biggest platforms, which EU law designates “very large online platforms” (and which the companion Digital Markets Act calls “gatekeepers”).

These two pivots, along with similar actions, have resulted in a torrent of both proposed and enacted legislation in the U.S. and elsewhere that often retains the “mere conduit” character of small and medium-sized platforms but singles out very large platforms for some liability for posted content.

The creation of a new category that consists of the largest platforms and substantially excludes small websites is the most important change in internet regulation since 1996. It obviously leaves open numerous complex enforcement issues, not least of which are “Exactly who is a gatekeeper?” and “How can I get into the less-regulated category?”

It will take the better part of a decade to see whether this new category of gatekeepers lasts — and if it does, how it will be tempered by legislatures, lobbyists, regulators and courts.

Roger Cochetti provides consulting and advisory services in Washington, D.C. He was a senior executive with Communications Satellite Corporation (COMSAT) from 1981 through 1994. He also directed internet public policy for IBM from 1994 through 2000 and later served as Senior Vice-President & Chief Policy Officer for VeriSign and Group Policy Director for CompTIA. He served on the State Department’s Advisory Committee on International Communications and Information Policy during the Bush and Obama administrations, has testified on internet policy issues numerous times and served on advisory committees to the FTC and various UN agencies. He is the author of the Mobile Satellite Communications Handbook.
