Throttling free speech is not the way to fix Facebook and other social media
“Caution: Free Speech May Be Hazardous to Your Health.” Such a rewording of the original 1965 warning on tobacco products could soon appear on social media platforms, if a Senate hearing this week is any indicator. Listening to former Facebook product manager Frances Haugen, senators decried how Facebook is literally killing people by not censoring content, and Haugen proposed a regulatory board to protect the public.
But before we embrace a new “ministry of information” model to protect us from dangerous viewpoints, we may want to consider what we would lose in this Faustian free-speech bargain.
Warnings about internet “addiction” and “unhealthy” online content have been building into a movement for years. In July, President Biden slammed Big Tech companies for “killing people” by failing to engage in even greater censorship of free speech on issues related to the pandemic. On Tuesday, many senators were enthralled by Haugen’s testimony because they, too, have long called for greater regulation or censorship. It all began reasonably enough with concerns about violent speech, then expanded to exploitative speech. It has continued to expand ever since, as the regulation of speech has fed an insatiable appetite for silencing opposing views.
In recent hearings with social media giants, members like Sen. Chris Coons (D-Del.) were critical of limiting censorship to areas like election fraud and instead demanded censorship of disinformation on climate change and other subjects. Sen. Richard Blumenthal (D-Conn.) has repeatedly called for “robust content modification” to remove untrue or misleading information.
Haugen lashed out at what she said was the knowing harm committed against people, particularly children, by exposing them to disinformation or unhealthy views. Haugen wants the company to remove “toxic” content and to change its algorithms to make such sites less visible. She complained that sites with high engagement rates are more likely to be favored in searches; the problem, in other words, is that sites deemed false or harmful are too popular. Haugen said that artificially removing “likes” is not enough, because the popularity of, and interest in, some sites will still push them to the top of searches.
It was a familiar objection. Just the week before, Sen. Elizabeth Warren (D-Mass.) called for Amazon to steer readers to “true” books on climate change. Her objection was that the popularity of “misleading” books was pushing them to the top of searches, and she wants the algorithms changed to help readers pick what she considers to be healthier choices — meaning, choices more in line with her views.
Similarly, Haugen’s solution seems to be … well, her: “Right now, the only people in the world who are trained to analyze these experiments, to understand what is happening inside … there needs to be a regulatory home where someone like me could do a tour of duty after working at a place like [Facebook], and have a place to work on things like regulation.” Censorship programs always begin with politicians and bureaucrats who — in their own minds — have the benefit of knowing what is true and the ability to protect the rest of us from our harmful thoughts.
Ironically, I have long been a critic of social media companies for their rapid expansion of censorship, including the silencing of political critics, public health experts and pro-democracy movements at the behest of foreign governments like China and Russia. I am unabashedly an internet originalist who favors an open, free forum for people to exchange ideas and viewpoints — allowing free speech to be its own disinfectant of bad speech.
Facebook has been running a slick campaign to persuade people to embrace corporate censorship. Yet, now, even the Facebook censors are being denounced as too passive in the face of runaway free speech. The focus is on the algorithms used to remove content or, as with Haugen and Warren, used to flag or promote popular sites.
Haugen describes her approach as a “non-content-based solution,” but it is clearly not that. She objects to algorithms like “downstream MSI,” which track traffic and push postings based on past likes or comments. As one site explained, postings are promoted “based on their ability to engage users, not necessarily its usefulness or truthfulness.” Of course, the objection to those “un-useful” sites is their content and their claimed harm.
Like Warren, Haugen is calling for what I have criticized as “enlightened algorithms” to protect us from our own bad choices. Our digital sentinels are “non-content-based” but will magically remove bad content to prevent unhealthy choices.
There is no question that the internet is fueling an epidemic of eating disorders and other grave social problems. The solution, however, is not to create regulatory boards or to reduce free speech. Europe has long deployed such oversight boards to remove what it considers harmful stereotypes from advertising and to bar images of foods such as honey or chips, but the results have been underwhelming at best.
It is no accident that authoritarian countries have long wanted such regulation, since free speech is a threat to their power. Now, we also have U.S. academics writing that “China was right” all along about censorship, and public officials demanding more power to censor further. We have lost faith in free speech, and we are being told to put our faith into algorithmic guardians.
We can confront these problems more effectively by using good speech to overcome bad speech. When it comes to minors, we can empower parents to protect their children by expanding parental controls over internet access, and we can support families with more and better programs and resources for mental illness. Of course, it is hard to advocate for restraint when the image of an anorexic child is juxtaposed against the abstract concept of free speech. However, that is the siren’s call of censorship: Protecting that child by reducing her free-speech rights is no solution for her — but it is a solution for many who want more control over opposing views.
Free speech is not some six-post-a-day addiction that should be cured with algorithmic patches. There is no such thing as a content-neutral algorithm that removes only harmful disinformation — because behind each of those enlightened algorithms are people who are throttling speech according to what they deem to be harmful thoughts or viewpoints.
Jonathan Turley is the Shapiro Professor of Public Interest Law at George Washington University. You can find his updates on Twitter @JonathanTurley.