
New legislation must balance free speech and protecting teens online

In this July 27, 2015, photo, Isabella Cimato, 17, left, Arianna Schaden, 14, center, and Sofia Harrison, 15, check their phones at Roosevelt Field shopping mall in Garden City, N.Y. (AP Photo/Seth Wenig)

Recent reports linking social media to mental health harms experienced by teens, including widespread depression and rising suicide rates, have helped motivate more than 60 senators to cosponsor the Kids Online Safety Act. This revised version of a bill first proposed two years ago would give the Federal Trade Commission and state attorneys general authority to regulate the design of online platforms such as Facebook.

Notably, the Kids Online Safety Act and competing legislation now being considered do not require the FTC to be content-neutral in the way it regulates. The question is whether that could cause platforms to restrict teens’ access to online content that benefits them.

Legislators are understandably worried about a variety of harms to teens that have been blamed on social media use, including mental health disorders, addictive behaviors, online bullying and sexual exploitation. The act requires platforms to “exercise reasonable care” in designing features to avoid causing or exacerbating such harm.

Indeed, design features such as endless scrolling and push notifications can keep users’ attention directed to a specific platform, eating up many hours and distracting them from other obligations. Other features, such as user anonymity, eliminate individual accountability and thereby contribute to cyberbullying and other forms of harassment.

As for content moderation, social media platforms have generally been proactive about trying to eliminate harassment of their users. The prevailing sentiment in Washington seems to be that the government should also prevent platforms from recommending certain kinds of content to teens.

In some extreme cases this is justified, but critics of the Kids Online Safety Act are concerned that the law would lead to platforms “engaging in aggressive filtering and suppression” of content that could benefit many young people, such as discussions of suicide, depression, anxiety and eating disorders, in a way that violates the First Amendment. The First Amendment protects minors, who have a right to receive speech as long as it does not fall into “an unprotected category, such as obscenity or true threats.” Nonetheless, concern exists that platforms may be penalized for promoting useful content.

Recent amendments to the legislation would not allow state attorneys general to target online services for violating their “duty of care” by hosting controversial content. This change was intended to prevent states from infringing upon the presentation of valid, competing viewpoints consistent with free speech. Parents, rather than the government, should make most of the content decisions for their own families. There simply is no better alternative in such a divided society.

A better overall reform would require the FTC to focus its enforcement on product design while prohibiting it from directly or indirectly restricting protected speech. Platforms should remain free to make their own choices about content moderation and should not be restricted in their ability to experiment with different approaches that may, over time, lead to better outcomes.

The government can also encourage platforms to collaborate on ways to filter out harmful content effectively, particularly content with no redeeming value, such as morphed sexual images of minors created by AI. Platforms should also be encouraged to make better tools available for teens to manage their time on social media and for parents to monitor their teens’ online behavior.

Then there’s the ever-present age verification question. While the legislation does not require age verification, firms seeking to avoid liability will have an incentive either to verify ages anyway or to restrict, for everyone, any content that could potentially harm children who might see it.

Proponents counter that the act only “holds platforms responsible for users it knows are minors” and that there is plenty of data from which to make reasonable inferences about users’ ages. But even if the act only requires platforms to make such inferences, this is another provision that may be enforced in ways that cause platforms to restrict valuable content.

The Children’s Online Privacy Protection Act already plays an important role in protecting children under 13. Teens, however, need us to nurture and guide them so they can gradually assume more responsibility for governing their own online interactions, a skill required of modern adults.

Tracy C. Miller is a senior research editor with the Mercatus Center at George Mason University and author of the upcoming study “Improving Data Security: Existing Policy and Options for Reform.”

