
Bills aimed at ‘protecting’ kids online throw the baby out with the bathwater

FILE – The Meta logo is seen at the Vivatech show in Paris, France, June 14, 2023. Instagram and Facebook’s parent company Meta is adding new parental supervision tools and privacy features to its platforms beginning Tuesday, June 27. The changes come as social media companies face increased scrutiny over how they impact teens’ mental health. (AP Photo/Thibault Camus, File)

The phrase “don’t throw the baby out with the bathwater” warns us not to discard good things in our efforts to get rid of the bad. As the U.S. Senate again gears up to consider multiple bills intended to protect teens online, senators must be aware that substantially increasing the costs of creating teen-friendly platforms could lead online platforms to gate access in ways that exclude teens altogether.

Teens draw many real benefits from social media and messaging apps. Researchers have found that teens “use social media in the service of critical adolescent developmental tasks, such as identity development, aspirational development, and peer engagement” in ways that “reflect, complement, and reinforce off-line relationships, practices, and processes.”

On the other hand, social-media usage has been linked to mental-health harms for teen users, including an increased risk of suicide, stemming from online bullying and developmentally inappropriate content. In response to demand from teen users and their parents, social-media companies have developed tools to help limit teens’ exposure to such harms. Nevertheless, it is clear there is both a baby and dirty bathwater here.

A year to the day after it cleared earlier versions of the bills in 2022, the Senate Commerce Committee has scheduled a July 27 markup for the Kids Online Safety Act (KOSA), reintroduced by Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.), and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), reintroduced by Sens. Ed Markey (D-Mass.) and Bill Cassidy (R-La.). While the bills and the concerns that animate them are bipartisan, the solutions they propose will likely lead to age-gating and exclusion.

KOSA establishes a duty of care that would require covered platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor” by preventing or mitigating various potential harms. While the bill explicitly states that it does not require age verification, the duty-of-care standard it would establish suggests that platforms would need to take steps to determine whether users are under 17.

COPPA 2.0 requires verifiable parental consent for kids and verifiable consent from teens before online platforms can collect certain personal information or allow users to post that information. The bill’s definition of “personal information” includes IP addresses and other information that permits the identification of “any device used by an individual to directly or indirectly access the internet or an online service, online application, mobile application, or connected device.” The bill defines “directed to children or teens” based on a number of factors suggesting that a platform intends its content for those audiences, and it extends to platforms that are “reasonably likely” to be used by children or teens. This goes far beyond the “actual knowledge” standard in the original Children’s Online Privacy Protection Act (COPPA).

In other words, both bills set up a situation where online platforms face heightened requirements to serve teens and kids, and both implicitly require age verification to make those requirements work. How else could a platform prevent or mitigate harms to minors? How else could it know to obtain verifiable consent before collecting something as benign as a device identifier?

The original COPPA’s verifiable-parental-consent requirement led many online platforms to simply ask users whether they are at least 13 before allowing them access. Would-be users who admit they are under 13 are excluded altogether. Very few entities subject to COPPA actually go through the process of obtaining verifiable parental consent, probably because targeted advertising to kids doesn’t generate enough revenue to justify the costs. There is little reason to expect the situation to be different for teen users, who largely lack disposable income or online-payment options.

KOSA imposes heightened notification requirements for targeted advertising, while COPPA 2.0 effectively bans it altogether. This presumably means that subscription models would be the only way online platforms could earn revenue from services directed at children or teens. It’s unclear whether this would be a better deal for most teens or parents, who often prefer free access even when it comes with targeted ads. As a result, platforms subject to these rules will have to decide whether serving teens who are unwilling or unable to pay is worth it. The answer will often be no.

Ben Sperry is a senior scholar with the International Center for Law & Economics.


