
Requiring parental permission for children’s AI use is shortsighted and harms education


Sen. Rick Scott (R-Fla.) is prioritizing his Artificial Intelligence Shield for Kids (ASK) Act, which pursues the laudable goal of protecting children from harm. However, the bill’s overly broad definition of who counts as a child and its problematic definition of artificial intelligence (AI) need to be fixed.

Recently, a significant amount of attention has been paid to artificial intelligence. Large language models, such as ChatGPT, have awakened the country to the technology by writing human-like language (albeit with issues like hallucinating articles and making false accusations). AI poses some problems. It has been used by nefarious individuals for criminal purposes, and some have suggested, with great hyperbole, that it might lead to human extinction.

However, it has also generated significant good. It has helped develop a drug to combat dangerous bacteria. It can help farmers protect their crops from pests. It has written church sermons, and it can help place a drive-through order and answer medical questions. It can help identify heart failure, improve dentistry, and help paralyzed humans regain mobility. It has even helped The Beatles finish a final song.

Children, of course, need to be protected. Despite all the good things AI can do, not every AI system is appropriate, or healthy, for children. Indeed, issues with AI use in social media, including algorithms that encourage viewing to the point that some call it addictive, warrant parental oversight. App and website developers who target youthful audiences need to provide parents with tools to help them guide, monitor and protect their children.

Scott’s ASK Act embodies two fundamental principles: parental permission and feature disablement. However, it considers everyone under 18 to be a child who needs this consent, and it targets numerous types of computing systems beyond the potentially harmful AI in social media that is the presumed target of the bill. The bill is also so broad that it will likely harm computer science and digital citizenship education.

The first consideration is who is covered. The bill applies in full to everyone under 18 years of age. It has no provisions for emancipated minors, 17-year-old military enlistees or officers, or bright students going to college at age 17 or earlier. Also, just as we allow children to learn to drive at 15 (or earlier) and gain their license at 16 in many states (or earlier in some), providing graduated access to covered AI systems makes sense from a youth rights perspective. It also provides graduated responsibility with increasing maturity.

The second consideration is what technologies are covered. The ASK Act gets its definition of AI from the 2019 John S. McCain National Defense Authorization Act, which has a very broad definition (one that wasn’t designed to determine which systems to protect youth from). This definition includes any system that “performs tasks under varying and unpredictable circumstances without significant human oversight,” “can learn from experience and improve performance,” is designed to “think or act like a human,” can “approximate a cognitive task” or solve “tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” It also includes systems that “act rationally” and achieve “goals using perception, planning, reasoning, learning, communicating, decision making, and acting.”

Not only would this prevent organizations from using AI (or even software that doesn’t meet most definitions of AI) for beneficial activities like preventing cyberbullying and supporting health care; it would also limit its practical use in education, most video games and artistic work. It would severely restrict computer science education and even create a statutory AI exposure issue: one child could be deemed to violate the law by creating an AI for a programming assignment and providing it to another child (perhaps even while working on it together) without first obtaining the other child’s parent’s permission.

While this bill targets a laudable purpose, it needs far more nuance before becoming law. It must exempt most, if not all, typical education uses, such as using AI to provide adaptive education that responds to student needs, digital citizenship education, and computer science education in programming, AI and cybersecurity. It should also exempt AI algorithms designed to identify risks and protect children from them. Source code sharing sites, like GitHub, and developer-focused instructional sites should also be excluded to allow all youth access to these resources. The bill should also account for the different maturity levels of youth, for example by mirroring the different age levels contemplated by the Children’s Online Privacy Protection Rule and the Children and Teens’ Online Privacy Protection Act, among others.

Enacting the current bill as law would have profound implications for America’s future competitiveness by making education more difficult and potentially excluding some youth from the ability to benefit from, and learn to program, AI technologies. The bill may also face the kinds of First Amendment challenges that have been suggested for various state and federal regulations of youth online content. This absolute parental consent requirement doesn’t make sense in an era when education is critical to keeping up with foreign competitors and preparing for a changing job market.

Jeremy Straub is the director of North Dakota State University’s Institute for Cyber Security Education and Research, an NDSU Challey Institute Faculty Fellow and an associate professor in the NDSU Computer Science Department. The author’s opinions are his own.


