
Underregulating tech is a relic of the '90s. AI is an urgent call for change.

The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, Tuesday, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

In the 1990s, America chose not to regulate new digital technologies as they arose. Today, as AI takes center stage, even leading technology companies urge a new approach to regulation.

There is no excuse for our country to continue sanctioning a regulatory Wild West in the digital sector. It is past time to establish some guardrails. 

The regulatory landscape for AI is evolving rapidly. In a single week, the president signed a sweeping executive order requiring rigorous safety measures from companies and new AI standards for federal agencies, and Vice President Kamala Harris shared a framework for addressing the harms AI is already causing and for acting with urgency to put the technology on a path that prevents future risks.

These are laudable and necessary steps toward ensuring that the benefits of AI are shared widely and its harms limited. Now it is time for lawmakers to overcome any remaining trepidation about regulating AI and to act immediately to cement AI safety in the law for generations to come.

Policymakers in the U.S. have historically eschewed regulating the tech sector for fear of stifling innovation and American competitiveness in the global marketplace. But that argument is a relic of the late '90s, when cyberspace was new and its impact unknown.

The prevailing wisdom held that technology deserved to be treated differently from everything that had come before. This “cyberlibertarian” point of view translated into a “hands-off” regulatory environment for the internet. Technology companies and their supporters pressed this approach, touting the internet’s radical openness and sense of possibility, the lure of a new kind of participatory and global politics, and great riches for those able to harness it.

It was against that backdrop that a key provision was passed in the Communications Decency Act — known as Section 230 — immunizing tech companies from accountability for extraordinary harms their platforms would eventually cause. Nearly three decades later, we continue to feel the consequences of this outdated policy. 

Today, women, children and people of color are more vulnerable to online sexual harassment, revenge porn, voter suppression and eating disorders because of the legal loopholes Section 230 gave to tech companies. The last of those harms is now the subject of a sweeping lawsuit brought by dozens of states against the tech behemoth Meta.

Many hoped the law might meet its end earlier this year in cases against Google and Twitter, only to see the Supreme Court uphold the status quo. Now, we are confronting an AI revolution that could compound those harms if Congress fails to set appropriate legal boundaries for tech companies this time around.  

The urgency to act grows by the day. Late last month, a group of researchers revealed weaknesses in the digital guardrails tech companies have built to keep AI from endangering society. The researchers found they could easily break those safety measures and get the systems to generate toxic material. Even well-intentioned customization, such as adapting a tool to tutor children, can degrade the guardrails, exposing users to hate speech and abusive messages.

Among other measures, the executive order’s requirement that new technologies be carefully tested before they are released should become the law of the land. That is one of the best ways to protect society against the worst excesses of AI, from ongoing bias and discrimination to catastrophic cybersecurity risks.

Lawmakers should also create incentives in the law to ensure that AI tools serve people equitably. Consider how the dangerous effects of hate speech on social networks, or the risk of biased facial recognition in the criminal justice system, might thrive and compound in an unregulated tech environment.

Governments around the world have an indispensable role in making tech safer for users. As the head of a major philanthropy, the MacArthur Foundation, I also embrace my sector’s role in addressing the social impacts of technology.  

The president’s executive order accounts for issues our grantees have been working on for years and leans heavily on the Blueprint for an AI Bill of Rights, which was created under the leadership of Dr. Alondra Nelson. 

In a new initiative, we join with nine other philanthropies to do what we can to help mitigate AI risks. Our priorities are aligned with Vice President Harris’s framework to advance AI governance to benefit people and society, centering individuals and communities most at risk of harm. 

Together, we are committing more than $200 million toward public interest efforts, including new research, policy frameworks and advocacy. 

Our country can and should address the harms of technology, regulate now based on what we know, close the most dangerous loophole of Section 230, support digital rights, establish much-needed guidelines for the development of AI and create a new set of incentives that ensure we can harness the best of what new technologies offer societies. 

John Palfrey is president of the John D. and Catherine T. MacArthur Foundation, a former professor at Harvard Law School, and served as Executive Director of the Berkman Klein Center for Internet & Society. 


