ChatGPT could transform society — and its risks require quick regulation

AP Photo/Peter Morgan
A ChatGPT prompt is shown on a device near a public school in Brooklyn, N.Y., on Jan. 5, 2023. New York City school officials have started blocking the impressive but controversial writing tool that can generate paragraphs of human-like text.

A new technology has been released that has the potential to transform society perhaps even more dramatically than Amazon or the iPhone did, but many are glossing over the risks that come with such a powerful advancement.

Barely two months have passed since the research lab OpenAI debuted ChatGPT, and there is little doubt that the program represents a massive leap in generative artificial intelligence (AI) with extraordinary potential.

The chatbot — which cannot think for itself but is trained to generate conversational text — can be used for a wide array of applications, from writing college-level essays and poetry in a matter of seconds to composing computer code and legal contracts, or for more playful uses such as writing wedding speeches, hip-hop lyrics or comedy routines.

It’s already abundantly clear that ChatGPT has far-reaching implications and potential uses for education, entertainment, research and especially our workforce. But the most alarming ramifications of such a vast technological leap will be seen in disinformation, cybercrime and terrorism, because ChatGPT carries with it a tremendous risk of misuse. This nascent technology is already becoming a tool for hackers and phishing schemes.

According to Check Point Research, ChatGPT is empowering low-skill hackers and other bad actors to transform basic phishing schemes into more sophisticated, professional-looking attacks.

In two instances in December alone, bad actors used ChatGPT to create malware, or software specifically designed to damage, disrupt or gain unauthorized access to a computer system. One of those bad actors admitted it was the first time he had attempted to create malware, and that the technology played a significant role in helping him write the script.

Check Point Research quickly concluded that the “cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code” at an alarming rate. Others agree and are quick to caution that we must “demand rules of the road for these models, both from the companies that produce them and from Congress.”

Regulation is always imperfect and always feels premature for a nascent technology, but the risks here are so boundless that regulatory action is required almost immediately, if not sooner. Now is the time to move beyond the high-level ethical guidance we have seen across governments and the private sector, and to push for intentional, concrete regulation, especially from Congress.

A regulatory and legal framework for artificial intelligence, including general artificial intelligence, is mandatory. Detractors might say that regulation stifles innovation, but the United States’ anti-regulatory fever will be counterproductive in the case of ChatGPT. Transparency is necessary in any AI regulatory scheme, but far from sufficient.

Most importantly, we need to identify measures for accountability. We can require companies to add watermarks or other indicators that let consumers know when text and images have been generated by AI, but we’ll also need to consider what other restraints and safeguards are required to curb abuses, many of which we’ve yet to identify. And we will need to find ways to encourage the mitigation of bias. The European Union’s AI Act, proposed in 2021, is currently under consideration and can provide some guidance to U.S. lawmakers.

Regulation is critical, but it’s not just governments that have a role to play. It’s incumbent upon the companies that produce this AI — as well as consumers — to act more urgently. The technology is here to stay. We’ll need new technologies around it — watermarks, detection tools, and more. We’ll also need new understandings of how education should work and what art is — or isn’t. And we’ll need to be on high alert for the many risks it might bring our way.

Regardless, time is of the essence. OpenAI is already working on an even more powerful successor, GPT-4, which could be released within months. Our elected officials must enact safeguards before these fledgling technologies become deeply embedded in American society.

David Hickton is the founding director of the University of Pittsburgh’s Institute for Cyber Law, Policy and Security, which hosts the Pittsburgh Task Force on Public Algorithms. He is a former U.S. attorney for the Western District of Pennsylvania.


