
Digital data guardrails are the first step in regulating AI


Over the past several months, I’ve often been asked by reporters, fellow members of Congress and constituents alike what a future with artificial intelligence might look like and what role Congress should play in regulating this revolutionary technology. As the only member of Congress with a graduate degree in AI, and as a computer programmer who has written millions of lines of code over a 30-year career in software development, I’ve devoted substantial thought to these questions.

The postulated dangers of artificial intelligence have long been embodied in our science fiction movies, our sensationalized news and our own worst imaginations: evil robots with red laser eyes rising to take over the world, or a nuclear apocalypse instigated by sentient computers seeking to destroy humanity, of which AI chatbots might seem the first harbingers. The reality is that AI is something completely different. The risks, or what we know of them, are far more abstract, but still equally concerning. Balanced against those risks are the potential benefits of AI, which could lead to an explosion in creativity and productivity unlike anything human society has ever experienced.

Artificial intelligence is already being applied in a multitude of industries and in a myriad of applications to better our world. It is being employed in schools to detect potential shooters and alert administrators and law enforcement before a single shot is fired. It is being used to accurately detect cancerous tumors earlier, more cheaply and more accessibly than ever before. It is being deployed to implement improved agricultural practices that increase yields while simultaneously reducing water waste and lowering environmental impacts.

In the future, it could bolster our national defenses to help safeguard us from cyberattacks at the hands of America’s adversaries. It will almost certainly revolutionize the way we work, altering the way society functions and conceptualizes its free time more dramatically than any change since the Industrial Revolution.

Artificial intelligence has incredible potential to reshape the world we live in, but it’s undeniable that its integration into our society will cause economic and social disruptions on par with other technological innovations in our history, such as the development of the printing press and the birth of the internet. The benefits to society will likely far outweigh the costs, but dealing with the resulting change will undoubtedly require a major shift in our education system, our workforce and our laws.

We must make room for the benefits of AI technology while putting guardrails around its misuse and mitigating its potential impacts. We will need to develop new legal frameworks to answer the questions surrounding intellectual property and the creative commons that arise with generative AI technology. We must also implement federal protections for personal data to guard against the misuse of AI in violating digital privacy and fostering the spread of misinformation. In the long term, we will need to realign our education system to offset the disruptions this technology will bring and ensure that the workforce of the future is adequately prepared for the work of the future.

In addition to putting guardrails around industry, we must also put guardrails around government. It is vital that we prevent the use of artificial intelligence to create a surveillance state, such as the one China has been developing, and that we ensure the global standards applied to artificial intelligence reflect the American values of freedom and liberty over those of tyranny. Furthermore, we must ensure that American technology keeps pace with that of our adversaries to protect our critical systems from attack and safeguard our democracy from foreign manipulation.

These are the AI challenges we can already foresee facing our society over the next decade. Other challenges will undoubtedly emerge as AI plays an increasingly important role in our economy. So where should Congress begin in creating a regulatory framework for AI?

The answer begins with tackling the most obvious and immediate problems we already know AI will bring. First and foremost, we must create guardrails around the data that feeds the machine learning algorithms that power AI, addressing the growing tragedy of the commons inherent in the training of today’s AI and answering the major ethical questions that threaten to hold the industry back instead of pushing it forward.

This year, we can and must enact a unified federal standard for personal data use that will ensure digital companies are forthright with customers about what information they are gathering and what they may do with that data. By returning control over data to consumers, we can avoid monopolies on digital data collection and ensure companies both large and small have ample opportunity to innovate. Protecting digital data will also guard against apps such as TikTok, which seek not only to collect large quantities of data from Americans but also to use AI to generate insights from that data to shape and manipulate public opinion.

We must also be cognizant that America became one of the most dynamic digital economies in the world by imposing regulation only when necessary, allowing innovation and entrepreneurialism to thrive. We must protect artists, writers and creators from unwelcome plagiarism or manipulation of their work on AI platforms and give consumers control over their data while avoiding overregulation that will stifle the inventiveness that made America the world leader in the last technological revolution.

It is vital that Congress act to provide clarity and guardrails on these issues, rather than simply abdicating its responsibilities to a new government bureaucracy as the European Union is poised to do. Not only would doing so be an empty action that simply kicks the can down the road, but it would also remove the people’s most direct voice in government from the regulatory equation.

Two weeks ago, the House Committee on Energy and Commerce held a hearing on forthcoming major data privacy legislation. This bill has been in development for nearly a decade, and if implemented correctly it will address the major concerns surrounding the use of personal data that are currently prevalent within the AI community while putting reasonable guardrails around the use of the data that feeds AI development. Congress must also work to set a federal ethics standard for the development and use of artificial intelligence that will guide our researchers, businesses and government agencies as they strive to unlock the potential of AI in the pursuit of a new and better future. As a member of this committee, I am committed to working alongside my colleagues on both sides of the aisle to accomplish that goal.

Government will play a vital role in addressing the vast and sweeping changes artificial intelligence will bring to the world in which we live. Congress must strike an appropriate balance between safeguarding against the dangers of AI and enabling its ethical development and deployment. If we accomplish this, the potential benefits of AI promise nothing less than an explosion of human productivity and the realization of long-held human goals such as universal education and the eradication of poverty. Guarding against the risks of AI while still enabling human society to reap its benefits will be the work of our generation.

Jay Obernolte is a video game developer and business owner serving on the House Energy and Commerce Committee and as chairman of the Investigations and Oversight Subcommittee of the House Science, Space, and Technology Committee. He is proud to represent California’s 23rd District, including the High Desert and Inland Empire.

