
Artificial intelligence doesn’t require burdensome regulation



One of the most important issues Congress will face in 2018 is how and when to regulate our growing dependence on artificial intelligence (AI). At the National Governors Association’s summer meeting, Elon Musk urged the governors to push forward with regulation “before it’s too late,” calling AI an “existential threat to humanity.”

Hyperbole aside, there are legitimate concerns about the technology and its use. But a rush to regulation could exacerbate existing problems, or create new ones we’re not prepared to handle.

More specifically, the solution to the issues raised by Musk and others doesn’t lie in AI itself, but in the data it uses. As a result, any legislative solution to these challenges needs to focus on that data. But before Congress can draft legislation to address AI, there are several other factors regulators need to consider.


Define AI and Machine Learning with Precision

To begin with, one of the biggest issues in the world of AI is the lack of a clear definition of what the technology is, and what it is not. AI emerged as one of the most-used buzzwords of 2017, and marketers sowed confusion by applying the term in the broadest possible sense.

But AI isn’t simply the sci-fi vision of machines demonstrating human intelligence; it’s an amalgam of technologies such as machine learning, deep learning and others. Indeed, AI and machine learning are often treated as the same technology and used interchangeably, when in fact machine learning is just one component of the broader field of AI.

Any future regulation aimed at AI needs to be built on clear and distinct definitions of AI, machine learning and all component technologies. Regulation and governance efforts should then focus on these subcategories and individual technologies, not on the broad category of AI as a whole.

Provide Room for Case Law to Help with Regulation

We like to think that every innovation we create is unique. But the reality is that not all challenges created by AI are new, and our legal system has responded to many of them before. Indeed, one frequently overlooked response to this kind of technical innovation is the emergence of new case law.

New case law will inevitably emerge as AI and machine learning are deployed more widely in practice. Lawsuits over accidents caused by AI are all but certain. The judiciary should be given room to decide these cases on a case-by-case basis. Regulation built upon that case law can improve protections and ensure that any new regulatory burden reflects how AI is actually being used, rather than speculation about how it might be.

Focus on the Data

Once clear definitions of AI have been set forth, and once the judiciary’s role in shaping liability around AI is established, regulators should focus on the data used to fuel these technologies. Data is the key to the future of AI, which requires massive amounts of it to train effective models.

Data in the U.S. has become a commodity, primarily because we treat it as a sales feature. Social media platforms like Facebook have shown that giving people a place to share photos isn’t their primary purpose; it’s a byproduct of their real corporate goal: gathering as much data on their users as possible in order to sell advertising. Data has always been the long game for many businesses.

Take healthcare, for example, where we’re generating more data than ever before. We can now process 12 whole human genomes in three days or less. Next year, that throughput will likely triple, and the entire process will become even cheaper. This data, combined with the power of AI and machine learning, has the potential to unlock the promise of personalized medicine and accomplish something no legislation could: significantly lowering medical costs while extending the reach of healthcare providers.

But the power behind these trends doesn’t come from AI alone; it comes from the data being fed to the algorithms making the decisions. That is precisely where regulation should focus.

The passage of the General Data Protection Regulation (GDPR) in the EU is a good example of a regulatory response to a changing innovation landscape. At its core, the GDPR is about giving citizens back control of their personal data, while attempting to simplify the regulatory environment businesses must navigate.

As we think about regulating AI, Congress should look at the GDPR not as a one-time reaction to cyberattacks and data breaches, but as a roadmap for how we can better protect our personal data as it is increasingly used by advanced technologies like AI.

Regulation clearly matters in the field of AI and will be required sooner or later. But the stakes call for a judicious approach, one that considers the broader implications for the technology and its ability to benefit society.

Matthew Carroll is president and CEO of Immuta. He previously served as CTO of CSC’s Defense & Intelligence group, where he advised U.S. government leadership on data management and analytics issues.


