
US must not only lead in artificial intelligence, but also in its ethical application


Artificial intelligence (AI) is sometimes referred to as a herald of the fourth industrial revolution. That revolution is already here. Whenever you say “Hey Siri” or glance at your phone to unlock it, you’re using AI. Its current and potential applications are numerous, from medical diagnosis to the predictive text that anticipates what you type.

As chairwoman of the U.S. House Committee on Science, Space, and Technology, I am particularly interested in the potential for AI to accelerate innovation and discovery across the science and engineering disciplines. Just last year, DeepMind announced that its AI system AlphaFold had solved a protein-folding challenge that had stumped biologists for half a century. It is clear that not only will AI technologies be integral to improving the lives of Americans, but they will also help determine America’s standing in the world in the decades to come.

However, the vision of AI’s role in humanity’s future isn’t all rosy. Increasingly autonomous devices and ever-growing volumes of data will exacerbate traditional concerns, such as privacy and cybersecurity. Other dangers of AI have already arrived, appearing as patterns of algorithmic bias that often reflect our society’s systemic racial and gender-based biases. We have seen discriminatory outcomes in AI systems that predict credit scores, health care risks, and recruitment potential. In these domains, we must mitigate the risk of bias both in our decision-making and in the tools we use to augment it.

Technological progress does not have to come at the expense of safety, security, fairness, or transparency. In fact, embedding our values into technological development is central to our economic competitiveness and national security. Our federal government has the responsibility to work with private industry to ensure that we are able to maximize the benefits of AI technology for society while simultaneously managing its emerging risks.

To this end, the Science Committee has engaged in efforts to promote trustworthy AI. Last year, one of our signature achievements was passing the bipartisan National Artificial Intelligence Initiative Act, which directs the Department of Commerce’s National Institute of Standards and Technology (NIST) to develop a process for managing AI risks.

NIST may not be the most well-known government institution, but it has long conducted critical standards-setting and measurement research used by federal agencies and private industry alike. Over the past year, NIST has held a series of workshops examining topics such as AI trustworthiness, bias, explainability, and evaluation. These workshops are aimed at helping industry professionals learn how to detect, catalog, and ultimately prevent the harmful outcomes that erode public trust in AI technology.

Most recently, NIST has been working to construct a voluntary Risk Management Framework intended to support the development and deployment of safe and trustworthy AI. This framework will help inform the work of AI researchers in both the public and private sectors as they pursue game-changing research. NIST is soliciting public comments until Sept. 15, 2021, and will develop the framework in several iterations, allowing for continued input. Interested stakeholders should submit comments and participate in NIST’s ongoing process.

We know that AI has the potential to benefit society and make the world a better place. For the U.S. to be a true global leader in this technology, we must ensure that the AI we create does just that.

Eddie Bernice Johnson represents the 30th District of Texas and is chairwoman of the House Committee on Science, Space, and Technology.

