A word of caution before AI becomes standard in health care

A CES attendee wears an iSyncWave EEG helmet at the iMediSync booth during the CES tech show, Friday, Jan. 6, 2023, in Las Vegas. iSyncWave is the first-ever integrated EEG and neurotherapeutics device that provides telemental care through an AI digital brain health platform. (AP Photo/Rick Bowmer)

When I was in medical school, long before artificial intelligence (AI) reached medicine, I underwent an intense four-year period of factoid memorization, absorbing everything like a sponge, though I soon forgot over 95 percent of what I’d learned. It seemed like a fruitless exercise at the time, but in later years, that old knowledge would reconstitute itself whenever I encountered a new illness. It formed a framework for me to learn from. This human form of learning, with the brain as the computer, is the standard that AI aspires to reach.

But now, artificial intelligence has advanced to the point where it has become a highly useful medical tool. In fact, a new AI tool known as “Sybil” predicted which patients would develop lung cancer within a year with 86 to 94 percent accuracy. It reportedly works by picking up tiny abnormalities in areas where lung cancer could occur, before these changes are visible to radiologists. It examines only a single scan at a time, but it relies on huge training databases to recognize what is abnormal.
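To make that idea concrete, here is a minimal sketch, not Sybil’s actual code, of how a tool like this scores one new scan against patterns learned from a large training set. The features, model and numbers are all hypothetical stand-ins.

```python
# Illustrative sketch only: a toy risk model standing in for a tool like Sybil.
# The real system learns from huge CT databases; here synthetic numbers play
# that role, and a single new "scan" is scored against what was learned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training database: 10,000 past scans, each reduced to
# 5 numeric image features, with known cancer outcomes (0 or 1).
X_train = rng.normal(size=(10_000, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(size=10_000) > 1.5).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A single new scan: the model needs only this one exam at prediction
# time, but its judgment comes entirely from the training database.
new_scan = rng.normal(size=(1, 5))
risk = model.predict_proba(new_scan)[0, 1]
print(f"Estimated lung-cancer risk score: {risk:.2f}")
```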

Concerns raised so far include a lack of diversity in the training databases and the potential for overdiagnosis. Still, many experts, including Dr. Robert Cerfolio, chief of thoracic surgery at NYU Langone Health (full disclosure: I am a professor of medicine and the medical director of “Doctor Radio” at NYU Langone Health), told me it shows “great potential.” The key to saving lives from cancer remains early diagnosis, and AI can help.

Dr. Miriam Bredella, a prominent professor of radiology at Harvard, made an excellent point on SiriusXM’s “Doctor Radio Reports” last week: a crucial purpose of AI in radiology is to rescreen the many thousands of studies (X-rays, CT scans, MRIs) that were done for one reason and use an algorithm to find something else, such as the amount of saturated fat in bone, which can correlate with other health problems, including insulin resistance, diabetes and osteoporosis.
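In code, that re-screening workflow amounts to sweeping an existing archive with a second algorithm. Here is a minimal sketch, with a made-up archive, measurement and threshold, just to show the shape of the idea:

```python
# Illustrative sketch only: "opportunistic" re-screening of an archive.
# Studies were acquired for one reason; a second algorithm mines them
# for something else (here, a hypothetical bone-fat measurement).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical archive: 5,000 stored CT studies, each already reduced
# to a single derived number, e.g. an estimated marrow fat fraction.
archive = {f"study_{i:04d}": rng.uniform(0.2, 0.8) for i in range(5_000)}

FLAG_THRESHOLD = 0.7  # made-up cutoff for "worth a clinician's look"

flagged = [study for study, fat_fraction in archive.items()
           if fat_fraction > FLAG_THRESHOLD]

print(f"{len(flagged)} of {len(archive)} archived studies flagged "
      f"for possible insulin resistance or osteoporosis work-up")
```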

There are now hundreds of image-specific AI algorithms across the fields of radiology and cardiology. The Food and Drug Administration (FDA) cleared or approved more than 520 AI-enabled medical devices between 2019 and this January.

But what about direct clinical applications? According to a recent study on the clinical use of AI in osteoporosis published in the journal Nature, “Applying the AI algorithms in a clinical setting could help primary care providers classify patients with osteoporosis and improve treatment by recommending appropriate exercise programs.” That’s one application that could certainly be helpful, since osteoporosis is frequently missed.

But I am worried that there is not enough concern about AI’s limitations.

While OpenAI’s ChatGPT has been found to answer most medical test questions accurately, no one knows how far a practicing physician (or their patient) can rely on these models for real-time information, and there is a real risk of patients acting on AI answers without consulting their doctor.

Keep in mind that AI works by pattern recognition: it compares what it’s seeing against hundreds of thousands of prior cases in its databases and answers questions by referencing those same databases. However, AI will always lack a well-trained physician’s flexibility and deep insight.
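Pattern recognition of that kind can be pictured as nearest-neighbor matching: a new case simply inherits the label of its closest precedents in a reference database. The minimal sketch below, using synthetic data, also shows the limitation the paragraph describes: the model can only echo its database, not reason past it.

```python
# Illustrative sketch only: pattern recognition as nearest-neighbor lookup.
# The "answer" for a new case is whatever the most similar past cases say;
# there is no judgment beyond similarity to the reference database.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reference database: 1,000 past cases, 4 features each,
# labeled 0 (benign) or 1 (abnormal).
database = rng.normal(size=(1_000, 4))
labels = (database.sum(axis=1) > 0).astype(int)

def classify(new_case: np.ndarray, k: int = 5) -> int:
    """Label a new case by majority vote of its k closest database entries."""
    distances = np.linalg.norm(database - new_case, axis=1)
    nearest = labels[np.argsort(distances)[:k]]
    return int(nearest.mean() >= 0.5)

new_case = rng.normal(size=4)
print("Pattern match says:", "abnormal" if classify(new_case) else "benign")
```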

Japan has been leading the way with AI in health care, with shared applications and AI diagnostic impressions delivered via the cloud. One Japanese consortium, which includes Hitachi and Microsoft Japan (using the Iguana interface engine), has been up and running for several months. It targets smaller hospitals and rural areas of Japan with doctor shortages, where AI helps doctors make diagnoses and taps the cloud for data sharing. While this sounds ideal, an online survey in Japan published last year found that patients were more wary of the practice than their physicians were, citing a lack of regulation and accountability.

There are no stop signs for AI in health care. According to the World Economic Forum, AI applications in clinical practice in the U.S. have tripled since 2020. Research and Markets projects a global market for AI health care solutions of more than $200 billion by 2030.

Here in the U.S., as health care leaders follow Japan’s lead and AI gears up for clinical use, AI training seminars are under way at many universities and medical centers, including MIT and NYU Langone Health. At MIT, an upcoming seminar in May will not only provide a foundation of understanding but also teach health care experts how to develop an AI product to fit their needs. At NYU Langone Health, however, chief digital and information officer Nader Mherabi has cautioned the faculty that generative artificial intelligence holds “transformative potential for operations, patient care, educational and research missions but requires further development and careful consideration before use.”

The world would be wise to follow Mherabi’s cautionary directive.

Marc Siegel, M.D., is a professor of medicine and medical director of Doctor Radio at NYU Langone Health. He is a Fox News medical correspondent and author of the new book “COVID: The Politics of Fear and the Power of Science.”
