Who pays the bills when AI kills?
Artificial intelligence (AI) is revolutionizing many aspects of our lives, including such critical areas as health care. Not only is AI providing more accurate diagnoses and personalized treatments, it is also making quality care more accessible to rural patients.
But as AI becomes more pervasive, it poses unique challenges in determining who bears responsibility when AI systems cause harm.
Modern AI systems are based on statistical methods. As providers increasingly rely on AI to make critical decisions, that statistical nature guarantees that some AI systems will occasionally fail to perform as intended, leading to more and more AI-related disputes. So the question of who pays when something goes wrong dominates much of the debate around AI in medicine, as in other industries.
The concept of “wrong” ranges from physical harm to privacy violations to biases that AI can perpetuate more easily across social media platforms.
Today’s product and medical liability insurance never anticipated the advent of AI and its systemic effect across the health care ecosystem. Unfortunately, if manufacturers fear being held liable in the event of an accident and insurance lags behind, patients will lose the benefit of new life-saving innovations. Productive use of AI in 21st-century health care requires a 21st-century liability system.
Before the widespread use of AI, most legal systems held humans responsible for their actions, based on negligence or even intentional wrongdoing. However, as AI systems become more autonomous, it is hard to assign blame to a human actor in many cases.
Liability questions for autonomous systems and software-performance issues are not new. But because so many parties are involved in an AI system (the data provider, designer, manufacturer, developer, user and the AI system itself), it is difficult to establish liability when something goes wrong. Was the damage the result of instructions not being followed? Or was it caused while the AI system was still learning and collecting data?
As the law currently stands, the user of an AI system is less likely than the manufacturer to be considered liable. A 2017 class action suit against Tesla blamed the manufacturer, alleging that the automated vehicle’s autopilot system contained inoperative safety features and faulty enhancements. When the dust finally settles, Tesla could end up paying damages upwards of $115 million, not to mention losing hundreds of millions of dollars in reputational and shareholder fallout.
Health care is a particularly sensitive area for manufacturers. Unlike the automotive industry, where new insurers are popping up to cover specific types of self-driving scenarios, health care providers and insurers are increasingly wary of litigation threats related to AI.
Recent Food and Drug Administration (FDA) guidance suggests that the agency wants to push liability for cybersecurity and software issues, including AI, onto manufacturers rather than hospitals and providers. This philosophy is particularly relevant for home care, one of the much-talked-about ways AI systems can contribute to delivering better equity in health care. With this edict, all of the promising innovation and usage of AI may be at risk. Manufacturers don’t want to assume this liability, or “pay when AI kills,” given the potential costs, since there is no insurance structure in place today to cover this scenario.
Today’s machine learning systems form the bedrock of the recent AI wave in health care. They get smarter as they “learn,” relying on advanced algorithms that process huge amounts of data to identify patterns and predict outcomes. The FDA has even released draft guidance in recent months that provides manufacturers with more freedom and responsibility when building AI systems. Most highly complex AI models depend on algorithms called deep neural networks, which operate almost as “black boxes,” making it difficult to understand how they arrived at a decision.
In fact, it is unclear in some cases whether people could even understand how the systems came to their decisions. Accordingly, mistakes in predictions may occur without a clear understanding of why or how a given AI system arrived at its output.
Although statistical error has always been a part of medicine, health care manufacturers find this prospect alarming. In particular, medical technology companies fear that the lack of transparency makes it difficult, when investigating problems in products already on the market, to pinpoint how and why an issue developed.
Given the vast amounts of data and interdependencies that AI entails, the prospect of creating a validation system defensible enough to withstand an FDA audit all but guarantees that many medical technology companies won’t move forward with new innovations without some sort of insurance safety net. Again, it’s a shame to curtail AI’s potential simply because it will take time for insurers to “catch up.”
Despite these concerns, a risk-based and statistical approach to medicine has long been a part of health care. It’s well known that patients have different anatomies and react differently to different treatments. No medicine is ever 100 percent effective and safe for every patient.
Aspirin, for example, is a life-saver for many consumers, but some patients experience stomach upset and heartburn as well as other more serious side effects. Nonetheless, the risks are considered low enough to market aspirin over the counter in grocery stores as well as airports, making it easily accessible.
The clinical trials behind many drug evaluations also focus on improvement for the masses, not for everyone. By definition, clinical trials attempt to assemble a sample that is statistically representative of broader society, so that if something works for a 500-person test group, the assumption is that it will work equally well for 500,000 people. Trials are typically deemed successful when there are statistically significant differences between the trial and control groups. A better outcome for 87 percent of the participants could be considered a good result, notwithstanding the remaining 13 percent.
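To make “statistically significant” concrete, here is a minimal illustrative sketch, using entirely hypothetical group sizes and improvement rates rather than figures from any real trial. It runs a standard two-proportion z-test comparing a treatment group in which 87 percent of patients improve against a control group in which 70 percent do:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical figures for illustration only: 435 of 500 treated patients
# improve (87 percent) versus 350 of 500 control patients (70 percent).
treated_improved, treated_n = 435, 500
control_improved, control_n = 350, 500

p_treated = treated_improved / treated_n    # 0.87
p_control = control_improved / control_n    # 0.70

# Pooled improvement rate under the null hypothesis of no treatment effect.
p_pooled = (treated_improved + control_improved) / (treated_n + control_n)
std_err = sqrt(p_pooled * (1 - p_pooled) * (1 / treated_n + 1 / control_n))

# Two-proportion z-test: how many standard errors separate the two rates?
z = (p_treated - p_control) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"treatment: {p_treated:.0%}  control: {p_control:.0%}")
print(f"z = {z:.2f}, p = {p_value:.2g}")  # a tiny p-value is deemed "significant"
```

A result like this would be overwhelmingly “significant,” yet it says nothing about the 13 percent of treated patients who did not benefit, and that gap is exactly where the liability questions arise.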
The key to balancing AI’s potential against patient safety is to take a risk-based approach to leveraging AI, similar to the mindset with clinical trials and new drugs. If there is improvement for a large group of patients, even if it’s not 100 percent, it might be worth proceeding with an AI-powered device or system.
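As a rough illustration of that mindset, the sketch below (again with hypothetical numbers, not data from any actual device) weighs an expected benefit against an expected harm; the point is the structure of the trade-off, not the specific values:

```python
# Hypothetical benefit-risk sketch: none of these numbers describe a real
# device; they only illustrate the kind of weighing described above.
p_benefit = 0.87      # share of patients expected to see a better outcome
p_harm = 0.002        # share of patients expected to suffer a serious harm
benefit_value = 1.0   # relative value assigned to a better outcome
harm_cost = 50.0      # relative cost assigned to a serious harm

expected_net = p_benefit * benefit_value - p_harm * harm_cost
print(f"expected net benefit per patient: {expected_net:+.2f}")
# A positive number argues for proceeding; a negative one argues for redesign,
# added safeguards, or limiting the device to lower-risk uses.
```

In practice, regulators and manufacturers weigh far richer evidence, but the underlying benefit-versus-risk arithmetic is the same.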
Most importantly, the health care industry should focus on better understanding the characteristics of the device itself, including the probability of harm, as well as its potential interactions with its AI subsystem.
Ultimately, an AI algorithm is just part of a larger device, which can be limited to reduce the chance of harm. While developing “explainable algorithms” that provide clearer reasoning for a decision is the Holy Grail for AI, it’s much easier to document device characteristics. Similar to the current process of testing, approving and releasing drugs on the market, health care should prioritize the evaluation of AI-powered solutions based on a careful assessment of whether the benefits outweigh the risks.
In this way, a natural insurance system will evolve. Greater transparency between manufacturers and health care professionals will facilitate the assignment of liability and establish trust, ultimately leading to a broader acceptance and utilization of AI.
Erez Kaminski is former head of artificial intelligence and machine learning at Amgen, and founder and CEO of Ketryx.