Artificial intelligence is a transformative and disruptive technology, affecting every person's life. Whether you are shopping online, interacting with a customer service center, reading information on a website, getting medical care, passing through airport security or riding in an autonomous vehicle, it is highly likely that AI is involved.
Any time AI touches our lives, there is a chance that something may go wrong. AI models must be trained with data, and sometimes that data will inadvertently mislead them. The result may be misguided AI-driven advice or information, or in the worst case, a missed medical diagnosis leading to poor health outcomes.
So who is responsible?
The courts will need to wrestle with enough such cases to establish precedents that provide meaningful guidance and resolution. Generative AI built on large language models has already kept the courts busy, chiefly with data copyright infringement claims. Yet when the courts rule in favor of plaintiffs and establish liabilities, it raises the question: Is insurance needed?
Should consumers purchase insurance to protect against misinformation and poor guidance offered by AI tools? This is hardly necessary. The providers of the information, whether it is generated by AI or humans, are responsible.
But what about those organizations that are integrating AI across their operations — what type of liabilities are they assuming in doing so, and will insurance protect them against such liabilities? Will existing liability coverage already cover such events, or will insurance companies use the AI revolution to market and offer AI-specific liability insurance protection?
There are a few salient points to keep in mind. The insurance industry sells pooled-risk instruments at a profit. Insurers will not offer coverage unless the risk calculus is in their favor and they can reap a healthy return.
AI liability insurance poses some challenges. There are limited historical loss records to rely upon when determining appropriate premiums. To mitigate that uncertainty, some insurers may opt to overprice coverage as they collect data. This may lock some potential customers out of the market, yet it may be necessary to manage risk while the full implications of AI exposure remain unclear.
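To make that pricing dynamic concrete, here is a minimal sketch of an expected-loss-plus-loading premium model. The function, loss figure and loading factors are hypothetical illustrations for this column, not industry data.

```python
# Hypothetical illustration: how an insurer might load premiums when
# historical loss data is sparse. All numbers are invented for the sketch.

def price_premium(expected_annual_loss, profit_loading, uncertainty_loading):
    """Premium = expected loss, marked up for profit and for data uncertainty."""
    return expected_annual_loss * (1 + profit_loading + uncertainty_loading)

# A mature line of business with rich loss history carries a small
# uncertainty markup.
mature = price_premium(expected_annual_loss=100_000,
                       profit_loading=0.15,
                       uncertainty_loading=0.05)

# A new risk like AI liability, with little loss history, carries a much
# larger uncertainty markup, pricing some potential buyers out of the market.
novel = price_premium(expected_annual_loss=100_000,
                      profit_loading=0.15,
                      uncertainty_loading=0.60)

print(f"Mature-risk premium: ${mature:,.0f}")  # $120,000
print(f"Novel-risk premium:  ${novel:,.0f}")   # $175,000
```

As loss data accumulates, the uncertainty loading can shrink and premiums can fall, which is the collect-data-first strategy described above.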
Deloitte issued a report highlighting AI insurance as a growth opportunity for the insurance industry. It projects $4.7 billion of premiums over 10 years, though, like any forward-looking analysis, this is nearly impossible to validate, given that predictions of this type are shrouded in uncertainty. If it does provide a reasonable guesstimate, that would amount to around $470 million per year, though the annual premiums would be heavily backloaded as AI risks become more apparent.
To put this number into perspective, around $4 trillion of non-life insurance premiums were written worldwide in 2022. That makes $470 million per year of premiums roughly 0.012 percent of the total (or around 1/83rd of one percent). It will likely take quite some time before AI insurance becomes a significant share of global premiums.
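As a quick sanity check on that back-of-envelope arithmetic (the dollar figures come from the sources cited above; the rounding is mine):

```python
# Back-of-envelope check of the premium figures cited above.
deloitte_premiums_10yr = 4.7e9   # Deloitte projection: $4.7 billion over 10 years
global_nonlife_2022 = 4e12       # roughly $4 trillion of non-life premiums in 2022

annual_ai_premiums = deloitte_premiums_10yr / 10            # $470 million per year
share_of_global = annual_ai_premiums / global_nonlife_2022  # fraction of world total

print(f"Annual AI premiums: ${annual_ai_premiums / 1e6:.0f} million")
print(f"Share of global non-life premiums: {share_of_global:.3%}")  # ~0.012 percent
```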
Of course, any time there is risk, particularly a new risk, there is an opening to introduce and market insurance products. For potential customers, assessing AI risk exposure is difficult given the absence of a historical record, which makes determining appropriate levels of coverage a challenge. The insurance industry will certainly seize upon this opportunity.
Will companies need AI-specific insurance? To answer this question, think of AI systems as company employees. If an employee makes a mistake that is costly to customers, there is a liability. Insurance is purchased or, for self-insured entities, a fund is created to address such liabilities.
The more salient issue is how the courts will continue to rule on AI-driven liability issues. Their rulings will be a critical factor informing the risk (and insurance) calculus, which is why setting premiums for AI-driven liabilities will remain a challenge.
As AI becomes more ubiquitous in society, mistakes and personal injuries are certain to occur. Even if the rate of such events is lower than with human-driven systems, some in the legal profession will seize the opportunity to profit from such liabilities. Such actions may create headwinds for the use of AI and even limit the benefits it can deliver.
Insurance is a valuable tool for protecting assets and limiting the cost of liabilities. It will also play a role in shaping the future of AI, given that managing risk is the fulcrum of the insurance industry and AI will significantly reshape risk in the coming decade.
Sheldon H. Jacobson, Ph.D., is a professor of Computer Science at the University of Illinois at Urbana-Champaign.