#24: Insurance for AI: Easier Said than Done
In recent months, many friends have pitched or asked me about insuring AI risk. The idea is usually something like this: businesses want to adopt AI for efficiency, but they’re nervous about the AI hallucinating and making costly mistakes. Even if they buy all the best software to mitigate such mistakes, the scope of LLM outputs is so large that unpredictable, hugely expensive edge cases always remain. Insurance offers a clean way to transfer that risk.
You could read that as a bullish thesis for such an AI insurance product: imagine a world of widespread AI adoption, where every AI deployment is underpinned by an insurance policy. Or imagine a world where insurance products act as the critical enabler for widespread AI adoption in the first place.
But the thesis is not that easy! While I won’t present a slam-dunk-view either way, I want to discuss some of the nuance and complexities that make this market tricky, and probably smaller than it appears at first glance.
Insurance for (Software) Errors
In the history of business, AI isn’t the first thing to make mistakes. Humans have been making mistakes for a long time. For that reason, accountants, lawyers, real estate agents, etc. all carry insurance — specifically, an Errors & Omissions or Professional Liability policy that covers them if they make a costly mistake on the job and get sued by a client.
In recent decades, a significant amount of rote human labor has transitioned to being completed by software instead. This software transition was subject to the same concerns as the current AI transition: can you really trust accounting software not to make mistakes? Won’t there be edge-cases in mortgage underwriting that software might miss, but an experienced underwriter would catch? The proof is in the pudding: the world runs on software now. And similar to Professional Liability, many software companies carry Technology Errors & Omissions insurance, in case their software messes something up and their customer goes after them.
You would think that the market for such insurance is massive. Software handles every button-press in your car, it manages industrial control systems in factories, it monitors the life-or-death status of patients in hospitals. The stakes are high. And we know most software is broken in the margins: every day I visit websites of big, respected companies, and they’re full of bugs.
But most software companies haven’t even heard of Tech E&O insurance. It’s considered a specialty product, often included as an add-on to cybersecurity insurance. Because it’s so niche, it’s hard to estimate the market size, but that’s an indicator of just how small it is: accounting for under $5B in global annual premiums seems like a very safe bet to me.1 For comparison, in the US, Workers’ Compensation runs around $55-60B a year in premiums, and Personal Auto insurance over $300B.
This should give you pause. The handing-over of professional duties to software feels riddled with liability, even today. The thesis for Tech E&O would be very similar to the thesis for the AI insurance product we started out with. (Let’s call it AI E&O.) And yet the market for Tech E&O is small, even in the face of software carrying weighty responsibilities in every nook and cranny of our world.
AI E&O and Tech E&O
Taking this one step further: you could consider AI E&O as a new form of Tech E&O, or — depending on the details of the contract — as included by Tech E&O policies. After all, AI software is still software. It may not be quite as deterministic as software before LLMs, but you’re still trying to insure the same type of risk: software mistakes.
Then, in what sense does AI E&O expand the Tech E&O market? Before LLMs, software could make devastatingly expensive mistakes. After LLMs, software can still make devastatingly expensive mistakes. The LLM aspect may increase the potential frequency and severity of those mistakes, but you have a needle to thread: if frequency of severe mistakes increases too much, then insurance becomes moot. People are not going to use a software product that breaks all the time, regardless of whether any damages are covered or not. It’d just be a nuisance.
This puts insurance entrepreneurs in a structurally tricky position. The Tech E&O market is so small that for a venture-scale thesis, you’d need to forecast AI E&O increasing the size of the Tech E&O market several-fold, probably 10-20x+. To get there, you’d have to:
Overcome any structural market issues2 that may inhibit growth;
Bet on severity of claims shooting up, much more so than frequency. AI-enabled software would have to become tremendously more dangerous to deploy, with multi-million-dollar-loss glitches lurking. The risk scenarios you’d be insuring would be cases like “I’m Chevrolet, and my marketing AI promised new trucks to 163 customers”3 or “I fired all my accountants, replaced them with ChatGPT, and when I woke up this morning I owed a customer a million dollars.”
Maybe I’m being unimaginative, but the maneuvering room to get to widespread AI E&O adoption seems tight. I think the likelier path is that businesses will adopt AI while maintaining some risk-reward equilibrium: steering clear of the use cases with the most severe downside risks, and leaving humans in the loop where appropriate. You may well be right to argue that there is still more risk in the system than before, but I don’t know if there’s so much risk that it gives rise to a major new class of insurance product and satisfies a venture-scale thesis.
Information Asymmetry
An important detail of insurance markets is that insurance carriers must be better at evaluating the risk than the purchasers. Otherwise you get adverse selection problems: consumers who know they are more likely to incur claims purchase insurance, the insurance carriers take losses, and the market eventually collapses.4
This takes you to a practical concern: how would AI E&O products be underwritten? There would be two parts to it:
The insurer would evaluate the characteristics of the AI company — industry, size, safety and testing practices, etc., and look at their service agreements with customers to figure out what kind of risk they’re on the hook for.
The insurer would run a large battery of tests against the AI offering of the company, seeing how it holds up under a variety of adversarial scenarios, and what the variability of outputs is.
The first part is a classic point of strength for insurers: given a large portfolio of businesses underwritten, they can figure out how these factors affect pricing. But I expect that for an AI E&O insurance product, it’s really the second part that determines the risk. Here’s the problem: why would an insurer be better at testing a company’s AI outputs than the company itself?
Revisiting our earlier example, the folks at Chevrolet would have a much better understanding of their own business, all the ways in which they could deploy AI, and the most dangerous, error-prone areas, than any insurer looking in from the outside. Specifically, there are two related problems:
As an outsider, it’s extremely hard to get a full understanding of all the ways in which AI will be deployed, and what risks that implies downstream. Hard to price!
There is a massive information asymmetry between companies utilizing/selling AI software, and insurers seeking to insure the consequent risks. Trying to insure AI applications looks like a hotbed of adverse selection.
Concentration of Risk
Another classic detail of insurance markets is that insurers need to diversify the risks that they underwrite: for example, if you provide flood insurance, then you wouldn’t want to write all your policies in a single town by the river: when one house gets flooded by a storm, chances are that all the houses get flooded, and you go out of business. That’s concentration of risk, and insurers strive to avoid it.
The trouble is that the ecosystem of AI software products currently has enormous concentration of risk. There’s a single-digit number of major LLM providers. AI infrastructure, whether for RAG or data labeling, etc. has a similar concentration of activity, with many small providers and a few major ones. Practically speaking, if you’re insuring mostly GPT wrappers, and the newest GPT model has some kind of safety regression, then your entire portfolio of policies is in trouble.
For any insurer, it will be tricky to maintain adequate diversification of the underlying risks. In practice, this means your portfolio might simply be constrained to a small size, as you can never grow such that you’d be over-exposed to any particular underlying provider.
Underwriting for the Year Ahead
The final challenge is that insurance policies are usually written for the full year ahead, and AI software is evolving with great speed. In our own AI deployments at Limit, we found surprising differences in behavior and quality from different models. It’s hard to trust software updates from outside vendors to be strict improvements.
Further, at the speed at which businesses are iterating on their AI software, or deploying it in new contexts, makes the underwriting problem even harder. It’s tough enough to test the AI software at any one point in time. There’s no good way to make assumptions about how else it will get used in the next few months, or how well-tested the next software release will be. The remedy for an insurance underwriter will be to prescribe what kinds of updates are in scope for the policy, what level of testing must be done, etc. This helps limit the risk, but it also greatly increases the complexity of the insurance contract, and makes it more cumbersome to purchase.
What You Need for AI E&O
My skepticism above doesn’t mean there’s no case for AI E&O. There certainly is. But it’s tricky. You’d have to bring the following conditions together:
There must be rare, hard-to-mitigate, severe risks from AI deployment;
The purchasers of such insurance are the actors in the market (such as software providers and consumers) that are stuck with the risk, i.e. not able to contractually transfer it to other parties;
The insurers would need to be better than the policyholders at figuring out the riskiness of the AI deployment.
Could AI E&O insurers partner with AI testing/safety/QA service providers, similar to how cyber insurers partner with cybersecurity providers? Yes, but those services are already readily accessible to potential insurance customers on the open market!5 The information asymmetry remains.
An insurer wouldn’t need to know how to underwrite every such company, but could constrain their appetite to certain types of businesses where they feel they can exhaustively understand the AI risks;
Diversification of underlying risks (technology vendors) would have to be maintained, which practically implies limiting the portfolio size of the insurer;
The insurance policies would need to prescribe guardrails around software updates.
It is certainly possible to bring all these conditions together — it’s just not easy, and even when you do, it implies a very selective, small portfolio of underwritten risks. I suspect that at least for the next few years, the set of such opportunities will be pretty thin, making it a way but not the best way to attack the AI liability problem. Furthermore, you would need this risk environment to scale up dramatically to give rise to a venture-scale insurance thesis. For now, if you’re really good at evaluating AI model safety, that’s probably better sold as a standalone service than used to underpin an insurance product.
This piece was inspired by conversations over the past weeks with Rune, Bala, Zack, Alex, and others. Thanks for your thoughts!
You might get a figure in that ballpark if you count the premiums of Cyber + Tech E&O policies, but that wouldn’t be the right thing to do. You’d need to factor out the costs of the cyber coverage and try to get to the standalone cost of the Tech E&O coverage. On a standalone basis, I’m almost certain the global volume of Tech E&O is less than $5B in premiums. I wouldn’t be surprised if it’s under $2B.
Some potential structural market issues below:
Many software businesses have already contractually transferred their liability, thereby obviating any need for an insurance product. It is common for software products to have draconian terms and conditions that users click “accept” to without reading: they provide absolutely no warranty, no refunds, the customer agrees to indemnify the business, not the other way around, and so forth.
The expected liability may be overstated. Tech E&O is pretty cheap, usually in the very low thousands of dollars for a million dollars of coverage. Think about what that pricing means: it suggests that it’s not particularly risky to underwrite, and claims are reasonably infrequent/small. There are many things that seem theoretically very prone to error, but in practice work out pretty well.
The liability being “overstated” might reflect it being absorbed elsewhere in the stack! Some events that could be covered by a Tech E&O policy may end up being paid for elsewhere in the stack (e.g. the affected party internalizes the loss instead of going after the software vendor), or covered by a different insurance product, e.g. a property policy in the event of property damage due to faulty industrial software. I don’t have any evidence for this, it just strikes me as plausible.
This example is inspired by a true story!
Or re-prices wildly higher, basically passing the cost of adverse selection on to the regular purchasers.
In fact, the worst of all adverse selection will occur when insuring companies that already run all the maximal AI testing/safety/QA and are still nervous. On the one hand, they might be excellent, cautious, responsible policyholders — on the other hand, they might know something the insurer doesn’t!