Companies can take different approaches to insuring against AI-related risks as insurers weigh their options for creating new policies. Corey Gray and Jon Mills of Boies Schiller Flexner explain that one point is clear: companies must move toward obtaining AI coverage.
What happens when AI goes wrong?
Businesses large and small may decide they need insurance to manage AI risks. However, getting coverage isn’t easy. Some insurers are hesitant to insure AI risks, and some exclude AI coverage altogether. But going without AI insurance is risky business.
The risk is compounded by businesses needing to navigate an emerging and disjointed AI regulatory landscape. California focuses on AI developer reporting requirements; Colorado targets algorithmic discrimination; Tennessee regulates voice and image impersonation. The FCC prohibits “voice cloning,” while the FDA requires AI use disclosures, and the SEC and FTC prohibit deceptive “AI washing” in marketing. Meanwhile, Executive Order 14365 expresses a desire to preempt state policies.
As companies expand their AI uses, they also recognize the risks of the technology. Fortune 500 companies increasingly identify AI as a risk on their Form 10-Ks. As AI developers innovate new functions and capabilities, they also increase insurance uncertainty for the companies that employ their products.
Insurers recognize AI risks, but they also see the opportunities AI presents. Many insurers themselves use AI to aid claims processing, underwrite coverage and detect fraud.
With all of this, is AI insurance a new necessity for businesses? And how should businesses and insurers approach AI policies as risks of costly litigation and expenses stem from this quickly advancing technology?
AI liability and insurability
Litigation concerning AI captures a wide range of exposure. The Lokken v. UnitedHealth class action concerns AI-aided decisions when denying medical care. Similarly, Mobley v. Workday, Inc. concerns AI-aided employment decisions. Separately, Raine v. OpenAI, Inc. concerns product safety issues arising from reliance on chatbot outputs. Several cases concern “AI washing,” the practice of misrepresenting the presence of AI in products or services, and have resulted in significant government fines.
The outcome of AI litigation is also unpredictable. Two cases — Bartz v. Anthropic and Kadrey v. Meta — both concerned copyright issues, but the courts applied different fair-use approaches. The Anthropic case settled for $1.5 billion. The Meta case was dismissed.
Liability connected to AI complicates insurability.
Insurability rests on specific factors. In general, it hinges on the principles of pure, quantifiable, fortuitous and measurable risk. This means insurance covers losses that occur in a particular time and place within estimated parameters based on predictions and actuarial evaluations. Some AI doesn’t easily fit these parameters.
Endemic to certain types of AI is the “black box” issue, where AI models generate solutions through a method humans cannot understand. This issue is especially difficult in high-stakes decisions, such as healthcare, finance, human resources and criminal justice.
Insurers could decide not to insure AI. In 2026, several insurers introduced absolute AI exclusions. Such policies could include a clause like the following: “This policy does not apply to claims arising out of the use of artificial intelligence, machine learning or automated decision systems.”
Insurers must also weigh the cost of opting out. Since late 2018, a market for AI insurance has emerged, and some insurers now provide coverage specifically tailored to a company’s AI uses.
Insurers that do not offer tailored AI policies may provide “silent coverage” by covering AI risks that aren’t expressly identified in an existing policy. For example, AI liability coverage could be read into existing general, professional services or cyber liability policies. The upside is that existing insurance covers AI incidents without additional costs or burdens. The risk is that a particular AI incident won’t be covered, and the insured won’t know until the incident occurs. Silent coverage is also risky because insurers have little incentive to pay substantial liability claims not expressly covered by a policy.
Another means of obtaining AI coverage is an AI “algorithmic rider,” which modifies an existing policy to explicitly include AI coverage.
Each approach has specific requirements for obtaining coverage. Either an algorithmic rider or an AI policy may include testing requirements, audits, limits on automated decision-making, liability limits and exclusions for legal violations. The insured must negotiate these terms carefully and be certain the company can comply with them.
Forging pathways in existing frameworks
One way to navigate the AI insurability conundrum is to focus on AI tools that could fit into accepted insurability frameworks. For example, AI categorization and evaluation tools, used by insurers and others to detect fraud, that conform to quantifiable performance standards and perform within predictable error rates may be insurable.
Generative AI is more challenging. Rather than categorize and evaluate data, these tools create new solutions or outputs based on user prompts.
Take a hypothetical company that markets “Fido the Talking Dog.” The hardware is manufactured in-house. The generative AI that produces the dog’s conversation is obtained from “CHAT, INC.,” a third-party developer. CHAT directed the AI model never to quit a conversation with the assigned user and to flag potentially harmful messages. Litigation is brought alleging a child was harmed by conversations with Fido. The risks associated with Fido’s performance are difficult, if not impossible, to quantify or predict.
In each instance, the insured should follow a similar path before implementing AI tools.
The first step is to understand the scope of existing coverage and confirm whether the company’s intended AI uses fall within it. Where they do not, adding an AI rider to existing coverage may be an option. Acquiring AI-specific insurance, either as primary coverage or secondary to an existing policy, is another. Third-party indemnification provisions keyed to insurance coverage requirements can offer additional support. Companies also need to incorporate insurance and regulatory requirements into their compliance plans. These steps, along with a carefully designed evaluation of the AI tool, can help secure coverage.
AI insurance may not be easily obtained, but the need is clear, and insurers have an incentive to be involved in the vast new market.
A competitive advantage
It makes good business sense for companies to evaluate their need for AI insurance and assess their AI risks. An AI compliance program can be a competitive advantage when insurers are leery of AI. After examining existing coverage, companies should test and audit AI tools before integrating them.
Transparency and dialogue are key. A company could identify the AI tools it intends to use and show the insurer how the compliance plan accounts for and mitigates AI risks. This approach can also aid negotiations on the scope of coverage, risk tolerances and compliance requirements while giving insurers confidence to cover AI uses.
A few best practices are emerging in this area:
- Stay updated on applicable AI laws and regulations.
- Evaluate insurance policies and policy language.
- Communicate with insurance providers on AI coverage.
- Identify the specific AI tools the company uses.
- Audit AI tools before integrating and evaluate any “black box” functions.
- Add indemnification provisions to AI provider agreements.
- Identify AI risks and mitigation measures and develop incident-response plans.
- Develop and implement AI compliance plans that include these standards.
- Establish recurring employee and executive AI training programs.
Boies Schiller Flexner partners Alan Vickery and John LaSalle contributed to this report.
This article was first published in Corporate Compliance Insights on April 13, 2026.
