Insurers Pull Back From AI Coverage: Why AIG, WR Berkley and Others Are Rewriting the Risk Rulebook
What happened — and why it matters
On November 23, 2025, the Financial Times reported that major insurers — including AIG, WR Berkley, and Great American — have asked U.S. state regulators for permission to exclude AI-related liabilities from standard corporate policies. One proposed clause would bar claims tied to “any actual or alleged use” of AI, potentially leaving businesses that deploy chatbots, copilots, or agentic systems exposed. Insurers cite the risk of multibillion‑dollar, systemic claims stemming from unpredictable model behavior and unclear liability.
Why insurers are spooked (and actuaries are rubbing their temples)
Three overlapping fears are driving this retreat:
- Black‑box outputs: Underwriters say generative models can be too opaque to price. Even specialty carriers have balked at covering large language models because “nobody knows who’s liable if things go wrong.”
- Systemic risk: The nightmare isn’t one giant payout — it’s thousands of simultaneous medium-sized losses triggered by the same bad model update or exploit. Recent incidents, like a $110 million defamation suit over an AI search “overview,” have sharpened attention on cascading failures.
- Blurry responsibility: Is the vendor, integrator, cloud provider, model maker, or end user on the hook? Until contracts and case law clarify that chain, insurers see a litigation fog thicker than a Canadian winter whiteout.
How it connects to other big headlines
This insurance pivot arrives as policymakers juggle how fast to regulate AI. The European Commission just proposed easing timelines on the AI Act’s toughest parts — pushing certain “high‑risk” provisions toward late 2027 — to reduce compliance drag while the tech matures. That may lower near‑term regulatory pressure, but it also means responsibility for risk control lands even harder on companies, their vendors, and (now) their insurers.
At the same time, the industrial build‑out powering AI is accelerating. OpenAI and Foxconn announced a partnership to co‑design and manufacture U.S.-made data‑center hardware — more racks, more compute, more AI everywhere. The bigger and faster the deployment, the larger the potential exposure surface insurers have to price — or exclude.
What this means for your business (and, yes, your wallet)
- Mind the exclusions: If approved, new endorsements could carve out coverage for AI‑generated content errors, model hallucinations, agentic actions, or deepfake incidents. Expect higher premiums and narrower definitions around “cyber,” “media liability,” and “errors & omissions.”
- Vendor contracts get real: Procurement teams will push AI vendors for indemnities, audit rights, logging guarantees, and safety SLAs. If your startup’s pitch deck says “agentic autonomy,” your customer’s legal team now hears “show me your kill‑switch, guardrails, and incident plan.”
- Segmented coverage is coming: Look for bespoke policies that include AI but only under tight conditions (specific models, data scopes, response playbooks) or with AI deductibles and strict reporting. Think of it like flood insurance: available, but with maps, pumps, and sandbags required.
A quick, plain‑English explainer
Insurers don’t hate AI; they hate unpriced uncertainty. Generative systems can behave inconsistently, scale instantly, and entangle many parties. That’s hard to model with last decade’s actuarial tables. Until liability is clearer and operational controls are consistent, insurers will try to limit exposure — the corporate equivalent of replacing a glass coffee table with something that won’t shatter during toddler time.
Lightly comic aside (because we all need one)
Imagine your company’s AI agent auto‑responds to a cranky customer with: “We’ve forwarded your complaint to our legal team and your cat.” Funny until you learn it also CC’d 30,000 customers and a local regulator. Somewhere, an underwriter just spit coffee across an Excel sheet.
Fresh perspectives to consider
- Shift from “move fast” to “measure fast”: Red‑team your models, log prompts, and prove safety controls. Auditability is the new moat — and it can lower your risk profile for brokers.
- Design for liability clarity: Use layered contracts that allocate responsibility among the model provider, integrator, and end user. If you can’t explain who’s on the hook, don’t expect your insurer to volunteer.
- Watch the policy pendulum: If EU enforcement eases while deployment surges, expect insurers to remain cautious — at least until courts and standards bodies provide firmer footing.
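The “measure fast” advice above is concrete enough to sketch. Below is a minimal, hypothetical example of the kind of tamper-evident prompt/response audit log that could support the auditability claims brokers and underwriters may ask for. Everything here — the function names, the record fields, the model label — is illustrative, not a real library or a prescribed standard; it simply shows the idea of hash-chaining records so that gaps or after-the-fact edits are detectable.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_ai_call(log, prompt, response, model="example-model-v1"):
    """Append one audit record for a model call.

    Each record stores the hash of the previous record, so deleting or
    editing an entry breaks the chain — a simple tamper-evidence property.
    """
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "prev": prev_hash,
    }
    # Hash the canonical (sorted-key) JSON of the record body.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_log(log):
    """Return True if the hash chain is intact and unmodified."""
    prev = "genesis"
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

audit_log = []
log_ai_call(audit_log, "Summarize refund policy", "Refunds within 30 days.")
log_ai_call(audit_log, "Draft apology email", "Dear customer, ...")
print(verify_log(audit_log))  # True
```

In practice a real deployment would write these records to append-only storage and redact sensitive prompt contents, but even this toy chain illustrates the point: provable logging is an operational control an insurer can actually inspect.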
What to watch next
Regulatory approvals for AI exclusions at U.S. state insurance departments will set the tone for 2026 renewals. Also watch whether specialty markets craft narrow, parametric AI covers tied to specific failures (e.g., documented hallucinations causing defined losses). And keep an eye on the infrastructure wave — as more AI hardware hits the ground, the incentive to standardize safety and logging grows, which could eventually coax insurers back into the pool. Until then, read the fine print like your balance sheet depends on it — because it might.