AI Insurance is the New Cyber Insurance

By John

In 2003, cyber insurance barely existed. A handful of carriers offered niche "internet liability" policies to tech companies. Most businesses had never heard of it. The ones that had assumed they didn't need it.

Then came the breaches. The ransomware. The regulatory fines. The class actions. A market that generated $1.5 billion in premiums in 2013 grew to $15 billion by 2023, a tenfold increase in a decade.

AI insurance is today where cyber insurance was in 2003. The risk is real, underappreciated, and accelerating. The market hasn't caught up yet. That's about to change.

How cyber insurance got built

Cyber insurance didn't emerge from nowhere. It was pulled into existence by a combination of rising incidents, regulatory pressure, and enterprises realising their existing policies didn't cover what was actually happening to them.

The pattern was consistent. A new technology appeared, businesses deployed it at scale before its risks were well understood, incidents occurred, and the gap between "we're covered" and "we're not covered" became painfully visible. Insurers eventually developed the frameworks, the data, and the underwriting discipline to price the risk properly. The market matured.

AI is following the same curve, just faster.

What makes AI risk different from cyber risk

Cyber risk is largely about defence and response. Keep attackers out. If they get in, contain and recover. The threat is external. The asset being protected is data.

AI risk is stranger and more complex.

The threat can be internal. A misconfigured agent. An overpermissioned system. A model that drifts over time. The asset being protected isn't just data; it's decision-making itself. And the failure mode isn't necessarily a breach you can detect and contain. It's an agent making thousands of subtly wrong decisions, with compounding consequences that only become visible much later.

A few things make AI risk particularly hard to insure with existing frameworks.

Opacity. You can't audit an AI agent the way you audit a network. What it will do in a novel situation isn't fully predictable, even to the people who built it.

Agency. Traditional software does what it's told. Agents make choices. When an agent does something unexpected, whether it "should" have done it is a genuinely hard question to answer.

Scale. A single agent can take millions of actions. A flaw that would be a minor error in a human employee can be catastrophic at machine speed and volume.

Novelty. Courts, regulators, and insurers are all working from frameworks designed for a world that didn't have autonomous AI systems in it.

The regulatory moment

Cyber insurance got a massive tailwind from regulation. GDPR made data breaches financially consequential in a way they hadn't been before. The SEC's cyber disclosure rules forced public companies to take the risk seriously at board level. State breach notification laws created compliance obligations that needed to be covered.

AI is getting a similar regulatory push, and it's moving faster.

The EU AI Act is already in force. High-risk AI systems face mandatory conformity assessments, incident reporting requirements, and significant fines for non-compliance. The Act explicitly requires operators of high-risk AI to have risk management systems in place, language that insurers and risk managers are already translating into coverage requirements.

In the US, sector regulators are moving quickly. The FTC has made clear it considers AI failures within its consumer protection remit. Financial regulators are scrutinising AI use in underwriting, lending, and trading. Healthcare regulators are grappling with AI in clinical workflows.

The liability surface is expanding. The cost of being wrong is going up. The demand for coverage will follow.

What the cyber market teaches us about what comes next

The cyber insurance market went through predictable phases.

First, early movers got generous terms because underwriters didn't have loss data. Then incidents accumulated, loss ratios spiked, and underwriters pulled back hard. Then the market matured with better risk assessment tools, clearer policy language, and more sophisticated underwriting criteria. Security posture became a prerequisite for getting coverage at all.

AI insurance is entering the first phase right now. The companies that move early get the best terms. The companies that wait will find the market has hardened by the time they come looking.

But there's a more important lesson from cyber. The companies that used insurance as a forcing function for better security, the ones that tightened controls because their underwriter required it, ended up significantly better off than the ones that treated insurance as a substitute for security.

The discipline of getting insured made them more secure. The same will be true for AI.

The underwriting challenge

The hardest part of building AI insurance isn't writing the policy language. It's underwriting the risk.

Cyber insurers eventually developed standardised questionnaires, security ratings, and loss models. They could look at a company's security posture and make a reasonable assessment of their risk exposure.

AI risk is harder to assess. It requires understanding not just what security controls are in place, but what the agents are actually doing. What systems they can access. What decisions they're making. How they'd behave under adversarial conditions.

This is why security and insurance need to be integrated from the start. You can't insure what you can't assess. You can't assess what you can't see. Continuous monitoring, real-time vulnerability scanning, and behavioural logging aren't just security features. They're the data infrastructure that makes underwriting possible.
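As a rough illustration of what that data infrastructure could look like, here is a minimal sketch of behavioural logging for an agent. Everything here is hypothetical: the names (`AgentAction`, `BehaviourLog`, `denial_rate`) are invented for this example and don't refer to any real product or API; a production system would obviously capture far more context.

```python
import json
import time
from dataclasses import dataclass, asdict, field


@dataclass
class AgentAction:
    """One structured record of an agent decision, suitable for audit and underwriting."""
    agent_id: str
    action: str    # what the agent did, e.g. "query_db" or "send_wire"
    target: str    # the system or resource it touched
    allowed: bool  # whether the action passed the permission check
    timestamp: float = field(default_factory=time.time)


class BehaviourLog:
    """Append-only log of agent actions with a crude summary statistic."""

    def __init__(self) -> None:
        self._records: list[AgentAction] = []

    def record(self, action: AgentAction) -> None:
        self._records.append(action)

    def denial_rate(self, agent_id: str) -> float:
        """Fraction of an agent's actions that were blocked -- one simple risk signal."""
        acts = [a for a in self._records if a.agent_id == agent_id]
        if not acts:
            return 0.0
        return sum(not a.allowed for a in acts) / len(acts)

    def export(self) -> str:
        """Serialise the log as JSON lines for an auditor or underwriter."""
        return "\n".join(json.dumps(asdict(a)) for a in self._records)
```

The point isn't the specific fields; it's that every agent action becomes a queryable record, so an underwriter can ask questions like "how often does this agent hit permission boundaries?" instead of taking the deployer's word for it.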

Why now

The first AI agent insurance policy was underwritten in early 2026. The market exists. The frameworks are forming. The regulatory pressure is building.

The question for every company deploying AI agents in production is the same one the CISO of a mid-sized company was asking in 2005 about cyber: do we actually need this?

The answer in 2005 was yes, whether or not people knew it yet.

The same is true today. The risk is real. The gap is real. And the window to get ahead of it, before incidents accumulate, before the market hardens, before a regulator makes the decision for you, is open right now.

mount | Y Combinator

Assess, Certify and Insure AI agents.

TRUST LAYER FOR AI AGENTS

Your competitors are getting tested. Are you?

© 2026 MOUNT. All rights reserved