AI Liability Insurance: Lloyd’s Launches Coverage for Chatbot Errors

New policies address growing financial and legal risks as AI deployment accelerates across customer-facing operations.


Published: May 13, 2025

Luke Williams

Business leaders take note.

As organizations deploy AI across customer touchpoints, a new risk category emerges: financial and reputational damage from AI errors. Recent cases highlight how AI failures lead to unexpected costs, legal complications, and damaged customer relationships. Now, a new AI liability insurance product addresses this specific business risk.

AI Liability Insurance Enters the Market

Lloyd’s of London has debuted an insurance product for companies dealing with AI-related malfunctions. The launch comes as the insurance industry responds to concerns about losses caused by chatbot errors and hallucinations.

The insurance policies, developed by Y Combinator-backed start-up Armilla, cover businesses facing legal claims when customers or third parties experience harm due to underperforming AI systems. This coverage includes expenses such as damages and legal fees. Several insurers within Lloyd’s will underwrite these policies.

The Business Problem AI Liability Insurance Solves

For C-suite executives and technology leaders, this insurance addresses a gap in risk management. While companies have adopted AI to increase efficiency, some tools, particularly customer service bots, have made costly mistakes when they hallucinate, generating false information that appears credible.

In a recent interview, Kelwin Fernandes, CEO of enterprise AI vendor NILG.AI, said:

If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?

Real-World Impact: Costly AI Failures

The business risks are concrete. Consider these recent incidents:

  • Virgin Money issued an apology when its chatbot reprimanded a customer for using the word “virgin.”
  • Air Canada ended up in court when its chatbot fabricated a discount in a conversation with a customer. A tribunal ruling required the airline to honor these mistakenly offered discounts.

These cases demonstrate how AI failures translate into financial and reputational costs. According to a Financial Times report, Armilla noted that the losses from selling tickets at the discounted price would have been covered by its policy, had Air Canada’s chatbot been found to have performed below expectations.

How AI Liability Insurance Works for IT and Business Leaders

AI language models evolve over time through learning processes that can introduce new errors. Logan Payne from Lockton explained that an error alone would not trigger a payout under Armilla’s policy. Instead, coverage applies when the AI’s performance declines below initial expectations.

Karthik Ramakrishnan, Armilla’s CEO, said:

We assess the AI model, get comfortable with its probability of degradation, and then compensate if the models degrade.

For example, if a chatbot’s accuracy drops from 95% to 85%, Armilla’s policy could compensate for the performance shortfall.
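The degradation-trigger concept can be sketched in a few lines of Python. This is a hypothetical illustration of how a baseline-relative payout might be computed, not Armilla’s actual methodology; the function name, the 5-percentage-point tolerance, and the proportional payout formula are all assumptions for the sake of the example:

```python
def degradation_payout(baseline_accuracy: float,
                       current_accuracy: float,
                       insured_value: float,
                       tolerance: float = 0.05) -> float:
    """Hypothetical payout model: pay nothing while measured accuracy
    stays within `tolerance` of the baseline agreed at underwriting,
    then compensate in proportion to the shortfall beyond it."""
    shortfall = baseline_accuracy - current_accuracy
    if shortfall <= tolerance:
        return 0.0  # drift within the agreed tolerance: no claim
    # Pay out proportionally to the excess shortfall relative to baseline.
    return insured_value * (shortfall - tolerance) / baseline_accuracy

# A drop from 95% to 85% accuracy exceeds the tolerance and triggers a payout;
# a drop from 95% to 93% does not.
print(degradation_payout(0.95, 0.85, 1_000_000))
print(degradation_payout(0.95, 0.93, 1_000_000))
```

The key design point, reflected in Logan Payne’s explanation above, is that an error alone pays nothing; only a measured decline below the agreed baseline triggers compensation.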

What This Means for IT Teams

Currently, some insurers include AI-related risks within general technology errors and omissions policies, but these come with limited payout caps.

Preet Gill, a broker at Lockton, said:

A general policy that covers up to $5mn in losses might stipulate a $25,000 sublimit for AI-related liabilities.

This specialized coverage could enable IT departments to:

  • Implement AI solutions with greater confidence
  • Better quantify and manage deployment risks
  • Secure budget approval from risk-averse finance leaders

What’s Next: Strategic Implications

Tom Graham, head of partnership at Chaucer, an insurer at Lloyd’s underwriting the policies, emphasized selectivity in their approach:

We will not sign policies covering AI systems we judge to be excessively prone to breakdown. We will be selective, like any other insurance company.

Organizations should expect:

  • More insurance providers to develop AI-specific coverage
  • Increasing demand for AI risk assessments prior to deployment
  • Closer collaboration between IT, legal, and risk management teams

This new insurance product reflects a maturing AI landscape where businesses need both innovation and protection.

What Business Leaders Should Do Now

  1. Audit current AI deployments for potential liability exposures
  2. Review existing insurance coverage for AI-related incidents
  3. Implement performance monitoring for AI systems to establish baselines
  4. Consider AI governance frameworks that include risk mitigation strategies

As organizations navigate AI adoption, managing implementation risks becomes as important as capturing AI’s benefits. The emergence of specialized AI liability insurance is an important step in enterprise AI deployment.

Frequently Asked Questions

What does AI liability insurance cover?

AI liability insurance covers businesses for costs related to legal claims, damages, and fees when AI systems underperform or generate incorrect information that causes harm to customers or third parties.

Why do businesses need AI liability insurance?

As AI becomes integrated into customer-facing operations, standard technology insurance often caps AI-specific payouts at low sublimits (for example, $25,000 on a $5 million policy). Dedicated AI liability insurance helps protect against financial losses when AI systems fail to meet performance expectations.

How does AI liability insurance differ from standard tech insurance?

Unlike standard technology errors and omissions policies, specialized AI liability insurance focuses specifically on performance degradation in AI systems rather than just technical failures. It provides higher coverage limits for AI-related incidents and considers the unique learning characteristics of AI systems.
