
Insurers at Lloyd’s of London have devised a new insurance product addressing the financial risks associated with errors in AI tools. According to reporting by the Financial Times, the policies are designed to cover businesses against losses arising from AI-induced mistakes, as companies increasingly integrate the technology into their operations.
The insurance policies, developed by Y Combinator-backed start-up Armilla, offer protection for businesses facing legal claims when customers or third parties experience harm due to underperforming AI systems. This coverage includes expenses such as damages and legal fees. Several insurers within Lloyd’s will underwrite these policies, marking a significant development in the management of AI-related risks.
AI-induced errors highlight need for insurance
The rapid adoption of AI technologies by businesses seeking to enhance efficiency has sometimes resulted in notable errors, particularly with customer service bots. These errors often stem from flaws that cause AI language models to produce incorrect or fabricated responses. For instance, Virgin Money faced an incident in January when its chatbot mistakenly reprimanded a customer for using the term “virgin.” Similarly, courier company DPD had to disable part of its chatbot last year after it used inappropriate language with customers.
Another case involved Air Canada, where a tribunal ruling required the airline to honour discounts mistakenly offered by its chatbot. Armilla noted that, had the chatbot’s performance been officially deemed below expectations, its policy framework could have covered the resulting financial loss.
Armilla’s CEO, Karthik Ramakrishnan, believes that this product may persuade more companies to embrace AI technologies without fear of operational failures. While some insurers currently include AI-related risks within general technology errors and omissions policies, these often come with limited payout caps. According to Preet Gill from Lockton, which distributes Armilla’s products, a typical policy with a $5m limit might only allocate $25,000 specifically for AI-related liabilities.
AI language models are dynamic and evolve over time through learning processes that can introduce new errors. Logan Payne from Lockton explained that an error by itself would not automatically guarantee a payout under Armilla’s policy; rather, coverage is contingent on whether the AI’s performance declines significantly below initial expectations.
Ramakrishnan said Armilla assesses the probability of model degradation and pays compensation if significant degradation occurs. For example, if a chatbot’s accuracy drops from 95% to 85%, Armilla’s policy could offer compensation for the performance shortfall.
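The degradation trigger described above can be sketched in a few lines of code. Everything here is a hypothetical illustration: the function name, the baseline metric, and the five-point threshold are assumptions for the sake of the example, not Armilla’s actual contract terms.

```python
# Hypothetical sketch of a degradation-based payout trigger.
# The threshold and metric are illustrative assumptions, not Armilla's terms.

def payout_due(baseline_accuracy: float,
               observed_accuracy: float,
               degradation_threshold: float = 0.05) -> bool:
    """Return True if accuracy has fallen beyond the agreed threshold."""
    degradation = baseline_accuracy - observed_accuracy
    return degradation > degradation_threshold

# The article's example: accuracy falls from 95% to 85%, a 10-point drop,
# which would exceed a hypothetical 5-point contractual threshold.
print(payout_due(0.95, 0.85))  # True
```

In practice an insurer would measure performance against an agreed benchmark over time rather than a single accuracy figure, but the principle is the same: payouts are tied to a measurable shortfall against initial expectations, not to any individual error.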
Tom Graham, who leads partnerships at Chaucer, an insurer at Lloyd’s underwriting the new policies, said the firm will be selective in its underwriting and will not issue policies for AI systems it judges highly susceptible to frequent failures, mirroring the selective approach insurers take in other lines of business.