It’s no secret that banks must follow “know your customer” regulations to prevent terrorists or other undesirables from opening accounts. It might be less obvious that the European Union demands energy efficiency data on the data centres that process that information; that it lays out procedures for recycling the hard drives holding that customer data once those drives reach the end of their lives – or that the same EU cybersecurity legislation that covers the digital infrastructure in those data centres also extends to children’s toys and baby monitors.

These are all part of the forest of regulations that all businesses must comply with – a compliance burden that only grows every time yet another crisis or scandal rocks the major Western economies. So, too, grows the world’s army of compliance specialists and service suppliers. The US Bureau of Labor Statistics identified 418,000 ‘compliance officers’ in 2024, a figure predicted to grow by about 3% between 2024 and 2034.

Across the Atlantic, lobbying group BusinessEurope recently described the “friction” that this expanding regulatory burden has created for firms in the EU. It also said some legislation – such as the AI Act – is ambiguous to the point of being self-contradictory, and that the trend toward complexity in such laws is not, on its own, making them more effective. Meanwhile, in the UK, the CBI found that some firms faced yearly compliance costs of up to £50m.

Traditionally, says Ian Miell, CTO at the consultancy Container Solutions, compliance is “someone coming around with a clipboard [and] a list of things to check, and they do that on some sample of a group. Every six months or every year, you have to do that work again.” 

But with businesses and their compliance obligations becoming ever more complicated, this model clearly doesn’t scale. Miell says he’s dealt with companies that have tens, even hundreds, of subsidiaries, all of which feed compliance data back to the centre. What’s more, they’re often doing that “with Excel spreadsheets, with PDFs, with Word documents,” he says. “It’s a nightmare, because each has a different format. Someone must parse it all, put it all together, and figure out what’s going on.”

This whole process seems ripe for consolidation. “There’s nothing that says your entire compliance lifecycle can’t be captured in this one central place,” argues Miell, “and that it speaks a public language.”

Miell and his colleagues have developed an open source platform to do just that: the Continuous Compliance Framework. It can feed real-time dashboards, keep organisations audit-ready and help them build compliance workflows. From there, the obvious next step is to apply automation and AI to relieve the drudgery and, more importantly, the confusion around compliance requirements. So what would AI-powered compliance look like? And how would generative AI and its occasional bursts of ‘creativity’ sit with the very real-world obligations companies in finance and other sectors must meet?
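In rough code terms, the idea looks something like the sketch below – a hypothetical record format and roll-up, not the framework’s actual data model – in which every subsidiary’s evidence lands in one shared structure that a dashboard can read.

```python
# Hypothetical sketch: normalising compliance evidence from many subsidiaries
# into one shared record format that a dashboard can consume. The CheckResult
# schema and field names are illustrative, not the Continuous Compliance
# Framework's actual data model.
from dataclasses import dataclass
from datetime import date
import json

@dataclass
class CheckResult:
    subsidiary: str     # which entity reported the evidence
    control_id: str     # internal or regulatory control reference
    status: str         # "pass", "fail" or "unknown"
    collected_on: date  # when the evidence was gathered
    source: str         # spreadsheet, PDF export, API feed, etc.

def summarise(results: list[CheckResult]) -> dict:
    """Roll individual check results up into the figures a dashboard shows."""
    failing = [r for r in results if r.status == "fail"]
    return {
        "total_checks": len(results),
        "failing": len(failing),
        "failing_controls": sorted({r.control_id for r in failing}),
    }

results = [
    CheckResult("subsidiary-a", "KYC-01", "pass", date(2025, 5, 1), "api"),
    CheckResult("subsidiary-b", "KYC-01", "fail", date(2025, 4, 28), "spreadsheet"),
]
print(json.dumps(summarise(results), indent=2))
```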

The argument for AI-powered compliance

Compliance can often seem like an exercise in Kafkaesque absurdity. Nutanix’s director of systems engineering, James Sturrock, says it’s not uncommon for two in-house experts to have differing opinions on how to solve the same thorny regulatory conundrum. That isn’t even getting into how competing jurisdictions might view the problem. 

“We have offices in every country in Europe, but Germany is very different from a compliance point of view to every other country on the planet,” says Sturrock. “They’re much more strict.” Which is a problem when implementing complex systems for customers – or for Nutanix itself – that span multiple countries.

Given that data might be spread over multiple services and locations, he says, “The only way to make sure that’s compliant is to use something like agentic AI that can evaluate it, look at it, make decisions based on what it sees, and then ultimately come back to its human master and ask for a decision.”
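What that pattern might look like in practice is sketched below – a deliberately simplified, hypothetical triage loop rather than anything Nutanix ships – in which the automated reviewer settles the clear-cut findings and queues the ambiguous ones for its ‘human master’.

```python
# Illustrative human-in-the-loop triage: an automated reviewer scans findings,
# resolves what it safely can, and escalates the rest to a person. The rules
# and names here are invented for illustration, not Nutanix's implementation.
from dataclasses import dataclass

@dataclass
class Finding:
    location: str       # e.g. "eu-central object store"
    data_class: str     # e.g. "personal", "financial", "public"
    residency_ok: bool  # does the storage location satisfy the residency rule?

def triage(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    """Split findings into those auto-resolved and those escalated to a human."""
    auto, escalate = [], []
    for f in findings:
        if f.data_class == "public" or f.residency_ok:
            auto.append(f)      # clearly fine: no human decision needed
        else:
            escalate.append(f)  # ambiguous or non-compliant: ask a person
    return auto, escalate

findings = [
    Finding("eu-central object store", "personal", True),
    Finding("us-east backup bucket", "personal", False),
]
auto, escalate = triage(findings)
print(f"auto-resolved: {len(auto)}, needs human decision: {len(escalate)}")
```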

In Nutanix’s case, its stack includes data classification through its Data Lens technology to help customers manage their compliance objectives, while its Enterprise AI “allows organisations to build sovereign AI clouds with absolute control of model authorisation and model serving.” At the same time, its own operations are bound by its AI Technology User Policy.

None of this comes for free, of course. But if compliance is an expensive burden – not least for financial giants – might AI-powered compliance positively contribute to the bottom line? Ben Peters thinks so, at least when it comes to businesses outside the world of finance.

The founder and CEO of Cogna, which describes itself as providing ‘AI for the real economy,’ Peters says that companies in the “physical industries”, such as utilities or manufacturing, are often underserved when it comes to technology that can improve their productivity. Moreover, utility companies, he says, have limited levers to pull to affect their cost base. “But,” he says, “one of the areas they can pull on is fines and penalties related to compliance.”

Simply meeting targets on warning customers of service disruptions, or ensuring works don’t overrun, means their bottom line is not dented by fines or other penalties.

This sounds straightforward, but the reality is that predicting work schedules involving 200-year-old infrastructure can be tortuously complicated, Peters explains. Equally important are potential unknowns such as contaminated soil, or sewers that don’t appear on maps or for which data is incomplete. These don’t just represent potential holdups to work – and resulting penalties – but are risks in their own right.

“You have armies of people looking things up in SAP, then opening up Google Maps and trying to take measurements,” he says.

Generative AI – in effect, vector search – makes it much easier to both aggregate the information and surface relationships embedded within it, Peters explains. A human can then make the final decision on how to proceed.
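A stripped-down illustration of that retrieval step follows. TF-IDF similarity stands in for a learned embedding model, and the records and query are invented, but the shape of the workflow is the same: index the notes, rank them against a planned job, and hand the top matches to a person.

```python
# Minimal sketch of the "aggregate and surface relationships" step: index
# free-text records (asset notes, survey reports, ticket text) and retrieve
# those most similar to a planned job so a human can weigh the risk.
# TF-IDF stands in for an embedding model; the records are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "1890s cast-iron main, previous collapse reported near junction 4",
    "soil survey flags possible contamination alongside the old gasworks",
    "sewer lateral not present on current maps, found during 2019 repair",
    "routine meter replacement, no access issues recorded",
]

query = "planned excavation near old gasworks, unmapped sewer risk"

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(records + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Surface the most relevant records for a human reviewer, highest score first.
for score, record in sorted(zip(scores, records), reverse=True):
    print(f"{score:.2f}  {record}")
```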

One pilot that utilised genAI to review legal documents at the bank Santander resulted in a 50% jump in service level attainment. (Photo: khunkornStudio / Shutterstock)

Banking on AI?

Automating alerts or making it easier to spot compliance headaches early is one thing. But what might AI contribute toward simplifying more complex compliance conundrums, like those encountered by the financial services industry? In that sector, explains Pegasystems’ global banking industry lead Steve Morgan, such models have to be readily explainable not only to customers, but to internal audit teams and regulators, too. Even then, it’s already clear that certain types of AI applications aren’t completely suitable for insertion into compliance workflows – most notably, generative AI. “Unless you have a very special model that’s trained” on a specific use case, says Morgan, the answers such models provide to compliance experts just aren’t predictable or accurate enough to meet the high standards demanded of banks.

It’s not all hopeless. One use case where “new” AI has been applied in recent years has been “trying to get to a perpetual KYC,” says Morgan. While detection of potential red flags has improved, Morgan says, “there’s still room to improve the automation, the straight-through processing, the guidance you give to a team.” The key is identifying “what should be automated, versus what does need to be reviewed by a person.”

One notable example of this approach was a recent collaboration between Pegasystems and Santander in Brazil. This involved breaking a complex workflow down into its constituent processes and automating each one. “They [then] applied generative AI to some of them to help with looking at documents, procedures, reviewing legal documents up to 200 pages long, pulling out certain data,” says Morgan. The result was an 80% reduction in staffing, a 5% improvement in accuracy to 98%, and a 50% jump in service level attainment.

The latter was particularly noteworthy, says Morgan, as “that’s good for customers, good for the bank [and] you’re spending less on Google resources.” What’s more, he adds, “none of them were sad about reducing their legal bills.”
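The document-review portion of that workflow might be structured along the lines below – a rough, hypothetical sketch in which the model call is a placeholder, not a description of the actual Pegasystems and Santander pipeline: chunk the contract, extract the agreed fields from each piece, and route anything unresolved to a human reviewer.

```python
# Rough sketch of a genAI document-review step: split a long contract into
# chunks, ask a model to pull named fields out of each, merge the answers and
# flag gaps for a human. extract_fields is a placeholder for whatever model
# call is used in practice; nothing here reflects a specific vendor's pipeline.
from textwrap import wrap

FIELDS = ["counterparty", "governing_law", "termination_notice_period"]

def chunk(document: str, size: int = 4000) -> list[str]:
    """Split a long document into model-sized pieces (naively, by character count)."""
    return wrap(document, size)

def extract_fields(chunk_text: str) -> dict:
    """Placeholder for a generative-AI extraction call returning any fields found."""
    # In practice this would prompt a model with the chunk and the field list.
    return {}

def review(document: str) -> dict:
    """Merge per-chunk extractions; unresolved fields go to a human reviewer."""
    found: dict = {}
    for piece in chunk(document):
        for key, value in extract_fields(piece).items():
            found.setdefault(key, value)  # keep the first answer per field
    missing = [f for f in FIELDS if f not in found]
    return {"extracted": found, "needs_human_review": missing}

print(review("... up to 200 pages of contract text ..."))
```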

Morgan argues that this case study illustrates the logical path for automation in compliance. The campaign to simplify these reporting burdens using AI will increasingly mean “people stripping out effort – but, more importantly, improving service levels, improving quality, and as a result, improving compliance as a process.”

But for this to happen, companies still need to have a comprehensive understanding of their compliance obligations and processes, as well as appreciate where automation and AI make sense. While AI excels at automating processes and aggregating information, says Peters, “what actually constitutes ‘high risk’ is very, very context specific, and humans are good at doing it quickly – if the information is presented to them.”