
For those seeking to consolidate security and resilience in their organisations, generative artificial intelligence (GenAI) is both a blessing and a curse.
A blessing because it is a game-changing technology at the heart of new productivity tools and of software that can be deployed to strengthen defences; a curse because the bad actors are using it, too. Its adoption by attackers is enabling faster and more sophisticated threats at scale. The technology is being used to exploit software and API vulnerabilities, help write malware and craft more persuasive phishing campaigns.
According to Palo Alto Networks’ Unit 42 threat intelligence team, 86% of today’s cyberattacks cause direct business impact, while 70% of incidents span three or more attack surfaces.
In the context of an emerging AI arms race, there’s another puzzle that technology and security leaders need to address: how to meet the challenge when cyber budgets are being frozen, or even cut.
To understand how these two competing trends affect those on the frontline of enterprise IT, Tech Monitor and Palo Alto Networks brought together security and technology leaders from across the financial services (FS) sector for an executive roundtable discussion. The event took place at the end of April in the uniquely imposing setting of the Masonic Temple inside the Andaz London Liverpool Street.

Role of regulation in an age of AI
At the start of the session, attendees were first asked to share the main concern or issue they are experiencing. One attendee cited the challenge facing heavily regulated financial services companies: how do they win the AI arms race when one side (the bad actors) operates without restraints or restrictions, while the other operates in a world of compliance and governance?
It was suggested that regulators are 18 months behind in responding to the introduction of new technology. To compound the problem, many organisations operating in the banking and wider FS sector are themselves five years behind the regulator. All of which raises the question: what role should regulation play in such a fast-changing environment? The answer, one attendee offered, was “to be visible and accountable” and to help organisations find frameworks for managing the guardrails required to keep AI usage in check. It is, another suggested, a “best endeavours” approach to regulation.
Is an AI-infused attack different?
When practitioners seek to identify the benefits of AI, they typically reach for two words: “faster” and “better.” AI often accelerates existing processes, with a commensurate productivity gain; and, sometimes, it allows for improved execution, transforming existing processes and changing business models as a result. So, when it comes to bad actors and their application of AI, are they executing the same attacks but faster, or are they developing attacks that are more sophisticated as well as faster? A consensus emerged around the table for the latter.
By way of example, one attendee cited an emerging category of attack that seeks to subvert the data feeding an organisation’s language models by, for example, introducing biases into the datasets. This, in turn, “poisons” the organisation’s IP, creates disruption and threatens significant reputational damage.
Another guest offered a more prosaic, but equally powerful, change brought about by AI. Poor grammar and spelling mistakes have long given away phishing attacks carried out by non-native English speakers, allowing many to be spotted before they cause harm. GenAI has corrected the language in these malicious emails, eliminating the typos and the stilted English and making them more effective as a result.
And if there is any doubt that attacks have got faster, one statistic put to the table settles it: ransomware attacks that used to take 44 days to exfiltrate vital proprietary data now take just three.
AI augments or AI replaces?
When attendees were asked whether the introduction of AI is creating “push back” from those in IT and security departments concerned about job losses, the response was robust. “No, it’s the opposite,” said one voice. Programmers, software engineers and cyber specialists are welcoming AI, he said, because it is supplementing their capabilities. It is taking away the “boring, mundane work,” allowing specialists to concentrate on the knottier protection and prevention challenges.
Offering its own operations as an example, Palo Alto Networks revealed that it experiences 59 billion attacks a day on average. To counter this barrage, it operates three shifts using 12 people in total, with the third shift of the day entirely automated. From those billions of attacks emerge around 30 actionable alerts, of which only one or two require deep investigation. In short, for Palo Alto Networks, automation is proving an ally of effective and efficient defence.
Lock it down versus open it up
During the evening it was suggested that there are two “polarised” options for reducing the chance of AI exploitation: to “lock it all down” or to “open it all up”. Neither is without risk. Those that make it impossible to experiment with, let alone deploy, AI will find employees using workarounds and the likely emergence of shadow AI, analogous to shadow IT. Those who open it all up are advised to make sure they “watch everything”.
On the dangers of a more laissez-faire approach to AI usage, one attendee warned that, although GenAI chatbots are easy to use, their users are not necessarily alert to the potential perils of the technology. In other words, while one of the great benefits of AI is that it is accessible to everyone, one of the great threats of AI is that it is accessible to everyone.
‘Cybersecurity 2025: Tackling threats and managing budgets in an age of AI’ – a Tech Monitor executive roundtable in association with Palo Alto Networks – took place on Wednesday 30 April 2025 at the Andaz London Liverpool Street.