
At 3:47 AM on a Tuesday, an AI agent springs to life in a financial services firm’s cloud infrastructure. Its mission: process overnight trading data, identify anomalies, and generate risk reports before the markets open. Within milliseconds, it requests access to databases, connects to external data feeds, and spawns three sub-agents to handle parallel tasks. By 4:06 AM, its job is done, and it vanishes as quietly as it appeared.
This scenario – autonomous agents executing complex workflows without human intervention – is rapidly becoming a reality across multiple industries. But it raises a fundamental security challenge that’s keeping cybersecurity professionals awake at night: how do you trust and verify systems that don’t fit traditional identity models?
“The thing that has most greatly impacted the wider adoption of certificates has been the rush to deploy [agentic AI] without the thought towards the management of them,” says Chris Hickman, chief security officer at Keyfactor. “But the reality is that the nature of agentic AI is going to provide a scale that most organisations have probably not seen before.”
The problem is uniquely complex. Unlike human users who can provide credentials or machine identities tied to specific hardware, AI agents exist in a liminal space. They’re more persistent than typical software components but more dynamic than traditional applications. They can spawn, execute tasks and terminate autonomously – potentially without the certificate lifecycle management that underpins digital trust.
The identity crisis
The scale of this challenge becomes apparent when considering what is expected from agentic AI. “If an agent represents something that a person would traditionally do, you can’t use human-based onboarding and credentialing,” Hickman explains. “It would be like trying to onboard 10,000 customer service agents dynamically, and it just wouldn’t happen.”
Traditional approaches fall short because AI agents don’t map neatly to existing identity paradigms. They don’t quite fit the functional definition of users, since they can’t provide passwords or complete multi-factor authentication; nor are they quite machines, because they lack the physical anchors that typically ground machine identities.
“I like the concept of thinking of an AI agent both as a human identity and a software component identity, because I think they can act like either one,” says Greg Wetmore, vice-president of product development at Entrust. “When an agent is logging onto a system and touching a configuration or injecting some data, that looks a lot like a human identity moving around the network. When it’s calling your application APIs or making calls to your AWS interfaces, then it really looks like a software module.”
This dual nature creates unique security requirements. Agents need persistent identities that establish what they are, but also ephemeral credentials that define what they’re authorised to do and, crucially, the ability to revoke those permissions instantly if something goes wrong.
The stakes are particularly high when agents communicate with each other. “Agents asking other agents for data and information – that’s really where we kind of come back to a cryptographic anchor or something like a root of trust,” Hickman notes. Without proper certificate management, a malicious system could potentially impersonate a trusted service, and autonomous agents might not recognise the deception until damage is done.
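One way to picture that cryptographic anchor in practice is mutual TLS between agents: each side presents a certificate and refuses to exchange data unless the peer’s certificate chains back to a shared root of trust. The sketch below is a minimal illustration in Python’s standard library; the file paths and hostname are hypothetical stand-ins, not references to any particular product.

```python
import ssl
import socket

# Illustrative paths: in practice these would come from the agent's
# certificate lifecycle tooling rather than static files on disk.
CA_BUNDLE = "trust/root-ca.pem"         # root of trust shared by all agents
AGENT_CERT = "identity/agent-a.pem"     # this agent's certificate
AGENT_KEY = "identity/agent-a.key"      # ...and its private key
PEER_HOST = "agent-b.internal.example"  # hypothetical peer agent endpoint

def open_verified_channel(host: str, port: int = 8443) -> ssl.SSLSocket:
    """Open a mutually authenticated TLS connection to another agent."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_BUNDLE)
    # Present our own certificate so the peer can verify us too (mutual TLS).
    context.load_cert_chain(certfile=AGENT_CERT, keyfile=AGENT_KEY)
    # Reject peers whose certificates don't chain to the trusted root.
    context.verify_mode = ssl.CERT_REQUIRED
    context.check_hostname = True

    raw_sock = socket.create_connection((host, port))
    return context.wrap_socket(raw_sock, server_hostname=host)

# Only after this handshake succeeds does the requesting agent
# treat the peer's data as coming from a trusted service.
```

The point is not the specific protocol but the discipline: an agent never accepts a peer’s claimed identity without a verifiable cryptographic proof.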
The perfect storm
The timing couldn’t be more challenging. Three seismic shifts in cryptography are converging just as agentic AI reaches enterprise scale. Public certificate lifespans are shrinking to 47 days – a dramatic reduction from current standards. Post-quantum cryptography is approaching maturity, requiring organisations to replace algorithms that have barely changed in decades. And now AI agents are demanding certificate management at an unprecedented scale.
“I think it’s a really interesting trichotomy of things that are happening for basically cryptography that hasn’t changed in 30 years up until now,” Hickman observes. For organisations still grappling with basic certificate management, this convergence feels overwhelming.
The research bears out these concerns. Keyfactor’s studies show that 42% of organisations haven’t even started their post-quantum journey. Meanwhile, many can’t answer a basic question: who in their organisation is responsible for cryptography? “Most people couldn’t tell you,” Hickman says. “I would say that probably 50, 60% of organisations don’t have that well-defined yet.”
This governance gap becomes critical when AI agents are involved. Traditional PKI management often falls to Active Directory administrators, whose day job is directory services rather than cryptography. That creates a skills mismatch that could prove dangerous as cryptographic requirements become more complex.
Security by design, not by default
Despite these challenges, security experts aren’t panicking. The consensus among practitioners is that existing technologies can handle agentic AI – if, that is, organisations apply them correctly from the start.
“Agentic AI fits into well-understood security best practices and paradigms, like zero trust,” Wetmore emphasises. “We have the technology available to us – the protocols and interfaces and infrastructure – to do this well, to automate provisioning of strong identities, to enforce policy, to validate least privilege access.”
The key is approaching AI agents with security-by-design principles rather than bolting on protection as an afterthought. Sebastian Weir, executive partner and AI Practice Leader at IBM UK&I, sees this shift happening in his client conversations.
“There’s a number of studies that have been done showing that whilst [AI development] can be up to four times faster, the first phase of code cut has 10 times more security vulnerabilities,” says Weir. “So, the trade-off between efficiency and security is not mature yet.”
This realisation is driving organisations to front-load security considerations. Banks and financial services firms, traditionally cautious about new technologies, are leading the way. “We’re doing a lot of work with a number of banks where they’re seeing their agentic ecosystem grow,” says Weir, “and they are very conscious of how they need to evolve more established authentication and authorisation patterns.”
The solution involves treating AI agents like a hybrid of human and machine identities. They need persistent cryptographic anchors that establish their core identity, combined with short-lived authorisation tokens that grant specific permissions. This approach leverages PKI’s proven ability to scale, capabilities already demonstrated in IoT deployments managing hundreds of millions of certificates.
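As a rough illustration of that split, the sketch below assumes the PyJWT library and a pre-provisioned signing key: the agent’s long-lived identity is referenced by a stable identifier (here a certificate fingerprint), while the authorisation it carries is a short-lived, narrowly scoped token that simply expires, or can be rejected early via a revocation list. Function names such as `issue_task_token` are hypothetical, chosen for illustration only.

```python
import datetime
import jwt  # PyJWT, assumed available; any expiring token format would serve

SIGNING_KEY = "replace-with-a-managed-secret"   # placeholder, not a real key
REVOKED_TOKEN_IDS: set[str] = set()             # consulted on every request

def issue_task_token(agent_cert_fingerprint: str, scopes: list[str],
                     ttl_minutes: int = 15) -> str:
    """Bind a short-lived authorisation token to a persistent agent identity."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": agent_cert_fingerprint,  # persistent identity anchor
        "scope": scopes,                # least-privilege permissions for this task
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),  # ephemeral by design
        "jti": f"tok-{int(now.timestamp())}",                  # ID used for revocation
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def authorise(token: str, required_scope: str) -> bool:
    """Accept a request only if the token is valid, unexpired, unrevoked and in scope."""
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    if claims.get("jti") in REVOKED_TOKEN_IDS:
        return False
    return required_scope in claims.get("scope", [])
```

The design choice mirrors the hybrid identity described above: revoking a misbehaving agent means adding one token ID to a list or letting the token lapse, without touching the persistent identity that anchors audit trails.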
“PKI is accustomed to scale,” Hickman notes. “The original scale of PKI was the internet, and it continues to work at scale beyond that. We have customers that have hundreds of millions, if not billions, of certificates that have been issued and are very effectively managing them.”
The orchestration imperative
Perhaps the most critical insight from security practitioners is that managing agentic AI isn’t primarily about new technology – it’s about governance and orchestration. The same platforms and protocols that enable modern DevOps and microservices can support AI agents, but only with proper oversight.
“Your ability to scale is about how you create repeatable, controllable patterns in delivery,” Weir explains. “That’s where capabilities like orchestration frameworks come in – to create that common plane of provisioning agents anywhere in any platform and then governance layers to provide auditability and control.”
This orchestration becomes crucial when agents spawn other agents or interact with external systems. Organisations need clear policies about what their agents can do, whom they can communicate with, and how to detect when they’re operating outside expected parameters.
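A governance layer of that kind can start as something very plain: an explicit, auditable policy checked before any agent action executes. The sketch below is a minimal, hypothetical example; the policy schema and agent names are invented for illustration and are not drawn from any specific orchestration framework.

```python
# A deliberately simple, declarative policy: what each agent may do,
# which peers it may contact, and how many sub-agents it may spawn.
AGENT_POLICY = {
    "risk-report-agent": {
        "allowed_actions": {"read_trading_db", "write_report"},
        "allowed_peers": {"market-data-agent", "anomaly-agent"},
        "max_sub_agents": 3,
    },
}

def check_action(agent: str, action: str, peer: str | None = None,
                 sub_agents_spawned: int = 0) -> bool:
    """Return True only if the requested action stays inside the agent's policy."""
    policy = AGENT_POLICY.get(agent)
    if policy is None:
        return False                              # unknown agents get nothing
    if action not in policy["allowed_actions"]:
        return False
    if peer is not None and peer not in policy["allowed_peers"]:
        return False
    if sub_agents_spawned > policy["max_sub_agents"]:
        return False
    return True

# Every denial is also a signal: logging refused requests is one way to
# spot an agent operating outside its expected parameters.
```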
“I think the biggest concern is not what AI will do that I expect it to do, it’s what AI could do that I don’t expect it to do,” Hickman warns. Recent incidents, like AI systems inadvertently deleting production databases or hallucinating incorrect financial data, underscore the importance of proper controls.
The consensus is that AI agents need to be implemented within robust security frameworks. For Wetmore, the answer lies in applying well-understood security practices, especially zero trust, built on least-privilege access and the principle of “never trust, always verify”.
Trust without terror
As organisations grapple with these challenges, early adopters are finding that agentic AI security isn’t about reinventing cybersecurity but applying proven principles to new use cases. The technology exists to manage AI agents securely at scale; the question is whether organisations will implement it properly before the rush to deploy overwhelms careful planning.
“This is not a great time to try to reinvent security specific to agentic AI,” Hickman concludes. “We know PKI scales. We know certificates work. We know that an AI agent can have one certificate for identity that’s more persistent, one that’s very ephemeral for authorisation.”
The organisations that succeed will be those that recognise agentic AI as both an opportunity and a responsibility. They’ll invest in governance frameworks before deploying agents at scale. They’ll treat certificate lifecycle management as a strategic capability, not a technical afterthought. And they’ll remember that in the age of autonomous systems, trust isn’t automatic – it’s architected.
For now, that 3:47 AM AI agent continues its work, processing data and generating insights. Whether it operates within a fortress of cryptographic protection or through gaps in outdated security models will depend on the choices organisations make today. The agents are coming; and we have the technology to secure them.