A technician at a wind turbine factory glances at the dashboard: a line of robotic arms has stopped mid-cycle. On one machine, a diagnostic camera shows every part as ‘defective’ – an impossibility, the technician mutters to herself. A malicious update, slipped into the system’s over-the-air patch pipeline, has quietly poisoned the vision model running on hundreds of identical devices worldwide. Within minutes, production slows to a crawl, with no one on the team knowing whether the fault lies in the network, the AI model sitting inside it, or a third party deliberately tampering with both.
As companies rush to embed artificial intelligence into the physical world, the boundary between online cyberattacks and real-world impact is vanishing. Edge AI – the practice of running AI models directly on or near the devices that generate data – promises faster responses, reduced bandwidth costs and stronger privacy. But as intelligence moves from centralised clouds to decentralised devices, that migration has to be coupled with new security, compliance and accountability frameworks.
That’s the ideal, anyway. For the moment, most edge AI enthusiasts are busy celebrating the outsized benefits of the technology. According to RedMonk senior analyst Kate Holterhoff, the rise of edge AI heralds profound changes in how intelligent systems are built, moving computation from central servers to the source of the data itself. “It isn’t just about latency reduction,” she says. “It’s a fundamental architectural shift.”
Those benefits are driving rapid adoption. “First, latency-sensitive applications simply cannot tolerate round-trip delays to the cloud – think autonomous vehicles making split-second decisions or industrial safety systems,” says Holterhoff. “Second, bandwidth economics make it impractical to stream raw sensor data continuously, especially with high-resolution cameras or dense sensor arrays. Third, privacy regulations and data-sovereignty requirements increasingly mandate that sensitive data never leaves the device or local network.”
That’s all well and good but, as any cybersecurity expert knows, every technological advance expands the attack surface of the company adopting it – and the stakes climb when a break-in compromises not just data privacy but physical machinery. A compromised edge AI sensor could disable safety interlocks, and a tampered model might misclassify a hazard as harmless. The potential damage ranges from the merely inconvenient to the physically devastating.
Securing the silicon
How, then, do you ensure these devices remain secure, up to date and compliant? At Arm, senior director of market strategy David Maidment has spent more than a decade working on exactly that question. “We’ve got over 300bn Arm-based devices created to date, and over 100bn in deployment,” he says. “That’s a vast installed base, and as workloads become more sophisticated, so must the security that underpins them.”
For Maidment, the starting point is hardware trust. “Any edge device needs a secure root of trust,” he explains. “It must be able to boot securely, run trusted firmware and provide secure memory and crypto.” Arm co-founded the PSA Certified scheme to set that baseline – what Maidment calls “a root of trust for the world.” It has since been aligned with the EU’s Cyber Resilience Act, which will soon make secure-by-design features mandatory across connected products sold in Europe.
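The opening scenario – a poisoned over-the-air update – shows why that chain of trust has to reach all the way up to the model itself. As a rough illustration of the principle (a minimal sketch, not PSA Certified’s or any vendor’s actual mechanism), the snippet below refuses to load a model update unless its signature verifies against a key provisioned at manufacture; the Ed25519 scheme, the file names and the sign-the-digest convention are all assumptions for the example.

```python
# Minimal sketch: refuse to load a model update unless its signature verifies.
# The vendor public key is assumed to be provisioned at manufacture; in a real
# device it would be anchored in the hardware root of trust, not a plain file.
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_model_update(model_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Return True only if the update bundle matches its detached signature."""
    model_bytes = Path(model_path).read_bytes()
    signature = Path(signature_path).read_bytes()
    public_key = Ed25519PublicKey.from_public_bytes(Path(pubkey_path).read_bytes())

    # Assumes the vendor signed the SHA-256 digest of the update bundle.
    digest = hashlib.sha256(model_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    if not verify_model_update("vision_model.onnx", "vision_model.sig", "vendor_pubkey.bin"):
        raise SystemExit("Update rejected: signature check failed")
    print("Update verified; safe to load")
```

In a production device, the key and the verification step would themselves sit behind the hardware root of trust Maidment describes, rather than in ordinary files.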
Beyond that foundation, Arm’s newer architectures build protection directly into the processor. “We have instructions around Pointer Authentication and Branch Target Identification (PAC BTI) for return and jump security, and memory-tagging extensions to prevent buffer overflows,” says Maidment. In plain English, these features make it far harder for an attacker with a foothold on the device to hijack a program’s control flow or corrupt its memory. “It’s security 101, but it really matters that it’s baked into the silicon.”
At a higher level, Arm’s TrustZone and Realm Management Extensions create trusted enclaves that isolate workloads and data, effectively bringing confidential computing to the edge. “Those techniques mean I can trust that a model has privileged access only to the data it’s supposed to, and nothing else,” Maidment explains.
The Arm veteran is keen to stress that 95% of edge AI security challenges are not new at all. “They’re just traditional computer-science problems done properly,” he says. “The remaining 5% is unique to AI: protecting the model and its weights [the factors that control how much influence each input has on the final decision by the AI], preventing data corruption and managing isolation between multiple models from different vendors.”
Model integrity and the Multiverse approach
If Arm’s focus is on the foundation, Rodrigo Hernandez, global GenAI director at Multiverse Computing, is tackling security inside the model itself. Multiverse began life as a quantum computing simulation startup before pivoting into AI model compression. Its technology uses tensor-network mathematics to shrink neural networks by up to 50%, which – it claims – allows large models to run on modest edge hardware with minimal loss of accuracy.
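Multiverse has not published the details of its tensor-network decompositions, but the underlying idea – replacing large weight matrices with smaller factors that approximate them – can be sketched with a plain truncated SVD. The layer shapes and rank below are arbitrary, and trained weights typically compress with far less error than the random stand-in used here.

```python
# Minimal sketch of compression via low-rank factorisation: one dense weight
# matrix becomes two smaller factors. Multiverse's tensor-network method is
# more elaborate; this only illustrates the general parameter-cutting idea.
import numpy as np


def compress_layer(weights: np.ndarray, rank: int) -> tuple[np.ndarray, np.ndarray]:
    """Factorise an (m, n) weight matrix into (m, rank) @ (rank, n)."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    left = u[:, :rank] * s[:rank]   # fold the singular values into the left factor
    right = vt[:rank, :]
    return left, right


rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024))   # random stand-in for a trained dense layer
left, right = compress_layer(w, rank=256)

ratio = (left.size + right.size) / w.size
error = np.linalg.norm(w - left @ right) / np.linalg.norm(w)
print(f"parameters kept: {ratio:.0%}")                # 50% with these shapes
print(f"relative reconstruction error: {error:.2f}")  # high for random weights;
# trained layers have decaying singular-value spectra and compress far better
```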
Compression, Hernandez argues, is about more than efficiency. “Having a model deployed at the edge is way safer than having the application layer on the edge and then sending the information to the cloud,” he says. “Compressing models makes inference more private and secure.”
But once models live on edge devices, they become assets that can be stolen or tampered with. To guard against that, Multiverse has developed two techniques. “The first is that we can embed in the very same LLM or model the ability to do something or not do something,” Hernandez explains. “We can map a neuron, remove it, or insert one to enforce restrictions. We call it a ‘truth reinforcer’ – obviously a euphemism – but it’s better to have a gun without bullets than a loaded gun with the safety on.”
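Hernandez does not spell out how the “truth reinforcer” is implemented. As a purely illustrative sketch of the broad idea of editing individual neurons – not Multiverse’s method – “removing” a hidden unit from a toy two-layer network can be as simple as zeroing the weights that feed into and out of it:

```python
# Purely illustrative: silencing one hidden neuron in a toy two-layer MLP by
# zeroing its incoming and outgoing weights. A stand-in for the kind of
# targeted edit Hernandez describes, not Multiverse's actual technique.
import numpy as np


def ablate_neuron(w_in: np.ndarray, b_in: np.ndarray, w_out: np.ndarray, idx: int) -> None:
    """Silence hidden unit `idx` in place: it receives nothing, contributes nothing."""
    w_in[idx, :] = 0.0    # incoming weights, shape (hidden_dim, input_dim)
    b_in[idx] = 0.0       # its bias
    w_out[:, idx] = 0.0   # outgoing weights, shape (output_dim, hidden_dim)


rng = np.random.default_rng(1)
w1, b1 = rng.normal(size=(64, 16)), rng.normal(size=64)
w2 = rng.normal(size=(8, 64))

ablate_neuron(w1, b1, w2, idx=42)

x = rng.normal(size=16)
hidden = np.maximum(w1 @ x + b1, 0.0)   # ReLU hidden layer
assert hidden[42] == 0.0                # the ablated unit stays silent
output = w2 @ hidden                    # ...and cannot influence the output
```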
The second method involves statistical fingerprinting. “Once you train the neural net, its parameters follow certain distributions,” he continues. “If the neural net is manipulated or neurons are removed, we can detect that it’s been compromised.”
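Again, the exact statistics Multiverse monitors are not public. A minimal sketch of the general approach, using only per-layer means and standard deviations with an assumed tolerance, might look like this:

```python
# Minimal sketch of statistical fingerprinting: record per-layer weight
# statistics at release time, compare at load time. The statistics chosen and
# the tolerance are assumptions for illustration, not Multiverse's method.
import numpy as np


def fingerprint(layers: dict[str, np.ndarray]) -> dict[str, tuple[float, float]]:
    """Record the mean and standard deviation of each layer's parameters."""
    return {name: (float(w.mean()), float(w.std())) for name, w in layers.items()}


def tampered_layers(layers: dict[str, np.ndarray],
                    reference: dict[str, tuple[float, float]],
                    tolerance: float = 0.05) -> list[str]:
    """Flag layers whose weight distribution has drifted beyond the tolerance."""
    flagged = []
    for name, w in layers.items():
        ref_mean, ref_std = reference[name]
        if abs(float(w.mean()) - ref_mean) > tolerance or abs(float(w.std()) - ref_std) > tolerance:
            flagged.append(name)
    return flagged


rng = np.random.default_rng(2)
model = {"layer1": rng.normal(0.0, 0.02, size=(256, 256)),
         "layer2": rng.normal(0.0, 0.02, size=(10, 256))}
reference = fingerprint(model)             # taken when the model is released

model["layer2"][:, :64] = 0.5              # crude tampering: overwrite a block of weights
print(tampered_layers(model, reference))   # ['layer2']
```

In practice a fingerprint would be signed and cover richer statistics than a mean and a standard deviation, but even this crude version catches the blunt overwrite above.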
Even so, he concedes that the broader ecosystem around edge AI is immature, including the standards for how AI agents communicate with each other and with external tools. “Protocols like agent-to-agent or MCP [Model Context Protocol] are still being formalised,” he notes. “Industrial environments are very vulnerable, much more than people think. From our value proposition, we can’t fix that, but it’s a very large concern across the industry.”

Shared responsibility at the edge
Both Arm and Multiverse see a common problem: the fragmentation of responsibility.
Holterhoff warns that “edge AI security requires a shared-responsibility model that crosses the stack and includes hardware, platform, model and application.” Developers, she says, “can’t simply assume an underlying platform is secure, nor can platform providers anticipate every application-specific vulnerability.”
Maidment’s answer is to make security as low-friction as possible for the people building on top of it. “These developers are not security experts,” he says. “Frameworks and libraries have to make it easy for them to do the right thing.”
Hernandez echoes that sentiment from the application side. “Model builders like us need to deliver editable, auditable models,” he says. “AI-application developers need to deliver value, and the system providers are the ones responsible for hardening the deployment. Everyone has their layer of accountability.”
Yet, as Holterhoff points out, accountability is still legally undefined. “When an edge AI system makes an autonomous decision that causes harm, the accountability chain is murky,” she says. “Is it the model developer, the device manufacturer, the system integrator, or the end-user organisation? We need clear liability frameworks that incentivise security without stifling innovation.”
Regulators are beginning to move. The EU Cyber Resilience Act will impose mandatory requirements for secure lifecycle management, vulnerability handling and update mechanisms across connected devices. Other regions are expected to follow.
Maidment believes this is long overdue. “EU CRA is a significant forcing function,” he says. “It’s rippling into the supply chain, and everyone’s asking what they need to do to comply. That will drive a lot of security best practice into hardware and software.”
But compliance alone won’t solve the speed problem. “There’s inherent tension between decentralisation and control,” says Holterhoff. “We’ve navigated similar challenges before – from mainframes to PCs to mobile devices – but the challenge isn’t whether we can secure edge AI; it’s whether we can do it fast enough to keep pace with deployment.”
Edge AI is already transforming how enterprises design systems, manage data and define responsibility. It promises autonomy and privacy – but at the cost of visibility and control.
For Maidment, the answer lies in hardware-rooted trust and attestation. For Hernandez, it’s in model integrity and embedded safeguards. For Holterhoff, it’s in governance frameworks that recognise decentralised AI as its own paradigm. Together, their views suggest that security at the edge will depend not on a single layer, but on how well those layers cooperate, from the transistor to the tensor.