AI has become the driving force of Europe’s digital ambition. Many sectors – from healthcare to manufacturing – are racing to deploy new models, automate workflows and unlock a competitive advantage. But one problem remains: more often than not, organisations are attempting to build advanced AI on infrastructure that isn’t ready to host it.
AI conversations typically focus on GPUs, training techniques and models. Yet a more fundamental limiting factor sits deeper in the stack: organisations can underestimate how much data AI consumes, and how much storage it takes to hold and serve that data. In parallel, the European Union’s push for digital sovereignty is reshaping the regulatory environment around AI, forcing enterprises to rethink their data storage strategies.
These two forces act on different aspects of the data economy and of AI adoption, and together they create real complexity. So what should organisations consider if they want to balance both and benefit from this new reality?
How the sovereignty drive is reshaping AI infrastructure
Digital transformation has driven organisations to rely more heavily on interconnected, distributed and multi-domain IT architectures. The ‘five Vs’ of data – velocity, volume, value, variety and veracity – should also shape an organisation’s core tiered infrastructure. The demands of managing this complex architecture continue to span legal, compliance, tax, audit and risk management.
For any business, data sovereignty and governance should not be an isolated compliance task; ideally, they should shape how and where an organisation’s core systems operate. Regulatory pressure has also widened the scope of data obligations: GDPR laid the foundation, but the EU Data Act, the AI Act, NIS2 and DORA all add obligations that extend to operational infrastructure.
Modern AI workloads need far more than computational strength. They can demand petabyte-scale storage, high-performance access to data and ultra-low-latency pipelines to feed GPUs efficiently. When storage sits too far from compute, training cycles slow, costs rise and model performance suffers, all of which directly affects an organisation’s ability to deploy AI at scale.
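As a rough illustration of the feeding problem, the sketch below uses entirely hypothetical figures – the dataset size, storage bandwidth and GPU ingest rate are assumptions, not benchmarks – to show how the slower of the two rates caps training throughput.

```python
# Back-of-envelope only: every figure below is a hypothetical assumption,
# not a measurement of any real system.

dataset_tb = 500              # assumed size of the training corpus, in TB
epochs = 3                    # assumed number of passes over the data
storage_read_gbps = 40        # assumed sustained read bandwidth from storage, GB/s
gpu_ingest_gbps = 60          # assumed rate at which the GPU cluster can consume data, GB/s

total_gb = dataset_tb * 1_000 * epochs

hours_at_storage_speed = total_gb / storage_read_gbps / 3_600
hours_if_gpu_bound = total_gb / gpu_ingest_gbps / 3_600

print(f"Time to stream the data at storage speed: {hours_at_storage_speed:.1f} h")
print(f"Time if the GPUs were the only limit:     {hours_if_gpu_bound:.1f} h")

# When storage is the slower side, the gap is paid for in idle accelerators:
# the 'training cycles slow, costs rise' effect described above.
utilisation_ceiling = min(1.0, storage_read_gbps / gpu_ingest_gbps)
print(f"Approximate GPU utilisation ceiling:      {utilisation_ceiling:.0%}")
```

With these assumed numbers the accelerators could sit idle for roughly a third of the run; place the storage further from the compute, or lower its bandwidth, and the ceiling drops further.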
This is where on-premises and hybrid cloud IT strategies intersect directly with compliance and AI optimisation. Keeping data local starts as a policy requirement but can become a technical advantage. Increasingly, organisations need to co-locate data and compute within jurisdictions that meet EU compliance requirements, both to satisfy data protection and sovereignty concerns and to achieve the speed needed for real-time analytics, iterative model development and high-performance inference.
As a result, EMEA is seeing rapid growth in metro-edge data centres – localised, highly connected facilities located near major population centres or industrial hubs. Just recently, three huge new data centre schemes for London worth £10bn were revealed. Cushman & Wakefield reports that the operational capacity of EMEA data centre markets rose by 21% between H1 2024 and H1 2025, with emerging metro-edge markets outside traditional hubs like London and Frankfurt reshaping the landscape. Cities like Oslo, Dubai, Berlin, and Lisbon are also seeing rapid increases in new data centre facilities.
These metro-edge centres can offer high-density storage, local compliance and the proximity needed to reduce latency bottlenecks. For organisations building or scaling AI systems, this local-first architecture can be essential: it helps keep data within the required jurisdictional boundaries while delivering the throughput modern AI demands.
Infrastructure modernisation in practice
For all the focus on AI tools and compute hardware, the backbone of any AI strategy is the storage architecture supporting it. Without storage, there is no AI. So, in addition to optimising compute power, organisations across Europe should be asking: where is my data, and how quickly can I access it?
This shift involves three major architectural principles. The first is to design for data locality: instead of moving vast datasets to remote clouds, organisations should move compute closer to the data. Doing so can cut latency, improve model performance and lower data egress costs (a rough illustration follows these three principles).
The second is to invest in scalable, high-capacity storage. AI workflows generate, store and rely on huge amounts of data. To keep up with demands for clean, consistent and resilient datasets, AI requires expansive, cost-effective storage systems, particularly for unstructured data such as video, logs and sensor feeds. High-capacity HDDs and AI-optimised JBOD storage arrays are becoming central to long-term AI sustainability.
The third is to build hybrid ecosystems that balance sovereignty and scale. Sensitive datasets can remain in local or metro-edge environments, while public clouds handle burst compute, global collaboration and non-sensitive training workloads.
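To make the data-locality principle concrete, here is a minimal sketch, again with purely hypothetical volumes, prices and link speeds, comparing the recurring cost of shipping a dataset to remote compute with the alternative of bringing compute to the data.

```python
# Illustrative sketch of the data-locality principle above; all numbers are
# hypothetical assumptions, not quoted prices or benchmarks.

dataset_tb = 200              # assumed dataset moved for each training run, in TB
runs_per_month = 4            # assumed retraining cadence
egress_cost_per_gb = 0.08     # assumed per-GB egress fee, in euros
wan_throughput_gbps = 2       # assumed sustained WAN transfer rate, GB/s

gb_moved_per_month = dataset_tb * 1_000 * runs_per_month

# Option A: ship the data to remote compute for every run.
monthly_egress_bill = gb_moved_per_month * egress_cost_per_gb
transfer_hours_per_run = dataset_tb * 1_000 / wan_throughput_gbps / 3_600

# Option B: run the compute next to the data, so neither cost is incurred.
print(f"Option A egress bill per month: ~EUR {monthly_egress_bill:,.0f}")
print(f"Option A transfer time per run: ~{transfer_hours_per_run:.1f} h")
print("Option B (compute moved to the data): no egress fee, no transfer window.")
```

However the assumed figures are tuned, the pattern holds: the cost and delay of moving data recur with every run, while the cost of placing compute nearby is paid once.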
Together, AI and data sovereignty place regional data centres at the forefront of Europe’s AI evolution. Local data centres and storage providers are adding value for businesses through sharper local expertise and closer alignment with local governance requirements.
The result is a new reality: scalable infrastructure closer to home. The organisations that make these architectural decisions today will define the next era of Europe’s digital advantage.
Nigel Edwards is a vice president at Western Digital