It has been six years since the General Data Protection Regulation (GDPR) came into force across the EU, and privacy is no longer up for debate. Europe has already made its ethical position on the matter clear – personal data must be protected, and trust must be engineered into every system that handles it. What is less clear is how that principle holds up under the weight of Europe’s expanding web of regulation.
The AI Act and the Data Governance Act both build on GDPR’s foundations, but together they form a patchwork of compliance obligations that even seasoned experts might struggle to navigate. For businesses developing AI, scaling cloud infrastructure, or moving data across borders, concerns are beginning to mount about how workable this regulatory landscape really is. As data becomes the currency of economic and geopolitical power, the coherence of Europe’s privacy framework will determine whether the continent leads the next era of digital innovation or is slowed by the very protections that once set it apart.
Is GDPR’s legacy a foundation of trust?
When GDPR came into force in 2018, it reset the moral and operational standards of the whole digital economy. It compelled organisations to treat data less like a limitless resource, and more like something that must be earned, justified, and protected. In some ways, that shift in mindset became Europe’s greatest export. Nations from Brazil to Japan modelled their own frameworks on GDPR, and consumers worldwide came to associate European regulation with ethical stewardship. The regulation also gave businesses a common language for data accountability, forcing transparency into decision-making and creating a baseline of trust essential for digital services to function at scale.
Yet, for all its success, GDPR was a product of its time. It was written before the widespread adoption of generative AI, cloud-native infrastructure or large-scale machine learning. Its enduring principles are important, but its legal definitions and enforcement mechanisms are beginning to buckle under the weight of innovation.
Alignment or accumulation?
Alongside GDPR now sits the AI Act, the Data Governance Act, the Digital Services Act, the Digital Markets Act, and NIS2, each with its own objectives, enforcement mechanisms, and interpretations of data accountability. On paper, these laws share a common purpose: to safeguard citizens and promote responsible innovation. In practice, however, they often crowd the same operational layer, creating duplication and uncertainty.
A company developing an AI model, for instance, must reconcile GDPR’s consent requirements with the AI Act’s risk classification and the Data Governance Act’s provisions for data reuse, each with its own terminology and compliance expectations. For enterprises and public bodies, this means compliance teams spend more time interpreting intersections and jumping through regulatory hoops than innovating. Is the region rewarding caution over creativity, and how will that affect its medium-term future?
Europe’s balancing act
Cloud providers face complex cross-border data restrictions, startups wrestle with legal uncertainty around AI training data, and public agencies hesitate to adopt emerging tools without definitive regulatory guidance. Sovereignty, in this context, cannot simply mean protection. It must encompass the ability to develop, deploy, and scale technologies confidently within a clear, unified framework.
Europe’s digital future depends not on writing more rules, but on weaving existing ones into a coherent whole. A common regulatory blueprint that aligns privacy, innovation, and competitiveness would allow businesses and public institutions to move from compliance to confident growth. By translating its ethical leadership into operational clarity, Europe can prove that trust and technological progress are not opposing forces, but twin pillars of sustainable digital sovereignty.
Benjamin Schilz is the CEO of Wire