The one-year anniversary of the EU AI Act marks an important moment in AI governance. As one of the first comprehensive regulatory frameworks for AI, the AI Act has set a precedent, offering a clearer sense of direction in a space that has so far been defined by rapid innovation and regulatory uncertainty.

It would be easy to reduce the AI Act to a list of obligations or compliance hurdles. But that would miss the broader point. What we’ve seen, in practice, is that it has pushed the industry, both in Europe and beyond, in broadly the right direction. It has helped organisations to be intentional about how they build, govern, and deploy AI systems. It has also reinforced the importance of transparency, privacy, accountability, and resilience — foundational principles that any sustainable AI strategy should include.

The EU AI Act has encouraged teams to ask not just whether they can deploy a system, but whether they should, and how to do so responsibly. Previously, enthusiasm around generative AI at times outpaced careful planning. While speed and performance still matter, there’s now greater sector-wide recognition of the need to build AI systems that are secure, explainable, and aligned with organisational values.

However, many companies still see the AI Act, and the wider AI landscape, as a work in progress, which has shaped how they’ve approached adoption over the past year. For many of the organisations I’ve spoken with, the EU AI Act, while acknowledged, isn’t yet the top priority when it comes to AI. That’s not to say it’s being ignored, but the focus has often been elsewhere.

The EU AI Act’s guidance gap

One challenge is that, with parts of the AI Act still in development, key guidance, particularly around standards and implementation guidelines, has yet to be issued. This creates a certain level of operational limbo for businesses trying to future-proof their strategies. They know regulation is coming, but they don’t yet know exactly what “good” will look like.

Instead, AI security has emerged as the more urgent concern. Businesses are asking tough, pragmatic questions: how could generative models expose them to data leakage? Could adversarial inputs manipulate outputs in ways that compromise trust? What new types of cyber threats does AI introduce, and how can they get ahead of them?

These are pressing operational risks, and addressing them is often a prerequisite to scaling AI responsibly. In this way, the AI Act’s emphasis on ‘high-risk systems’ aligns with what many businesses are already prioritising, such as identifying vulnerabilities early and building AI on secure, observable foundations.

So what should companies do next? The best response is to stay agile. Firms should take the AI Act as a scaffold, an initial framework to work within, but be ready to adapt as the picture becomes clearer. Prioritise practices that will serve you well regardless of how the regulation evolves, such as investing in strong AI observability, securing the underlying infrastructure, documenting systems and decisions, and consistently applying governance.

Ultimately, I believe the EU’s decision to move early on AI regulation has had a net positive effect. It has brought much-needed structure to a fast-moving space and encouraged important conversations within boardrooms and across industries. It hasn’t solved every issue, but it has provided a starting point. And that may be its most valuable contribution: giving businesses a way to engage with the AI future responsibly, without stifling innovation before it starts.

James Hodge is the GVP and chief strategy advisor for the EMEA region at Splunk
