
As agentic AI moves from idea to pilot to production, the question is no longer whether companies will adopt autonomous systems, but whether your organisation has the technological capability, operational acumen, and risk appetite to do so.
Data teams face increasingly complex decisions about what processes can and should be automated. What’s your tolerance for autonomous decision-making in business-critical areas? What safeguards need to be in place if something fails? Preparing for agentic AI is as much an exercise in governance by data leaders as a challenge in prioritisation by the business. Ultimately, successful implementation requires thoughtful approaches to trust, transparency, and technical infrastructure that can enable autonomous agents to deliver genuine value.
The Data Readiness Challenge
Perfect data governance isn’t a luxury most organisations can afford, yet agentic AI quality depends on real-time access to high-quality information. In financial services, agents might review market conditions and adjust portfolio allocations. In retail, they could optimise stock levels and pricing based on demand signals. In healthcare, they might analyse patient data and alert providers to changes requiring attention.
“The potential is significant, but the quality of outcomes depends on the quality and accuracy of data, with which organisations often struggle,” observes Mats Stellwall, Principal Architect AI/ML, EMEA at Snowflake. “Agents need access to data, but must follow company rules just like employees do. Access must be managed appropriately.”
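The access principle Stellwall describes can be pictured as a deny-by-default policy check that an agent's data requests pass through, just as an employee's would. The sketch below is purely illustrative; the agent names and dataset labels are hypothetical, not part of any Snowflake product.

```python
# Illustrative sketch: agent data access governed by the same kind of
# role-based policy an employee would face. All names are hypothetical.
POLICIES = {
    "pricing-agent": {"sales_history", "inventory_levels"},
    "care-alert-agent": {"patient_vitals"},
}

def can_access(agent_id: str, dataset: str) -> bool:
    """Deny by default; allow only datasets explicitly granted to this agent."""
    return dataset in POLICIES.get(agent_id, set())

# A pricing agent may read inventory, but not patient data.
print(can_access("pricing-agent", "inventory_levels"))  # True
print(can_access("pricing-agent", "patient_vitals"))    # False
```

The point of the deny-by-default design is that an agent gains access only through an explicit grant, which keeps the policy auditable as the number of agents grows.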
In Europe, governance is a requirement, not a luxury: the EU AI Act introduces substantial compliance requirements, and organisations face particular complexity as a result. “One in three organisations cite AI risk challenges as the biggest barrier to adoption at scale,” notes Lewis Keating, Deloitte’s Trustworthy AI Lead UK. “The potential €35m fine for non-compliance creates additional considerations around implementation approach.”
This regulatory environment requires building compliance into foundational systems rather than adding it as an afterthought.
The Agentic Trust Equation
The trust equation is fundamental. Without trust, employees won’t adopt the technology, and customers won’t accept the output. Building that trust requires three main ingredients.
The first is reliability. Given that AI systems are probabilistic and won’t generate identical answers each time, how do you ensure consistent, appropriate outcomes across millions of decisions?
Next comes transparency. As content becomes increasingly varied in quality and source, transparency about decision-making processes becomes essential for maintaining confidence.
The final ingredient is control. This means human oversight. The traditional “human in the loop” model may evolve toward “human on the loop” approaches—maintaining strategic oversight while allowing operational autonomy.
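The shift from “human in the loop” to “human on the loop” can be sketched as a routing gate: the agent acts autonomously by default, every decision is logged, and low-confidence or high-stakes decisions are escalated to a person. This is a minimal illustration of the pattern, not a reference implementation; the class and threshold are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    confidence: float   # agent's self-reported confidence, 0.0 to 1.0
    high_stakes: bool   # e.g. business-critical or regulated domain

@dataclass
class HumanOnTheLoopGate:
    """Let the agent act autonomously, but audit everything and
    escalate low-confidence or high-stakes decisions for review."""
    confidence_floor: float = 0.8
    audit_log: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        if decision.high_stakes or decision.confidence < self.confidence_floor:
            self.review_queue.append(decision)   # a person decides
            outcome = "escalated"
        else:
            outcome = "executed"                 # agent proceeds on its own
        self.audit_log.append((decision.action, outcome))
        return outcome

gate = HumanOnTheLoopGate()
print(gate.route(Decision("rebalance portfolio", 0.95, high_stakes=True)))   # escalated
print(gate.route(Decision("restock shelf item", 0.92, high_stakes=False)))   # executed
```

The strategic oversight lives in choosing the threshold and the high-stakes categories; the operational autonomy lives in everything that passes the gate untouched.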
51% of respondents to Deloitte’s Trust in AI survey indicated that they trust businesses and organisations to use AI tools responsibly – which leaves the other 49% still to be won over.
“Trust is about how we evaluate systems and ensure their decisions are appropriate based on quality data,” says Stellwall. “Future-proofing requires the right organisational platform to accompany the technical platform.”
“Aligning with business goals is critical but challenging,” adds Keating. “You must manage agents thoughtfully and focus on communication, business change, and awareness so users trust agents to operate safely and securely.”
Without organisational trust, employees may not adopt the technology effectively, and customers may not feel comfortable with autonomous interactions.
Practical Preparation Steps
- Evaluate your data readiness. According to the latest survey from data and AI platform provider Snowflake, only 11% of early gen AI adopters say at least half of their unstructured data is ready for generative AI applications – data that is crucial for agentic context and quality.
- Consider prioritising security, compliance, and ethical frameworks early. While agentic AI creates many possibilities, thoughtful evaluation helps determine both what’s feasible and advisable for your specific context.
- Design for autonomy: Rather than retrofitting agents onto existing systems, consider how data, governance, and operational frameworks might be structured with autonomous decision-making in mind.
- Test decision frameworks: Understand how systems perform under autonomous conditions before deploying them in high-stakes situations.
- Develop management and governance capabilities: Build organisational expertise in overseeing autonomous systems, similar to developing capabilities for managing teams and data systems.
Moving Forward Strategically
The shift from pilot projects to operational deployment requires fundamental infrastructure and operational considerations that enable autonomous operations at scale. Organisations addressing these preparation questions now position themselves to implement agentic AI more effectively when business needs or competitive pressure make adoption necessary.
Success comes down to a comprehensive approach: integrating structured and unstructured data with appropriate access controls and transparent decision-making processes. Beyond that lies the very human problem of managing change.
To learn more, watch ‘Trustworthy Data for Trustworthy AI Agents,’ a webinar from Snowflake and Deloitte on how to build reliable and adaptable AI agentic systems ready for adoption and scale.