Frontier models are driving the concentration of power at an unprecedented pace, with ‘AI leadership’ often misconstrued as the culmination of progress, control, and safety. But any system that centralises far enough eventually fails all at once. What looks like coordination can quickly reveal its underlying fragility when too much depends on too few.

Fuelling it all is the race itself: an addiction to speed that sacrifices inspection for faster rollouts, trades control for growth, and optimises for momentum over understanding. While most organisations debate which LLM is the most reliable, CIOs’ attention has drifted away from the analytics and intelligence layer that will determine long-term capability and resilience. This is a problem, because if you don’t own the analytics layer, you don’t own your AI future – regardless of which foundation model you choose.

AI safety and reliability compromised

Take ChatGPT. Earlier this year, researchers from Peking University found that newer versions of the model, without clear explanation, showed a measurable shift toward more right-leaning answers when repeatedly asked the same political questions. Left unchecked, such shifts can amplify bias instead of maintaining balance, letting skewed information spiral into echo chambers.

That’s because, without intentionality, systems optimise for survival metrics like cost or conversion rather than for a healthy information ecosystem. For instance, a decision-support system used in public assistance may automatically flag certain applicants as ‘high-risk’ based on historical data, further disadvantaging already marginalised groups. A system optimising for throughput or cost-effectiveness can end up perpetuating the very inequalities it was meant to reduce.
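To make that risk concrete, here is a minimal Python sketch of the kind of check a team could run on such a system: comparing ‘high-risk’ flag rates across groups and warning when the gap is large. The data, group labels, and 0.8 threshold (a rough analogue of the four-fifths rule) are illustrative assumptions, not details from any real deployment.

```python
# Illustrative audit: compare how often a 'high-risk' flag lands on each group.
from collections import defaultdict

def flag_rates_by_group(decisions):
    """decisions: iterable of (group, flagged) pairs -> flag rate per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in decisions:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest flag rate across groups."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Synthetic decisions such a classifier might produce (assumed data).
    decisions = ([("group_a", True)] * 30 + [("group_a", False)] * 70
                 + [("group_b", True)] * 12 + [("group_b", False)] * 88)

    rates = flag_rates_by_group(decisions)
    ratio = disparity_ratio(rates)
    print({g: round(r, 2) for g, r in rates.items()}, f"ratio={ratio:.2f}")
    if ratio < 0.8:  # rough analogue of the four-fifths rule
        print("Warning: flag rates differ enough to warrant human review.")
```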

Moreover, having no clear explanation for a given outcome makes meaningful intervention nearly impossible. When the causal chain from data to decision becomes opaque, even well-intentioned updates can create unpredictable (and untraceable) shifts in model behaviour, with no clear mechanism for attributing responsibility or tracing the origin of the change.

Transparency restores traceability, turning black-box behaviour into something observable and accountable. That means being able to see what systems do with your data, and being able to audit vendors’ logs, training data, and decision processes.
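As a sketch of what that traceability can look like in practice, the snippet below appends every automated decision to an append-only log with the model version, inputs, output, and a content hash, so an outcome can later be traced back to what produced it. The field names and the JSONL file are assumptions for illustration, not a prescribed schema.

```python
# Minimal decision audit trail: each model call is recorded with enough
# context to trace an outcome back to its inputs and model version.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"  # append-only log, one JSON record per line

def record_decision(model_id: str, model_version: str,
                    inputs: dict, output: dict) -> str:
    """Append one decision record and return its content hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    payload = json.dumps(record, sort_keys=True)
    record["record_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]

if __name__ == "__main__":
    # Hypothetical usage alongside an automated eligibility decision.
    record_decision("risk-scorer", "2025-06-01",
                    {"applicant_id": "A123", "income_band": "low"},
                    {"flag": "high-risk", "score": 0.87})
```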

Giving up local control means giving up autonomy 

Additionally, if a business builds deeply on proprietary infrastructure, it becomes extremely difficult to leave, modify, or challenge decisions made upstream. Consider AWS region lock-in or OpenAI API limits: both create dependencies that aren’t just technical but structural, depriving the firm of the ability to govern its own data, compute, or model logic. And the more an ecosystem relies on a few dominant providers, the more decision-making power concentrates upstream.
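One practical counterweight is to keep an interoperability seam inside the stack: application code depends on a small in-house interface, and each provider sits behind an adapter that can be swapped or self-hosted. The sketch below is illustrative only; the names are assumptions, and the hosted adapter leaves the vendor call as a placeholder rather than assuming any specific SDK.

```python
# Illustrative interoperability seam: business logic depends on a small
# in-house interface, not on any one vendor's SDK, so providers can be swapped.
from typing import Protocol

class CompletionProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedProvider:
    """Adapter for a hosted API (the vendor SDK call would go here)."""
    def complete(self, prompt: str) -> str:
        # Translate to the vendor's request format and return its text.
        raise NotImplementedError("wire up the vendor SDK behind this method")

class LocalProvider:
    """Adapter for a self-hosted model (local inference call would go here)."""
    def complete(self, prompt: str) -> str:
        return f"[local model reply to: {prompt}]"

def summarise_ticket(provider: CompletionProvider, ticket_text: str) -> str:
    # Application code only ever sees the CompletionProvider interface.
    return provider.complete(f"Summarise this support ticket: {ticket_text}")

if __name__ == "__main__":
    print(summarise_ticket(LocalProvider(), "Customer cannot export analytics."))
```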

Interoperability preserves the freedom to decide how data is used, allowing businesses to move, modify, or integrate systems while preventing lock-in. In an age of borrowed tools, dependency may offer the illusion of safety, but it comes at the cost of the autonomy that makes safety sustainable. When we talk about AI safety, we’re really talking about governance beyond models – and sovereignty begins in the data and intelligence stack beneath them. Until we approach that foundation with the same urgency we apply to models, safety will remain a promise built on fragile ground.

Onur Alp Soner is the founder and CEO of Countly
