All views are my own
Approximately 80% of government cloud systems rely on just two providers, primarily AWS and Microsoft. When an American company experiences an outage, HMRC services become unavailable. Most people, asked whether this arrangement is one we should be comfortable with, would say no. Yet we’ve arrived here through a long sequence of individually reasonable decisions about cost, capability, and convenience. The question of AI sovereignty is, at its root, the same question: what are we willing to depend on others for, and where do we insist on holding our own ground?
I was listening to a panel on AI sovereignty recently, and a panellist claimed that “sovereignty is difficult to define.” I agree that AI sovereignty is difficult to define, but people fall into the trap of treating the compound phrase as novel without first considering sovereignty alone. What underpins any nation state is the ability to act freely, to make decisions without being coerced by another. That basic concept isn’t difficult to define at all. It’s the lack of attention to it that makes the AI sovereignty conversation harder than it needs to be.
The framework for thinking about this isn’t new. Sovereignty in an analogue age meant securing borders and ensuring your own farms and resources could sustain your people. This shifted as global trade developed, and solidified around the 1870s, when industrialised shipping made Britain a net food importer for the first time. From that point onwards, the question wasn’t whether to depend on others, but how to structure that dependence so it couldn’t be weaponised. In the early digital age the same question returned in new form: own the frameworks that underpin your services, or outsource them. Mostly, we outsourced.
AI presents the question again, and two things make it especially salient.
The first is that AI isn’t a resource to be consumed, but increasingly the layer through which decisions get made. Outsourcing oil means depending on someone for fuel. Outsourcing AI means depending on someone for the medium in which your government, healthcare, and military make decisions.
This has two implications. There’s the obvious risk of dependency: someone else controlling the infrastructure underpinning our economy. But there is a more subtle risk, where models built by US labs are trained on American data, shaped by American values, and fine-tuned against American sensibilities. A healthcare system, a public broadcaster, or a government running on American models is making American-influenced judgements. This drift is slow and hard to notice, and by the time it’s visible it may be hard to reverse.
The second is diffusion. Oil sits in one sector. AI propagates through every sector that involves cognition or judgement, which is most of today’s economy. A ring-fenced oil dependency can be managed with strategic reserves and diversified suppliers. A dependency diffused through every layer of your economy and state cannot be ring-fenced at all.
So what does sovereignty look like here? Not autarky; for the last few hundred years that has been an unachievable goal. The useful model is mutual dependence. Taiwan’s silicon shield works because TSMC sits on a layer of the global stack that nobody can sever without harming themselves. The UK doesn’t need to own the full AI stack. It needs to own enough of a layer that others need us, enough to guarantee a seat at the table for the foreseeable future.
The window for this is narrow. The AI stack appears to be consolidating toward a bipolar US/China structure, unlike semiconductors, where multipolarity (Dutch lithography, Japanese chemicals, British chip design, Korean and Taiwanese fabrication) created space for middle powers. The UK currently has real assets — AISI, DeepMind, a high-quality research base — but these are time-limited. AISI’s seat depends on continued technical credibility, political support, and talent retention, none of which are guaranteed five or ten years from now. DeepMind is in London but owned in the US. We are, for now, enjoying the dividends of proactive decisions, some made decades ago. Coasting on this is not a strategy.
Which layer we should aim to own remains an open question with no clear route to an answer. That it has to be some layer, and that the window to decide is now, seems to me the point worth highlighting.