
The case for a National AI Service


Evidence to the All Party Parliamentary Group on AI

When I was told this would be a six-minute talk, I thought about my lawyer, Chris, who bills in six-minute increments. So I picked up the phone and called him.

Six minutes is a long time. Long enough for a generative model to do a day's work that used to belong to someone like Chris. That is the productivity story, and it is the one most people in this room have already heard. I want to talk about a different one.

Six minutes is also a long time for a child. In the lifetime of a child starting school this year, AI will not be a tool they use. It will be the medium through which they learn, work, and form judgements about the world. The question is not whether they will have access to it. The question is whose AI they will have access to, and on what terms.


Right now, the answer is: whichever company gets to them first.

The right to education is guaranteed by Article 2 of the First Protocol, given effect in UK law by the Human Rights Act. In the AI era, that guarantee is hollow unless it extends to the technology through which education increasingly happens. A child whose AI is owned, tuned, and monetised by a US platform is not in the same educational position as a child whose AI is accountable to a public institution. We do not let private companies own the curriculum. We should not let them own the interface to it either.


The technical objection — that meaningful AI is too expensive, too opaque, and too centralised to provide as a public service — is becoming less true every quarter.


Small language models now deliver much of what the frontier models offered eighteen months ago, at a fraction of the cost, with full transparency about what data they were trained on, and with the ability to run locally rather than in someone else's data centre. A personalised SLM, trained on materials a teacher controls and a parent can inspect, is not a hypothetical. It is a procurement decision.

This matters beyond schools. The same architecture that makes a private, accountable AI possible for a child makes it possible for an NHS patient considering whether to enter a trial, a citizen interacting with a public service, or a small business owner who cannot afford to put their data into a platform that monetises it. The choice is not between frontier AI and no AI. It is between AI as infrastructure and AI as a product sold to us by people who own the data we generate by using it.

The precedent for treating critical infrastructure as a public good is not new in this country. It is the argument that built the NHS. The case for what I would call a National AI Service — public, private, participative, available to every citizen — rests on the same logic. AI is becoming the layer through which people will access information, make decisions, and exercise judgement. Leaving the ownership of that layer entirely to the private market is a policy choice, not an inevitability.

The window to make a different choice is open. It will not stay open for long.
