A response to recent criticisms and a call for principled, not performative, reform.
The Alan Turing Institute (ATI) has, in recent months, become a magnet for critique. Some of it comes from the political centre, from critics disillusioned by what they see as a drift from scientific excellence into bureaucratic stagnation. Some comes from the left, which has raised long-standing concerns about governance, diversity, transparency, and the use of public funding to replicate elite networks rather than dismantle them.
These criticisms are not baseless. But neither is the ATI the failure some have suggested it is. To write it off entirely risks obscuring the deeper structural challenges it reflects, and the potential it still holds.
Many of the criticisms levelled at the ATI point to its proximity to power without the accountability that should accompany it. The Institute’s leadership, particularly in recent years, has come under fire for a lack of diversity, with four men appointed to senior roles in quick succession, and for a governance model that replicates, rather than resists, the hierarchies of the British research and policy establishment. It’s a legitimate concern: when an institution is tasked with shaping national AI strategy but led by a narrow cross-section of society, the horizon of possibility narrows with it.
There’s also the question of its intellectual ambition. Some critics argue that the ATI has not delivered a coherent or impactful public mission. They suggest it has been captured by consultancy logic: project-based, partnership-chasing, always busy, rarely bold. Some have even described it as a “fancy funding agency with branding.” This diagnosis, while partly accurate, flattens the work the Institute has done and ignores the structural conditions under which it operates. It also risks reinforcing a narrative that the ATI’s failures are moral or individual (a few bad appointments, a few timid strategic plans) when they are, in fact, systemic.
Let’s be clear: the ATI was never going to be a CERN for AI. It was not funded to conduct massive, discovery-driven research, nor was it ever institutionally positioned to direct state strategy at scale. It has been tasked with doing a lot, with relatively modest resources, within a fragmented UK research landscape that often rewards policy-friendly branding over deep political or ethical engagement.
Despite this, its research portfolio includes meaningful contributions: work on misinformation, the ethics of AI, explainability in machine learning, and societal data science. It has brought together interdisciplinary teams across universities and public bodies, no small feat in an academic culture that is often siloed and territorial.
It has convened conversations around algorithmic fairness, online harms, and governance in data-driven systems that, while imperfect, have helped to shape public and regulatory awareness. And it has attempted, however unevenly, to challenge the narrative that AI is neutral, technical, or separate from politics.
Many of the critiques from the left mirror broader concerns about the UK research ecosystem. The ATI is seen as emblematic of:
Elite overrepresentation: Too Oxbridge, too white, too male, too far from lived experience.
Policy capture: Too close to Whitehall, too eager to please funders, not independent enough to challenge harmful state or corporate agendas.
Symbolism over substance: Using the progressive language of “ethics”, “responsible AI”, and “societal impact” without meaningful redistribution of power or voice.
Again, there’s truth here. But it’s also true that no public institution (and certainly no public institution tasked with AI research) escapes these dynamics entirely. The ATI has inherited a policy culture that values visibility over depth, outputs over outcomes. Its failings are real, but they are not uniquely its own.
To demand that a single institution solve the structural problems of academia, tech, and government combined (and do so on a constrained budget, in a deeply politicised science funding environment) is to ask it to do something no other British institution has done.
The ATI doesn’t need to be excused. But it does need to be understood.
Rather than tearing it down, what if we on the left redirected our critique of the ATI into outlining a vision of what it was meant to be, and could still become?
Picture a public technical institute focused on building tools, methods, and standards that actively mitigate harm and reduce inequality. Not just studying bias, but creating real, usable systems that help detect and correct it. Not just publishing position papers on fairness, but building open-source libraries that help engineers embed fairness constraints into models (a sketch of what one such primitive might look like follows the list below).
It could be:
A centre for auditable AI systems, helping institutions build technology that can be explained, challenged, and reformed.
A home for worker-informed AI research, led by unions, co-ops, and grassroots groups.
A lab for algorithmic repair, where tools are created to actively undo bias, reduce surveillance harms, and support justice-focused uses of data.
A bridge between policy and practice, translating critical research into public infrastructure, not just papers, but systems, standards, and shared ownership.
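To make the “fairness constraints” point concrete, here is a minimal sketch, in Python with NumPy, of the kind of primitive such a library might offer: a logistic regression whose training loss carries a soft demographic-parity penalty. Everything here (the function names, the penalty form, the toy data) is an illustrative assumption, not a description of any existing ATI codebase.

```python
# Minimal sketch: logistic regression with a soft demographic-parity
# penalty. Illustrative only; not an existing library's API.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y, group, lam):
    """Binary cross-entropy plus lam * (gap in mean predicted score
    between the two groups)^2, i.e. a soft demographic-parity term."""
    p = sigmoid(X @ w)
    eps = 1e-12
    bce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    g0, g1 = (group == 0), (group == 1)
    gap = p[g0].mean() - p[g1].mean()          # demographic-parity gap
    loss = bce + lam * gap ** 2

    grad_bce = X.T @ (p - y) / len(y)          # standard logistic gradient
    dp = p * (1 - p)                           # sigmoid derivative
    dgap = (X[g0] * dp[g0, None]).mean(axis=0) \
         - (X[g1] * dp[g1, None]).mean(axis=0)
    return loss, grad_bce + 2.0 * lam * gap * dgap

def fit_fair(X, y, group, lam=5.0, lr=0.5, steps=2000):
    """Plain gradient descent; lam trades accuracy against parity."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        _, grad = loss_and_grad(w, X, y, group, lam)
        w -= lr * grad
    return w

# Toy demo: the first feature correlates with group membership, so an
# unconstrained model (lam=0) scores the groups unequally; with lam > 0
# the gap should shrink, at some cost in raw accuracy.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
x0 = rng.normal(size=n) + 0.8 * group
X = np.c_[x0, rng.normal(size=n), np.ones(n)]  # two features + bias column
y = (x0 + rng.normal(0.0, 0.5, n) > 0.4).astype(float)

for lam in (0.0, 5.0):
    w = fit_fair(X, y, group, lam=lam)
    p = sigmoid(X @ w)
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    print(f"lam={lam}: parity gap = {gap:.3f}")
```

The design point worth noting is that the penalty is differentiable, so fairness becomes a tunable term inside the optimiser rather than a post-hoc audit; existing open-source efforts such as Fairlearn take a broadly similar constrained-optimisation approach. This is exactly the kind of engineering artefact, shipped and maintained in public, that the vision above asks for.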
This vision demands humility, politics, and public funding. But it’s possible. And it’s more useful than writing another paper about principles.
One of the most consequential steps the Turing Institute could take right now (and one area where I find myself in agreement with its more centrist critics) is to spin out CETaS, the Centre for Emerging Technology and Security, into its own standalone institution.
Not because defence work is illegitimate, or because the ATI should somehow be “above” national security. Quite the opposite: defence is too important to be run through the same infrastructure tasked with open, civilian-facing AI research.
CETaS was created to advise on some of the most sensitive and strategically complex questions in UK security policy, including the military applications of AI, geopolitical tech rivalry, and the risks of adversarial machine learning. These are high-stakes domains that require a clear mandate, rigorous governance, and appropriately siloed operations.
Folding this work into the ATI, a public research institute that is also meant to convene NGOs, campaigners, and citizen technologists, risks muddying the waters for both. It blurs each body’s purpose, dilutes trust among community partners, and raises unnecessary barriers to public engagement with the ATI’s civic work.
CETaS should exist. But it should exist as a dedicated national security institute, with its own leadership, governance, and oversight. That separation would allow the ATI to reorient itself fully around public-good innovation, while enabling the UK’s defence work on emerging tech to be led by a body designed specifically for that task.
This isn’t about abandoning complexity. It’s about respecting it, and designing institutional structures that are clear, credible, and coherent.
There is no easy fix for a public research landscape shaped by austerity, private partnerships, and politicised funding. But we still need public institutions capable of imagining and building alternatives to the extractive, surveillance-driven AI futures we’re being sold.
The ATI is imperfect, but it is salvageable. It has infrastructure, reach, and momentum. And with the right leadership and political clarity, it could become a site of principled, critical, and applied work. One that doesn’t just ask what AI can do, but what it should.
To abandon it now is to abandon the public project of technology altogether.
Let’s demand more, but let’s not mistake critique for strategy. Because the real question isn’t whether the ATI is flawed. It’s who will shape what comes next if we decide it’s beyond saving.
None of this will happen easily. But rejecting the ATI altogether does not get us closer to these goals. It just clears the field for those with even fewer commitments to the public interest.
So yes, be critical. Be sharp. But be strategic. Because a hollowed-out ATI will not be replaced by a decolonised, democratised, worker-led research institute. It won’t even be replaced by the techno-centrist vision of a government-funded institute capable of creating the next ChatGPT. It will be replaced by Palantir.
And that is not a future worth surrendering to.