As Prime Minister Keir Starmer announces an increase in artificial intelligence (AI) use in the civil service, Dr Mark Wong examines the UK's approach to AI. Drawing on his research, he argues that UK policy lags behind in developing responsible AI, a missed opportunity that risks potentially ‘dire consequences’ if it does not centre the voices, needs, and actions of people most impacted by AI harms.

Blog by Dr Mark Wong, Senior Lecturer (Urban Studies & Social Policy)

Ahead of the Chancellor’s Spring Statement this week, substantial cuts across government departments are anticipated, and both the Prime Minister (PM) and Chancellor have claimed that AI will make these cuts ‘more than possible’. The PM reiterated earlier this month that AI would deliver efficiency gains and take over civil servants’ work, which AI might do more quickly or allegedly better.

Back in January, PM Keir Starmer announced the UK AI Opportunities Action Plan. There were big promises about ramping up data centres, rolling out AI in healthcare, and making government more efficient. His recent speech parroted the narrative that AI is going to save the public sector, calling AI ‘a golden opportunity’. But at what cost? This feels all too familiar.

Bias, misinformation, and limitations

Big AI tech companies have been attempting to convince the public that AI is the ‘saviour’. But more than a decade of research has shown two things.

First, AI, especially large language models like ChatGPT and Gemini, is harming people by perpetuating misinformation, bias, stereotypes, and discrimination rooted in racism and sexism.

Second, the ‘ChatGPT’ bubble is starting to burst, and doubts are emerging from academia, international governments, and activists. The question is whether generative AI has reached a ceiling in its capability. It’s commonly assumed that the technology will become more sophisticated and eventually reach ‘general intelligence’, able to do all things autonomously.

But the latest research suggests that kind of technology may never exist. Generative AI took the world by storm two years ago, yet its limitations are already becoming apparent. Other research also shows that correcting generative AI responses places additional labour and stress on workers.

The ‘turbocharging’ of AI will affect how government goes about its business. But what this amounts to is putting faith in the machine and leaving the people behind.

UK lagging behind in developing responsible AI

Starmer also expressed excitement about a ‘global race’ in AI to assist healthcare, education, and public administration. AI tools developed in-house by the government, such as ‘Humphrey’, ‘Consult’ and ‘Parlex’, will soon be ready to offer automated analysis of public consultations and to help Ministers predict MPs’ responses to new policies.

The opportunities are growing and exciting. But the need to develop AI in responsible ways, to ensure it is fair and truly beneficial for everyone, is growing even more quickly. We still don’t fully know how, and government policies are not doing enough to even try.

To ensure these opportunities are shared fairly with everyone, the government needs to involve the public in governing how and why AI is used in the public sector.

This is exciting not only for democratising the development of AI but also for widening the perception of AI skills, so the focus is not only on elitist technical development but also on good governance and accountability, especially through participatory governance.

This should have been the direction of travel for AI. Responsible AI is gaining traction in global AI policy conversations at the United Nations, the OECD, the European Union, and the African Union, and in Japan, South Korea, Scotland, and elsewhere.

But the UK is lagging behind and missing the opportunity to develop a truly responsible AI ecosystem.

Co-designing and rebalancing power

Research at the University of Glasgow has shown that preventing inequalities in digital services and AI requires involving the public from the get-go. Our co-created code of practice, published in December 2024, provides an example of how the government can develop digital services in more equitable ways.

Our research has also shown that co-design methods, such as people’s panels and co-design workshops, help ensure the voices and expertise of adversely racialised people and of the communities most negatively impacted are valued in the AI ecosystem. This approach echoes Demos’ report this week, which calls for government to shift from ‘citizen engagement to citizen participation’ to mobilise mission-led government.

The future of AI is not the technology

What we need, therefore, is to involve the public in AI governance. Responsible AI is about considering who is most impacted and rebalancing who holds power. This will allow diverse perspectives to participate in determining and auditing how AI should or should not be used in government.

This is why the future of AI is not the technology; it’s the people, our communities, and our planet. We need to democratise the governance of AI. The government must adopt a new approach to responsible AI, one that centres the voices, needs, and actions of the people most impacted by AI harms.

The consequences of not tackling this seriously are too dire. We have to question who has historically been marginalised and excluded.

Author

Dr Mark Wong is a senior lecturer and subject group lead of social and urban policy at the University of Glasgow.

He is an expert in responsible AI, racial justice, and the use of co-design methods to amplify the voices and expertise of adversely racialised people in technology. His research addresses the bias and harms of AI and data, and the impact of digital transformation on society.

Photo by Vicky Yu on Unsplash


First published: 24 March 2025