ai@cam’s mission is to catalyse a new wave of AI innovation that serves science, citizens, and society. We want to build bridges between technology development and societal need, so that advances in AI capabilities translate into wider public benefit. Understanding public views on AI is vital to delivering that mission.
To help us understand where there may be gaps in AI innovation, or concerns about how AI is developing, in September 2024 we convened public dialogue sessions in Cambridge and Liverpool that explored the use of AI in public services. These dialogues took place at a time when the national conversation about AI policy was shifting. The AI Opportunities Action Plan review has been asking how AI could deliver economic growth, while discussions about the Missions for Government have been considering what policy interventions could help make progress in tackling critical social challenges, and the Department for Science, Innovation and Technology has been creating a new digital centre to drive technology adoption in public services. In that context, our dialogues set out to explore people’s views on the role of AI in helping to deliver public services connected to four of the Government’s Missions: crime and policing, education, health, and energy and net zero.
Policy debates about AI are often bluntly framed as a conflict between innovation and regulation. However, public views are much more nuanced. People recognise the benefits that AI can bring across all of the Missions for Government, but want guardrails in place to prevent harm from its use. When thinking about AI in public services, the people we spoke to talked about the importance of reducing administrative burdens on over-stretched public service workers, and the potential for AI to automate or speed up routine administrative tasks. They highlighted the opportunity for AI to improve decision-making by leveraging insights from data, and expressed the hope that AI might help make high-quality public services more accessible. They were also clear that delivering these benefits requires careful design and governance.
We consistently heard three key messages:
AI should be a co-pilot, not a replacement. User need must be the starting point in developing AI-enabled public services. Participants told us that frontline staff and service users should be actively engaged in discussions about the use of AI, and that AI tools should be designed as a support – or co-pilot – for decision-making. We heard that AI should not be used as a substitute for human contact, or to depersonalise people’s interactions with the state or each other. This means thinking carefully about how AI tools can be designed to empower their users, and about how to make alternative services available to people not using digital tools.
Public benefit should be the primary driver of AI adoption in public services. Many participants discussed the pressures that increasing demand and constrained resources were placing on public services. In that context, their hope was that AI could ease the strain, freeing up public sector workers to concentrate on engaging with service users. This hope, however, came with scepticism about the current trajectory of AI development. Participants talked about the importance of frontline public services having the opportunity to experiment with AI and understand how it could benefit the communities they serve. Yet they appreciated the difficulty of creating space for this experimentation – or for the introduction of any innovative approaches to service delivery – while services are under such pressure. Alongside this scepticism, there were generally low levels of trust in tech developers to work in the public interest, and concerns that personal data held by government would be sold for commercial gain.
Robust governance is needed to provide checks and balances. Concerns about the potential risks associated with AI, and about the role of big tech in AI development, underpinned calls for strong, independent oversight of AI development and use. Participants wanted regulators to have powers to ensure transparency in relation to the use of AI, to maintain high standards of security, to prevent bias or discrimination, and to protect people’s personal data and privacy. They called for governance interventions that centre public benefit in AI development, ensure organisations are held accountable for how they use AI, and tackle power imbalances between individuals, government, and big tech.
The report we’re now publishing contributes to a growing body of evidence about public views on AI. One of the clear themes emerging from this work is the demand for a democratic approach to AI governance. Such an approach would be grounded in efforts to provide public information and education about AI. It would create spaces for stakeholder engagement and consultation around AI development and use. And it would establish independent governance and oversight mechanisms. We’ve heard this message. The question now is how we can respond.
Cambridge University is both an engine for and a steward of AI innovation. We can play a role in connecting these public conversations about AI to upstream technology development, and to live policy debates. ai@cam will be integrating insights from these dialogues into the next phase of our work. In the near term, we’ll be using these findings to inform our Policy Lab activities. In the longer term, we’re looking at how we can create a platform for continuing these conversations between publics, policymakers, and researchers about our future vision for AI, and how to deliver it. For now, we’d like to thank all of the participants in September’s dialogue sessions for their time and energy.
Read the AI Public Dialogue Report
Jess Montgomery is Director of ai@cam at the University of Cambridge.