A new ai@cam report sets out the guardrails needed to increase democratic control over the use of AI in public services in the UK.
Last month ai@cam convened public dialogue workshops in Liverpool and Cambridge to better understand public perspectives on the role of AI in delivering the Missions for Government, focusing on crime and policing, education, energy and net zero, and health.
Along with AI experts from the University of Cambridge, University of Liverpool, King’s College London, and University of Manchester, 40 members of the public discussed their aspirations for AI in public services and the interventions needed to shape its development. Findings from those conversations have been published today in a new report that gives the first analysis of how the public think AI could help deliver the Missions for Government. The report outlines the opportunities for AI to improve people’s interactions with public services and reflects growing demand for governance that centres public benefit in AI development and use.
Findings suggest a roadmap that can help policymakers steward the development of AI technologies to deliver widespread public benefit.
Participants in this dialogue called for:
- Further transparency around where and how AI is being used, supported by education and public information initiatives that help build understanding of AI.
- Governance that makes public benefit the guiding principle for the use of AI in any public service, backed by powers to prevent profit being prioritised over service quality.
- Independent regulatory bodies that are equipped with the power to influence how and where AI is used, and that can take action to prevent misuse.
- Collaborative and user-centred design practices that centre user need in AI development, and that take action to remove bias and improve reliability.
- Data sharing regulation that protects privacy, while enabling the use of data to deliver public benefit.
- Robust security systems to prevent cyber threats and fraud involving AI.
Participants were clear that AI should not be used to replace, or be seen as a substitute for, human contact. A shared desire amongst participants was to see AI deployed in support of human decision-making, or with human oversight. At a time when many public services are overstretched, many participants agreed that there was an opportunity for AI to reduce the time required for administrative tasks, particularly in the NHS, allowing frontline staff to spend more time with patients and those in their care. It was noted, however, that service users and staff needed to be at the forefront of discussions around the implementation of AI tools within public services, and actively engaged throughout the decision-making process.
Reflecting on the insights from the public dialogues, Mo Vali said: “The discussions today highlighted clearly that participants did not want to lose that human-to-human interaction when engaging with public services. It is crucial we do not underestimate the value we place on personalised, sensitive and individual human interaction, especially when it comes to our teachers, nurses, and doctors.”
Jessica Montgomery, Director of ai@cam, said: “We need new ways of centring social interests and needs in policy and technology development around AI. Engaging the public in a conversation about our shared vision for AI technologies plays an important role in connecting advances in technology and policy to public benefit. The report published today contributes to a growing body of evidence about the interventions that the public want to see to steward the development of AI. Now it is our time to respond.”
Prof Neil Lawrence, Chair for ai@cam, said: “The challenge for the next wave of AI development is to create meaningful, real-world benefits for individuals, communities, and society. Giving people the confidence that their voice is not only important but heard allows us to build the bridges between organisations and communities that can drive real progress in AI research.”
Read the AI Public Dialogue Report