Abstract
This report presents findings from public dialogue workshops conducted in Liverpool and Cambridge to understand public perspectives on the role of AI in delivering the UK Government’s Missions. Focusing on crime and policing, education, energy and net zero, and health, the dialogues reveal public aspirations for AI in public services and highlight necessary interventions to ensure AI delivers public benefit while maintaining appropriate safeguards.
Background
In September 2024, ai@cam collaborated with the Kavli Centre for Ethics, Science, and the Public to convene public dialogue workshops in Liverpool and Cambridge. Forty members of the public participated in discussions focused on their aspirations for AI in public services and the interventions needed to shape its development responsibly.
The dialogues, designed and delivered by Hopkins Van Mil, specifically examined public perspectives on AI’s role in four of Labour’s Missions for Government: crime and policing, education, health, and energy and net zero.
Methodology
Participants were divided into smaller groups, each dedicated to one of the four government missions. Each group was supported by AI experts from the University of Cambridge, the University of Liverpool, King’s College London, and the University of Manchester. Guided discussions explored the guardrails and possible interventions needed to ensure AI delivers public benefit in these areas.
Key Findings
The dialogues revealed significant insights into public expectations for AI in government missions:
- Public participants want greater transparency around AI usage in public services
- Governance should prioritize public benefit over profit in AI applications
- Independent regulatory bodies need appropriate powers to influence AI implementation
- Collaborative and user-centered design practices are essential to reduce bias
- Data sharing regulations must balance privacy protection with enabling public benefit
- Robust security systems are required to prevent AI-related cyber threats and fraud
Recommendations
Based on the dialogue findings, the report recommends:
- Develop transparency frameworks for AI used in public services
- Establish governance structures that prioritize public benefit
- Empower regulatory bodies with appropriate oversight capabilities
- Implement collaborative design practices that involve end users
- Create balanced data sharing regulations that protect privacy while enabling innovation
- Invest in security infrastructure to mitigate AI-related threats
Implications for Policy
The findings from these dialogues offer valuable insights into a future vision for AI in UK public services. They provide a roadmap that can help policymakers steward the development of AI technologies in ways that align with public values and expectations.
This work aims to inform ongoing conversations about driving AI innovations that deliver widespread public benefit while addressing concerns about transparency, governance, and ethical implementation.
Acknowledgments
ai@cam would like to thank all project collaborators, including those from the University of Cambridge, the University of Liverpool, the University of Manchester, King’s College London, and Hopkins Van Mil, and especially the public participants who contributed their time and insights.
Download the Full Report
The complete findings from the public dialogues can be found in the full report.