How do we regulate to ensure the development of AI for societal benefit?

Sam Gilbert and Ann Kristin Glenster

01 November 2023

Progress in AI is highlighting the importance of effective policy frameworks and regulations to unlock the technology’s benefits. The nature of these frameworks is the focus of a recent policy briefing by researchers at the University of Cambridge.

Sam Gilbert is an entrepreneur and Affiliated Researcher at the Bennett Institute for Public Policy, working at the intersection of politics and technology. His research interests include the political legitimacy of big tech companies. In this post, he explains how AI can be used ethically by large businesses.

Ann Kristin Glenster is a Senior Policy Advisor on Technology Governance and Law at the Minderoo Centre for Technology and Democracy, with expertise in data protection and privacy, cyber security, and the regulation of algorithms and AI in the EU, US, and UK. In this post, she discusses the importance of regulation and the risk of inequality.

How does the UK’s regulatory landscape currently support AI development?

AKG: The UK government has taken what it calls a pro-innovation stance to AI regulation, asking existing regulators to integrate five value-based principles into their frameworks. This is sensible, because legislation is time-consuming to pass and regulatory action is urgently needed to keep pace with innovation in the AI field. However, the principles are open to interpretation, and it isn’t clear how regulators or companies will implement them. I would welcome an assessment of how existing laws, on data protection for example, could be applied to make AI safe and trustworthy. Such a mapping would help regulators take a coordinated approach. We also need to recognise that consumers and small businesses won’t be able to adopt AI technologies if they have only limited access to broadband, or if electricity is prohibitively expensive.

SG: The UK currently has more generative AI startups per capita than the rest of Europe, but lags behind the US. This is probably less the result of a proactive and coordinated AI strategy than of very strong research capabilities and an established startup ecosystem in London, from which companies like Stability AI have emerged. Clearer regulations and guidance would likely help the UK’s startup ecosystem flourish. Another barrier is technical infrastructure. Companies wishing to build or operate generative AI models require vast computing resources, including GPU clusters, of which few are available in the UK.

What action is needed to make AI safe and ethical?

AKG: We currently lack regulations that ensure consumers and businesses can trust AI. We need to define more precisely what ‘safe’ and ‘ethical’ mean, and to have mechanisms for accountability, which in turn require transparency and explainability. In its policy paper, the government has set out five value-led principles to guide AI regulation, such as ‘appropriate transparency’, but I believe these are little more than slogans at this point. The principles need to be defined and translated into clear, measurable standards, preferably based on recommendations from independent experts with adequate access to empirical data and research. Upholding social norms and responsible business practices is vital too, but it is important that we do not let the technology sector decide how to proceed.

SG: Some businesses have banned their employees from using tools such as ChatGPT for fear of negative legal or reputational consequences. A lack of regulation, or of regulatory clarity, surrounding AI technologies creates uncertainty, which may prevent established businesses and public sector organisations from adopting technologies that could boost productivity. A lot of the responsibility for creating ethical AI lies with the companies that develop it. Large technology companies have become extremely powerful political and societal actors, so it is imperative that we collectively consider how they should be held to account if they do not exercise their power in a responsible way. While strong regulation is vital, technology companies must also foster a culture of societal responsibility. Some AI companies, like OpenAI, are proactively calling for stronger regulation of tech firms.

What do we need to do to make sure that AI serves society?

SG: AI technologies are tipped to bring huge productivity benefits to businesses and the economy. This transformation could be supported by tax and investment incentives to drive adoption by organisations. It will also require infrastructure, including GPU clusters hosted in the UK, so that organisations holding sensitive data, such as the NHS, can use them. If this happens, we could see incredible societal benefits. But we also need to recognise that generative AI opens up safety-related risks, particularly with open-source software, which is often championed by pro-competition policymakers. The pros and cons of modifiable, open-source AI software are a hot topic.

AKG: AI could bring massive societal benefits in healthcare and education, for example. However, unequal access to AI technologies could perpetuate inequalities, marginalise people, and polarise society, particularly if these technologies are only accessible to select groups or disappear behind paywalls. There is also a huge risk to democracy if hallucinations and fake content continue to dominate the public sphere. Strong regulations are needed. Academia has an important role to play in the transfer of knowledge from experts to policymakers.

What role can Cambridge play in shaping the future of AI technologies?

SG: The University of Cambridge excels in research into the technologies that underpin AI and makes an important contribution to human knowledge. When it comes to generative AI, its researchers could help solve important challenges, from accurately detecting AI-generated content that can be used to cheat at school or produce political propaganda, to cooling the power-hungry data centres needed to run AI models. I also believe Cambridge Enterprise has an opportunity to shape the future by helping AI startups grow into public companies capable of attracting considerable investment and providing jobs.

AKG: The University plays a vital role in the transfer of knowledge from academia to the business community, regulators, lawmakers, and consumers. Its multidisciplinary expertise and empirical research can make vital contributions both to building AI literacy and to forming robust, accountable policies for responsible AI. With the right regulatory framework, informed by experts, AI has the power to make societies more prosperous and help humans flourish.

This post is a write-up of an interview with Sam Gilbert, Bennett Institute for Public Policy, and Ann Kristin Glenster, Minderoo Centre for Technology and Democracy.