What are the geopolitical implications of AI?

Verity Harding

01 November 2023


An interview with Verity Harding, Director of the AI & Geopolitics Project, Cambridge University

One of TIME100’s Most Influential People in AI, Verity Harding began her career as a political adviser before spending a decade at Alphabet, where she was global head of policy for DeepMind, co-founding the company’s policy and ethics team as well as the Partnership on AI. She is now Director of the AI & Geopolitics Project at Cambridge University’s Bennett Institute for Public Policy.

One of the aims of the AI & Geopolitics Project is to examine the geopolitics of AI through the prism of democracy and human rights. Why is that so important?

There has been a creeping acceptance over the past few years that AI is a national security battle that must be “won” by someone. But that viewpoint sets these technologies against other geopolitical goals in a way that is detrimental to scientific research, which we know flourishes in a collaborative environment. I used to work in the national security space, so I am not naïve about its importance. But I fear that the dominant narrative, which casts AI as a precious resource to fight over and capture, is unhelpful, even dangerous. At the AI & Geopolitics Project, we are trying to provide an alternative viewpoint, one that starts from the premise that human rights should be encoded in AI and that, in this way, it can become a technology that actually enables cooperation and collaboration rather than nationalism and entrenched division.

Your research and upcoming book suggest that we can learn from the past to help guide the future of transformative technologies. Can you share an example?

The United Nations Outer Space Treaty of 1967 is an interesting example. The ‘space race’ was born from a misleading Cold War narrative about the missile capabilities of the United States and the Soviet Union. It was fraught and politically charged. But through genuine leadership on behalf of the global political community, not least the American presidents Eisenhower and Kennedy, outer space became – legally – “the province of all mankind”. Effective negotiations stopped nuclear weapons orbiting above us, and now we have incredible international efforts such as the International Space Station.

While I think the likelihood of a UN treaty on AI is low, there are plenty of areas where cooperation could be embraced, from exploring the potential of AI in climate science to coming together to say no to the worst abuses of the technology, such as lethal autonomous weapons.

Is a collaborative approach the most effective way to ensure AI serves everybody in society?

Absolutely. In the AIxGEO project, we’re looking at places in AI where there may be the potential to make diplomatic breakthroughs that encode a rights-based approach, but that must be informed not just by the people building AI, but also by those who stand to be most impacted by it.

What is needed to kickstart this approach?

AI has been on the geopolitical agenda in a significant way since around 2017, but the egos and priorities of national governments and leaders often get in the way of meaningful progress. We don’t need lots of new institutions; we need to use the ones we have better. I was a member of the OECD’s network of AI experts, for example, which produced the first intergovernmental standard on AI, adopted by the G20 in 2019. I would like to see governments capitalise on the wealth of existing thinking and cooperation rather than starting lots of new processes.

I think it’s also important that we pause to reflect on why AI is seen in such a negative light and singled out as a geopolitical tool. Unpicking that narrative – where it came from, who it serves – is a good way of moving forward.

How do you think Cambridge can help shape the future of AI?

Some of the best scientific minds have come from Cambridge, and today it is a site of really interesting and innovative AI research, as well as home to thought leaders like Diane Coyle, Gina Neff, Stephen Cave and Seán Ó hÉigeartaigh. This, plus its storied history and reputation, will allow it to play a leading role in shaping the future. There has always been a huge role for science, academia and universities in diplomacy and in spreading knowledge and understanding. This is no different, and it’s more important than ever.

This post is a write-up of an interview with Verity Harding, Bennett Institute for Public Policy.