Hear from Dr Kwadwo Oti-Sarpong, who is leading ai@cam’s ground-breaking project designed to create a new roadmap for the ethical use of AI that will help shape the future of our connected places and cities.
What was the motivation behind the Decision-making with AI in connected places and cities project?
Artificial intelligence (AI) is being used by the public sector to tackle a whole host of challenges facing our cities – from land use through to traffic congestion and sustainable water management. Local authorities in England and around the world are already using chatbots, machine learning, large language models and predictive analytics to generate insights that help to inform city-wide decision making – and this use of technology is predicted to grow.
Despite the huge potential of digital technologies, understanding their ethical and responsible use remains a key challenge. In areas where the use of AI in decision-making could potentially affect thousands or even millions of people’s lives, getting this right is vital. There is both widespread interest from city managers in the use of data-driven technologies to create so-called ‘smart cities’, and widespread ethical concerns about the implementation of these kinds of initiatives.
Concerns about extensive data collection in large-scale city digitalisation projects have played out in cities around the world. For example, the Sidewalk Labs smart city project in Toronto was cancelled due to data privacy concerns, and San Francisco banned the use of facial recognition technology in policing due to concerns about bias in profiling.
The Decision-making with AI in connected places and cities project grew out of the University of Cambridge’s Digital Cities for Change initiative. The question we have been looking at is how we shift our thinking from a technocentric approach to a broader focus on the governance and ethical implications of introducing data-focused technologies. We’re asking: What are the potential ethical risks we are taking in using these technologies? Are we considering these ethical risks in our decision making? And how are we addressing these ethical concerns?
Understanding how to practically root AI use in ethical considerations, and showing how that can be done, will significantly change how we create the future we want.
How does this ai@cam project aim to tackle these ethical concerns?
This project aims to transform how the public sector approaches and uses AI. We are doing this by investigating the decision-making process around using AI in local authorities across England. Some of the key questions we are asking include:
- What are the motivations for using AI, and is AI the right solution in the first place?
- What forms of AI are being used or trialled, and what are the outcomes?
- And looking at those outcomes, are there any ethical concerns that public sector bodies or their technical delivery partners have identified?
There are eight of us working together from a range of different disciplines, cutting across the knowledge domains of engineering, computational statistics, spatial and urban governance, decision making, and ethics and philosophy.
We are working alongside local authorities to understand how they are using AI in different cases, including placemaking (relating to assessing planning applications and decisions around urban development); land use and mobility; and sustainable water supply systems. The team welcomes opportunities to expand these use cases into other areas.
It’s early days for the project, but it’s clear from conversations we’ve already had that the public sector needs support to critically evaluate its motivations for using AI. Those on the governance side of decision making are concerned about adhering to legal requirements, but we know that legislation lags behind technological developments and is not necessarily tied to today’s ethical concerns. There is also a disconnect between ethical values or principles and the technical offerings of existing AI tools and packages. The practical insights we gain from our research will be developed into guidelines for practitioners. We’re also looking to create professional development workshops and modules for local authorities and their project partners, focused on how to make ethically informed decisions about AI use within large-scale digitalisation initiatives.
In addition, we’ll be developing a research roadmap to look at what issues need further investigation in order to extend the impact generated from this initial piece of work.
What would you say are the risks of using AI for making decisions in the public sector?
I think, for cities, the biggest risk is the abdication of responsibilities. With the increasing use of digital technologies, we run the risk of governing our cities through AI before we have mastered the governance of AI itself.
We need to bring into focus the governance of AI so that we understand and can take action to address the potential risks. Those risks include the potential for exclusion and marginalisation of certain groups of people, biased data analysis and the reliance on potentially skewed data to train the algorithms that inform decision making.
It’s crucial that we put in place structures to address those risks and ensure that ethical considerations are right at the heart of these decisions. If we don’t have this governance in place, our sense of human responsibility and social values in decision making could be lost. That is exactly what we want to avoid. Data can be helpful in informing decisions about public services, but the people using those services are more than just data sources. We need guardrails to ensure their needs are centred in decision-making. If we are going to rely on AI to try and deliver improved outcomes in our cities, then we ought to step back and put these local considerations at the forefront.
How did your previous work lead to the ai@cam collaboration?
I got my bachelor’s degree in construction technology and project management from Kwame Nkrumah University of Science and Technology in Ghana. I then moved on to study technological innovations in project settings for my PhD at The University of Hong Kong. Since completing my PhD, I have been looking into the area of technological innovations in the built environment. Over the years, my interest has been the implications of introducing new technology in any context – whether it’s at a project, organisational or city scale.
For me, the focus on cities became very strong when I joined the Digital Cities for Change project in March 2022. We have been focusing on the governance and ethics dimensions and also developing a competency framework for the delivery of public value through digital technological innovations – and that’s what drew my interest to this area.
Why does this research matter?
This type of research is relevant in a number of ways, particularly because it helps steer practitioners, decision makers and society away from an easy technocentric bias. It also helps clarify the nature of ethical concerns and considerations that are linked to public sector use of AI in decision making related to our built environment. We can’t just throw more technology at a problem and hope it will solve itself. If we are going to create a society that helps everyone to thrive and be the best that they can be – whether individuals, families or businesses – we need to take into consideration the things that are going to change when we introduce new technologies into society.
The idea for this project has been there for some time, but ai@cam has really given it the needed boost and visibility. Due to the interdisciplinary nature of the research, it’s often challenging to fit it into a particular funding box. The ai@cam initiative allows us to explore our ideas and fully put on display the interdisciplinarity that is right at the heart of our project – from the way it was designed to how we want to execute it. The ai@cam funding is a very welcome life-giving resource to this important project.
As told to Vicky Anning
More about Decision-making with AI in connected places and cities
The Department of Engineering at the University of Cambridge is leading this ai@cam project, collaborating with experts from the Departments of Land Economy, Architecture, the Leverhulme Centre for Future Intelligence, Cambridge Zero and Anglia Ruskin University.
If you would like to find out more about this research project or get involved, please get in touch by emailing Dr Kwadwo Oti-Sarpong (ko363@cam.ac.uk).