Decision-making with AI: Keeping humans in the loop

Dr Ramit Debnath

13 March 2025


As an electrical engineer, Dr Ramit Debnath has always been fascinated by everything to do with energy. Growing up in India, he was acutely aware that there are millions of people around the world who aren’t able to access the energy they need – and that’s only going to get worse as more and more communities are marginalised by the impacts of climate change.

When Dr Debnath came to the University of Cambridge as a Gates Scholar in 2017 to study for an MPhil and then a PhD in engineering for sustainable development, he started using cutting-edge AI methods such as Natural Language Processing (NLP) to make sure community voices were being heard in the climate debate. Today, his research sits at the crossroads of data science and public policy, with a passionate focus on advancing climate and energy justice.

“I lead the Cambridge Collective Intelligence & Design group, and my group’s work is on creating human-AI collaborations and understanding how they can help make better climate and sustainability decisions,” he explains. “Our cities play a critical role in global greenhouse gas (GHG) emissions, and we need informed and accelerated decision-making to reduce GHG emissions while improving quality of life.”

“I am interested in exploring how we can leverage the collective voices of people to enable climate action,” he says. “AI can help to uncover hidden patterns within large datasets, thereby facilitating informed decision-making.”

It was his cutting-edge research on human-AI collaboration that brought Dr Debnath into contact with one of ai@cam’s five flagship AI-deas initiatives – Decision-making with AI in connected places and cities. Joining forces with project lead Dr Kwadwo Oti-Sarpong and colleagues from departments across the University – from Engineering to Land Economy – he is now focusing on how to make sure humans are kept in the AI decision-making loop at a local level within cities across England. In parallel, he is co-leading a project with Professor Dame Diane Coyle at the University’s Bennett Institute for Public Policy that is investigating similar issues at a national level.

“We are trying to understand how to best create a human-in-the-loop (HITL) AI design so that a decision-maker can trust the emerging AI systems and work with AI to create public value,” he says. “We want to create informed pathways to help decision-makers make use of AI to create social, cultural and economic value in cities.”

So what exactly does “human-in-the-loop” mean?

From a computer science perspective, “human-in-the-loop” refers to a system design that incorporates human feedback to enhance the trustworthiness of AI. Unlike fully automated AI systems that operate without any human intervention, HITL involves continuous human input to refine and improve AI outcomes, ensuring the systems align more closely with user needs and expectations. For example, in ChatGPT, feedback mechanisms such as thumbs-up or thumbs-down ratings gather users’ judgements of the quality of AI outputs. This feedback is then used to adjust and improve the model’s performance over time, Dr Debnath explains.
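To make that feedback cycle concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `generate` and `collect_rating` functions are simple stand-ins for a model and a human reviewer, not drawn from ChatGPT’s implementation or from Dr Debnath’s project. The point is the shape of the loop – the AI proposes, a human rates, and the ratings are logged as training signal.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an AI model: proposes one of two answers."""
    candidates = [f"{prompt} -> answer A", f"{prompt} -> answer B"]
    return random.choice(candidates)

def collect_rating(output: str) -> int:
    """Hypothetical stand-in for a human reviewer: +1 (thumbs up) or
    -1 (thumbs down). In a real system this would come from a user interface."""
    return 1 if "answer A" in output else -1

# The human-in-the-loop cycle: generate, rate, log.
feedback_log = []  # (prompt, output, rating) triples
for step in range(5):
    prompt = f"question {step}"
    output = generate(prompt)        # the AI proposes an answer
    rating = collect_rating(output)  # a human rates it
    feedback_log.append((prompt, output, rating))

# The logged ratings become training signal: up-rated outputs can be kept
# as fine-tuning targets, down-rated ones filtered out or penalised.
preferred = [(p, o) for p, o, r in feedback_log if r > 0]
print(f"{len(preferred)} of {len(feedback_log)} outputs kept for retraining")
```

In production systems the logged preferences would typically feed a technique such as reinforcement learning from human feedback, but the structure of the loop – propose, review, learn – stays the same.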

The concept extends to decision-making processes involving more complex AI applications, such as local policymaking for cities, covering anything from traffic management to urban planning. Here, “human-in-the-loop” can mean involving relevant stakeholders – such as experts, non-governmental organisations and people working on day-to-day tasks – in the feedback loop. Their input provides invaluable context and guidance, ensuring that AI outcomes align with practical, ethical and policy-related considerations. This approach creates a pipeline in which expert and stakeholder feedback actively shapes AI models, making them more accountable and better aligned with diverse needs in decision-making contexts.

“As an academic, I am interested in gaining a more evidence-based understanding of how AI can generate public value in cities,” Dr Debnath says. “I am particularly interested in understanding the key levers, which could be either an algorithm alone or an algorithm working in tandem with humans to make critical decisions.”

“I leverage Large Language Models and generative AI to understand how AI makes certain decisions, how biased or unbiased it is, and what needs to be included to make AI outputs more relevant and trustworthy – although my focus is climate-related decision-making,” he adds.

Many AI tools used in cities, in areas like facial recognition, are known for biases in their facial feature selection, which can skew results and reinforce inequalities. To counter this, and to avoid marginalising specific communities, Dr Debnath and his ai@cam project partners are exploring existing ethical frameworks with the goal of reducing bias throughout the AI supply chain – starting with the software providers that supply local decision-makers.

“If successful, the impact of this project would be significant,” he says. “We could demonstrate that AI can generate public value when decision-makers use these tools wisely, essentially proving that human-in-the-loop design is the future for safer and more responsible AI systems and dispelling some of the scepticism and misinformation around AI technologies. The end goal is to help decision-makers use these tools for appropriate resource allocation and infrastructure development within a city context, which will ultimately improve the quality of citizens’ lives.”

Filling a gap

One of the biggest challenges Dr Debnath and colleagues have come up against is a lack of information on AI’s capacity for public value creation, especially at a grounded level such as city-scale decision-making. Potentially, this could be regarded as a positive as well as a negative: “It means we are doing something quite impactful by filling that gap,” he says.

As a first step, the team are organising workshops with decision-makers and city planners at the city council level. They are also reaching out to industry professionals and other potential partners to help secure further funding for the additional staff needed to scale up their research.

“I think this was a very good initiative from ai@cam to bring us together for this project,” he says. “It is very interdisciplinary. We are not doing algorithm development here, so it’s not a computer science project. This is a project where we want to test what computer scientists are developing in real-world cases.”

Dr Debnath believes bringing experts together to tackle some of the world’s most pressing challenges, including climate change, is the only way forward.

“AI is no longer a technology-only entity,” he adds. “It is becoming a core part of our social and economic processes and therefore more involved in our lives. Our world is rapidly digitising, and working collaboratively with experts across disciplines through AI-deas projects is a fantastic way to understand how AI influences – and can influence – our daily lives. I believe that this approach allows us to truly understand what constitutes responsible, safe and sustainable AI.”

More about Decision-making with AI in connected places and cities

The Department of Engineering at the University of Cambridge is leading this ai@cam project, collaborating with experts from the Departments of Land Economy and Architecture, the Leverhulme Centre for the Future of Intelligence, Cambridge Zero and Anglia Ruskin University.

If you would like to find out more about this research project or would like to get involved, please get in touch.

As told to Vicky Anning