Deploying AI typically requires several interconnected actors and their digital systems, creating supply chains of data and digital technologies. In these supply chains, data flows between ‘many hands’, linking systems that are designed, developed, owned, and controlled by different people and organisations. This makes accountability and transparency of the resulting system difficult, because responsibility may be shared. The nature of these systems is the focus of a recent paper by researchers at the University of Cambridge and UCL, which is being used by policymakers to understand the impact of AI on regulation and governance.
Jennifer Cobbe (JC) explores law and the regulation of new and emerging technologies and is interested in the accountability of complex systems and the prospects of various technical means for facilitating compliance with legal and regulatory requirements. In this post, she talks about the importance and benefits of algorithmic accountability.
Jat Singh (JS) leads the Compliant and Accountable Systems research group in the Department of Computer Science and Technology, which works at the intersection of technology and law. The group considers the mechanisms by which technology can be better designed, engineered and deployed to accord with legal, regulatory and broader societal concerns. In this post, he talks about accountability problems and future considerations.
Why is algorithmic accountability important?
JC: Algorithmic accountability is important because AI systems are increasingly impacting society and how we interact with one another, in business, in government, in healthcare, in education, and with friends and family. If we want to understand how these systems are impacting our lives, we need to understand not just the systems themselves and how they work, but also the broader context in which they’re developed and deployed.
What governance and accountability problems do supply chains raise?
JS: Discussions of algorithmic accountability tend to focus on the activities of a single organisation, but in practice systems often comprise components produced by a range of organisations. This makes the ‘many hands’ problem an increasing concern in the field of algorithmic accountability. Algorithmic supply chains can be long, complex and dynamic, with little overall visibility or transparency over who is involved and what they are doing. In particular, AI-driven products and systems increasingly integrate pre-built modular components provided as services controlled by others, making them systems of systems. What is particularly interesting, and potentially problematic, is that each actor has a different view of a supply chain in terms of what they can and cannot see, so it is difficult to make informed decisions about, for example, which data or services are appropriate for a particular situation or context.
JC: From a legal point of view, it is difficult to identify, for example, which actors might be responsible for problems with a system or for failures to comply with regulation. We have to construct governance and accountability mechanisms that create the right obligations and responsibilities for the right people. This is a big challenge in supply chains, where typically multiple companies, organisations, and people each contribute to how the system works, yet who these people are and what they are doing can change quite quickly. It's hard to get a grasp of these things from either a technical or a legal perspective.
What features of algorithmic systems need to be understood to design effective governance?
JS: We need to better understand the nature of distributed responsibility in supply chains and how it works, so specific responsibilities can be identified. But there is also a bigger-picture question about power dynamics.
JC: Advanced AI technologies in particular have substantial computing and data needs, which means that really only a very small number of companies are going to be in a position to develop or run these systems, and offer them as services. Primarily that's Amazon, Microsoft, Google, and a handful of others. Understanding the impact of such a concentrated market is important, as only a few companies will be behind the development of these technologies, which will increasingly have a large impact behind the scenes of society. Understanding the political, economic and power dynamics around these things is absolutely crucial.
What does your work tell us about how we can make sure that AI serves society?
JS: The purpose of this theme of work was to draw attention to an area of AI governance that we thought was under-discussed. A greater consideration and understanding of who is involved in building, deploying, operating, using and interrogating these systems, and the dynamics between them, is crucial for better aligning AI with societal needs and wants.
JC: As only a small number of companies can develop or deploy these systems on their infrastructure, the chances of AI genuinely serving the needs of society, rather than select commercial interests, are limited. I think we need to consider legal interventions to help govern or regulate how these companies develop and roll out this technology. Otherwise, there's a good chance we will miss the opportunity to shape these systems for society's benefit. I would like to think that finding ways to better understand how supply chains work would have benefits that could ripple right across society, helping governments and regulators across the economy and in the different markets where AI increasingly plays a role. Developing effective interventions to monitor the concentration of power in algorithmic supply chains is an important part of figuring out how we can protect communities and societies in future.
What role can Cambridge play in shaping the role of AI technologies?
JS: Encouraging interdisciplinary work is necessary for tackling big-picture questions and challenges, like those around AI governance. The University of Cambridge is well placed for this, with many people working on related topics right across the University. Mechanisms that support interdisciplinary collaborations within the University and beyond, such as ai@cam, are key for moving this important area forward.
1 November 2023
This post is a write-up of an interview with Jennifer Cobbe and Jat Singh from the Department of Computer Science and Technology.