CS:0001
CASE STUDY

Ada Lovelace Institute and Connected by Data: AI Harms Analyses

Matt Davies, Public Policy Lead at the Ada Lovelace Institute, said: “AWO’s analysis gave us a clear view of the distinction between legal protection from AI harms on paper and in practice. This was vital in helping us formulate our response to the UK Government’s position at a crucial point in the debate on AI regulation.”

Jeni Tennison, Executive Director of Connected by Data, said: “AWO helped us break down our policy question into manageable and realistic case studies capable of legal analysis. Their legal findings helped us engage stakeholders to really move the conversation forward.”

The Ada Lovelace Institute is an independent research institute with a mission to ensure data and AI work for people and society. Connected by Data is a non-profit that campaigns to put community at the centre of data narratives, practices and policies by advocating for collective and open data governance.

AI harms are proliferating as the technology is increasingly used in the public and private sectors, and so is concern about them. But views on how to address these harms differ widely. In the EU, dedicated legislation has been brought forward in an effort to increase protections, whereas in the UK the Government has taken the position that AI harms will generally be covered by existing legal and regulatory requirements.

AWO was approached separately by two public policy clients in the UK, the Ada Lovelace Institute and Connected by Data, to analyse the extent to which AI harms are addressed in the UK’s current legal regime.

We broke the questions down by constructing realistic scenarios in which AI harms could affect either individuals or groups. We then analysed law and procedure across a range of areas to reach conclusions about whether people are sufficiently protected by current laws. Our litigation team drew on its experience of giving practical advice to people enforcing their data rights, since we needed to capture not only the letter of the law but also how procedure and practice shape people’s ability to enforce, in practice, the rights they hold on paper.

Our analysis found significant gaps in effective protection, both for individuals and groups. These included:

  • The lack of legally mandated, meaningful, and in-context transparency about AI tools;
  • Gaps in regulation due to a lack of resources and access to information for some regulators;
  • Enforcement of rights through the civil courts being too expensive and risky for most people;
  • A lack of legal protection for groups affected by AI harms where no personal data is being processed; and
  • A focus on redress failing to give communities a voice in how technology affects them.

These findings point to a complex picture. Whilst there is some truth to the UK Government’s claim that AI harms are covered by existing laws, this protection is undermined on the ground by serious problems with transparency and access to justice. This lack of effective protection threatens to harm individuals and to undermine trust in data-driven technology in the long term.

Our clients’ policy work, based on our analysis, received widespread media coverage. The full reports can be read here and here.