A narrative has been building that AI is going to revolutionise access to justice.

Some of the ideas floating around are undoubtedly exciting; after all, who wouldn’t want an app that could read 10 lever arch files and identify the most important or relevant evidence in minutes – if not seconds?

And from the public’s perspective, it would be revolutionary if an app could give accurate, accessible legal advice. This isn’t a complete fantasy; an app called ¡Reclamo! has been developed in the US to help migrant workers sue for unpaid wages. Other firms, such as RobinAi (no relation!), offer AI tools to create basic legal documents. And there are tools, such as Hive, that promise to minimise the risk of disputes arising in the first place by creating harmonious internal teams – whether for a project or a piece of litigation – in the hope that people are offered fair access to work.

AI tools appear attractive because they claim to be faster, cheaper, more diligent and less discriminatory than the average human being. But is this promised utopia truly realistic? One way to sense-check it is to consider ‘Robojudge’, an idea recently gaining traction (see the Bar Council 20th Annual Reform Lecture 2023 by Sir Geoffrey Vos).

The sales pitch is easy to imagine:

  • Speed. A decision within hours – no more waiting six months to a year for a judgment.
  • Cost. No more expensive brief fees and refreshers; ‘Robojudge’ would reach a decision from perfectly curated witness statements, submissions and evidence. Oh, and with fewer judges to pay, the Ministry of Justice would have more money for other things!
  • Training and recall. ‘Robojudge’, trained on vast amounts of case law, and never forgetting or overlooking a point, would be superior to a human judge.
  • Equality. Marketeers would probably argue that ‘Robojudge’, seeing neither the claimant’s skin colour nor their sex, and being indifferent to their accent, was far less biased than the average judge.

This might all seem very attractive at first sight, but would the appeal last once it was appreciated that ‘Robojudge’ can’t explain its decisions and its developers don’t understand how it works? And that, while it might not have been told the claimant’s race etc., it can accurately infer protected characteristics and use that data in its decision-making?

This is the reality of AI right now.

To make matters worse, there is also the problem of automation bias: humans might be more likely to accept that ‘Robojudge’ is accurate simply because it’s an AI tool.

The full ‘Robojudge’ isn’t here yet but its predecessors are already being marketed, sold and deployed. For example, we know that AI is used to make key decisions about workers and employees: who should be recruited; who should be sacked; who should be allocated what tasks; and who should be disciplined.

So, we think that now really is the time to assess whether AI should be used to make high-risk, often subjective decisions about people. And if so, whether and how it should be regulated.

What is AI?

President Biden’s October 2023 Executive Order on Safe, Secure and Trustworthy Artificial Intelligence defines AI as:

‘a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.’

In the UK, AI is defined less fully in para 1 of Schedule 3 to the National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021 (SI 2021/1264):

‘technology enabling the programming or training of a device or software to – (i) perceive environments through the use of data; (ii) interpret data using automated processing designed to approximate cognitive abilities; and (iii) make recommendations, predictions or decisions; with a view to achieving a specific objective.’

These definitions can seem a bit cumbersome. In essence, AI has two key elements:

  • an algorithmic system or mathematical model; and
  • the use of that system to replicate human intelligence.

The human intelligence bit is important; it is what differentiates AI from established forms of technology. Thus, filtering an Excel list of job candidates to find who lives in a particular location isn’t AI, but a computer telling you who to hire is.
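
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python: a fixed, human-written filtering rule on the one hand, and a model trained on past decisions that then scores new candidates on the other. The candidate data, column names and past hiring outcomes below are all invented.

```python
# A fixed rule vs a model trained on past decisions.
# All data, column names and outcomes are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

candidates = pd.DataFrame({
    "name":             ["Asha", "Ben", "Carla", "Dev"],
    "city":             ["Leeds", "London", "Leeds", "Bristol"],
    "years_experience": [3, 7, 5, 2],
    "test_score":       [62, 88, 75, 70],
})

# Not AI: a fixed, human-written rule that simply filters by location.
in_leeds = candidates[candidates["city"] == "Leeds"]

# Closer to AI: a model that has 'learnt' from past hiring decisions and
# now scores new candidates, i.e. the computer telling you who to hire.
past_hires = pd.DataFrame({
    "years_experience": [1, 2, 4, 6, 8, 9],
    "test_score":       [55, 60, 70, 80, 85, 90],
    "hired":            [0, 0, 0, 1, 1, 1],   # historical human decisions
})
model = LogisticRegression().fit(
    past_hires[["years_experience", "test_score"]], past_hires["hired"]
)
candidates["hire_score"] = model.predict_proba(
    candidates[["years_experience", "test_score"]]
)[:, 1]

print(in_leeds)
print(candidates.sort_values("hire_score", ascending=False))
```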

How is artificial ‘intelligence’ acquired?

At the heart of AI is machine learning. In effect, a human gives an AI system a task and a related dataset, and the system is left to ‘learn’ how to achieve that task from the data, often with little direct human supervision. The end result is an algorithm, created and modified countless times – often with no human input at all – that can achieve the task set for it. Facial recognition technology, for example, works by examining millions of faces to ‘learn’ how to differentiate between people.
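
A rough sketch of what that ‘learning’ involves, using toy synthetic data: the model’s parameters are adjusted thousands of times by an automatic update rule, with no human deciding any individual adjustment. Everything below is invented for illustration.

```python
# 'Learning from data' in miniature: repeated automatic parameter updates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # two input features (toy data)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the pattern to be learnt

w = np.zeros(2)                            # model parameters, start at zero
for step in range(5_000):                  # thousands of automatic updates
    p = 1 / (1 + np.exp(-(X @ w)))         # current predictions
    gradient = X.T @ (p - y) / len(y)      # how wrong, and in which direction
    w -= 0.1 * gradient                    # adjust; no human input at this step

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print("learned weights:", w)
print("accuracy on the training data:", accuracy)
```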

There have been further developments since ChatGPT became a buzzword; the news is full of terms like ‘general purpose AI’, ‘frontier models’ and ‘large language models’. These terms lack a single international definition, but they broadly describe tools said to go beyond replicating human intelligence by generating new content. (For an accessible guide, see Foundation models in the public sector by the Ada Lovelace Institute.)

How can AI discriminate?

There are many risks associated with AI, but we will focus on discrimination. One difficulty with machine learning is that an AI system is only as good as the data it is fed. So, returning to facial recognition technology, academic research suggests it can be less accurate at identifying female faces and people with darker skin (see, for example, ‘Beating the bias in facial recognition technology’, Jan Lunter, Biometric Technology Today, Volume 2020, Issue 9, October 2020). The reason is likely to be simple: the database of faces from which it has learnt to be ‘intelligent’ will have been dominated by white, male faces. This is often referred to colloquially as ‘garbage in, garbage out’.
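
The effect can be shown with a deliberately simplified sketch using synthetic data: if one group is heavily under-represented in the training set, a model of this general kind will tend to be less accurate for that group. Every number below is invented.

```python
# 'Garbage in, garbage out': an under-represented group gets worse accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Each group's examples follow a slightly different underlying pattern.
    X = rng.normal(loc=shift, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > shift * 5).astype(int)
    return X, y

# Training data: 2,000 examples from group A, only 50 from group B.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, balanced test data for each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", (model.predict(Xt) == yt).mean())
```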

Another way in which AI can discriminate is by learning discriminatory correlations. One famous example relates to Amazon. In 2015, Amazon created an automated CV-screening algorithm to identify the job candidates most likely to be ideal Amazon employees. To that end, the model was fed the CVs of successful previous hires. When Amazon ran the model, it noticed that men were far more likely to be identified as ideal applicants. Amazon was able to interrogate the model and found that it had taught itself a correlation between sex and being an ideal Amazon employee: because men had tended to be hired in the past, the model had essentially learnt that men made ideal Amazon employees. The model was quickly shelved. (‘Amazon scrapped “sexist AI” tool’, BBC News, 10 October 2018.)
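
A simplified sketch of how that kind of correlation can be learnt, using entirely synthetic data: even though sex is never given to the model, a feature that happens to track sex (for instance, particular wording on a CV) ends up being penalised, because the historical decisions the model learns from were themselves biased. This is not Amazon’s actual system, just an illustration of the mechanism.

```python
# A model learning a discriminatory correlation via a proxy feature.
# All data is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
sex = rng.integers(0, 2, n)                  # 0 = man, 1 = woman (never shown to the model)
skill = rng.normal(size=n)                   # genuinely relevant ability
proxy = sex + rng.normal(scale=0.3, size=n)  # e.g. CV wording that happens to track sex

# Historical decisions were biased: equally skilled women were hired less often.
hired = (skill - 1.0 * sex + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train only on 'legitimate-looking' inputs: skill and the proxy feature.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("weight on skill:", model.coef_[0][0])
print("weight on proxy (stand-in for sex):", model.coef_[0][1])  # negative => penalises women
```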

Lastly, AI isn’t very flexible; it is based fundamentally on stereotypes, generalisations and assumptions. Going back to the facial recognition example: it can differentiate between a human and the hat they are wearing because, having analysed huge amounts of data, it has a stereotypical image of what a human looks like. Sometimes making a decision based on a stereotype is perfectly acceptable. But sometimes those correlations don’t work in the real world, especially when it comes to atypical people, such as people with disabilities. There are documented examples of how this inability to be flexible can lead to discrimination. Research has shown that video-led interviewing can mark down disabled applicants where their disability affects how they present. To put it more colloquially, technology can’t yet exercise discretion and judgment in the way that humans can. (See Disability, Bias, and AI, Meredith Whittaker et al, AI Now Institute at NYU, November 2019.)

How do we know if discrimination is happening?

The answer is that it is very hard. Machine learning algorithms are often too sophisticated for a human to understand; they may be hidden behind intellectual property rights; and the organisations that use them often do not allow external scrutiny. Yet the problem is well recognised (see Review into bias in algorithmic decision-making, Centre for Data Ethics and Innovation (CDEI), November 2020), and the government’s Fairness Innovation Challenge is attempting to tease out new techniques to eliminate bias.
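
Where outcome data broken down by group can be obtained at all (which, as noted above, is often the sticking point), even a crude statistical comparison of selection rates can flag something worth investigating. The sketch below uses the ‘four-fifths’ rule of thumb familiar from US practice; the figures are invented.

```python
# A crude disparity check on selection rates; all figures are invented.
selected = {"men": 120, "women": 60}
applicants = {"men": 400, "women": 300}

rates = {g: selected[g] / applicants[g] for g in applicants}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)            # men 0.30, women 0.20
print("disparate impact ratio:", round(ratio, 2))
if ratio < 0.8:
    print("below 0.8 -- a conventional warning sign worth investigating")
```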

What is being done about this?

The Prime Minister can be proud to have hosted the AI Safety Summit at Bletchley Park in November 2023, when 28 countries and the EU (including the UK, the USA and China) agreed a joint declaration recognising the ‘urgency and necessity’ of addressing ‘the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection’.

But this has not yet led to any proposal for a Bill in this government’s final legislative programme (the 2023 King’s Speech). We are left with the government’s White Paper proposals, which merely ask regulators to apply ‘principles’ for action (A pro-innovation approach to AI regulation, August 2023). Meanwhile, the EU powers ahead, having agreed its AI Act, which is due to be published in final form early this year.

Many think this is not adequate and that there is much more to be done to make all these developments ‘safe, secure and trustworthy’, as President Biden has said they must be in the US (see his October 2023 Executive Order).

In September 2023, the TUC launched an AI taskforce calling for urgent new legislation to safeguard workers’ rights and ensure AI ‘benefits all’. It is chaired by Kate Bell, TUC Assistant General Secretary, and Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge.

The taskforce has cross-party membership, and includes representatives from Tech UK, the Institute for the Future of Work, the Alan Turing Institute, the Chartered Institute of Personnel and Development, the University of Oxford, the British Computer Society, CWU, GMB, USDAW, Community, Prospect and the Ada Lovelace Institute.

We saw these problems coming and have been working together as the AI Law Consultancy since 2018, advising businesses and individuals, unions and social actors. So far we have worked on regulation with the government and the CDEI, the Council of Europe and the United Nations, as well as all the European equality bodies and the Equality and Human Rights Commission.

We are now delighted to be instructed by the TUC to draft a proposed Artificial Intelligence and Employment Bill with the help of this very distinguished taskforce.

It will be published in early 2024, and it will then be up to this government, or the next, to make it law.