On 24 March, the House of Lords Public Services Committee published its damning report* on the state of interpreting services in the courts. On 23 October 2024, Philip Stott and I, as Co-Chairs of the Bar Council’s Legal Services Committee (LSC), gave oral evidence to the inquiry, alongside Richard Miller, Head of Justice at the Law Society. Prior to this, the LSC had responded on behalf of the Bar Council with written evidence. Our executive summary stated:

While there are many competent interpreters, the standard is variable and needs to improve. The arrangement for booking interpreters is prone to problems and is not as reliable as it needs to be. We see little if any prospect of AI being a viable alternative to interpretation being done by a human.

Unreliable data

In the process of preparing and delivering our evidence it became apparent that there is, as in so many areas, a disconnect between reported Ministry of Justice (MOJ) data and the reality in our courts, particularly the criminal courts.

The MOJ data for 2023 indicated over 12,000 instances where language services requests were unfulfilled, yet only 618 ‘ineffective trials’ caused by an interpreter being unavailable. If these figures are correct, only a small fraction of unfulfilled requests resulted in an ineffective trial. Could most of the unfulfilled requests relate to non-trial hearings, and therefore go unrecorded, even though by necessity they would cause an adjournment of that particular hearing? We are aware that the shockingly high number of unfulfilled requests will resonate with practitioners as the day-to-day reality, especially in the Crown and Magistrates’ Courts. So why is the impact on justice not captured in the MOJ figures?

Drilling down into the 2023 data for the quarter October to December, there are 3,462 unfulfilled requests. However, MOJ data for the same quarter suggest that 95% of booked interpreter jobs were fulfilled and record only 189 complaints. The complaints figure should match, or at least be closer to, the unfulfilled figure. We were not able to provide an explanation for this discrepancy – nor could the MOJ, which gave evidence the following month. The only obvious conclusion is that the data is not a reliable indicator.

Quality assurance and feedback

One of the Committee’s questions concerned quality assurance and feedback processes. The MOJ uses The Language Shop (TLS) to undertake a ‘mystery shopper’ approach. However, none of the LSC members giving oral or written evidence had ever come across a mystery shopper in court. In the Family Court, someone monitoring the quality of the interpreter would need the court’s permission to sit in and observe (or would need to make a specific request to the judge hearing a case to attend at the court building and listen to the evidence taken through an interpreter). We surmised, therefore, that this quality assurance service does not apply to in-court interpretation. Our best guess is that it is applied to other methods of language service provision under contract to the MOJ – for example, telephone services, where calls are recorded, allowing retrospective quality assurance. We may, of course, be wrong. If anyone reading this article has come across a mystery shopper in court, the LSC would be very pleased to hear from you.

Interpreter quality

On the topic of quality of interpreters, our evidence was that while there are many excellent interpreters, a not insubstantial number are poor. This led us to conclude that either the current requirements (at least a first degree and Level 6) are not fit for purpose, or that the standards are not being applied when recruitment takes place. Our view is that the latter is more likely, a view bolstered by the fact that terms and conditions for interpreters have not changed since 2016. The inevitable diminution in real-terms pay has resulted in many excellent interpreters leaving the contracted provider and offering their services to the court ‘off plan’ when requests to the contracted provider cannot be met.

The Committee was so concerned by the written and oral evidence received that it asked if the current round of tendering for language services could be paused until after the Committee reported. The MOJ said this was not possible.

When giving evidence in response to the questions posed to us, we were acutely conscious that what we had to say about interpretation services in court was very negative. We were able to stress that we had experienced some excellent interpreters and were well aware of the difference a good interpreter makes to a hearing. However, we also reported examples of interpreters having such poor spoken English that there could be little or no confidence in the interpretation, of interpreters failing to interpret questions and answers accurately, and, in one extreme case, of an interpreter telling a witness what questions the witness should or should not answer. We were not talking here about rare languages but languages that should be capable of generating quality interpreters from the UK population.

We can only hope that confidence in the sector – which, as we understand from the evidence given by language professionals to the Committee, is currently on its knees – can be re-invigorated so that young people can plot a successful career as a court interpreter. Our suggestion was that any new contract should provide built-in incremental pay increases to enable income security.

Is there a role here for AI?

The Committee asked whether AI could assist. In translation services we can see a role for AI in the ‘heavy lifting’ in common languages, albeit that any such translation would need to be carefully checked to take into account dialect and other potential nuances if used in court proceedings. However, in court there can be little or no substitute for an in-person, competent interpreter and we cannot presently see AI filling the obvious gaps in service provision. Referring back to some of the examples we raised of poor-quality interpretation, we highlighted that (apart, obviously, from poor English) we were often only alerted to a problem when a same-language speaker in the courtroom, sometimes an advocate but more commonly an interpreter booked to attend to assist in conference, raised a query. If AI services were used exclusively and advocates did not speak the language concerned, how would anyone know, except perhaps via a witness’s puzzled expression, that the interpretation was wrong?

What can the Bar do?

Can we as a profession do anything differently or is it just doom and gloom? There is a striking lack of clarity in the complaints process, not just for situations in which a booked interpreter does not arrive but also when an interpreter clearly fails to interpret to the standard expected. We explained to the Committee that because the requirement falls on HM Courts & Tribunals Service to provide an interpreter to enable a witness to give evidence and ensure a fair hearing, ultimately it is for the judge to direct HMCTS to make a complaint – not the advocates. When things go wrong, our duties and energies are focused on trying to keep a case on track as, of course, are the judge’s. All we can do in our local courts is to identify what system, if any, is used to ensure a proper complaint is made; and to make everyone aware that if complaints are not made when they clearly ought to be, then the service will not improve, nor will those charged with setting up the provision know just how bad the service is.

* Inquiry report

The House of Lords Public Services Committee published its inquiry report, Lost in translation? Interpreting services in the courts, on 24 March 2025. The Committee concluded that the current state of interpreting services is unacceptable: it presents a significant risk to the administration of justice and places undue demand on an already overburdened court system, and there is a clear disconnect between what the ‘government hopes is happening, what the companies contracted to deliver the services believe is happening, and what frontline staff report is happening’.