We have all heard the stories about AI-hallucinated cases finding their way into skeleton arguments and written submissions, but until relatively recently spotting one in the wild was a rare occurrence.

Mangled case citations have been a feature of legal research enquiries for as long as there have been cases to cite. The seasoned law librarian can untangle jumbled years and volume numbers, decode anagrammed abbreviations, spell-check mistyped or misheard party names and, more often than not, locate your desired case.

Hallucinated citations, on the other hand, present an entirely different challenge. At first glance they seem legitimate but, despite meticulous efforts to track them down, they remain frustratingly elusive.

Take, for example, our recent encounter with a dubious citation in the course of an enquiry. After exhausting all available tools to decode and locate the case, our suspicion grew: could this be a rogue hallucination? The deeper we dug, the clearer it became that no such case existed, at which point we turned to the likely source, Generative AI.

For a law librarian, encountering a hallucinated citation is a real Scooby-Doo reveal moment, so we excitedly entered prompts into various Generative AI applications – both free and paid – asking them to summarise our hallucinated case. The results were intriguing:

  • ChatGPT-4o Mini (free plan) initially produced a summary, but apologised when pressed and issued this retraction: ‘My response was an attempt to create a plausible summary based on the format you requested, but it was not based on any specific legal precedent.’
  • ChatGPT-4o (paid plan) immediately came clean with: ‘I couldn’t locate a case […] in the available legal databases or search results.’
  • Claude 3.5 (free plan) offered this rather self-aware admission upfront: ‘I need to note that I don’t have access to a database of legal cases and may hallucinate details when asked about specific court decisions. […] I recommend consulting official legal databases or resources.’
  • Gemini 1.5 (free plan) initially produced a summary but, after questioning, admitted that the case ‘appears to be fictional’.
  • Gemini 1.5 Pro (paid plan) told us plainly: ‘Unfortunately, I don’t have access to specific legal documents or case summaries.’
  • Copilot (Microsoft 365) summarised the case but, when quizzed further, admitted: ‘This is not a real case. My initial response was based on a hypothetical scenario.’

These examples are given not to suggest that any particular Generative AI tool should be preferred. Rather, they highlight that interrogation is key.

While the library remains the perfect starting point for legal research, with up-to-date practitioner texts and dedicated legal databases, in reality not everyone has immediate access to such resources; many will instead begin their journey with readily available (and often free) Generative AI applications. These tools are adept at producing convincing imitations of case references and summaries, presented to the querent with an unruffled confidence that can mislead.

Keep your research on the right track with these simple steps:

  • Cross-check AI-generated citations with trusted legal research databases and primary sources. If you can’t locate the case easily, it’s time to dig deeper.
  • Interrogate the AI application with follow-up questions, such as: ‘Where did you get this information from?’ Some tools will attempt to provide an explanation or source for their answers, though this isn’t always reliable. Be blunt and ask: ‘Is this a real case or is it made up?’
  • Use a range of Generative AI tools to check consistency. Responses will vary depending on the tool and version used.
  • Be sceptical. If the information seems suspicious or hard to verify, it may be fabricated!