A top UK court has urgently warned lawyers about a growing problem: artificial intelligence is being used to cite legal cases that don’t actually exist.
[Lawyers caught citing completely made-up court decisions generated by artificial intelligence]
The High Court has issued an urgent warning to legal professionals across the UK: stop letting AI tools create fake court cases, or face serious consequences that could destroy your career.
The warning came after two shocking court cases where lawyers presented dozens of completely fictitious legal citations – many likely generated by AI chatbots like ChatGPT.
The Problem is Getting Serious
According to The Guardian, the problem has become impossible to ignore. Over the past few months, several high-profile court cases have been thrown into chaos when lawyers presented “evidence” that simply didn’t exist.
Dame Victoria Sharp, one of Britain’s most senior judges, issued the warning on Friday. She said the misuse of AI has “serious implications” for how justice works and could damage public trust in the legal system.
“These AI tools can sound very convincing,” she explained. “But they might give you answers that are completely wrong. They can confidently cite legal cases that never happened or quote from real cases using words that were never actually written.”
Two Major Cases Gone Wrong
Case 1: The £89 Million Mix-Up
In a huge damages case against Qatar National Bank, lawyers cited 45 legal precedents to support their argument. It turned out that 18 of them were completely fake, and many of the quotes attributed to real cases were also invented. The lawyer admitted to using publicly available AI tools.

Case 2: The Housing Case Horror
When Haringey Law Centre challenged a local council over housing, its lawyer cited fake legal cases five times. The opposing lawyer kept asking why these cases couldn’t be found anywhere. A young trainee barrister was found negligent, though she said she may have relied on AI-generated summaries without realizing it.
It’s Happening Around the World
This isn’t just a UK problem! The phenomenon, known as “hallucinations” in tech circles, has been causing courtroom chaos around the world.
- Denmark: Lawyers in a €5.8 million case nearly faced contempt of court for citing a made-up ruling.
- United States: One lawyer submitted case summaries generated entirely by AI. When this “research” was presented in court, a New York judge called the AI-generated fake cases “gibberish” and fined two lawyers $5,000.
- UK Tax Court: In 2023, a UK tax tribunal saw an appellant submit nine completely fabricated historical cases. The person later admitted they “possibly” used ChatGPT to find the cases – cases that had never existed.
What Lawyers Need to Know
The court’s message is clear: Always double-check everything.
Ian Jeffery from the Law Society of England and Wales said, “This ruling shows the real dangers of using AI in legal work. While AI tools are increasingly helpful, lawyers must always check and verify their accuracy.”
The Consequences Are Real
Lawyers who misuse AI could face:
- Public embarrassment and admonishment
- Contempt of court proceedings
- Being reported to police
- Professional sanctions
AI: A Double-Edged Sword
Legal experts say AI can be incredibly helpful for lawyers – it can speed up research, draft documents, and analyze complex information. But these recent cases show it can also be extremely dangerous if lawyers don’t carefully check its work.
AI can be a powerful tool for lawyers, but it’s not foolproof. As one judge put it, these systems can produce responses that sound completely believable but are entirely wrong.
The legal profession now faces an urgent challenge: how to use AI’s benefits while avoiding its dangerous pitfalls.
The rule is simple: If you’re a lawyer using AI, you must verify every single fact, case, and quote before presenting it to a court. No exceptions.
New Rules Coming
The legal profession is now scrambling to create new rules and training programs to prevent more AI-related disasters. Lawyers will need to learn not just how to use these powerful tools, but how to spot when they’re producing fiction instead of facts.
For now, the message from Britain’s top judges is clear: AI can be a powerful assistant, but lawyers who let it do their thinking for them do so at their own peril.