High Court tells UK lawyers to stop misuse of AI after fake case-law citations

The High Court has told senior lawyers to take urgent action to prevent the misuse of artificial intelligence after dozens of fake case-law citations were put before the courts that were either completely fictitious or contained made-up passages.

Lawyers are increasingly using AI systems to help them build legal arguments, but two cases this year were marred by citations of case law that were either known or suspected to have been generated by AI.

In a £89m damages claim against Qatar National Bank, the claimant cited 45 authorities, 18 of which turned out to be fictitious, while quotes in many of the others were also inaccurate. The claimant admitted using publicly available AI tools, and his solicitor accepted that the fake authorities had been included in the claim.

When Haringey Law Centre challenged the London Borough of Haringey over its alleged failure to provide its client with temporary accommodation, its lawyer cited phantom case law five times. Suspicions were raised when the solicitor defending the council repeatedly had to ask why they could not find any trace of the supposed authorities.

This led to proceedings over the wasted legal costs, and the court found the law centre and its lawyer, a pupil barrister, to have been negligent. The barrister denied using AI in that case, but said she might have done so inadvertently while using Google or Safari to prepare for a separate case in which she also cited phantom authorities. In that case, she said, she might have taken account of AI summaries without realising what they were.

In a regulatory ruling responding to the cases on Friday, Dame Victoria Sharp, president of the King's Bench Division, said the misuse of artificial intelligence had serious implications for the administration of justice and public confidence in the justice system, and that lawyers who misuse AI could face sanctions ranging from public admonishment to contempt of court proceedings and referral to the police.

She called on the Bar Council and the Law Society to consider steps to curb the problem “as a matter of urgency”, and told heads of barristers’ chambers and managing partners of solicitors’ firms to ensure that all lawyers are aware of their professional and ethical obligations when using AI.

"This kind of tool can produce consistent and reasonable responses to cues, but those coherent and reasonable responses may be completely incorrect," she wrote. "Responses may assert confidently that these assertions are untrue. They may cite sources that do not exist. They may claim to quote paragraphs from true sources that do not appear in that source."

Ian Jeffery, chief executive of the Law Society of England and Wales, said the ruling “laid bare the dangers of using AI in legal work”.

“Artificial intelligence tools are increasingly being used to support legal service delivery,” he added. “However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work.”


These are not the first cases in which legal proceedings have been blighted by AI-generated hallucinations. At a UK tax tribunal in 2023, an appellant who claimed to have been helped by “a friend in a solicitor’s office” provided nine supposedly historical tribunal rulings as precedents, all of which were invented. She admitted it was possible she had used ChatGPT, but said it surely made no difference, as there must be other cases that supported her argument.

Fake citations also featured in a €5.8m (£4.9m) Danish case this year, in which the appellant narrowly avoided contempt proceedings. And a 2023 case in the US district court for the southern district of New York descended into chaos when a lawyer was challenged to produce the seven apparently fictitious cases they had cited. The lawyer simply asked ChatGPT to summarise the cases it had already invented, the judge called the result “gibberish”, and two lawyers and their firm were fined $5,000.