AI Errors Shake Legal System: Judges Warn Against Overreliance
Judges across the world are increasingly facing an unusual challenge: legal briefs written with the help of artificial intelligence that cite fake cases or contain fabricated details. According to court documents and attorneys, many of these AI-generated filings are riddled with citations to cases that do not exist, prompting widespread concern.
The trend is serving as a warning for professionals learning to use AI tools at work. Employers are keen to hire people who can leverage AI for tasks like research or report drafting. However, as teachers, marketers, and accountants experiment with chatbots and assistants, many are realizing that these tools can make serious errors.
French data scientist and lawyer Damien Charlotin has tracked at least 490 court filings over the past six months containing what he calls “hallucinations”: false or misleading AI-generated information. “Even the more sophisticated player can have an issue with this,” Charlotin said. “AI can be a boon. It’s wonderful, but also there are these pitfalls.”
Charlotin, a senior research fellow at HEC Paris, developed a database tracking global cases where judges found fabricated case law and false quotes generated by AI. Most of these rulings are from U.S. cases where individuals represented themselves. While some judges merely issued warnings, others imposed fines.
Even established companies have not been spared. In Colorado, a federal judge ruled that a lawyer representing MyPillow Inc. submitted a brief with nearly 30 fake citations in a defamation case involving the company and its founder, Michael Lindell.
The legal profession isn’t alone in facing AI’s flaws. AI-generated overviews that appear on top of search results often contain inaccuracies, and professionals from various sectors are also grappling with privacy concerns. Experts caution workers to be mindful of what data they share with AI tools to avoid exposing sensitive company information.
Maria Flynn, CEO of Jobs for the Future, advises against over-dependence on AI. “Think about AI as augmenting your workflow,” she said. “It can act as an assistant for drafting emails or planning itineraries, but don’t treat it as a substitute for your own work.”
Flynn shared her own experience of using an in-house AI tool to prepare for meetings. “Some of the questions it proposed weren’t the right context really for our organization, so I was able to give it some of that feedback ... and it came back with five very thoughtful questions,” she said. Yet even she found errors: the tool failed to differentiate between completed work and funding proposals. “If you’re new in an organization, ask coworkers if the results look accurate to them,” she suggested.
Atlanta-based attorney Justin Daniels from Baker Donelson also warns users to verify AI outputs. “People are making an assumption because it sounds so plausible that it’s right, and it’s convenient,” he said. “Having to go back and check all the cites, or when I look at a contract that AI has summarized, I have to go back and read what the contract says, that’s a little inconvenient and time-consuming, but that’s what you have to do. As much as you think the AI can substitute for that, it can’t.”
Privacy and consent issues also loom large. Chicago-based lawyer Danielle Kays from Fisher Phillips cautioned against using AI for note-taking during meetings without participant consent. “People are claiming that with use of AI there should be various levels of consent, and that is something that is working its way through the courts,” she said.
Experts also warn that data shared with free AI tools can resurface in other users’ results. “It doesn’t discern whether something is public or private,” Flynn said.
Despite its risks, learning AI remains crucial. Flynn believes understanding how to use the technology responsibly is the best defense. “The largest potential pitfall in learning to use AI is not learning to use it at all,” she said. “We’re all going to need to become fluent in AI, and taking the early steps to build familiarity and comfort with the tool is going to be critically important.”