Lawyer Blames AI Tool For Fake Court Quotes
A Louisiana personal injury lawyer has apologized to a judge after submitting court documents containing fabricated quotations from a real court decision. Ross LeBlanc, a partner at Dudley DeBosier, told Judge William Jorden in a private letter dated March 27 that he began using artificial intelligence software called Eve to draft pleadings earlier this year. He initially checked the AI's citations frequently and found them correct, which built his confidence until he eventually stopped checking.
The errors appeared in two filings at the 19th Judicial District Court in Baton Rouge and were discovered by opposing counsel. LeBlanc wrote that he could not determine whether the mistake resulted from Eve's software or from copying and pasting too quickly, and he could not be sure the fabricated quotations originated from the AI tool.
Jay Madheswaran, chief executive of Eve, stated that a close audit of the case confirmed Eve did not hallucinate any case citations or create fabricated quotations in this matter. Eve builds software for plaintiff-side lawyers using large language models and was valued at $1 billion after a $103 million funding round approximately one year ago. The company now processes more than 200,000 documents per month, representing roughly a 100-fold increase from the previous year.
This incident is part of a larger pattern in which courts have sanctioned attorneys for filing briefs containing AI-generated errors. Sullivan & Cromwell, an elite law firm, recently apologized to a federal judge for a similar error. What is new is that attorneys are beginning to name the specific software involved, potentially exposing the companies to reputational damage.
LeBlanc said he was initially wary of AI technology because of horror stories about hallucinated case law but was persuaded after Eve's pitch and assurances about built-in safeguards. Opposing counsel in the personal injury case uncovered the mistakes and included LeBlanc's apology letter in a request to expand a sanctions inquiry.
Dudley DeBosier has moved to strike that request, arguing the two cases are unrelated. According to the firm, the brief in the other matter, a case involving Lowe's, was drafted with help from Claude. The firm says it trains lawyers to review AI-generated results carefully and requires responsible use of technology.
LeBlanc said he takes full responsibility for not verifying the work and that he is responsible for checking everything regardless of the technology used. He does not blame Eve but has decided to take a cooling-off period from using the tool. Eve's chief executive emphasized that contracts and onboarding materials explicitly state human lawyers remain responsible for final products, and that the software includes error-flagging features meant for lawyer review.
Real Value Analysis
The article provides essentially no actionable help to a normal person. It reports on a lawyer's admission that he submitted court documents with fabricated quotations after relying on AI software without sufficient verification. From the perspective of usable guidance, the article gives readers nothing concrete to do - no steps to take, no tools to use, no decisions to make. The educational depth is extremely shallow; while it mentions AI hallucinations and a company's valuation, it does not explain why AI generates false information, how legal professionals typically verify citations, or what technical safeguards exist and why they might fail. Personal relevance is minimal for ordinary readers - this incident primarily affects legal professionals and clients in specific cases, though there is an indirect lesson about information verification that the article fails to develop. The article has no public service function; it warns implicitly but provides no safety guidance, no actionable warnings, and no instructions for responsible behavior. Practical advice is completely absent - readers might infer they should verify information but learn no methods for doing so. The long-term impact is negligible; this is a single news event with no principles provided for building better habits. The emotional effect is likely concern or anxiety about AI reliability without any constructive path forward, creating helplessness rather than empowerment. The article is not clickbait, but it missed every opportunity to teach. It described a clear problem - AI-generated fabrications in professional work - yet offered no guidance on identifying such errors, no general verification principles, and no framework for evaluating information sources.
What the article should have provided but did not is a practical approach to verification that anyone can apply. The fundamental principle is that you must not trust information simply because it appears authoritative or comes from a sophisticated tool; you must verify what matters. When you encounter detailed claims, especially legal citations, statistics, or technical facts, treat them as hypotheses to test rather than truths to accept. Develop the habit of asking: can I confirm this through independent sources? For legal material, that means searching official court databases to verify case citations exist and support the arguments made. For factual claims, find multiple reputable sources that corroborate the information. Learn to recognize patterns that often indicate AI-generated content: overly formal or generic language, missing specific details that should be present, inconsistent style, and an unnatural smoothness that lacks human nuance. If you use AI tools yourself, build verification into your workflow as a mandatory step - check key facts, test boundary conditions, and review outputs critically before relying on them. Understand that responsibility for accuracy always rests with the user, not the tool, which means designing your processes to catch errors before they cause harm. These habits serve you beyond AI contexts; they protect against all forms of misinformation. The broader lesson is that technology can assist but not replace judgment when accuracy matters, so invest in developing your own verification skills regardless of what tools you use.
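To make that verification habit concrete, here is a minimal Python sketch of the kind of "verification gate" described above. Everything in it is a hypothetical stand-in: the regex matches only one common reporter-citation shape, and the confirmed set represents whatever record you keep of citations you have personally checked against an official source such as a court database.

```python
import re

# Hypothetical sketch of a pre-filing verification gate. The pattern below
# matches one common reporter-citation shape (e.g., "123 So. 3d 456"); real
# citation formats vary widely, so treat this as illustrative only.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z.\s]{1,20}\d[a-z]{0,2}\s+\d{1,4}\b")

def extract_citations(draft_text: str) -> list[str]:
    """Return citation-like strings found in the draft."""
    return [m.group(0) for m in CITATION_PATTERN.finditer(draft_text)]

def unverified_citations(draft_text: str, confirmed: set[str]) -> list[str]:
    """List citations that have not been independently confirmed.

    `confirmed` is a stand-in for your own record of citations checked
    by hand against an official database; the draft should not move
    forward until this function returns an empty list.
    """
    return [c for c in extract_citations(draft_text) if c not in confirmed]

draft = "As held in Smith v. Jones, 123 So. 3d 456, the duty extends to invitees."
pending = unverified_citations(draft, confirmed=set())
if pending:
    print("Verify before filing:", pending)  # -> Verify before filing: ['123 So. 3d 456']
```

The point of the sketch is the workflow shape, not the pattern: extraction of claims to check can be automated, but confirming each one against an independent source stays a human step.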
Bias Analysis
The text uses hedging language to blur responsibility. LeBlanc says he "could not determine whether the mistake resulted from Eve's software or from copying and pasting too quickly" and "could not be sure the fabricated quotations originated from the AI tool." This wording makes it unclear who is at fault and suggests the error might be unavoidable.
The article includes corporate protection language from Eve's CEO who stated "a close audit of the case confirmed Eve did not hallucinate any case citations." Using "confirmed" presents this as a settled fact rather than a claim, which protects the company's reputation without independent verification.
Financial details like Eve being "valued at $1 billion after a $103 million funding round" and processing "more than 200,000 documents per month" add prestige to the company. These numbers make Eve seem established and successful, which may influence readers to trust the company's denial over the lawyer's uncertainty.
The phrase "the errors appeared" uses passive voice that hides who created the fabricated quotations. This removes clear agency from the mistake and makes it sound like the errors happened on their own, obscuring whether the lawyer or the AI produced them.
The lawyer said he was initially scared by "horror stories about hallucinated case law" but was persuaded by Eve's pitch. The word "horror stories" is emotional language that primes fear of AI while the mention of Eve's "pitch" suggests he was sold on the tool, subtly framing him as a victim of marketing.
Calling Sullivan & Cromwell "an elite law firm" adds a prestige descriptor that makes their error seem more significant. This signals that even top-tier professionals make these mistakes, which could excuse LeBlanc's behavior by association.
The article notes that as attorneys begin naming the specific software involved, they are "potentially exposing the companies to reputational damage." This frames Eve as a possible victim rather than focusing on harm to clients or the court system, shifting concern toward protecting the business.
LeBlanc "does not blame Eve" and takes a "cooling-off period" rather than stopping use entirely. Presenting this as a measured response makes him seem reasonable while downplaying the seriousness of fabricated court documents, which could be seen as too soft a consequence.
Eve's executive says "contracts and onboarding materials explicitly state human lawyers remain responsible for final products." Including this quote lets the company distance itself legally, but the article presents it as evidence rather than letting readers judge if this truly absolves them.
The text mentions the lawyer stopped checking citations because he "found them correct, which built his confidence." This sequence makes his later failure to check seem like a natural progression rather than negligence, subtly excusing his behavior by showing he had a good reason to trust the tool.
Emotion Resonance Analysis
The text conveys several meaningful emotions that shape its narrative and impact. Apology and remorse appear strongly when Ross LeBlanc sends a private letter to Judge Jorden expressing regret for submitting fabricated quotations. This emotion serves to demonstrate accountability and mitigate potential professional consequences. Initial worry and caution emerge through LeBlanc's described skepticism of AI technology, influenced by "horror stories" about hallucinated case law, which establishes his earlier prudence. Confidence followed by misplaced trust surfaces as he describes how initial positive results built his assurance until he stopped verifying citations, revealing a progression from careful oversight to dangerous complacency. Professional concern and embarrassment underlie the entire incident, as a personal injury lawyer faces a sanctions inquiry and reputational damage. The firm's defensive posture in moving to strike the request adds tension and a sense of being under attack. Finally, a sober resolve appears at the end as LeBlanc decides to take a "cooling-off period" from the tool, framing himself as learning from error rather than repeating it.
These emotions work together to guide the reader's reaction in specific directions. The apology creates sympathy for LeBlanc by presenting him as a professional who made a human error and is accepting blame rather than deflecting it. The initial worry about AI builds shared concern about technology reliability in high-stakes legal work, making the reader understand his earlier caution. The misplaced trust evokes a cautionary tale feeling, warning against overreliance on automated systems without verification. The professional concern and embarrassment produce unease about the broader implications for the legal field, suggesting this is not an isolated incident but part of a concerning pattern. The defensive stance of the firm creates slight tension, but LeBlanc's final decision to step back from the tool rebuilds some trust by showing corrective action. Together, these emotions steer the reader toward viewing the incident as both a personal failure and a systemic warning about AI adoption in professional settings.
The writer uses emotional language and persuasive techniques to heighten impact and direct thinking. Word choices like "horror stories," "fabricated quotations," and "sanctioned attorneys" carry strong negative connotations that frame the situation as serious and dangerous rather than a simple clerical error. Contrast is employed effectively: LeBlanc's initial wariness is set against his eventual overconfidence, showing a tragic reversal of judgment. The personal story approach - detailing his journey from skeptic to believer to chastened user - makes the abstract problem of AI hallucination concrete and relatable. Scale is emphasized through specific numbers: Eve's $1 billion valuation and 100-fold monthly document increase create a sense of massive, growing influence, making the error feel more significant. Repeating the theme of responsibility - LeBlanc's acceptance, Eve's contract disclaimers, the firm's training claims - keeps focus on accountability rather than technology blame. The mention of an "elite law firm" committing the same error normalizes the problem while raising its seriousness, suggesting even top professionals are vulnerable. These tools collectively steer attention from a single incident to broader questions about legal ethics, AI oversight, and who bears responsibility when technology fails.

