AI for Louisiana Lawyers: Where Things Stand and What Comes Next

Generative AI is a genuinely useful tool for lawyers, and many Louisiana practitioners are already using it in some form. By now, though, the warnings about AI hallucinations in legal filings are equally familiar. Damien Charlotin, a legal researcher in Paris, maintains a publicly available database tracking court decisions in which generative AI produced hallucinated content. The database catalogued over 1,400 cases worldwide as of mid-May 2026, with more than 900 from U.S. courts. The rate of reported incidents grew from roughly two per week in early 2025 to two or three per day by late 2025. And those are only the cases that someone caught and that generated a published decision; the actual number is believed to be substantially higher.

What courts are doing about it. The most common response has been standing orders and disclosure requirements. As of early 2026, more than 25 federal district courts have issued standing orders or local rules requiring attorneys to certify whether AI was used in preparing filings and, if so, to confirm that a licensed attorney reviewed and verified all citations and legal arguments. Some judges require lawyers to certify that no portion of a filing was AI-drafted, or to identify specifically which portions were. The approach varies considerably by court and even by judge within the same courthouse.

Critics have noted that these requirements largely duplicate obligations already imposed by Rule 11 of the Federal Rules of Civil Procedure, which has always required attorneys to certify the accuracy of their filings. If that obligation was not preventing hallucinated citations before, a new certification form may not change much. Courts have also relied on sanctions (so far mostly monetary penalties, CLE requirements, and disciplinary referrals) with mixed deterrent effect. As one Alabama federal judge observed after disqualifying defense attorneys and referring them to the state bar, if fines and public embarrassment were working, there would not be so many cases to cite.

One proposal to emerge from this period is what commentators have called a mandatory “Hyperlink Rule.” The idea is simple: every cited judicial opinion, statute, or regulation in a court filing must be hyperlinked to a verified legal database — Westlaw, Lexis, Bloomberg Law, or an official government repository. The logic is that a fabricated case has no URL to link to, so the requirement would force the problem into the open at the drafting stage rather than after the brief is already filed. For a fuller treatment of the argument, see this article in the National Law Review. The proposal has not gained wider traction, perhaps because it would not fully address the problem. AI tools frequently hallucinate not by inventing a case from whole cloth, but by identifying a real case and then mischaracterizing or fabricating quotations from it — a defect that a hyperlink would not catch. This is true even of legal-specific platforms: in United States v. Farris, No. 25-5623 (6th Cir. Apr. 3, 2026), the Sixth Circuit sanctioned an attorney after briefs generated by Westlaw’s CoCounsel contained fabricated quotations and misrepresented the holdings of two real cases. Given the state of the art as of this writing, there is simply no substitute for a human lawyer checking the document before it is filed.

Where Louisiana stands. Louisiana has largely taken the position that existing rules are sufficient. In early 2024, the General Counsel of the Louisiana Supreme Court wrote to the LSBA president concluding that the professional rules governing competence, candor, supervision, and work product responsibility already cover AI-related issues without amendment. On the legislative side, Act 250 of the 2025 Regular Session amended the civil code to address AI-generated evidence, creating procedures for challenging the authenticity of AI-generated exhibits and requiring disclosure when evidence has been generated by AI. But that statute addresses evidentiary authenticity, not the citation-and-drafting context where most of the sanctions are occurring.

That approach is not wrong. Louisiana lawyers already know what their obligations are: verify your citations, supervise your work product, do not misrepresent the law to a tribunal. Those duties are eternal. A lawyer who takes those obligations seriously and applies them consistently is already doing what the rules require, with or without an AI-specific disclosure rule.

What makes the current moment different is not that the underlying rules have shifted, but that the technology is new enough, pervasive enough, and rapidly developing enough that many lawyers have not yet settled on reliable practices for working with it. The ground is still moving. General-purpose chatbots behave very differently from legal-specific platforms with citation-verification features built in, and both will look different six months from now. In that environment, heightened conscientiousness (even paranoia) is probably warranted even if the formal rules have not changed.

I expect that the technology will continue to improve, that hallucinations will become rarer and eventually a relic of this early period, and that AI tools will ultimately allow Louisiana lawyers to serve their clients better than they could without them. Getting to that point without unnecessary damage to clients and careers in the meantime is the more immediate challenge.

We have previously covered the Fifth Circuit’s sanctions order in Fletcher v. Experian and the Louisiana Supreme Court Technology Commission’s nonbinding AI guidelines for judges.