
The Louisiana Supreme Court Technology Commission published Generative Artificial Intelligence Guidelines in October 2025. While expressly non-binding, the guidelines illuminate the Louisiana Supreme Court’s expectations and offer practical guidance to judges using this powerful and rapidly developing technology.
We often talk about keeping a “human in the loop” — a concept embodied in ABA Formal Opinion 512, meaning that a human being, not the machine, must review, verify, and ultimately own every AI-assisted output. The Commission’s central concern is exactly that: keeping the human in the loop, with the judge as the decision-maker. In other words, AI is a tool for judges, not a substitute for judicial judgment. The guidelines make clear that judges must not delegate to AI the evaluation of briefs, the assessment of evidence, or the resolution of legal questions. A judge may use AI to identify relevant cases or legal principles, to summarize lengthy documents, or to generate a first draft of a routine order after the judge has independently reached a decision. But the final judgment (the legal reasoning, the weighing of facts, the exercise of judicial discretion) must remain the judge’s alone. As the Commission put it, AI should function “akin to a law clerk.” The analogy holds equally for lawyers: ABA Formal Opinion 512 and the New York State Bar Association have both framed AI output as something to be supervised the way a senior lawyer supervises a junior associate — with active review, verification, and ultimate accountability resting with the human professional.
The Commission drew a distinction between permissible and impermissible uses. On the permissible side: summarizing legal texts, assisting with document drafting and review, conducting legal research, managing case files, and generating training materials. On the impermissible side: autonomous decision-making, providing legal analysis without independent human verification, and processing sensitive information without robust safeguards. The Commission specifically flagged that generic AI models — those not trained on legal data — are prone to hallucination, lack an adequate understanding of caselaw, and can produce contextually misinterpreted or biased output. For that reason, the guidelines caution strongly against using free large language models such as the free versions of ChatGPT or Google AI Studio for any judicial work, recommending instead enterprise-level tools with strong security, privacy, and compliance features.
The Commission also gave judges practical guidance on spotting AI-generated problems in the submissions that come across their desks. Red flags include unfamiliar case citations, inconsistent references to caselaw on the same legal issue, and submissions that read as polished and persuasive but contain obvious substantive errors on closer inspection. Judges are also reminded of Act 250 of the 2025 Regular Session, which amended the Louisiana Code of Civil Procedure to require lawyers to use reasonable diligence to verify the authenticity of evidence before offering it to the court, and to disclose any evidence they know or have reason to know was AI-generated or altered.
Judges who recognize AI’s utility want to know how to use it ethically. These guidelines from the Louisiana Supreme Court Technology Commission provide a thorough, meaningful answer.
