CNA, a leading malpractice insurer for Louisiana lawyers, recently published a helpful practice guide titled Building a Safe and Practical Law Firm Artificial Intelligence Policy: A Risk Management Playbook. It’s a useful guide for this moment. Courts are growing impatient with lawyers who submit AI-generated content without adequate verification. The U.S. Fifth Circuit Court of Appeals recently sanctioned a lawyer $2,500 after she admitted using AI tools to draft her arguments without adequate review, setting the fine higher than usual because she did not accept responsibility. Meanwhile, the Sixth Circuit imposed $30,000 in sanctions against two lawyers for more than two dozen fake case citations and dismissed the case as frivolous in light of “pervasive misconduct.”

Louisiana has not issued AI-specific ethics rules or opinions (yet). The Louisiana Supreme Court’s General Counsel wrote a letter in early 2024 stating that the existing ethics rules are “robust and broad enough to cover AI issues without adjustments,” and that lawyers remain ultimately responsible for their work product, maintaining competence in technology, protecting confidential client information, and avoiding misrepresentations of fact or law. In other words, a Louisiana lawyer’s existing obligations under Rule 1.1 (competence), Rule 1.6 (confidentiality), Rule 3.3 (candor toward the tribunal), and Rule 8.4 (misconduct) apply fully to the use of AI. For lawyers who want a sense of what Louisiana courts themselves are thinking about AI, the Louisiana Supreme Court Technology Commission also issued non-binding Generative AI Guidelines through the Louisiana Judicial College in October 2025, which are worth reading. Against that backdrop, the CNA playbook merits careful attention. It recommends that law firms create a written AI policy organized around five practical steps.
- Assess your firm’s AI needs and goals. Before implementing any AI tool, identify where AI can add value and evaluate the risks. A useful starting question: is the AI use internal (e.g., summarizing long emails) or client-facing? Internal administrative tasks present lower risk than client deliverables. If the work product will be filed in court, manually verify the facts and source citations. And while Louisiana courts do not (at this time) require AI-usage certification, several courts across the country do, so be sure to check your court’s rules.
- Create a governance framework. CNA recommends crafting an internal written policy that addresses at least the following: (1) permitted uses (brainstorming, first drafts, style edits, issue-spotting) versus prohibited uses (uploading privileged client data into non-approved tools, relying on AI citations without human verification); (2) client-centered confidentiality — a lawyer must not enter client-identifying or privileged content into any AI tool unless the tool meets the firm’s security standards and, where appropriate, the client has consented; (3) transparency with clients when AI materially affects the representation; (4) billing integrity — if AI saves time, invoice actual time or adjust flat fees accordingly, and disclose any per-use AI costs up front; and (5) that supervising lawyers remain accountable for AI-assisted work product under Rules 5.1 and 5.3.
- Train all employees. A written policy means little without training. CNA recommends annual CLE-style training on AI ethics, confidentiality, and tool use, along with a new-hire onboarding module. Training should use case studies illustrating the results of responsible AI use.
- Monitor AI performance. Firms should track accuracy of AI-generated content, watch for bias in outputs, and maintain logs of AI prompts and outputs for auditability. CNA also recommends a clear incident-response protocol for hallucination events, including issuing errata letters and notifying courts as required.
- Review and update the policy regularly. Given the rapid pace of change, CNA recommends at least semi-annual reviews as ethics opinions, court rules, and regulations evolve.
Should Louisiana adopt specific AI ethics rules? The Louisiana Supreme Court’s 2024 letter suggests a wait-and-see approach, which seems prudent given how fast the technology is moving. In the meantime, a Louisiana lawyer who builds a thoughtful, written AI policy along the lines CNA recommends will be well-positioned to use these powerful tools competently, protect client confidences, maintain candor with courts, and avoid the kind of sanctions that are becoming increasingly common.
