
In Kohls v. Ellison, a Minnesota federal court offered a sharp reminder of the risks of relying too heavily on generative AI in litigation. See Kohls v. Ellison, No. 24-cv-3754 (D. Minn. Jan. 10, 2025). The plaintiffs are challenging a Minnesota statute that criminalizes the distribution of election-related deepfakes under certain conditions (see Minn. Stat. § 609.771), claiming it violates the First Amendment. In opposition to a motion for a preliminary injunction, the state submitted expert declarations defending the law’s purpose and efficacy.
The court excluded the declaration of Stanford Professor Jeff Hancock, finding it fatally undermined by fake academic citations generated by GPT-4o. See id. at 7-12. Though Hancock offered a detailed explanation and the Attorney General’s office promptly acknowledged the error, the court was unmoved: “[E]ven if the propositions are substantively accurate, the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations.” See id. at 8. The court denied the request to file an amended declaration, concluding that Hancock’s credibility as an expert was “shattered.” See id. at 10.
By contrast, the court accepted the declaration of University of Washington Professor Jevin West, which provided general background on AI and misinformation. See id. at 4-7. While plaintiffs criticized it as conclusory and inconsistent with West’s earlier writings, the court found it admissible for background purposes, noting that the evidentiary standards at the preliminary injunction stage are “less formal” than at trial. See id. at 4.
The ruling highlights the growing tension between technological tools and professional responsibility. In a memorable aside, the court noted the irony of Hancock—an expert on AI misinformation—submitting fake citations in a case about AI misinformation. See id. at 8. The opinion joins a wave of similar decisions warning that lawyers and witnesses must verify AI-generated content in all legal filings.
For anyone tracking how generative AI is disrupting courtrooms, Damien Charlotin’s excellent database of hallucination-related rulings is worth a visit: www.damiencharlotin.com/hallucinations.