Generative AI tools, from advanced drafting assistants to AI-powered legal research platforms, have transformed the way lawyers prepare filings and manage litigation. Federal courts have responded by issuing new standing orders and local rules that specifically address the use of AI in legal documents.
The policy landscape is far from uniform. Some courts ban AI-drafted filings altogether; others require detailed disclosure and certification; a third group relies on existing professional responsibility rules, such as Rule 11 of the Federal Rules of Civil Procedure, without adding new procedures. The Eastern District of Missouri exemplifies this responsibility-based approach, which has practical implications for attorneys practicing there.
The Emerging Landscape of Court AI Policies
Three main regulatory models have emerged:
- Mandatory Disclosure-and-Certification: The most common model requires attorneys to disclose the use of AI, identify the AI tool used, specify which portions are AI-generated, and certify that a human attorney has reviewed the content and verified all citations. This approach is exemplified by the Courtroom Rule of Magistrate Judge Leo A. Latella of the United States District Court for the Middle District of Pennsylvania.
- Outright Prohibition: Less widespread but notable, this model bans the use of generative AI to prepare filings. For example, the Standing Order on the Use of Generative AI of Judge Christopher A. Boyko of the Northern District of Ohio prohibits the use of generative AI in the preparation of court filings but permits legal research tools like Westlaw and LexisNexis, as well as Google and other public search engines.
- Responsibility-Based Standard: Rather than imposing additional procedural requirements or outright prohibitions, this approach relies on traditional procedural rules. Courts like those of Judge Todd W. Robinson of the Southern District of California treat AI-assisted filings under Rule 11, requiring attorneys to review and verify all content without any separate AI declaration.
The Eastern District of Missouri’s Approach
The Eastern District of Missouri firmly adheres to the responsibility-based model. Its general policy makes filers responsible for all content in submitted papers, including AI-generated portions, under Rule 11(b). No special AI-use declaration is required; simply signing and submitting a document makes the filer accountable for its accuracy. The chambers of Magistrate Judge Joseph S. Dueker and Senior District Judge John A. Ross follow this same standard, placing the burden of verification on the presenting attorney.
With one exception, this rule applies court-wide; even self-represented litigants are held to the same standard. Notably, the Eastern District of Missouri does not require pre-filing certification, identification of AI tools, or flagging of AI-drafted sections, and it does not impose AI-specific sanctions. Rather, sanctions are governed solely by the general Rule 11 framework.
Notably, Judge Joshua M. Divine, who sits in both the Eastern and Western Districts of Missouri, has taken a different approach. He requires all litigants appearing before him to file, along with their entry of appearance, a certificate stating either that no portion of any filing will be drafted by generative AI or that any AI-drafted language will be checked for accuracy.
Comparative Analysis: How the Eastern District of Missouri Approach Differs
- Disclosure requirements: Unlike most courts with AI protocols, the Eastern District of Missouri does not require affirmative disclosure or certification. The signature on the filing is sufficient.
- Attorney duties: Courts with mandatory protocols require detailed verification and record-keeping. The Eastern District of Missouri relies on longstanding Rule 11(b) standards, requiring reasonable inquiry into the facts and law supporting the filing.
- Sanctions: Other courts tie sanctions to AI-specific requirements. The Eastern District of Missouri’s sanctions arise from the general Rule 11 framework, making no distinction based on the use of AI.
- Alignment with peers: The Eastern District of Missouri’s approach is shared by other courts like the District of Connecticut and the Southern District of Texas, which also rely on established professional responsibility rules.
Conclusion
The federal judiciary’s engagement with generative AI is evolving, reflecting diverse judicial philosophies. Some courts see AI as requiring new infrastructure—disclosures, certifications, and explicit sanctions—while others, such as the Eastern District of Missouri, trust existing Rule 11 duties to govern attorney conduct. This patchwork of protocols creates compliance challenges for attorneys managing cases in multiple jurisdictions. The best practice is to follow the most stringent applicable standard: rigorously verify AI outputs, document the review, and be ready to disclose AI use even if not explicitly required. Regardless of the model, all courts demand accurate, well-researched filings. AI is a tool, but responsibility for its output remains with the attorney.