Judge Brantley Starr of the U.S. District Court for the Northern District of Texas has introduced a new requirement for attorneys appearing in his courtroom: they must certify either that no portion of their filing was drafted by generative artificial intelligence (AI) or that any AI-drafted language was verified for accuracy by a human being. The decision follows an incident in which attorney Steven Schwartz used ChatGPT, an AI language model, to supplement his legal research for a federal filing. The cases and precedents the AI supplied turned out to be entirely fabricated, a mistake Schwartz said he regretted.
Judge Starr’s new rule, called the “Mandatory Certification Regarding Generative Artificial Intelligence,” mandates that attorneys file a certificate on the docket confirming compliance with the requirement. The certificate must attest that no AI was involved in drafting the filing or that any AI-generated language was cross-checked for accuracy by a human using print reporters or traditional legal databases.
A memorandum accompanying the order explains why the certification is necessary, acknowledging the potential of AI platforms for a range of legal uses while emphasizing their limitations. It notes that AI systems are prone to hallucinations, inventing fictitious information, including quotes and citations. The memorandum also raises concerns about reliability and bias, observing that AI lacks the sense of duty, honor, and justice by which attorneys are bound.
While Judge Starr's rule applies only to his own courtroom, it may serve as a model for other judges weighing similar requirements. The use of AI in legal work, particularly for briefing and research, holds promise, but accuracy and transparency are essential. By requiring this certification, the judge aims to prevent the misuse of AI-generated content and preserve the integrity of legal arguments presented in court.