Texas Federal judge requires lawyers to certify non-use of AI in courtroom arguments

Judge Brantley Starr of the U.S. District Court for the Northern District of Texas has introduced a new requirement for attorneys appearing in his courtroom: they must certify either that no portion of their filing was drafted by generative artificial intelligence (AI) or that any AI-drafted language was verified for accuracy by a human being. The decision follows a recent incident in which attorney Steven Schwartz used ChatGPT, an AI language model, to supplement his legal research in a federal filing; the cases and precedents the tool supplied turned out to be entirely fabricated, a mistake Schwartz has said he regrets.

Judge Starr’s new rule, called the “Mandatory Certification Regarding Generative Artificial Intelligence,” mandates that attorneys file a certificate on the docket confirming compliance with the requirement. The certificate must attest that no AI was involved in drafting the filing or that any AI-generated language was cross-checked for accuracy by a human using print reporters or traditional legal databases.

The judge’s chambers explained the reasoning behind the certification, acknowledging the potential of AI platforms for a range of legal tasks while emphasizing their limitations. The memorandum noted that AI systems are prone to “hallucinations,” inventing fictitious information, including quotes and citations. It also raised concerns about reliability and bias, observing that AI is bound by no sense of duty, honor, or justice, unlike the attorneys who appear before the court.

While Judge Starr’s rule applies to his specific courtroom, it may serve as a precedent for other judges considering similar requirements. The use of AI in legal work, particularly for briefing and research, holds promise, but ensuring accuracy and transparency is essential. By implementing this certification, the judge seeks to prevent the misuse of AI-generated content and maintain the integrity of legal arguments presented in court.

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
