Texas federal judge Brantley Starr is taking steps to ensure that attorneys appearing in his court do not use generative artificial intelligence (AI) in their filings without proper human review. The move comes after attorney Steven Schwartz used ChatGPT to supplement his legal research, resulting in completely fabricated cases being cited in a federal filing.
Judge Starr now requires attorneys to certify that no portion of a filing was drafted by generative AI or, if it was, that the AI-generated language was checked for accuracy by a human. The rule aims to prevent similar AI-generated errors from reaching his courtroom.
The certification requirement applies to all attorneys appearing before Judge Starr in the Northern District of Texas. Attorneys must file a certificate on the docket confirming compliance with the rule, including verification that any AI-generated language was checked for accuracy by a human using reliable legal sources.
Generative AI tools such as ChatGPT have significant limitations for legal research and drafting. These platforms can hallucinate, producing biased information along with fabricated quotes and citations, and they lack the allegiance to clients, the rule of law, and the truth that human attorneys are bound to uphold.
While generative AI has potential uses in the legal field, Judge Starr's rule reflects the need for responsible and accurate use. Attorneys who rely on AI-generated content must be prepared to justify its accuracy and reliability against the standards required for legal briefing. The certification requirement underscores the importance of human review in legal filings.