Legal Research Gone Wrong: Attorney's Trust in AI Leads to Fake Citations

2 min read | May 30, 2023 06:30

A US attorney is in hot water for using OpenAI's ChatGPT in legal research. Fake citations discovered in the litigation process have landed Steven Schwartz in serious trouble.

A US attorney is facing serious consequences after relying on OpenAI's ChatGPT for legal research. Steven Schwartz's decision to trust the AI tool resulted in the use of fake citations in a litigation process.

In a sworn affidavit, Schwartz admitted to using ChatGPT for research in a case against the airline Avianca. Despite warnings of potential inaccuracies, Schwartz claims he was "unaware that its content could be false."

Judge Kevin Castel expressed skepticism over the authenticity of the cases presented by Schwartz. Six cases, including Varghese v. China Southern Airlines and Shaboon v. EgyptAir, were found to be completely fabricated.

Schwartz now faces potential sanctions for his reliance on the AI tool. Despite vowing to verify AI-generated research in the future, the attorney will have to answer for the use of false citations at a hearing scheduled for June 8.

This case highlights the importance of cautious adoption and verification of AI tools in the legal profession. It prompts a critical discussion on the role of AI in legal research and the need for transparency and accountability.
