A lawyer representing a man who sued an airline used artificial intelligence (AI), specifically ChatGPT, to assist in preparing a court filing. However, the outcome was far from successful.
The lawsuit followed a typical pattern: Roberto Mata sued Avianca, alleging that he was injured by a metal serving cart during a flight to Kennedy International Airport. When Avianca sought to have the case dismissed, Mr. Mata’s lawyers vehemently opposed the motion, submitting a 10-page brief that cited several seemingly relevant court decisions, among them Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines.
The problem arose when no one, not Avianca’s lawyers and not the judge, could locate the cited decisions or the quotations and summaries attributed to them in the brief. The reason was simple: ChatGPT had fabricated everything.
The lawyer responsible for the brief, Steven A. Schwartz from the firm Levidow, Levidow & Oberman, admitted his error to the court, stating in an affidavit that he had used an AI program for legal research and that the source turned out to be unreliable.
Mr. Schwartz, who has practiced law in New York for thirty years, asserted that he had no intention of deceiving the court or the airline and said he had not realized the AI-generated content could be false. He had even asked the program to verify that the cited cases were real, and it assured him they were.
Expressing regret for relying on ChatGPT, Mr. Schwartz pledged to verify the authenticity of any AI-generated content in the future. The judge, P. Kevin Castel, acknowledged the unprecedented nature of the situation: a legal submission full of “bogus judicial decisions, with bogus quotes and bogus internal citations.” A hearing to discuss possible sanctions was scheduled for June 8.
This case of lawyers using ChatGPT highlights the ongoing debate within the legal profession about the value and risks of AI software. Legal ethics experts, like Stephen Gillers from New York University School of Law, emphasize the need to verify information provided by AI systems, cautioning against blindly incorporating their output into court filings.
The real-life case of Roberto Mata v. Avianca Inc. serves as a reminder that white-collar professionals still have some time before being replaced by robots. The incident unfolded when Mr. Mata, a passenger on Avianca Flight 670 in August 2019, claimed to have been struck by a serving cart. Avianca’s lawyers sought to dismiss the case based on the expiration of the statute of limitations.
In their brief filed in March, Mr. Mata’s attorneys argued for the continuation of the lawsuit, citing multiple court decisions that were subsequently revealed to be nonexistent. Avianca’s legal team informed Judge Castel about their inability to locate the referenced cases, including Varghese v. China Southern Airlines, and the suspicious quotations contained in the brief.
Judge Castel requested that Mr. Mata’s attorneys provide copies of the opinions mentioned. They submitted a compilation of eight supposed opinions, which included details like court names, judges, docket numbers, and dates. However, Avianca’s lawyers could not find any of the listed opinions in court records or legal databases.
Bart Banino, an attorney on Avianca’s legal team, said that his firm, Condon & Forsyth, specializes in aviation law and that its lawyers could tell the cited cases were not real. He suspected a chatbot had been involved in drafting the brief.
ChatGPT works by predicting which fragments of text are most likely to come next, one after another, using a statistical model trained on vast amounts of text from the internet. In Mr. Mata’s case, the program seemingly grasped the structure of a legal argument but filled it with names and details drawn from various real cases.
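To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model, vastly simpler than ChatGPT and not how the program was used in this case, that counts which words tend to follow which in a scrap of training text and then samples a fluent-sounding continuation. The training snippet and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration (not ChatGPT): a bigram model predicts the next word
# from counts of what followed it in the training text, then samples
# a continuation word by word.

training_text = (
    "the court held that the claim was barred "
    "the court held that the motion was denied "
    "the claim was dismissed by the court"
)

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    choices, weights = zip(*counts[prev].items())
    return random.choices(choices, weights=weights)[0]

def generate(start: str, length: int = 8) -> str:
    """Generate plausible-looking text one word at a time."""
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:
            break
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
# The output reads fluently but is grounded in nothing beyond word
# statistics, which is how a model can produce convincing yet
# fabricated-sounding text.
```

The point of the toy is the failure mode: the sampler optimizes only for statistical plausibility, not truth, which is why a far more capable model can produce citations that look real but are not.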
Judge Castel, having conducted his own investigation, confirmed that the docket number printed on the alleged Varghese opinion corresponded to an entirely different case. He denounced the opinion as fraudulent, noting the presence of non-existent internal citations and quotes. Five of the other opinions provided by Mr. Mata’s lawyers also appeared to be fabricated.
In his affidavit, Mr. Schwartz explained his version of events. He had originally filed the lawsuit in state court, but after it was transferred to Manhattan’s federal court, where he is not admitted to practice, his colleague Mr. LoDuca became the attorney of record. Mr. Schwartz continued to handle the legal research, turning to ChatGPT to supplement his work. He claimed that the AI program assured him the non-existent cases were authentic.
In their exchange, Mr. Schwartz asked ChatGPT about the legitimacy of “varghese,” and the AI responded positively, providing a citation. When Mr. Schwartz further inquired about the other cases, ChatGPT claimed they were real and accessible in reputable legal databases. However, this turned out to be false.
Ultimately, the incident underscores the importance of verifying information provided by AI systems and the potential pitfalls of relying solely on their output without human oversight.