AI Research Backfires! US Lawyer Faces Court Hearing After Using ChatGPT

A New York lawyer is facing a court hearing after his firm, Levidow, Levidow & Oberman, used ChatGPT for legal research.

The issue came to light after a court filing cited legal cases that turned out not to exist. Noting this, the judge said the situation presented an “unprecedented circumstance” for the court. The attorney, for his part, told the court that he had been “unaware that its content could be false.”

ChatGPT, an Artificial Intelligence (AI) application, has been making news since its release. It has been used for tasks such as writing work emails in particular tones and styles and following specific instructions.

The case began with a man suing an airline for what he claimed was personal injury. His legal team filed a brief that cited a number of previous court cases in an attempt to establish, through precedent, why the case should proceed.

However, in a letter to the judge, the airline’s lawyers said they were unable to locate several of the cases cited in the brief.

Judge Castel then wrote to the man’s legal team seeking an explanation, noting that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

It later emerged that the research had been conducted not by the man’s lawyer, Peter LoDuca, but by a colleague at the same firm. Steven A Schwartz, an attorney with more than 30 years of experience, had used the AI tool to find cases comparable to the one at hand.

Mr Schwartz added in a statement that Mr LoDuca had not carried out the research himself and was unaware of how it had been done. He said he “greatly regrets” using ChatGPT and had never used it for legal research before, having been “unaware that its content could be false.” He vowed never again to use AI to “supplement” his legal research “without absolute verification of its authenticity.”

A viral Twitter thread depicts the conversation between the chatbot and the lawyer. “Is Varghese a real case?” Mr Schwartz inquires. “Yes,” ChatGPT said, “Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.”

He then asks the bot to reveal its source. ChatGPT replied that, after “double checking,” the case was genuine and could be found on legal research databases such as LexisNexis and Westlaw.

The judge has scheduled a hearing for June 8 to “discuss potential sanctions” for Mr Schwartz.
