New York Lawyer Admits Using AI-Powered ChatGPT for Legal Research

A New York lawyer is in a legal predicament after relying on the AI tool ChatGPT for case research, which led him to cite non-existent legal cases in a court filing. Here is what happened, and the consequences the lawyer and his firm now face.

A New York lawyer is facing a court hearing after his firm used the AI tool ChatGPT for legal research. The court described an "unprecedented circumstance" when a filing was found to reference legal cases that do not exist. This article covers the details of the case, the lawyer's explanation, and the implications of using AI tools in the legal profession.

- New York Lawyer Faces Consequences for AI-Assisted Legal Research

- Unprecedented Situation Unfolds as AI-Generated References to Non-Existent Cases Emerge

- Lawyer Claims Unawareness of AI's Potential for Inaccurate Information

- Prominent Attorney Vows to Verify AI Outputs After Research Mishap

The trouble began when the court discovered that a filing prepared with the help of ChatGPT cited legal cases that did not exist, a situation the presiding judge called an "unprecedented circumstance."

During the proceedings, the lawyer said he had not been aware that the AI tool could generate misleading content. ChatGPT, known for producing text that closely resembles natural human writing, carries a disclaimer warning that it may produce inaccurate information.

The original case involved a plaintiff suing an airline over an alleged personal injury. To substantiate their claim, the plaintiff's legal team submitted a brief citing several prior court cases, aiming to establish a precedent for the case to proceed. However, the airline's legal representatives later brought to the judge's attention that they could not locate some of the referenced cases.

Judge Castel, in an order demanding an explanation from the plaintiff's legal team, noted that "six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations."

Further investigation revealed that the research was not conducted by the plaintiff's attorney, Peter LoDuca, but by a colleague at the same law firm, Steven A. Schwartz. Mr. Schwartz, an attorney with over 30 years of experience, had used ChatGPT to find relevant prior cases.

In a written statement, Mr. Schwartz clarified that Mr. LoDuca had no part in the research and did not know how it had been carried out. Expressing regret, Mr. Schwartz said it was his first time using the chatbot and that he had been unaware its content could be inaccurate.

Mr. Schwartz has vowed never again to use AI for legal research without verifying the authenticity of its output, stressing the need for rigorous checks before treating AI as a supplemental research tool.

Screenshots attached to the filing show a conversation between Mr. Schwartz and ChatGPT. In one message, Mr. Schwartz asks whether Varghese v. China Southern Airlines Co Ltd is a real case; it turned out to be one of the non-existent ones. ChatGPT responds affirmatively, prompting Mr. Schwartz to ask about its source. After "double-checking," ChatGPT reaffirms that the case exists and claims it can be found in legal reference databases such as LexisNexis and Westlaw. The chatbot also asserts that the other cases it provided to Mr. Schwartz are authentic.

Both attorneys, employed by the law firm Levidow, Levidow & Oberman, have been ordered to explain themselves at a hearing scheduled for 8 June, where possible disciplinary action will be considered.

Since its launch in November 2022, ChatGPT has attracted millions of users. The tool can answer questions in natural language that mimics human conversation and can imitate a variety of writing styles. Its knowledge comes from training data drawn from the internet up to 2021.

The incident involving the New York lawyer has raised concerns regarding the risks associated with artificial intelligence (AI), including the dissemination of misinformation and the potential for bias.

The use of ChatGPT for legal research has landed a New York lawyer and his firm in hot water: the inadvertent citation of non-existent cases in a filing has led to a court hearing and possible disciplinary consequences. The incident underscores the need for thorough verification when employing AI tools in the legal profession. As AI use grows, legal professionals must exercise diligence and confirm the accuracy and reliability of the information these systems generate.
