February 8, 2024

AI Misuse in the Legal Industry – Unpacking the Controversy Surrounding New York Attorney Jae Lee’s Fictitious Case Citations


New York attorney Jae Lee recently came under fire for using the AI tool ChatGPT to cite a nonexistent case in a legal filing while appealing the dismissal of her client's medical malpractice lawsuit. The cited case, Bourguignon v. Coordinated Behavioral Health Services, could not be traced by the US Court of Appeals for the 2nd Circuit, and Lee was unable to provide a copy of the decision.

The incident has raised concerns about the potential misuse of AI in the legal industry. The court found that Lee's conduct fell well below what is expected of counsel and, as a result, referred her to a grievance panel for possible sanctions.

This is not an isolated incident. In other reported cases, lawyers have mistakenly relied on AI tools and cited fabricated cases in their legal filings; notably, two Manhattan attorneys and Michael Cohen, Donald Trump's former attorney, have made the same mistake.

Lee, of JSL Law Offices, had cited the fictitious, AI-generated case in support of her client's claim over a botched abortion. Incidents like these have sparked a debate about the role of AI in the legal field.

While AI tools like ChatGPT are useful for many purposes, including legal research, they have also been misused, with potentially serious consequences. AI experts are now voicing concerns about how the technology is deployed in the legal industry.

The managing director of the AI platform Luminance, for instance, cautions that while AI is a powerful tool, it is not an expert source and should be used with care in the legal field. Simon Thompson, head of AI at GFT, echoes this sentiment, arguing that AI systems should only be used in the industries and applications for which they were specifically designed, in order to prevent misinformation or inaccurate responses.

The misuse of AI tools like ChatGPT in the legal industry underscores the need for more stringent guidelines on their use. These incidents serve as a stark reminder that while technology can be a powerful ally, it must be used with discretion and responsibility, especially in industries like law where accuracy and truth are paramount.

Connect with our expert to explore the capabilities of our latest addition, AI4Mind Chatbot. It’s transforming the social media landscape, creating fresh possibilities for businesses to engage in real-time, meaningful conversations with their audience.