
AI is not the problem
When a licensed attorney files a legal brief in court that cites six court cases that do not exist, you know you are in for an exciting ride.
In June 2023, a federal court in New York faced exactly that situation. The brief listed detailed case names, airlines, courts, and docket numbers. Everything looked legitimate and carefully sourced. It was not.
ChatGPT had generated every citation, and Steven Schwartz of Levidow, Levidow and Oberman P.C. submitted the brief to a federal judge without checking any of them.
When Avianca's attorneys flagged the problem, Schwartz went back to ChatGPT and asked it directly whether the cases were real. The chatbot assured him they were. He accepted that too.
Judge P. Kevin Castel of the Southern District of New York was not sympathetic. He described the submissions as containing bogus judicial decisions with bogus quotes and bogus internal citations. Schwartz, his colleague Peter LoDuca, and their firm were sanctioned $5,000. In a separate ruling issued the same day, the underlying personal injury lawsuit was dismissed on statute of limitations grounds.
At the sanctions hearing, Schwartz told the court he had been operating under a false assumption: that ChatGPT could not possibly fabricate cases on its own. He had never thought to check.
That is not an AI failure. It is a human failure. And it is happening every day, in offices on every continent, at every level of seniority, in every industry that has decided AI is a shortcut to thinking rather than a tool for it.
Click below to read the full story:
https://lnkd.in/dWnb4K3k