
The Perils of Using AI (As Learned by Lawyers Representing Anthropic)

Anthropic, maker of the LLM Claude, is one of the world's leading AI companies. Latham &amp; Watkins is one of the world's leading law firms. Mix them together, and you get the latest cautionary tale of lawyers' over-reliance on AI. While representing Anthropic in one of the many pending cases challenging AI companies' use of copyrighted materials, Latham apparently relied on Claude to generate citations for a report submitted by its expert. Unfortunately, Claude hallucinated, leading Latham and the expert to cite a nonexistent academic article. After the music publishers suing Anthropic noticed the invented citation, Latham was forced to cop to the error, submitting a declaration stating that while the expert had relied on a real report, Claude supplied a fake citation rather than an accurate one. Oops!

As AI continues to proliferate, I will repeat my mantra for every output it produces: "don't trust, and definitely verify." With my BSF colleagues, I am litigating several cases against other AI companies, and you can be sure we will check every citation to make sure Llama or ChatGPT isn't trying to pull a fast one.

A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor did it catch several other errors caused by Claude's hallucinations.

Tags

boies schiller flexner, ai, techlaw