Michael Cohen, once the personal lawyer and fixer for Donald Trump, has admitted to unwittingly incorporating fabricated legal cases, generated by artificial intelligence (AI), into a court filing. The admission came to light in a recently unsealed court document in Manhattan federal court, raising questions about the use of AI in legal research and its potential impact on high-profile cases.
The AI-Generated Cases
Cohen disclosed that he sourced these AI-generated cases through Google Bard, a tool introduced by Google to rival OpenAI’s ChatGPT. These tools are designed to rapidly generate text based on user prompts but have been known to produce inaccurate information, commonly referred to as “hallucinations.” The fabricated cases were presented in arguments made by Cohen’s attorney, David M. Schwartz, who sought to terminate Cohen’s court supervision after his client had served over a year in prison.
Unawareness and Misunderstanding
Cohen, who was disbarred five years ago, claimed in a declaration to the judge that he was unaware Google Bard could generate non-existent legal cases. He explained that, as a non-lawyer, he had not kept up with the evolving trends and risks in legal technology. Cohen mistakenly perceived Google Bard as a supercharged search engine rather than a generative AI service that, like ChatGPT, can produce citations that look real but are not, and he had used it successfully in other contexts to find accurate information online.
Blame Game and Responsibility
While Cohen admitted his lack of awareness, he shifted the blame to his attorney, David M. Schwartz. Cohen asserted that Schwartz failed to verify the validity of the citations before submitting them to the judge. However, he urged the judge to show leniency towards Schwartz, characterizing the oversight as an “honest mistake” resulting from inadvertence rather than an intent to deceive.
Attorney’s Perspective
In a declaration filed with the court, Schwartz claimed that he believed drafts of the papers had been reviewed by E. Danya Perry, another attorney representing Cohen. Perry, however, disputed Schwartz’s assertion, stating that she had no involvement in the back-and-forth between Schwartz and his paralegal. She only discovered the bogus cases after the court filing and promptly reported them to the judge and federal prosecutors.
Implications for Michael Cohen and Donald Trump
The incident could have broader implications for Cohen, who is expected to be a key witness in a Manhattan criminal case against Donald Trump. In October of this year, Cohen testified against Trump, stressing that his aim was accountability rather than the pursuit of a personal feud.
Precedent and AI in Legal Cases
This isn’t the first instance of AI-generated citations causing legal complications. In June of the same year, two New York lawyers were fined $5,000 for including fictitious court cases generated by ChatGPT in a legal brief. The recurring theme of AI-generated misinformation surfacing in the legal realm underscores the need for increased scrutiny and awareness among legal practitioners.
In conclusion, the admission by Michael Cohen brings attention to the potential pitfalls of relying on AI-generated content in legal proceedings. As technology continues to evolve, it becomes imperative for legal professionals to stay abreast of emerging trends to avoid unintended consequences and maintain the integrity of the legal system.