AI does not produce reliable information. Trump's former lawyer has fucked around and found out.

Michael Cohen, the former personal lawyer and fixer for Donald Trump, used an artificial intelligence program to generate bogus legal citations in his motion for early termination of his supervised release.

Cohen, who was sentenced to three years in prison in 2018 for lying to Congress and other crimes, hired a new lawyer, David M. Schwartz, to file a motion on his behalf in November 2023. In the motion, Schwartz cited three cases that supposedly supported Cohen's request to end his supervised release early. That supervised release began in November 2021, after Cohen finished a sentence he served partly in home confinement, having been let out of prison in May 2020 due to the COVID-19 pandemic.

As U.S. District Judge Jesse Furman pointed out in a scathing order, none of the cases Schwartz cited actually exist. Furman ordered Schwartz to produce copies of the decisions by December 19 or face sanctions. He also demanded an explanation of how the motion came to cite non-existent cases and what role Cohen played in drafting or reviewing it.

Cohen, in a stunning admission, said that he had mistakenly given Schwartz the fake citations after the artificial intelligence program Google Bard cooked them up for him. Bard, Google's generative AI chatbot, is a large language model that produces fluent text on almost any topic, including legal ones. Cohen said he used it to research cases that could help his cause, but never verified the accuracy or authenticity of the results; he claimed he had understood Bard to be a kind of super-charged search engine, not a generative system that could invent citations that look real but are not.

There is a widespread misconception that LLMs are capable of independently reproducing factually correct information, and it is rapidly taking hold among internet users of Cohen's generation and older, who lack the digital literacy to navigate the post-LLM world. For users who are “less online,” deploying AI as it stands today in any scenario with significant real-world consequences, especially without proper supervision and verification, is dangerous.

Regardless of your opinion of Cohen, the man is a trained lawyer and, allegedly, an astute fixer. If he is unable to grasp that current-generation LLMs simply cannot be trusted to produce factual or reliable information, there is very little chance that other older users will either. Panic about misinformation on TikTok has been seizing headlines, and there is certainly a danger of AI-generated propaganda influencing younger users. But there is a far greater chance that demographics over 60 will continue to fall for AI-generated content and fabricated "facts" - with increasingly dangerous outcomes.
