AI and an Egregious Falsehood: Could it Happen to You?
It may not be outside the realm of possibility.
Artificial Intelligence, for all its benefit, potential and power, needs to come with a warning when it publishes claims about people or organizations that put them at risk of reputation damage, because those accusations may not be true.
Singer and actress Diana Ross, as the saying goes, could tell you some things.
Kellen McGovern Jones, senior investigative reporter at The Dallas Express, just wrote about it, and it’s quite a story. It’s also a strong argument for AI’s legal and moral responsibility to get communication right.
“Google’s AI Overview falsely stated that legendary singer Diana Ross was arrested for cocaine possession and entered rehab in 1992 — claims not supported by any known record,” McGovern Jones wrote.
Yet it was reported as fact.
“The statement, surfaced through Google’s new ‘AI Overview’ feature in response to a basic search from The Dallas Express on July 30, 2025, stated: ‘Yes, Diana Ross has publicly admitted to struggling with drug use in the past. In 1992, she was arrested for possession of cocaine and later entered rehab,’” McGovern Jones added.
The big problem?
“That account is not true,” McGovern Jones wrote.
It can be argued that this isn’t a case of a lie, because lying requires human understanding and intent, yet what AI is communicating here is a complete, ugly falsehood.
“A review of news archives on both Google and LexisNexis by The Dallas Express turned up no evidence that Ross was arrested for cocaine or entered rehab in 1992,” per McGovern Jones. “While the Motown icon did check into rehab in 2002, reports at the time pointed to prescription drugs and alcohol, not cocaine.”
AI can’t be making things up about people if it wants to be trusted as ethical technology fit for public consumption. Victims of that kind of processing and public communication are going to think “no excuses,” and they would be logical and reasonable in concluding and saying exactly that.

Something to Keep in Mind
It would be natural for someone, such as Ross in this situation, to be extremely upset at AI publishing something painfully wrong about them, so much so that they would immediately want to hire a lawyer to sue for damages, especially punitive ones.
It would seem, at least emotionally, to be an easy win. The reality, though, is that defamation law and the courts don’t work the way we think they do or should. It’s often a hard road to satisfy what the law and judges demand to prove defamation.
“After ChatGPT falsely claimed radio host Mark Walters had embezzled from a nonprofit, Walters sued OpenAI for defamation. The lawsuit was dismissed,” McGovern Jones wrote. “The court found OpenAI lacked the ‘state of mind’ necessary for defamation and emphasized user disclaimers.”
In other words, the public be damned. What the court said about user disclaimers is smart, though. They need to be ever present and clear about the likelihood of errors in AI’s outputs.
AI will have to tighten up its product and service offering, which I’m confident it will do. For now, the egregious and harmful errors it is making are not only unacceptable, they are worthy of punishment, a consequence the courts aren’t willing to support under the law as it stands.
This means that 1) AI companies have to hold themselves to a higher standard and better manage the risk of their outputs, 2) people should still contact reputable online defamation attorneys for a consultation, and 3) people, especially those in the public eye, should realize the value of the benefit of the doubt that comes from longstanding trust within one’s circle and in the court of public opinion.
It’s additionally wise to strongly consider having someone represent you with the media, the public and online audiences when anything false and negative breaks.
None of us may be able to fully prevent AI “hallucinations” but we can be very well prepared to respond effectively.
Michael Toebe is a reputation and communications specialist at Reputation Intelligence and writes the Reputation Intelligence newsletter here on Substack and on LinkedIn. He helps individuals and organizations proactively and responsively with matters of trust, stakeholder relationships and reputation.
He has been a reporter for newspapers and radio, hosted a radio talk show, written for online business magazines, been a media source, helped people work through disputes, conflicts and crises and assisted clients with communications to further build, protect, restore and reconstruct reputation. LinkedIn profile.
