AI Voice Cloning: How Dangerous?
"No one is safe from voice cloning... only 3 seconds are voice are needed"
Society continues to hike through the uncharted wilderness of artificial intelligence (AI), and that means danger and elevated risk.
“No one is safe from AI voice cloning,” the monday.com newsletter recently wrote.
“The technology is posing an increasingly serious threat…
“A bizarre recent incident saw a high school athletic director use an AI voice clone to impersonate the school’s principal, leading the public to believe he made racist and antisemitic comments. The incident demonstrated just how easy AI voice cloning is to use and abuse.
“With only three seconds of one's voice needed to duplicate it, deep-fake revenge slander could happen in any workplace.
“Concerns are growing, leading the Federal Communications Commission to make the use of AI voices in robocalls illegal and US lawmakers to file new bills — such as the No Fakes Act and the No AI Fraud Act that seek to prevent companies from using an individual’s face, voice, or name without their permission.”
The Reputation Intelligence newsletter recognized the importance of this topic from communications and reputation perspectives and spoke to Josh Weiss about the valid concerns. The president and founder of 10 to 1 Public Relations, Weiss has spent 25-plus years in public relations and crisis communications.
Is what monday.com asserted factual? Is AI voice cloning a high-priority danger because of the strength and ease of the technology?
“Few companies are prepared for the risk they face in the short term from AI-generated deepfake video, images and audio, and this includes AI voice cloning,” Weiss points out. “We’re already witnessing celebrities becoming the first victims and now the 2024 elections have been flooded with phony video and audio clips.”
Voice cloning has established itself as a noticeable problem and risk. It’s no longer a warning. It’s here, and the motivation to exploit it for gain is high.
“Fraud follows the money and businesses will be next,” Weiss says. “There have been some deepfake scams affecting big-time CEOs. This threat isn’t something business leaders should be concerned about in the future; it’s something that is already starting to affect businesses today.”
Victim counts from nefarious actors will rise, possibly significantly, in different areas and in different ways. Weiss offers three scenarios:
“A fake AI-generated video or AI voice clone depicts your company leader stating that your big new product is going to be delayed by months, causing your stock price to drop so that short-sellers can make a killing before you even start to mount a defense,” Weiss says.
“A fake AI-generated image of key staff committing a crime appears just as the selection committee is choosing the winner of an important bid, creating enough doubt that they select your competitor instead.
“A fake AI-generated audio recording where your company President appears to be on the phone making racist and inappropriate comments, creating a staff and customer uproar.”
The protective qualities of the government response will be scrutinized.
“It’s too early to tell how effective these potential laws would be,” Weiss says, going on to explain, “All the same, it’s important for regulators, government officials and agencies to recognize the threat and be very clear on safeguards that should be followed to protect individuals and businesses.”
Responsive action and concerted effort are important to combat malicious use.
“While any law needs to be continually updated to stay current or ahead of technology trends, we need to start somewhere,” Weiss says.
The risk means leaders should be thinking ahead about their own response.
“It is crucial for businesses to proactively prepare for potential attacks and enhance their crisis communications plans,” Weiss advises. “They can do this by incorporating deepfake-related scenarios and strategies for safeguarding reputations.”
Treating reputation as a side responsibility will no longer be acceptable.
“As AI use increases and the digital transformation continues, reputation management will become even more important,” Weiss contends. “New threats such as deepfakes and AI-generated images, video or audio files will evolve, so it will be important for companies to consistently update crisis playbooks and strategies.”
He’s concerned for the immediate and near future.
“I believe the most dangerous time for businesses is in the next 18 months,” he asserts. “Big companies and platforms like Meta and Google are slowly adding features to help identify and remove deepfakes. However, until those features become permanent elements of the platforms and until awareness of deepfakes is better established in society, such attacks and AI voice cloning are a very dangerous threat.”
The sobering reality, he says, is that “a lot of individuals and companies are likely to be attacked.”
Weiss says his company has created a free PR guide to help companies prepare for and manage a deepfake crisis.
Thank you for reading the Reputation Intelligence newsletter…
Michael Toebe is a reputation consultant, advisor and communications specialist at Reputation Intelligence: Reputation Quality, assisting individuals and organizations with further building reputation as an asset or ethically and responsibly protecting, restoring or reconstructing it.