Deepfakes, Risks and Protection
A Reputation Intelligence Special Report with cyber, legal, reputation, AI and communications specialists
A doctor recently wrote an article communicating his reasonable concerns about technology, deepfakes and the risks they pose to how people would judge his reputation. His fear held him back from creating and publishing videos on a major platform.
"Misinformation spreads faster than corrections and even if a deepfake were proven false, the damage could already be done,” Mehmet Yildiz stated about his reasoning.
Google’s AI overview defines the term as follows:
“A deepfake is a synthetic media image, video or audio recording that has been manipulated using artificial intelligence (AI) to make it appear real,” the search result reads. “The term ‘deepfake’ comes from the combination of ‘deep,’ which refers to the AI deep-learning technology used, and ‘fake,’ which indicates that the content is not real.”
Reputation Intelligence engages in a discussion today about this danger. Before that conversation begins, consider what Ian Paterson, the CEO at Plurilock Security, has expressed.
"With the rapid advancement of artificial intelligence, the sophistication and believability of scams are expected to increase dramatically,” he tells RepIntel for this article.
“AI innovation is progressing at an exponential rate, with new tools emerging daily that mimic human behavior, create deepfake videos and alter voices convincingly.”
Did you notice that powerful adverb? “Convincingly.” It gets your attention.
Now to the conversation, in which a variety of professionals share their insights and recommendations.
The level of risk with publishing your face and voice online is analyzed differently, depending on the professional you ask.
“The danger of deepfakes is not to be underestimated, especially for those who frequently share their images, voices or content online,” says Theresa Payton, who was the first female CIO at the White House and is the founder, CEO and chief advisor at Fortalice Solutions, which specializes in cybersecurity, cybercrime, cyber defense and risk assessment.
“These AI-generated fabrications can now manipulate a person’s likeness and voice to create entirely false representations that are often indistinguishable from reality,” she says.
The danger, therefore, is credible.
“For individuals with a public presence, this can erode credibility, distort their message, and tarnish reputations,” Payton says. “It’s crucial to be aware of the escalating threat of deepfakes and the potential for this technology to be weaponized against us for various reasons, including political, financial or personal motivations.”
There is a 30,000-foot overview from the legal field to consider.
“One thing that's especially scary about deepfakes is that we're probably just scratching the surface of what's possible with this technology,” says Paul Koenigsberg, a personal injury attorney at Koenigsberg & Associates.
“We've already seen established cases of these being used for nefarious purposes, but the technology has been here for a relatively short time, such that we can safely assume we are yet to see the worst it can do.”
He assesses the risk differently.
“In general, the level of danger depends on how much you have to lose,” Koenigsberg asserts. “Before you breathe easy and think you can't be a victim because you don't have billions of dollars' worth of assets, know that the threshold of what constitutes scammable assets is really lower than we might think.
“We've already seen cases where deepfakes are used for identity theft, financial scams or reputational harm.”
He believes that it's not just well-known, high-profile people who are in danger.
“The risk isn't limited to celebrities or influencers; even individuals with smaller audiences or those in professional settings can be targeted,” Koenigsberg says.
A communications professional, however, wants to alleviate some concern or anxiety.
“There is always going to be a risk. Any time you fill out an application and put down your social security number, it can potentially be misused or hacked down the line. In terms of deepfakes, I believe the danger is about the same,” counters Isaac Mashman, the founder at Mashman Consulting Group, a firm that specializes in personal branding and individual reputation management.
“It is improbable, but it could happen. If you have ever posted one photo of yourself or a single video or audio recording of your voice, it is possible to be duplicated. However, the average person shouldn't be all too concerned.”
Communication, however, still takes place in a dangerous landscape.
“With AI technology these days, both its capabilities and its general lack of regulation, essentially any material online could be used to make a deepfake,” Edward Tian, the founder and CEO at GPTZero, an AI detector for ChatGPT, GPT-4 and Gemini, argues.
“All it takes is just one person to see an image or video of someone and use AI technology to manipulate it.”
The problems run deeper than one might first think.
“The dangers of deepfakes extend far beyond simple misinformation,” Payton begins. “They can be used to destroy reputations by attributing false statements or actions to individuals, leading to personal and professional fallout.
“Increasingly, deepfakes are being weaponized for blackmail, harassment, financial fraud and more insidious purposes like the creation of unauthorized illicit imagery and pornography.
“Criminals exploit this technology to manipulate individuals into compromising situations, often without their knowledge or consent. This can be especially devastating, as it not only violates privacy but also subjects victims to severe emotional and reputational harm.”
There are also collective societal costs.
“Beyond direct attacks, deepfakes erode public trust in legitimate content, creating a pervasive environment of doubt and skepticism where truth becomes more challenging to discern,” Payton explains.
“It’s critical to note that anyone can become a target for various reasons — public visibility, personal associations or random targeting,” she warns.
“Being a victim of a deepfake is not a reflection of one’s actions and we must resist the temptation to victim-blame. This evolving threat requires vigilance and empathy to address both the technical challenges and the human impact.”
“Deepfakes have been used for various purposes but the overarching idea is to use a private person's identity for spreading disinformation, destroying that person's reputation (or) blackmail or extortion,” Koenigsberg says.
He too brings up the topic of sexual images as a form of criminal activity.
“One common practice is to create a pornographic deepfake of a person and threaten to spread the resulting media if the victim doesn't pay up. These attacks can lead to long-term damage to their reputation and emotional well-being, even if the content is proven to be fake.
“In many cases, the mere threat of exposure is enough to cause significant distress and force victims into compliance.”
Your likeness could be used to communicate that which you do not endorse or believe.
“Deepfakes represent a double-edged sword,” says Owais Rawda, a senior account manager of public affairs at Z2C Limited, with a focus area on digital media trends. “They can easily mislead audiences and tarnish reputations overnight. Imagine seeing a video of yourself misrepresenting your beliefs or actions. It's a nightmare scenario.
“Beyond personal reputations, there are broader implications for public trust in media. Misinformation spreads like wildfire and while corrections may eventually surface, the damage done can be irreversible.”
Deepfake audio, without video, is a danger that is being discussed in media as well.
“I was recently listening to a podcast where they were discussing how cybercrime networks target elderly people and use their loved ones' voices to get personal information such as their banking info,” Mashman says.
“This may not happen to everyone but serves as a clear example of one such use case of deepfakes. More generally speaking, the more famous and influential the person, the more likely it is for someone to duplicate their likeness.
“Joe Rogan had to step in after businesses were using fake videos of him endorsing their product. Scarlett Johansson (too) with ChatGPT. There will always be risks but do not prevent yourself from taking opportunities out of fear.”
Tian looks at the landscape with more scrutiny.
“Deepfake scams are becoming more prominent and they can be very successful due to how realistic they are,” he says, going on to echo what Mashman alluded to. “Using someone’s likeness with the use of AI can allow scammers to convince family members, friends or the public to give money or participate in something harmful without them realizing it’s a scam.
“There is also the danger of exploitation. The conversation regarding posting kids on social media for example, which has become a much more prevalent topic in recent years, is one to think about.”
Risk management should be an ongoing protective practice.
“It demands a proactive and layered approach,” Payton advises.
“First, be mindful of the platforms you engage with and the available privacy settings, as reducing the amount of personal content online can limit exposure.”
A proactive approach can provide benefit-of-the-doubt insurance for you.
“Establish a robust personal and professional brand that is built on authenticity and transparency,” Payton recommends. “This can act as a shield against potential misinformation.”
Constant observation should be another regular duty.
“It’s also vital to actively monitor your online presence, using tools or services that can alert you to any false or manipulated content so that you can respond promptly,” Payton says.
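The kind of fingerprinting that monitoring tools use to spot copies or alterations of your images can be illustrated with a toy example. The sketch below is not any specific monitoring product that Payton refers to; it implements a simple perceptual "average hash" (aHash), a classic fingerprint that stays identical for an exact repost and diverges sharply for a substantially different image. Real services layer far more robust techniques on top of ideas like this.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Downsample a grayscale image to hash_size x hash_size block means,
    then threshold against the overall mean, yielding a 64-bit
    perceptual fingerprint (the classic 'aHash')."""
    h, w = img.shape
    # Trim so the image divides evenly into hash_size x hash_size blocks.
    img = img[:h - h % hash_size, :w - w % hash_size]
    h, w = img.shape
    blocks = img.reshape(hash_size, h // hash_size,
                         hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(a != b))

# Toy data: a 'posted' image, an exact repost and an unrelated image.
rng = np.random.default_rng(1)
original = rng.random((64, 64))
reposted = original.copy()        # untouched copy of your image
unrelated = rng.random((64, 64))  # a completely different image

print(hamming(average_hash(original), average_hash(reposted)))   # 0
print(hamming(average_hash(original), average_hash(unrelated)))
```

An exact repost hashes identically (distance 0), while an unrelated image differs in roughly half its bits, which is how such fingerprints let a monitoring service flag likely reuse without storing the images themselves.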
It’s beneficial, she adds, to make risk management a team effort.
“Surrounding yourself with a supportive network that can help amplify your true voice is another crucial strategy,” Payton teaches. “Lastly, remember that preparation and awareness are the best defenses against this evolving threat; staying informed and taking action before an incident occurs will help protect your reputation and well-being.”
Help your stakeholders learn, and clearly recognize, how you communicate.
“In a world where deepfakes can blur the lines between reality and fiction, ensuring your true voice is heard and recognized is crucial for protecting your reputation,” Rawda says.
“I have experience in the public relations space and currently work with public figures on their personal brands,” Mashman says. “Reputation management is a big part of my work and I know of several people who are going out of their way to produce AI clones for themselves in a positive sense.”
AI clones, per Google, “also known as digital twins, are digital representations of people that are created using artificial intelligence to mimic their appearance, voice and behaviors.
“AI clones are interactive technologies that can be used in a variety of fields, including customer service, entertainment, and personal assistance.”
There is a certain reality that prevents total protection when communicating online with images, video and audio.
“There is not much you can do to prevent a bad actor from taking an image of you that already exists online and manipulating it with AI,” Tian laments.
“You could try eliminating as many images and videos of yourself that concern you as possible and you should practice caution in uploading anything new. Right now there isn’t a lot of legal protection but in the coming years I expect there will be.”
“Keep control over what you post and be selective about the platforms you use, ensuring they have strong privacy settings,” Koenigsberg suggests. “Another approach is to diversify your online identity across various platforms.
“For example, avoid relying solely on one platform for communication or branding, as being ‘all-in’ on a single platform makes it easier for deepfakes to gain traction if they spread,” he advises.
“Having a broad online presence can help prevent false content from overwhelming your narrative and tarnishing your reputation.”
Michael Toebe is a specialist for trust, risk, relationship, communications and reputation at Reputation Intelligence - Reputation Quality. He serves individuals and organizations by helping them further build, protect, restore and reconstruct reputation.
"Deepfakes can cause immense reputational harm. For instance, they can be used in smear campaigns, where fabricated videos or audio clips are disseminated to damage a person’s or company’s reputation."
"Fortunately, AI is being used to develop tools that can detect deepfakes and mitigate their impact."
"ZDNet reports on the development of AI-driven deepfake detection tools, which are becoming increasingly sophisticated and effective at identifying fake content. These tools analyze video and audio content for signs of manipulation, effectively identifying and neutralizing deepfakes before they can cause significant damage."