AI: A New Bad Actor in Smear Campaigns
Yes, it's something new to manage for reputation protection and safety
Technology opens up opportunities for societal advancement, yet also for abuse. Now, AI is a tool for attacks on the reputations of people and organizations.
“Reputational sabotage has found an ally in artificial intelligence,” wrote Chris Davis at Insurance Business America.
“A growing number of companies are confronting coordinated disinformation attacks, often orchestrated by ex-employees, corporate rivals or activist groups.”
If that isn’t alarming by itself, then consider this: “These campaigns, once limited in reach and complexity, are now cheaper, faster and more damaging thanks to generative AI, according to London-based law firm Schillings, which specializes in crisis response and reputation management,” Davis wrote.
This is no exaggeration of the danger. The risk is significant.
“Speaking to City AM, Schillings partner Juliet Young reported a striking 150% rise in smear campaigns over the past three years, targeting high-performing firms and executives,” Davis reported.
A 150-percent increase shows how relatively simple and effective the technology is proving to be in attacks on reputation. It also suggests that prevention systems and processes are not yet strong enough to deter the danger.
“The strategies and technology needed to launch these attacks are now accessible globally, across a wide range of actors and budgets,” said Young. “We’ve encountered instances where the cost of initiating a campaign is less than £50 ($67.62).”
The Uses
The strategy has been used to gain leverage, such as in legal disputes, Davis wrote.
“It’s not uncommon for clients to face a campaign intended to pressure them into a settlement, under threat of enduring reputational damage,” Young told City AM.
The How
There are preferred offensive plans.
“The attacks often deploy a multi-pronged approach,” Davis discovered. “AI-generated content disseminated through phony news outlets, bot-driven amplification on social media and deep-faked visuals designed to manipulate public perception.
“Some even insert compliance-triggering terms into online narratives to flag regulators or financial institutions.”
If organizational leaders, their teams and the systems meant to flag and catch such criminal activity are unaware of what is happening, or are slow to discover it, the damage can be severe.
“Left unchallenged, these efforts can alter search-engine results, dent investor confidence and expose businesses to regulatory scrutiny,” Young stressed.
The Trap
People can be emotional and biased before they are poised and curious. Perpetrators know this and exploit it.
“Deepfake technology is playing an outsized role,” Davis wrote. “According to Young, some of the more sophisticated smear efforts involve fabricated screenshots of headlines designed to gain traction among unsuspecting audiences.
“Others are seeded with compliance ‘red flags’ that trigger alerts within due diligence databases, creating obstacles to financing or M&A transactions.”
In other words, the criminals know what works and how to use technology as a weapon to achieve their objectives and mission.
First Aid for Damages is Difficult
“The evolving nature of AI-driven disinformation has made mitigation not only more urgent but significantly more complex,” Davis wrote.
Young elaborated: “The anonymity of the actors and the scale of these attacks make them difficult to trace and counter.”
Dismantle and Reset
These incidents or crises often call for multiple teams to mount a winning defense.
Young “emphasized that an effective response typically requires… investigators, legal counsel and crisis communications specialists, who can dismantle the disinformation and reset the narrative,” Davis reported.
The dirty use of AI should be no surprise, especially now that the technology and its abuse are becoming more common and well known.
Whether it’s used for one-on-one attacks (don’t forget those will occur too) or against small or large groups and organizations, the perpetrators who deploy it for immoral or criminal reasons learn how AI can do the most damage in service of their aims and goals.
Most of us are not prepared to build and execute a defense, at least not a socially acceptable one.
In addition, there is complexity involved in wisely planning, strategizing and implementing different levels of prevention, as well as in mitigating and resolving smear campaigns.
Heightening our awareness of what is happening in society, what is possible with AI as a weapon and what may become more dangerous in the future is a responsible start.
Michael Toebe is a reputation and communications specialist at Reputation Intelligence and writes the Reputation Intelligence newsletter here on Substack and on LinkedIn. He helps individuals and organizations proactively and responsively with matters of trust, stakeholder relationships and reputation.
He has been a reporter for newspapers and radio, hosted a radio talk show, written for online business magazines, been a media source, helped people work through disputes, conflicts and crises and assisted clients with communications to further build, protect, restore and reconstruct reputation. LinkedIn profile.