Coined on Reddit in 2017, the term “deepfake” combines “deep,” as in AI deep-learning technology, with “fake,” signifying content that isn’t real. Image manipulation is nothing new: it began in the 19th century and soon rolled into early Hollywood movies. Today, the advent of digital technology – and more recently the ability to leverage machine learning and AI – has led to a proliferation of synthetic media and deepfakes.
You may have seen a recent image of Pope Francis in an oversized puffer jacket, or the viral TikTok account deeptomcruise, showing actor Miles Fisher impersonating the “Mission: Impossible” star with the help of realistic video manipulation. However, deepfakes are often associated with more sinister motives – the Reddit forum where the term originated featured face-swapping of celebrities’ likenesses onto existing pornographic content.
There are, unfortunately, myriad examples of deepfakes being weaponised to demean and harass targets, with one recent incident in Spain seeing images of underage girls altered to remove their clothing.
Deepfake Disruption in Politics and the Economy
Deepfakes can also be used to attack figures with public profiles, be they politicians, celebrities or high-profile business executives. Henry Ajder, a generative AI and deepfakes expert who advises Meta, Adobe and the UK government, says that the democratisation of AI tools is creating significant challenges in the fight against harmful content.
“Every single midterm and presidential election in the US I’m asked: is this going to be the one that we see that killer deepfake cause chaos? Up until recently I’ve said no, because the opportunity cost of a deepfake has not been worth it compared to the more traditional media manipulation and disinformation approaches bad actors have been using,” Ajder explains.
“This year that’s changed quite substantially. If you look at TikTok, Instagram and other social media platforms, the number of hyper-realistic memes, be they Joe Biden playing Monopoly with Donald Trump or a voice clone of Jordan Peterson talking about the finale of Game of Thrones being a disaster, are huge. This has really coincided with the true democratisation of these tool sets.”
In May, a fake image of the Pentagon on fire spread quickly on X (then Twitter), causing the stock market to briefly dip. “You can see very clearly how this can be weaponised at scale to change public sentiment, to cause panic, to impact a specific organisation’s stock price either positively or negatively,” Ajder adds.
Deepfakes and false or misleading media are only going to become more prevalent, meaning PR and corporate communications professionals must remain acutely aware of the impact they could have on reputation and trust.
The use of AI voice cloning in online scams, primarily to steal money, has been a threat for several years. Back in 2019, the CEO of an unnamed UK energy firm believed he was speaking on the phone with his boss, the chief executive of the firm’s parent company, who asked him to send more than $200,000 to a supplier. And earlier this year, senior citizens in Canada lost a combined $200,000 in an apparent voice-cloning con.
Deepfakes and the Risk to Brand Safety
“These are becoming viable attack strategies for people to use against both businesses and private individuals,” notes Ajder, adding that with more of our biometric data being captured, it’s likely that this kind of fraud will increasingly trickle down from targeting high-profile celebrities, politicians and business people to targeting members of the public.
“It's important to understand what the threat vectors are against your employees, your brand and your C-suite, but also how you can protect your users and make sure your clients aren’t being targeted by malicious AI identifying as your brand,” he says.
How PR Pros Can Protect Their Brands: 5 Steps
What can PR professionals do to navigate the threat of deepfakes? Here are five quick tips to help combat the problem and mitigate risk:
- Monitor coverage: Look to online and social media coverage of your brand for any emerging threats or harmful content. Monitoring tools like CisionOne can help here – the platform includes an AI-driven React Score feature that can identify potentially harmful content, like hate speech and fake news, in individual articles, social media posts and broadcast segments. From there, any unfolding deepfake story that could lead to reputational damage can be dealt with quickly.
- Take preventative measures: Encourage clients to explore the use of watermarks or digital signatures on important media assets (see the short signing sketch after this list). This can be especially useful during high-profile events or campaigns where public attention will be high. Transparency and authenticity when disseminating all media communications are key.
- Be proactive: Building on the idea of transparency, it’s worth encouraging brands to proactively address deepfake concerns by sharing their media creation processes and sources. This helps maintain trust and credibility with customers and clients, as brands can point to verified comms and quickly alert audiences to fake content.
- Plan for a crisis: PR pros know full well that a crisis can be just around the corner – a crisis comms plan isn’t a ‘nice to have’; it should be ‘always on’ and constantly adapting to new threats. Include a specific response strategy for any deepfake incidents and establish the personnel responsible for handling deepfake-related crises and their roles.
- Take legal action: We’ve seen in the recent cases against Fox News that knowingly spreading misinformation, including deepfakes, can create legal liability for the perpetrator. Speak to your legal teams now to plan potential legal responses in case legitimate publishers or platforms contribute to the spread of content that harms your brand or your employees.
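To make the digital signatures tip above a little more concrete, here is a minimal sketch in Python, using the third-party cryptography package, of how a brand could sign a media asset so recipients can verify it hasn’t been tampered with. The file name and key handling are purely illustrative assumptions; in practice, teams are far more likely to adopt established content-provenance and watermarking standards, and proper key management, than to roll their own signing script.

```python
# Minimal sketch: signing and verifying a media asset with an Ed25519
# digital signature. Illustrative only -- the file name is a placeholder
# and real workflows rely on established provenance standards and secure
# key storage rather than ad-hoc scripts like this.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Generate a signing key pair (in practice the private key would live in
# a secure key vault, not be generated on the fly).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Read the asset you want to protect -- "press_photo.jpg" is hypothetical.
with open("press_photo.jpg", "rb") as f:
    asset_bytes = f.read()

# Sign the asset. The signature can be published alongside the file so
# recipients can confirm it came from you and has not been altered.
signature = private_key.sign(asset_bytes)

# Verification: anyone holding the public key can check the signature.
# A doctored or deepfaked copy of the image will fail this check.
try:
    public_key.verify(signature, asset_bytes)
    print("Asset is authentic and unmodified.")
except InvalidSignature:
    print("Warning: asset does not match the published signature.")
```

The point of the sketch is simply that a published signature lets journalists, partners and platforms distinguish genuine assets from manipulated copies; which signing or watermarking scheme a brand actually adopts is a decision for its security and legal teams.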
Though deepfakes demonstrate the risks AI poses to PR and communications, it’s important to remember that the technology also has the potential to transform the industry positively, with tools like ChatGPT making it possible to scale and speed up content creation.
AI is also built into many of the tools PR pros use on a daily basis and can help take care of time-consuming tasks to free humans up for more strategic work. But the more content and data we create using AI, the more we need to be mindful of the risks the technology poses – and the possible disruption from those using it for nefarious purposes.
For a wider look at combatting misinformation, you might want to read this in-depth article next, which outlines how PR and comms teams must take responsibility for tackling the spread of harmful content.
To learn more about how CisionOne utilises AI to help PR and communications teams, get in touch to speak with an expert today.
About Simon Reynolds
Simon is the Content Marketing Manager at Cision UK. He worked as a journalist for more than a decade, writing on staff and freelance for Hearst, Dennis, Future and Autovia titles before joining Cision in 2022.