For 2024, new cybersecurity training in recognizing deepfakes will be important for organizations
In recent days, the importance of recognizing deepfakes has made international news, with major celebrities falling prey to explicit AI imagery and a finance worker in Hong Kong paying out $25 million to scammers after a multi-person video conference call featuring a deepfake of their CFO persuaded them to do so.
These convincing forgeries can undermine personal reputations, corporate integrity, and even national security: the creation of doctored images has risen 550% since 2019, fueled by the prevalence of AI. Deepfake content can be used to create deceptive videos or images that sway elections, crash stock markets, and damage corporate reputations.
There are currently no US federal laws against the creation or sharing of deepfake images, though there have been moves at the state level to tackle the issue, and recent events may be stirring the bipartisan support needed for Congress to pass legislation. Even with regulation of deepfake imagery already part of the UK Online Safety Act 2023, the problem remains widespread and immediate, and equipping our colleagues with the knowledge to recognize deepfakes will be an important component of company security training as we move further into an AI-influenced 2024.
Recognizing the Deepfake Problem
When conducting security training, it is important to show the prominence of what we are trying to protect against and why. Many of our colleagues won’t understand the gravity of the deepfake issue, so it is important to start with core principles: explain what deepfakes are and why this training matters to our organization.
Deepfakes leverage powerful AI and machine learning algorithms to create or manipulate audio and video content with a high degree of realism. Originally confined to academic circles, the technology has now proliferated across the digital domain, posing significant challenges for individuals and organizations alike.
One example of the ramifications of even basic image manipulation is the May 22, 2023, image of smoke billowing from a government building near the Pentagon. While swiftly debunked, the picture was shared by reputable sources and triggered investor fears, briefly sending stocks tumbling: a sell-off driven by a fake image, underscoring how artificial intelligence can be used for nefarious purposes with significant consequences.
From impersonating corporate executives to spreading false information, manipulating stock markets, or even tampering with evidence, the potential misuse of deepfakes represents a serious and present security threat. The democratic process, corporate governance, and personal reputations are all at risk, making deepfake awareness a necessary part of any organization’s future security training and overall security posture.
Recognizing Deepfakes: Signs and Warnings
Visual Inconsistencies
Training your team to spot irregularities in video or audio content is crucial. This includes inconsistencies in lighting, shadows, or facial expressions and lip-sync discrepancies in videos. Audio deepfakes may exhibit unnatural intonation or breathing patterns.
Such evidence can be seen in the YouTube example, ‘This is Not Morgan Freeman.’
The first step will be teaching our colleagues to question what they’re looking at and, if something doesn’t look right, to seek definitive verification.
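To support that kind of frame-by-frame scrutiny, a reviewer can pull a suspect clip apart into stills and inspect lighting, shadows, and lip-sync alignment one frame at a time. Below is a minimal sketch using OpenCV; the video file name is a placeholder, and this is an illustrative review aid rather than a detection tool.

```python
# Minimal sketch: extract stills from a suspect video for manual review.
# Requires OpenCV (pip install opencv-python); "suspect_clip.mp4" is a
# hypothetical file name.
import cv2

def extract_frames(video_path: str, every_n: int = 30) -> list:
    """Grab every Nth frame so reviewers can inspect lighting, shadows,
    and lip-sync alignment one still at a time."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

if __name__ == "__main__":
    stills = extract_frames("suspect_clip.mp4", every_n=30)
    for i, frame in enumerate(stills):
        cv2.imwrite(f"review_frame_{i:03d}.png", frame)
```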
Technological Aids
Machine learning and artificial intelligence aren’t all bad and can greatly help support the corporate cybersecurity stack.
Leveraging AI-based detection tools can provide an additional layer of defense. These tools analyze the content for signs of manipulation that are imperceptible to the human eye. Encouraging the use of verified platforms and tools that watermark authentic content can also mitigate risks.
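As a taste of what automated analysis looks for, here is a minimal error level analysis (ELA) sketch using Pillow. ELA is a simple forensic heuristic rather than an AI detector: regions spliced into a JPEG often recompress differently from the rest of the image and show up as bright patches in the difference image. The file names are placeholders.

```python
# Minimal error level analysis (ELA) sketch (pip install pillow).
# Manipulated regions of a JPEG often recompress differently and appear
# as bright patches in the amplified difference image.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Amplify the residual so compression inconsistencies become visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("suspect_image.jpg").save("ela_result.png")
```

A clean photograph tends to produce a fairly uniform residual; a pasted-in face or object often stands out, which makes ELA a useful (if fallible) first-pass check before escalating to heavier tooling.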
In addition, using machine learning tools that can recognize the signs of a breach based on a baseline of approved and expected behavior can be invaluable in identifying the tell-tale signs of a compromise, preventing lateral movement, and even recognizing the signs of unknown cybersecurity vulnerabilities like zero-day attacks in real time.
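As a rough illustration of that baselining idea, the sketch below trains scikit-learn’s IsolationForest on synthetic “normal” session features and flags a 3 a.m. bulk transfer as anomalous. The features and numbers are illustrative assumptions, not a real telemetry schema or a production detector.

```python
# Sketch: flag activity that deviates from a learned behavioral baseline.
# Requires scikit-learn and numpy; the features (login hour, MB sent,
# distinct hosts contacted) are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline: typical working behavior, e.g. midday logins, modest traffic.
baseline = np.column_stack([
    rng.normal(13, 2, 500),    # login hour
    rng.normal(50, 10, 500),   # MB transferred per session
    rng.normal(3, 1, 500),     # distinct hosts contacted
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New events: one typical session, one 3 a.m. bulk transfer to many hosts.
events = np.array([[14, 55, 3], [3, 900, 40]])
print(model.predict(events))  # 1 = consistent with baseline, -1 = anomalous
```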
The Role of Critical Thinking
Promote a culture of skepticism and verification. Nowadays, if we repost a piece of news without (at minimum) checking reputable news sources and/or Snopes, we are courting controversy and possible ridicule from our peers.
Encourage employees to question the authenticity of unexpected or suspicious communications, especially those that solicit confidential information or prompt hasty decisions. Examples like ‘Spider-Man: No Way Home but It’s Tobey Maguire’ or ‘Lynda Carter Wonder Woman’ can help highlight the quality of deepfake videos and the importance of recognizing them. Is that really Tom Cruise or Keanu Reeves on TikTok? In ‘Toxic Influence,’ Dove employed deepfake technology to simulate mothers of teenage girls giving improbable advice, aiming to highlight the harmful effects of risky ‘beauty’ tips disseminated by influencers on social media platforms. Salvador Dalí once said, “If I die, I won’t completely die,” and in the Dalí Lives deepfake from The Dalí Museum, this is certainly true.
Best Practices for Cybersecurity Training on Recognizing Deepfakes
Regular Training and Awareness Sessions
- Implement comprehensive training programs that are regularly updated to reflect the evolving nature of deepfake technology.
- Use real-world examples to illustrate the potential threats and conduct hands-on workshops to practice detection techniques. Asking, “Is this an AI image?” or suggesting they “Pick which of these images is real” makes training participants think critically, gives hands-on experience in recognizing deepfakes, and shows the tell-tale signs in practice; a simple script like the sketch after this list can run such an exercise.
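A “pick the real image” round needs nothing more than a short script and a curated set of genuine and AI-generated image pairs. The file names and answer key below are placeholders for your own workshop material.

```python
# Sketch of a "pick the real image" workshop exercise. File names and the
# answer key are placeholders; swap in your own curated real/AI pairs.
import random

ROUNDS = [
    # (option A, option B, letter of the genuine photo)
    ("headshot_a.png", "headshot_b.png", "A"),
    ("office_a.png", "office_b.png", "B"),
]

def run_quiz() -> None:
    score = 0
    for img_a, img_b, answer in random.sample(ROUNDS, len(ROUNDS)):
        print(f"\nWhich image is real?  A: {img_a}   B: {img_b}")
        guess = input("Your answer (A/B): ").strip().upper()
        if guess == answer:
            score += 1
            print("Correct.")
        else:
            print(f"Not quite; the genuine photo was {answer}.")
    print(f"\nYou spotted {score} of {len(ROUNDS)} real images.")

if __name__ == "__main__":
    run_quiz()
```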
Simulation Exercises
Simulated deepfake attacks can help prepare your team for real-world scenarios. These exercises not only test the efficacy of your detection protocols but also highlight areas for improvement in your response strategy.
To help staff recognize deepfakes, security teams can simulate deepfake attacks by creating relevant deepfake content that mimics potential attack scenarios, such as spoofed videos or audio recordings of senior executives issuing unauthorized commands. This involves using advanced deep learning techniques and generative adversarial networks (GANs) to generate high-quality forgeries. However, simple imagery can be easily created with the likes of DALL·E, Wombo, NightCafe, or Stable Diffusion’s img2img, where you can supply an image, say, with permission, a photo of a member of staff or the exterior of your corporate HQ, as inspiration to generate a new (alternative) image.
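For the img2img route, a minimal sketch using Hugging Face’s diffusers library might look like the following. The model ID, prompt, and file names are illustrative assumptions, and any source photo should be one you have explicit permission to use.

```python
# Sketch: generate a benign "alternative" image from a consented source
# photo via Stable Diffusion img2img (pip install diffusers transformers
# torch). Model ID, prompt, and file names are illustrative placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A photo you have explicit permission to use, e.g. the office exterior.
init_image = Image.open("hq_exterior.jpg").convert("RGB").resize((768, 512))

result = pipe(
    prompt="the same building with smoke rising from the roof",
    image=init_image,
    strength=0.6,        # how far to depart from the source image
    guidance_scale=7.5,  # how closely to follow the prompt
).images[0]
result.save("simulated_incident.png")
```

Raising `strength` pushes the output further from the source photo; for training material, keep the results clearly labeled as simulated and handle them under the same controls as any other sensitive test asset.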
Teams can then conduct phishing campaigns within the organization using these deepfakes to train employees to recognize and respond appropriately to such threats.
Collaboration with Industry Partners
Engaging with industry groups, attending cybersecurity conferences, staying current on cybersecurity trends, and participating in forums can provide insights into emerging trends and best practices. Sharing knowledge and experiences with our peers can enhance our organization’s preparedness.
Creating a Resilient Organizational Culture
Fostering Open Communication
Encourage a culture where employees feel comfortable reporting potential deepfake incidents without fear of appearing foolish or of retribution. Open lines of communication ensure that threats are identified and addressed promptly.
Establishing Clear Protocols
Develop clear procedures for verifying and responding to deepfake incidents. This includes steps for escalation, investigation, and communication with stakeholders, ensuring a coordinated and effective response.
Continuous Improvement
Cybersecurity is not a one-time effort but a continuous process of improvement. Regularly review and update your security policies, training programs, and detection tools to keep pace with the rapidly changing threat landscape. Deepfake technology will evolve, and training will have to evolve with it to help our colleagues recognize deepfakes as access to AI image and video creation tools advances.
The Human Element in Cybersecurity
While technology plays a crucial role in detecting and mitigating the risks posed by deepfakes, the human element remains indispensable, and the buck will stop with the viewer; beauty, as they say, is in the eye of the beholder. Empowering our staff with the knowledge, tools, and confidence to challenge and verify digital content is paramount in recognizing deepfakes.
By fostering a vigilant, informed, and proactive organizational culture, it’s possible to address the security challenges posed by deepfakes, safeguarding an organization’s integrity and trustworthiness in a workplace where the truth isn’t as clear as it might first seem and may be getting foggier by the day.