Will Arnett · October 2, 2025
Discussion about deepfake AI has grown steadily in recent months, fueling public concern over what is real and what is not in news and social media. Spurred partly by high-profile politicians and celebrities using deepfakes to promote themselves, and by the wild explosion of deepfake porn (we’re not going to get into that), “deepfake” has become something of a buzzword, with even technical people sometimes mislabeling content or misunderstanding what it is.
Experts warn that up to 90% of online content could be synthetically generated by 2026, raising urgent questions about trust, authenticity, and digital literacy.
Before we go any further, we need to be clear about what a deepfake is and what it isn’t. Deepfake technology uses artificial intelligence, specifically deep learning (hence the “deep” part), to create hyper-realistic audio, video, and images that mimic real people. It’s not just a clever trick; it’s a powerful tool that can replicate someone’s voice, facial expressions, and mannerisms with alarming accuracy.
Originally developed for entertainment and creative applications, deepfakes have quickly evolved into a serious cybersecurity threat. When used maliciously, they can impersonate executives, bypass biometric security, and manipulate employees into making costly mistakes.
Let’s clear up the biggest misconception up front: deepfakes are not a party trick or a niche curiosity. They are sophisticated, deceptive, and increasingly accessible, and cybercriminals are using them to exploit trust in ways traditional phishing never could.
Threat actors are no longer relying solely on spoofed emails or fake domains for phishing attacks. They’re using deepfake technology to clone executives’ voices, stage convincing video calls, and impersonate trusted colleagues in real time.
According to a 2024 report by Sumsub, 65% of companies experienced deepfake-related fraud attempts, and 44% were targeted by deepfake audio attacks. These aren’t “risks of cyberthreats to come”; they’re happening now, and they’re costing organizations millions.
From impersonated CEOs in Zoom calls to fake job candidates infiltrating corporate systems, deepfakes are now a top-tier threat vector. Financial institutions, software firms, and energy companies have all suffered major losses due to deepfake-enabled social engineering.
In early 2024, a finance employee at Arup, the global design and engineering firm, joined a video call with what appeared to be the CFO and other executives. The call was convincing: voices, faces, everything. The employee was instructed to wire $25.6 million, and it wasn’t until later that they realized the entire meeting was a deepfake. The attackers had used AI-generated video and audio to simulate a legitimate business interaction.
A bank manager received a call from someone impersonating a director that he knew and had spoken with before. The voice was cloned using deepfake technology and paired with forged emails. The manager authorized a $35 million transfer. The voice was so convincing that it bypassed all suspicion. This incident remains one of the most expensive deepfake scams on record.
In 2019, the CEO of a UK energy firm received a call from someone claiming to be an executive at the parent company. The voice matched perfectly in accent, tone, and urgency, and €220,000 (about USD $243,000) was transferred before anyone realized the call was a deepfake. The scam exploited trust and familiarity, two pillars of effective social engineering.
Retool, a software development company, was targeted by attackers who used deepfake audio to manipulate internal help desk staff. The attackers bypassed speaker-based authentication and gained access to sensitive client accounts. The breach resulted in tens of millions of dollars in losses and highlighted the vulnerability of voice-based security systems.
A businessman in Northern China joined a video call with someone who looked and sounded like a trusted associate. The deepfake was flawless. He wired $622,000 during the call. Later, the real associate denied ever being on the call. The scam was only discovered after the money was long gone.
We’ve established that the threat of deepfake vishing (could we start referring to it as ‘dishing’, pretty please?) is very real, but so are the defenses. Here’s how organizations can protect themselves:
Never rely on a single communication method for sensitive requests. If someone asks for a wire transfer or confidential data, verify through a second channel, whether that’s a phone call, a text, or in-person confirmation. And remember that phone numbers can be spoofed, so don’t assume a call is legitimate just because you recognize the number. There’s nothing wrong with saying “hey, let me call you back really quick” and dialing the number you already have on file.
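To make that policy concrete, here’s a minimal sketch of what “verify through a second channel” might look like as a rule. Every name in it (the contact directory, the Request fields, the action list) is a hypothetical illustration, not a real system:

```python
# Minimal sketch of a callback-verification policy. All names here are
# hypothetical illustrations, not a real API.
from dataclasses import dataclass

# Contact details you looked up and stored in advance -- never taken
# from the incoming request itself, since caller ID, email headers,
# and even video feeds can be spoofed or faked.
KNOWN_CONTACTS = {
    "cfo@example.com": "+1-555-0100",
}

SENSITIVE_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

@dataclass
class Request:
    requester: str   # who the caller claims to be
    action: str      # what they are asking for
    channel: str     # channel the request arrived on, e.g. "video_call"

def callback_number(req: Request) -> str | None:
    """Return the on-file number for independent confirmation, if any."""
    return KNOWN_CONTACTS.get(req.requester)

def decide(req: Request, confirmed_out_of_band: bool) -> str:
    if req.action not in SENSITIVE_ACTIONS:
        return "proceed"
    if callback_number(req) is None:
        return "hold: requester not on file, escalate"
    # A sensitive request only proceeds once it is confirmed on a second
    # channel, using the number we already had -- not one the caller gave us.
    return "proceed" if confirmed_out_of_band else "hold: confirm via callback"

req = Request("cfo@example.com", "wire_transfer", channel="video_call")
print(decide(req, confirmed_out_of_band=False))  # hold: confirm via callback
```

The point of the sketch is the design choice, not the code: confirmation must arrive on a different channel than the request, using contact details you already had.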
Deepfake content often contains subtle anomalies such as unnatural blinking, mismatched lip sync, or robotic, monotone speech patterns. Employees should be trained to recognize these signs and escalate suspicious interactions, and to take the time to verify communications: social engineers count on targets being less vigilant when they feel pressured by a sense of urgency.
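One way to turn that training into a habit is a simple escalation checklist. The sketch below scores a call against a few red flags; the flags, weights, and threshold are illustrative assumptions, not calibrated detections:

```python
# Sketch: a red-flag score for deciding when to escalate a call.
# Flags and weights are illustrative assumptions, not a detection product.
RED_FLAGS = {
    "urgent_deadline": 2,      # "this must happen in the next hour"
    "new_payment_details": 3,  # beneficiary or account changed mid-request
    "secrecy_request": 3,      # "don't loop anyone else in"
    "av_glitches": 2,          # odd blinking, lip-sync drift, flat monotone
    "single_channel_only": 2,  # refuses a callback on a known number
}

ESCALATE_AT = 4  # assumed threshold: roughly two moderate flags

def triage(observed: set[str]) -> str:
    score = sum(RED_FLAGS.get(flag, 0) for flag in observed)
    return ("escalate to security" if score >= ESCALATE_AT
            else "proceed with standard verification")

print(triage({"urgent_deadline", "av_glitches"}))  # -> escalate to security
```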
Biometric systems are vulnerable to deepfake spoofing. Organizations should adopt multi-factor authentication methods that don’t rely solely on voice or facial recognition, such as authenticator apps, location-based authentication, or hardware tokens. Email and SMS verification should be used only when paired with stronger methods, as those channels are more easily intercepted.
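For a sense of why authenticator apps hold up where voice and face don’t: the code is derived from a shared secret and the current time, nothing an attacker can imitate over a call. Here’s a minimal standard-library sketch of the TOTP algorithm (RFC 6238) those apps implement:

```python
# Sketch: RFC 6238 time-based one-time password, the algorithm behind
# authenticator apps, using only the Python standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # 30-second time window
    msg = struct.pack(">Q", counter)             # counter as 8 bytes, big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# "JBSWY3DPEHPK3PXP" is a common demo secret, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```

Note that codes like these can still be phished in real time, as the Retool incident shows, which is one reason hardware tokens that bind authentication to a specific origin (e.g., FIDO2 keys) are stronger still.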
Alias Cybersecurity offers deepfake social engineering penetration testing, a proactive way to assess your organization’s vulnerability to deepfake attacks. As part of a broader portfolio of penetration tests covering the human element, from social engineering to full-blown Red Team exercises, our team simulates real-world scenarios to test whether your staff can detect and respond appropriately.
This isn’t just a test; it’s a wake-up call. Alias helps you identify weaknesses before attackers do.
Dishing (Deepfake Vishing) is no longer a futuristic threat; it’s a present-day reality. Cybercriminals have been using it to bypass traditional defenses and exploit human trust for over half a decade now. The financial and reputational damage can be catastrophic. If your organization isn’t including deepfake testing in its assessments, it’s critical to start now.
The cost of doing nothing is far greater than the cost of preparation. Deepfakes are evolving. So should your defenses.
Written by: Will Arnett
Tagged as: phishing scams, deepfake, social engineering, vishing.