2025: The Year Social Engineering Stopped Pretending To Be Subtle
If you had told me five years ago that a finance director would join a Zoom call, see a full gallery of “familiar” executives, and calmly wire half a million dollars to criminals based on that meeting, I would have said it sounded like science fiction.
That is exactly what happened in March 2025, when a multinational’s finance leader in Singapore was tricked into sending 499,000 dollars after joining a deepfake video call where every “participant” was synthetically generated, including the supposed CFO.
This is where we are at the end of 2025: social engineering did not just get smarter. It changed shape. It moved from inboxes into video calls, classrooms, payroll systems, and the faces and voices of people we trust.
This post is my year-end attempt to map what happened, why it matters, and what security leaders need to do differently in 2026.
1. Deepfakes Went From Novelty To Normal Attack Surface
Multiple independent reports this year called out the same pattern. Huntress described 2025 as “the year of the deepfake,” noting how threat actors are baking AI voice and video into phishing, financial fraud, and initial access campaigns. CrowdStrike’s 2025 Global Threat Report echoed that adversaries are using generative AI to create convincing social media profiles, fake executives, and tailored outreach at scale.
Some of the most striking examples:
- The Singapore deepfake Zoom heist. Criminals cloned the appearance and voices of leadership for a live multi-participant “meeting.” There was no strange email, no obviously spoofed domain. There was a video call, a familiar face, and a request to urgently move funds for a sensitive acquisition. The result was a loss of nearly 500,000 dollars.
- Audio deepfakes targeting families and individuals. Legal and consumer protection bodies have been warning about AI-cloned voice scams in which attackers simulate a relative in distress or an authority figure demanding payment, often using just a few seconds of audio scraped from social media. An American Bar Association piece this year broke down one such case, showing how fear and urgency override rational checks within seconds when the voice sounds “right.”
- Identity as a trademarked asset. Public figures now treat their face and voice as IP they must legally protect. Jeremy Clarkson, for example, moved to trademark his image after deepfake scams used his likeness to push bogus crypto and financial products. That is not just celebrity gossip. It is a preview of what every brand and executive will grapple with as identity becomes something attackers can fabricate on demand.
The core lesson: in 2025, “I saw them,” “I heard them,” and “the video looked real” stopped being meaningful security controls.
2. Email Never Died – It Just Became The Deepfake On-Ramp
With all the attention on voice and video, it would be easy to think classic phishing is old news. It is not.
Palo Alto’s Unit 42 social engineering report this year found that phishing remains the dominant vector, responsible for roughly two thirds of social engineering cases they reviewed, with the remaining third coming from newer methods like SEO poisoning, malvertising, smishing, and MFA fatigue.
Business email compromise (BEC) in particular is still the economic engine of social engineering:
- Research cited by Hoxhunt shows BEC attacks grew about 30 percent in early 2025, with a notable rise in campaigns themed around gift cards and invoices.
- BEC now accounts for more than half of social engineering incidents in some datasets and remains one of the costliest breach types, with average losses approaching 4.9 million dollars per case.
On top of that, 2025 brought a wave of large scale, AI assisted phishing campaigns:
- Cyble Research and Intelligence Labs highlighted a campaign that abused HTML attachments to bypass email security and steal credentials while masquerading as major brands.
- Darktrace reported a 620 percent spike in phishing activity in the lead-up to Black Friday, driven by brand impersonation and holiday-themed lures.
So while everyone is trying to spot the deepfake on Zoom, email quietly continues to be the main supply chain for credentials, access tokens, and initial footholds.
3. Social Engineering Moved Sideways Into Support Desks And Payroll
One of the most important trends this year was the shift from targeting “just users” to targeting the systems and teams that support users.
Two examples stand out.
3.1 Help desks as high value human APIs
ReliaQuest researchers tracked a group dubbed Scattered Lapsus Hunters that pivoted from Salesforce to Zendesk customers in 2025. Their tradecraft combined:
- Dozens of typosquatted domains that mimicked Zendesk portals and VPN entry points.
- Fake SSO pages designed to harvest credentials from support staff.
- Malicious tickets submitted through legitimate support channels that attempted to lure agents into executing remote access payloads.
In other words, why phish one end user when you can compromise the platform that touches thousands of them at once?
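One defensive counter to the typosquatting tradecraft described above is to flag registered domains that sit within a small edit distance of the portals you actually trust. Here is a minimal sketch; the trusted list and candidate domains are illustrative assumptions, not data from the ReliaQuest research:

```python
# Minimal sketch: flag lookalike domains within a small edit distance of
# domains we trust. TRUSTED and the candidates below are illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED = ["zendesk.com", "examplecorp.com"]

def flag_lookalikes(candidates, max_distance=2):
    """Return candidates that are near, but not equal to, a trusted domain."""
    hits = []
    for domain in candidates:
        for legit in TRUSTED:
            d = edit_distance(domain, legit)
            if 0 < d <= max_distance:
                hits.append((domain, legit, d))
    return hits

print(flag_lookalikes(["zendesk.com", "zendeks.com", "zend3sk-login.com"]))
```

In practice you would run a check like this against newly registered domain feeds; the point is that a transposed-letter clone of your support portal is cheap to detect before the first fake SSO page goes live.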
3.2 Payroll as the new fraud perimeter
Another 2025 incident wave involved threat actors using phishing to compromise HR and payroll systems at US universities, then quietly redirecting salary payments to attacker-controlled accounts.
This is social engineering that targets process rather than just people. The emails are simply the first domino in a chain that ends with “everyone gets paid, just not to the right bank accounts.”
If your mental model of social engineering starts and ends with “prevent employees from clicking bad links,” you will miss these attacks. The real game in 2025 was designing fraud-resistant workflows for support and finance teams.
4. The Human Cost: Deepfake Abuse Outside The Office
Corporate incidents are only half of the story. The same tools that impersonate a CFO are being used against teenagers.
A Guardian investigation this week detailed the surge in deepfake pornography in schools. Teachers surveyed reported that roughly one in ten UK secondary schools had already encountered AI-generated sexual images of students or staff, created with readily available “nudify” apps.
Many victims only discover the images after they have circulated among classmates. Some are so traumatized that they stop attending school. Policies and legal frameworks are lagging, leaving teachers and parents improvising responses to what is essentially a new category of tech-enabled social and sexual abuse.
It is tempting for corporate security to treat this as “outside our lane.” I would argue the opposite. The kids being victimized in classrooms today are your workforce in five years. Their trust relationship with technology and identity is being shaped by these events.
5. What The Data Says About Humans In 2025
Across multiple 2025 reports, a consistent picture emerges.
- Phishing volumes in 2024 dropped in some regions, but the campaigns that remain are more targeted and use AI for language quality, personalization, and timing.
- Smishing and other mobile-first attacks exploded, with some studies citing triple-digit growth driven by SMS delivery scams and fraudulent MFA prompts.
- Less conventional social engineering methods like SEO poisoning and malvertising now account for more than a third of observed cases in some datasets, pulling people into attacks through search results and ads rather than email.
Taken together, these numbers confirm what many of us felt intuitively this year. Social engineering is no longer a single-channel problem. It is a multi-surface, AI-amplified discipline focused on exploiting trust, not just technology.
6. Five Things Security Leaders Need To Do Differently In 2026
Closing out 2025, here is the uncomfortable truth: the human layer is not something you “patch” with an annual training. It is a system you design.
Going into 2026, I would focus on five concrete shifts.
6.1 Move from “don’t click” to “assume persuasion will succeed”
Design controls on the assumption that at some point:
- Someone will click.
- Someone will answer the phone.
- Someone will join the deepfake Zoom.
That means tightening downstream controls: high-friction verification before large payments, strong out-of-band checks for bank account changes, and explicit procedures for validating unusual executive requests regardless of channel.
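The rule underneath all of those controls is the same: the channel a request arrives on can never also be the channel that verifies it. A minimal sketch of that gate, using an illustrative threshold and channel names (none of these come from a real system):

```python
# Hedged sketch of a "no single channel approves a high-risk payment" gate.
# The threshold and channel names are illustrative assumptions.

from dataclasses import dataclass, field

HIGH_RISK_THRESHOLD = 10_000  # illustrative amount in dollars

@dataclass
class PaymentRequest:
    amount: float
    requested_via: str                               # e.g. "video_call", "email"
    verifications: set = field(default_factory=set)  # channels that confirmed it

def can_execute(req: PaymentRequest) -> bool:
    """High-risk payments need at least one verification on a channel
    different from the one the request arrived on."""
    if req.amount < HIGH_RISK_THRESHOLD:
        return True
    out_of_band = req.verifications - {req.requested_via}
    return len(out_of_band) >= 1

req = PaymentRequest(amount=499_000, requested_via="video_call")
print(can_execute(req))   # False: the deepfake call alone cannot approve itself
req.verifications.add("callback_to_known_number")
print(can_execute(req))   # True: an independent channel confirmed the request
```

Encoded this way, the Singapore scenario fails closed: however convincing the call, the payment waits for a callback to a number already on file.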
6.2 Treat support, payroll, and vendors as prime targets
Your help desk and HR teams are now top tier targets, not back office functions.
- Apply the same security rigor to support platforms like Zendesk as you apply to core production systems.
- Lock down workflows that can change identity data, bank accounts, or access rights. If a single email plus a ticket can move money, you have a design problem, not a user training problem.
6.3 Operationalize deepfake resilience
You will not stop attackers from cloning voices or faces. You can stop your organization from making high risk decisions based solely on those signals.
Practical moves:
- Write policy that explicitly states no payment, credential reset, or major contract decision is made purely because of a video or voice interaction.
- Build “challenge questions” and secondary verification steps into high-risk approvals, ones that are hard to fake with publicly available information.
- Educate executives and high profile staff about how their likeness can be abused, then give them simple scripts to push back when someone pressures them to move fast on a call.
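To make the second point above concrete: one workable form of secondary verification is a pre-agreed challenge phrase that is never stored in clear and is checked in constant time. This is a hedged sketch under those assumptions; the phrase, salt handling, and function names are illustrative:

```python
# Illustrative sketch: store a shared challenge phrase as a salted hash
# and verify candidates with a constant-time comparison.

import hashlib
import hmac

def store_phrase(phrase: str, salt: bytes) -> bytes:
    """Hash the challenge phrase so it is never stored in clear text."""
    return hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 100_000)

def verify_phrase(candidate: str, salt: bytes, stored: bytes) -> bool:
    """Constant-time check to avoid leaking information via timing."""
    return hmac.compare_digest(store_phrase(candidate, salt), stored)

salt = b"per-user-random-salt"   # in practice: os.urandom(16), stored per user
stored = store_phrase("blue heron at dawn", salt)

print(verify_phrase("blue heron at dawn", salt, stored))   # True
print(verify_phrase("urgent acquisition", salt, stored))   # False
```

A cloned voice can reproduce anything the attacker has heard, but it cannot produce a phrase that was agreed offline and never spoken on a recorded channel.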
6.4 Make user training hyper-specific and story-driven
Generic phishing slides are obsolete. Users need short, narrative-based scenarios that mirror the real attacks we saw this year:
- A fake HR message about updating payroll routing.
- A text message about missed deliveries during the holiday season.
- A “quick Zoom” with a partner or investor asking for sensitive documents.
Tie each story to a specific behavior: slow down, verify through a known channel, and never act alone on high impact changes.
6.5 Expand your threat model to include psychological safety
The rise of deepfake abuse in schools is not separate from your security program; it is a preview.
Modern awareness must acknowledge that people are not just potential “insiders” or “clickers.” They are also potential victims of very personal attacks that weaponize their image, relationships, and reputation.
Give employees clear guidance on:
- What to do if their likeness or voice is abused online.
- How the company will support them if harassment spills into the workplace.
- How to escalate suspected deepfake or AI-generated abuse just as they would escalate malware.
This is as much a cultural responsibility as a security one.
7. Ending 2025 With A Simple Commitment
If I had to summarize 2025 in one sentence, it would be this:
Attackers stopped trying to break into our systems and spent the year learning how to break into our stories.
Every phishing email, deepfake call, forged support ticket, and payroll diversion scheme is an attempt to insert a malicious chapter into someone’s narrative. “Your boss needs this now.” “Your child is in danger.” “Your salary will stop unless you act.”
As we go into 2026, the job of security leaders is not only to harden infrastructure. It is to protect the integrity of those stories by:
- Making verification a cultural reflex, not an exception.
- Designing processes that assume persuasion will be attempted and will sometimes succeed.
- Offering real support when people are targeted as people, not just as accounts.
The technology will keep evolving. Deepfakes will get sharper. Phishing kits will get smarter. The one thing we control is how deliberately we design human trust into our systems.
If 2025 was the year social engineering stopped being subtle, let 2026 be the year we stop being naive about it.