SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers
Megan Squire

Stop framing deepfake harassment of women as a social problem - It's a cybersecurity problem

Wed, 4th Mar 2026

Nonconsensual deepfake imagery is synthetic, sexualized content created without a person's consent, and it disproportionately targets women. Research consistently shows that more than 90% of victims are female, and the targets skew toward women who are public-facing in some way: politicians, journalists, executives, streamers, educators. If you are a woman who has built any kind of visible presence on social media, you are a potential target.

On International Women's Day, I want to shine a spotlight on the abuse and harassment women face online and acknowledge that this particular form of abuse is not just cultural or social. It's technical, and it's closely connected to the same online places that support other kinds of cybercrime.

The risk is that we keep analyzing this as a social problem instead of a "real" cybersecurity problem. Platforms deal with it, if they choose to deal with it at all, through content moderation. Researchers study it as online abuse. Occasionally it makes the news cycle when a celebrity is targeted. But my community of cybersecurity professionals spends our days tracking adversary infrastructure, mapping tactics, and deconstructing malware, and somehow deepfake harassment is treated as out of scope.

That is our mistake.

Here's a lesson I learned from my time as an extremism and terrorism researcher: so-called "lone actors" never really act alone. Similarly, producing deepfakes is not the work of one lonely guy with a grudge and a laptop. It requires AI tools that guy probably didn't develop. It requires monetization on shady subscription sites. It requires distribution on networks hardened against takedowns.

All the infrastructure that supports this not-so-alone deepfake-maker is the same infrastructure supporting other forms of cybercrime. Often it's the same forums, Telegram channels, and extortion platforms doing both.

For example, deepfake creators rely on AI software to generate their images and videos. These tools are shared, modified, and improved inside the same online communities that circulate scripts and automation tools. This is where tutorials get passed around and techniques improve over time. Learning how to create convincing fake content requires experimenting, troubleshooting, and figuring out how to avoid detection. Those skills transfer directly to broader cybercrime activity.

Distribution looks similar too. Deepfake content is copied across file hosting sites, private groups, and messaging apps. Accounts are disposable. Links get reposted. When one space shuts down, another pops up, whack-a-mole style.

In some cases, there is also a financial angle. Paid sites traffic in this type of content, and some perpetrators prefer outright extortion: they threaten to send deepfaked images to employers or family members unless money is paid. To the victim, it looks like a threat that could ruin her life unless she pays the ransom.

When we treat this as just harassment, we miss how closely it tracks with other forms of cybercrime. This misjudgment adds insult to injury, because we forgo good intelligence that comes specifically from incidents where women are the primary victims.

Another thing I learned from studying extremism is that coordinated harassment campaigns function as capability-building exercises. "Troll armies" start by developing tactics to silence and humiliate individual targets. Not only does this strengthen the group identity, but over time the tactics get refined and sometimes grow into bigger operations. Dismissing the early versions of a tactic as "playground antics" or "boys will be boys" gives the problem time to metastasize.

On International Women's Day, I want to make a straightforward case: the women targeted by deepfakes deserve threat intelligence resources, not just overworked, under-resourced content moderation teams. Deepfake producers deserve the same analytic attention we give to, say, ransomware operators. And the women in cybersecurity organizations, any of whom could become targets themselves, deserve to work in a field that treats their risk as a "real" security problem.