If we want to reduce harm, we need to look upstream
In December, the federal government tabled the ‘Protecting Victims Act’ (Bill C-16) to fill a gap in Canada’s Criminal Code on the distribution of sexual deepfakes and to add a new offence of threatening to distribute an intimate image.
In this post, I ask: how prevalent is this conduct in Canada, and will these new offences make a difference? If not, what more should we do?
Briefly, the Criminal Code already makes it an offence to share an intimate image of an adult without consent, but it doesn’t capture deepfakes. Now it will. And while extortion is already an offence, it requires a threat made “with intent to obtain” something. The bill’s new offence of threatening to distribute an intimate image requires only that a threat be made.
The government has yet to release its Charter Statement, but David Fraser wonders whether the deepfake offence is too broad to survive a freedom of expression challenge. There’s a public interest defence, but does it protect satirical deepfakes of politicians? Should there be broader exceptions for this? Will the courts ‘read in’ these exceptions?
All good questions, but I want to address the conduct targeted here. How prevalent is it and will new law help? Short answer: the conduct is prevalent, but I doubt these new laws — or others on the way — will do much to curb it.
How prevalent is it?
We have limited data on how often sextortion occurs or sexual deepfakes are shared. But we know enough to say that both are happening with some frequency, especially sextortion.
A tipline run by the Canadian Centre for Child Protection receives an average of 6 sextortion reports per day, or roughly 2,300 in 2024. Where gender is known, 83% of victims are male. Typically, a member of a criminal network poses as a young woman, makes contact with a young man on Instagram, gains his trust, obtains an intimate image over Snapchat, and then demands money. Young women, by contrast, tend to be extorted for additional images.
A recent study of roughly 17,000 people in ten countries found a more even pattern across genders: fifteen per cent of men and thirteen per cent of women reported being victims of sextortion, or roughly one in seven adults overall. Five per cent of men reported perpetrating sextortion. Notably, the most common perpetrator identified by victims was a current or former intimate partner, rather than an online stranger.
I haven’t been able to find statistics on the prevalence of sexual deepfakes. But we can glean from media reports and the odd court case that it’s happening, though it’s not clear how often.
A recent Ontario case made news when a judge acquitted a man accused of creating a nude deepfake of a woman he photographed wearing a bra. The current offence in the Code did not capture deepfakes.
A search for other deepfake cases turns up only two decisions, both involving the creation of child pornography.
But this doesn’t mean that sexual deepfakes aren’t on the rise in Canada. It means we aren’t seeing prosecutions for them yet, and one obvious reason for this is that police and Crown don’t think they have the tools. The acquittal in Ontario confirms this.
What, then, can we glean about both sextortion and sexual deepfakes from what we know now — and how likely is it that new offences in C-16 will make a difference?
Why new offences will have a limited impact
A search for the term ‘sextortion’ in caselaw databases leads to two observations.
First, courts come down hard on perpetrators of sextortion, imposing multi-year jail sentences (10 years in the case of Amanda Todd’s assailant). But since the conduct persists with some frequency (six reports a day), the criminal sanction offers no real deterrent.
The problem has to do in part with social media. As the Centre for Child Protection points out:
Popular platforms have design characteristics that create favourable conditions for predation. Extorters weaponize social media platforms in that they can easily create fake accounts to access potential victims and their personal information and social networks. Victims also pointed to platform reporting functions that failed to provide them with options to accurately describe their situation and a lack of meaningful action being taken by platform operators.
The Online Harms Act (Bill C-63), which died on the order paper last year, was supposed to address this by imposing an obligation to remove images quickly. (The Take It Down Act, just passed in the US, does the same.) Bill C-63 is due to return in short order (and I’ll likely post about it here when it does). That may be a more effective tool for curbing the harm of these images than the criminal law.
But part of the problem also lies with AI providers. As a recent op-ed in The New York Times noted:
Creating abhorrent imagery no longer requires proficiency with Photoshop or aptitude with open-source models; one need only enter the correct text prompt. While both open-source and hosted models typically have safety guardrails built in, these can be surprisingly brittle, and malicious users will find ways around them.
This week, Grok (the AI chatbot from X, formerly Twitter) made news worldwide when it began generating sexual deepfakes from images of women and children uploaded to it, and continued to do so after the platform became aware of the problem. Calls for laws holding platforms liable for this have fallen on deaf ears in Canada and the US, although lawmakers in Britain, Europe, and Australia are threatening action if providers of AI tools like Grok don’t act more responsibly.
Law in Canada and the US — home to most major AI providers — should do more to incentivize meaningful safeguards against the creation of sexual deepfakes. But I suspect there will always be tech at hand for people who want to do this.
The second observation is that the sheer variety of scenarios in which sextortion and non-consensual sharing arise makes it seem unlikely (to me at least) that punishment for conduct in one kind of case will deter conduct in another.
For example, cases range from a stranger in his 30s in Europe preying upon a 12-year-old he met on social media, to an ex-husband demanding money for voyeuristic recordings he made of a house-sitter, to a man in his forties who met four local teens over Snapchat and extorted them for further images by threatening and harassing them over several months, to a woman whose consensual encounter with a man was surreptitiously recorded and posted online without her knowledge.
The diverse motives, identities, and situations of these perpetrators, and the fact that the prospect of a criminal conviction for extortion or non-consensual sharing did not deter any of them, suggest that new offences won’t make a difference.
The only hope lies in better enforcement mechanisms against the platforms that enable this conduct, whether the creation or the sharing of images. But even there, relief will be limited. Digital images are now too easy to make, and too hard to contain once shared, for us ever to be truly secure against this menace.
But we should try to do what we can.
