The Liberals’ first bill in Parliament last week proposes a raft of new search powers to give police easier access to our private data. They may turn out to be the most consequential search powers added to the Criminal Code in the past decade.
They have little to do with the primary aim of the bill, strengthening borders by expanding powers in customs and immigration.
Tucked into the middle of Bill C-2 are measures that revive long-standing efforts to pass “lawful access” legislation, making it easier for police to obtain the subscriber information attached to an ISP account (with Shaw or Telus, for example) and giving police direct access to private data held by ISPs or platforms like iCloud, Gmail, or Instagram.
I’ve written a general overview of these powers for The Conversation here, and Michael Geist has a very informative op-ed in the Globe that sets out a wider context and walks through some of the provisions in detail. If you’re new to this story, you might begin there.
In this post, I offer a few thoughts on the constitutionality of three key powers in the bill: the new production order for subscriber info; the new information demand power; and the provisions that compel service providers to assist police in gaining direct access to personal data.
This is a long post, almost 3k words. It might have been three shorter ones, but I thought I’d put it all in one post.
It’s meant for those looking for a deeper dive on the constitutional questions.
What do you mean by ‘constitutional’?
The larger issue here is whether these provisions will survive a challenge under section 8 of the Charter of Rights and Freedoms, guaranteeing “everyone has the right to be secure against unreasonable search or seizure.”
Two things to keep in mind about section 8: What is a search? And when will a search be reasonable?
A search for the purpose of section 8 is anything done by a state agent for an investigative purpose that interferes with a reasonable expectation of privacy in a place or thing (R v Bykovets).
A search will be reasonable where it is authorized by law, the law is reasonable, and it is carried out in a reasonable manner (R v Collins).
The powers created in this new bill set out authority for a search. The issue here is whether each of them sets out a ‘reasonable law’ authorizing a search.
(In case you’re interested, I’ve co-authored an entire book on section 8, which you can check out here.)
Relevant background: production orders and the Spencer situation
In 2004, Parliament created what are called ‘production orders’ to give police the power to ask an internet or cellphone service provider to hand over data about digital communications, including the content of messages.
That power required reasonable suspicion, and it was challenged under section 8 of the Charter as being too low a standard, giving rise to an unreasonable search.
In 2014, the BC Supreme Court said the standard was indeed too low and should be probable grounds; the Alberta Court of Appeal disagreed, holding that reasonable suspicion sufficed.
That same year, Parliament passed Bill C-13, which created a general production order requiring probable grounds (section 487.014) and four more specific production orders requiring only reasonable suspicion: for tracing communications (e.g., metadata attached to email or phone calls), transmission data (call or text histories), tracking data (location data), and financial data (sections 487.015 to 487.018).
Meanwhile, in June of 2014, the Supreme Court of Canada decided R v Spencer, which held that subscriber information attached to an IP address — the name and physical address of the person linked to it — is private, because it associates a person with their online search history. Police can’t demand it from an ISP without authority in law to do so (which may or may not involve a warrant).
The Court in Spencer noted (at para 11) that police in that case had demanded the subscriber ID from Shaw without first obtaining a production order, thereby contemplating a production order as one means by which such a demand might lawfully be made.
But the Court did not address the question of what kind of search power would be reasonable to obtain subscriber info. After explaining why provisions in private sector legislation (PIPEDA) didn’t authorize the search, the Court simply concluded (in para 73) that “in the absence of exigent circumstances or a reasonable law,” police couldn’t lawfully search (i.e., demand) it.
So what remained unclear after Spencer was: what is a reasonable search law that authorizes police to make a demand for subscriber information?
The presumptive standard for a reasonable search in criminal law (i.e., for a “reasonable law” authorizing a search) is a warrant issued on “reasonable grounds to believe” (probable grounds) that an offence has been or will be committed, rather than on “reasonable suspicion.” It would seem, then, that a demand for subscriber ID should require a warrant issued on probable grounds.
Things said in Spencer support this inference. It held the privacy interest in subscriber information is high, given that it links a person to search activity that can be highly revealing. Anything less than probable grounds would not strike the right balance between law enforcement interests and personal privacy. But at least one privacy scholar disagrees.
In the wake of Spencer, police have been obtaining subscriber info using the new general production order power added in 2014, which requires probable grounds. Again, this isn’t a power tailored specifically to obtaining subscriber ID, so it’s unclear whether anything less would suffice. Police and Crown hope so. Probable grounds is a relatively high standard; why not just a warrant on reasonable suspicion?
Privacy in a set of numbers alone?
And what about demanding an IP address? Sometimes police can’t get far without asking an ISP or an online platform like Instagram to reveal a user’s IP address. Did they need a warrant for this? Was an IP address on its own private?
In R v Bykovets, the Supreme Court of Canada held that an IP address is private because it readily links a person to their online activity. But the Court didn’t specify what kind of power would render a search (demand) for an IP address reasonable.
At para 85 of the decision, Karakatsanis J, writing for the majority, points to the production order power in section 487.015(1) of the Code (for transmission data), available on reasonable suspicion, as a possible tool police might use here. This is obiter, since the Court was not asked whether using that power to demand an IP address would constitute a reasonable law. Still, we can assume the judges in the majority think a warrant on reasonable suspicion would suffice.
New production order in the Strong Borders Act
The new bill gives police and Crown what they want: a production order power tailored to making a demand for subscriber info by obtaining a warrant issued on reasonable suspicion that a federal offence has been or will be committed (a new 487.0181(2) of the Criminal Code).
Will this be constitutional? More specifically, a search conducted under this power will be authorized by law, but is this law reasonable?
There is no single test for when a law authorizing a search is reasonable under section 8 of the Charter. But the Supreme Court has generally considered four factors: whether the power relates to a criminal or regulatory offence; the state or law enforcement interest at issue; the impact on personal privacy; and the oversight and accountability safeguards.
Demanding subscriber info on reasonable suspicion is, I think, likely to be found unreasonable. In this case, the privacy interest is high (i.e., the online activity linked to a person’s name). Given things said about this in Spencer, this alone could favour a finding that nothing less than probable grounds is reasonable.
Further possible support may be found in R v Tse, which held that the emergency wiretap provisions of the Code were unreasonable for failing to include a post facto notice requirement for persons affected. Here, there’s no requirement to advise a person that they were subject to a production order if charges do not follow. I’m not sure a court would view production order powers as sufficiently analogous to wiretap provisions, but I flag it as a potential consideration.
The new “information demand” power
Bill C-2 also creates a new power on the part of police to demand information. In some cases, police may only ask if a service provider has info about something. In other cases, they can demand the info itself.
Under a new section 487.0121 in the Code, police can ask a service provider whether they have “provided services to any subscriber or client, or to any account or identifier.” If so, police can demand to be told where and when service was provided — along with info about any other providers who may have offered the person service.
They can do this on reasonable suspicion alone, without a warrant.
Police can thus ask Shaw or Gmail things like: does this user have an account with you? Do you have an IP address or phone number associated with their account? If so, tell us where and when you provided it.
Why do police need this power? Aren’t police free to ask questions as part of their investigation? Is there not a distinction between a person describing to police what they know or have observed and police demanding to see it themselves? Can’t we assume that police only carry out a search when they ask for and receive private data itself?
Recall that a search is anything done for an investigative purpose that interferes with a reasonable expectation of privacy. Police demanding private information in the hands of a third party can constitute a search. For example, police carried out a search in Spencer by asking Shaw: whose name is attached to this IP address?
What is contemplated here differs in some ways but is similar in others. Police might ask simply: do you have a name (or an account) attaching to this IP address? Did you lease this IP address to a person? Or they might ask: when and where did you provide use of this IP address?
In some cases, depending on the question and how little the answer reveals, the demand may not amount to a search. But in other cases it will.
If police have a name, or an IP or email address and they ask a dating, gambling, or porn website whether they have a user account related to any of them, a “yes” in response could be quite revealing. If a service provider can link a person to a location, or more than one, in a window of time, this could also be invasive.
Should this too require a warrant? We’re in genuinely new terrain here.
The information demand power gives police authority to go poking around the edges of our digital lives — knocking on the doors of anywhere we’ve left a digital trace — to ask questions that could readily create a clear picture of who we are and where we’ve been. All on nothing more than reasonable suspicion.
I can see a challenge to this power leading to a deeply divided Supreme Court decision, similar to Bykovets, where half the Court says reasonable suspicion is enough and the other half says no, it should require a warrant.
I suspect it will come down to half the Court seeing this power as too preliminary to pose a real threat to privacy and police needing some leeway to act without undue hindrance, and half the Court seeing this as too close in nature to a means of circumventing the protections around subscriber ID and IP addresses. In some cases a positive answer to the question: “does this user have an account with you?” will be all the police need to know to link a person with an extensive amount of personal data.
If I were a betting man, which I’m not, I would bet that a majority of the Court will find this power reasonable. (But there will be a wonderful, eloquent dissent, probably by Karakatsanis J or Martin J or maybe both, on the importance of privacy and the need for a warrant.)
Briefly, Bill C-2 also extends to agents of the Canadian Security Intelligence Service the ability to make an information demand on no grounds at all. But they may not target a Canadian citizen or permanent resident. Given the high state interest in these cases and the limited privacy interest engaged, this power is likely to be found reasonable.
The lawful access provisions
Bill C-2 contains a whole new statute called the “Supporting Authorized Access to Information Act,” which brings about a “lawful access” regime for private data that police and Crown have long been seeking.
(See Professor Geist’s Globe article on the history of this.)
The Criminal Code has long had something called an assistance order, which compels third parties to assist police in executing a warrant. (Open that storage locker please.) The lawful access provisions do the same but on a larger scale.
They impose obligations on “electronic service providers” (ESPs): anyone providing a digital service (the storage, creation, or transmission of data) to people in Canada, or anyone providing such a service while situated here. They impose more onerous obligations on a class called “core providers,” who can be added to a schedule to the Act.
An ESP can be ordered to “provide all reasonable assistance, in any prescribed time and manner, to permit the assessment or testing of any device, equipment or other thing that may enable an authorized person to access information”.
But core providers will be subject to regulations that mandate the “installation… of any device, equipment or other thing that may enable an authorized person to access information”.
A core provider might be Google or Meta, Shaw or Telus. And the equipment at issue could enable direct access to accounts, stored files, data logs, and so on.
There are two important limits on this.
One is that police (or an authorized person, such as a CSIS agent) can only go ahead and access data or demand it if they have authority to do so under law — which may or may not involve a warrant, reasonable grounds, and so on.
The other limit applies to both ESPs and core providers: they do not have to follow an order “if compliance… would require the provider to introduce a systemic vulnerability in electronic protections related”. I take this to mean that they cannot be compelled to install a backdoor to encryption.
Are these powers immune to challenge under section 8 of the Charter?
They do not contemplate a search directly. But depending on how an assistance order is used, it could result in an unreasonable search.
For example, a while ago, there was a debate about whether using an assistance order to compel a person to provide police their password might amount to an unreasonable search.
The companies subject to a requirement under this new lawful access statute could challenge it in court — either in response to an order made to them specifically or under a regulation that applies to them as a core provider (on administrative law principles).
But it’s harder to imagine a case where police have conducted a search on lawful grounds, or with a valid warrant, and the search is found to be unreasonable under section 8 simply because the technical means of access made possible under this new statute allowed police to obtain the private data more readily.
I can, however, envision two possible exceptions.
One is if the means of access mandated under the new Act amounts to an interception: real-time access to private data as it is being created or transmitted. An accused person would need to show, however, that police came into possession of their private data in real time and without a warrant under the wiretap (interception) provisions in Part VI of the Criminal Code. (See the Telus case for more on this distinction.)
Another exception is simply that police gained quick access technically, but without a lawful basis (a warrant, etc.).
But it isn’t inconceivable that the Supreme Court might eventually say that mandating certain measures, means, or forms of access amounts to an unreasonable search even when they are used with lawful authority, such as a warrant. These might include means that somehow give police with a warrant access to data being created now and in the future, in addition to data already created.
If you’re still with me, thanks for reading! I’ll continue to follow the bill as it makes its way through Parliament and try to post about it here.
Mar 4, 2025
Last month OpenAI unveiled a new tool for producing lengthy reports with citations to sources on the web. It uses one of the company’s best ‘chain of reasoning’ models to deliver output that far exceeds the quality of what similar tools from Google and Perplexity AI can do – tools that are also called ‘Deep Research,’ as it happens.
But initially, OpenAI’s version was only available to folks with the US$200-a-month “pro” subscription. We had to take on faith effusive reviews, like this one from Reddit:
Deep Research has completely changed how I approach research. I canceled my Perplexity Pro plan because this does everything I need. It’s fast, reliable, and actually helps cut through the noise.
For example, if you’re someone like me who constantly has a million thoughts running in the back of your mind—Is this a good research paper? How reliable is this? Is this the best model to use? Is there a better prompting technique? Has anyone else explored this idea?—this tool solves that.
It took a 24-minute reasoning process, gathered 38 sources (mostly from arXiv), and delivered a 25-page research analysis. It’s insane.
Not everyone was so enthused. One commentator noted that it can “miss key details, struggle with recent information and sometimes invents facts.”
The Verge had a piece about using Deep Research to produce a report on the judicial treatment of section 230 of the Communications Decency Act in the last five years – concluding that “it got the facts right but the story wrong.” Although none of the cases it cited were made-up, and its summary was generally accurate, it had one major problem: it ended in 2023. But 2024 was “a rollicking year for Section 230,” with many important developments, as a law scholar quoted in the article pointed out.
When I first read about OpenAI’s new tool, I was keen to find out whether it could look up sources on third-party databases like Canlii or JSTOR. If it could do this, Deep Research would be quite powerful. You could ask it to find a case about X and it would formulate the query, run the search and – most importantly – read through the cases to find the needle in the haystack.
For the moment, however, it doesn’t do this. But OpenAI says that accessing databases like Canlii is on its way.
(Incidentally, I’ve been writing about how much better free or almost free AI has become at doing brief overviews of Canadian law. I’ve got a piece on the way documenting how well OpenAI’s reasoning model GPT o1 produced a summary on the law of unjust enrichment in 500 words, which I knew little about. It came up with a cogent summary, with correct citations to the three leading Supreme Court of Canada decisions on point and the main 3-part test to be applied. Check it out here.)
The thrust of this post, however, is to report on my own test drive of OpenAI’s Deep Research. As a GPT Plus subscriber ($30 Canadian a month), I gained access to DR only last week but couldn’t think of a good query to test it with. With “plus” users getting only 10 queries a month, I’m inclined to be sparing with them.
Here was my ungrammatical query:
What uses are being made in Canada to use artificial intelligence in decision-making by government agencies or entities, and what policy statements or guidelines have been issued by government or agencies with respect to this?
As you may know, before it embarks on a search that can last between about 5 and 30 minutes, it asks you a few questions to sharpen the boundaries. It asked me:
Are you looking for information on AI use by government agencies across all levels (federal, provincial, municipal), or just at the federal level? Also, would you like details on specific areas of application (e.g., public safety, healthcare, immigration, etc.)? Lastly, are you interested in ethical considerations and public consultations regarding AI in government decision-making?
I told it to be as inclusive as possible.
What it produced was thoroughly impressive. It took 7 minutes and came back with an 11,000-word document titled “AI in Government Decision-Making in Canada” that cited 27 sources. You can read it here.
It was both informative and sufficiently engaging that I read it all the way through, and I learned an enormous amount. (I then got it to do a 1,000 word summary, which you can find at the end of the thread.)
The report was impressive both in terms of what it covered and where it pointed. It touched on the use of AI at federal, provincial, and municipal levels across Canada and in various fields: policing, healthcare, immigration, social services, transportation, even the courts.
It also struck a nice balance between useful and reliable government policy docs and reports, and shorter news items.
What struck me reading it was that it would take me easily a week of surfing, reading, note-taking, and compiling to produce something this good. Maybe more.
My own version would no doubt have been a better report: more selective in some ways, more discerning, maybe more probing.
But this was about 70 to 80% as good as I could do myself – in 7 minutes. It hit all the bases, all the major stories in the government use of AI in recent years: Clearview AI, Chinook, interventions by the Federal Privacy Commissioner, major policy statements on the use of AI.
Quibble with this as you might, but this is not a minor development. The sources are real. The general summary is cogent and more or less accurate as far as it goes. Is it missing that great paper by so-and-so on this or that aspect of the problem? No doubt. Does it contain every relevant story, all the relevant policies, cases, and so on? No, it doesn’t.
But is it worth consulting as a starting point? Do I know much more about this topic than I did before I ran the query? Absolutely.
I come away from Deep Research feeling more optimistic about the utility of AI in research, and legal research in particular. I can see a point in time on the horizon when AI will produce a better first draft of an outline of argument or opinion than we could possibly do in less than a few days, even with a good grounding in the field.
It’s even made me question my assumption that AI is no substitute for really knowing the law – that people without a solid foundation in law won’t know how to prompt effectively. I’m just not sure about this any more.
Jan 11, 2025
In late December, the Toronto Star ran a story about a high school boy who had created a series of pornographic deepfakes of girls at his school using images of their faces taken from Instagram. The nude pictures were discovered on his phone inadvertently, during a sleepover, when a friend went looking for a selfie taken on his device. Once the images were discovered, the girls were alerted (with screenshots) and soon police were at the boy’s door.
Police grappled with whether creating the images was criminal. After questioning other boys believed to have seen the images and consulting with Crown, they decided not to proceed. But they invited the girls and their parents to the station to explain that they didn’t think it was a crime without more evidence that the images had been shared.
Police appear to have concluded that only one provision of the Code applied — possession of child pornography in section 163.1 — and there was a good chance the boy could rely on the “private use” exception in R v Sharpe, SCC 2001.
The story points to a larger gap or ambiguity in Canadian criminal law around sexual deepfakes — one that Professor Suzie Dunn (Dalhousie) helped explain to the Star.
As she points out in the story and details at greater length in an informative article forthcoming in the McGill Law Journal, two Criminal Code provisions are relevant to sexual deepfakes: the prohibition in section 162.1 on non-consensual distribution of intimate images (NCII) and the prohibition in section 163.1 on making, distributing, or possessing child porn.
The first offence (intimate images) applies to victims of any age. But as Dunn notes, on a plain reading, 162.1 captures only the distribution of authentic images. It prohibits sharing an “intimate image of a person”, defining this as “a visual recording of a person made by any means …in which the person is nude… or is engaged in explicit sexual activity.”
She notes that 162.1 does not appear to have been applied to a deepfake in any Canadian case. Doing a search of all cases involving 162.1 turns up only a few hits.
But as Dunn also notes, a Quebec court has applied the child porn provision in the Criminal Code (s 163.1) to capture the creation of a deepfake video. There is also a BC case from early 2024 in which the court applied section 163.1 where the accused created images using an app called ‘DeepNude’ and shared them with the victim and her friends. (Both are sentencing cases.)
Was it private use?
To be clear, section 163.1 appears to capture deepfake porn involving persons under 18 because it defines ‘child pornography’ to mean
a photographic… or other visual representation, whether or not it was made by electronic or mechanical means … that shows a person who is or is depicted as being under the age of eighteen years and is engaged in or is depicted as engaged in explicit sexual activity.
In other words, the image doesn’t have to be of the person him or herself. It can be an image that depicts them. But they have to be under 18.
In the Toronto high school case, the boy clearly created child pornography within the meaning of 163.1. The question is whether his possession of it fell within the “private use” exception in Sharpe.
In that case, the Supreme Court held that to avoid an unjustifiable limit of free expression under the Charter, a defence of “private use” had to be read into the child porn offence provisions in 163.1. It contemplates two exceptions.
The first involves “the possession of expressive material created through the efforts of a single person and held by that person alone, exclusively for his or her own personal use.” The second involves recordings of lawful sexual activity for private use “created with the consent of those persons depicted.”
In the Star article, Dunn queries whether the first exception would apply here, since the boy would not have created the image by himself — but by relying on an AI app, which likely entails storage of the image on company servers.
Again, police or Crown probably concluded that it would be risky to proceed with a prosecution under 163.1 without clearer evidence that the boy had shared the images — thus taking him out of the Sharpe exception (without any debate about AI and company servers).
Are deepfakes not intimate images under the Code?
But I want to pick up another thread in Dunn’s comments on the gap in the Code on deepfakes — one that pertains to the other provision at issue, the prohibition on non-consensual sharing of intimate images of persons of any age.
I agree that on a plain reading of 162.1 of the Criminal Code, the intimate images must be of the person themselves. But the Supreme Court of Canada has endorsed departures from the principle of strict construction in criminal law where a narrow reading would give rise to arbitrariness or defeat the larger aim or purpose of the provision.
In R v Paré (SCC 1987), the accused murdered a boy two minutes after committing an indecent assault against him. A provision still found in the Code (231(5)) states that “murder is first degree murder in respect of a person when the death is caused by that person while committing” indecent assault or certain other offences. Paré argued that because the murder happened two minutes later, it was not caused ‘while committing’ the assault, and that he was entitled to the benefit of a literal reading under the principle of strict construction, which courts have applied in criminal law for centuries.
The Court held that it was time to update the doctrine. The original reasons for it (many offences once carried capital punishment) had been “substantially eroded”. Ambiguities should still be settled in favour of the accused, since criminal penalties are severe. But the question should now be whether “the narrow interpretation of ‘while committing’ is a reasonable one, given the scheme and purpose of the legislation.”
The narrow reading wasn’t reasonable. We couldn’t assume Parliament meant to limit the meaning of ‘while committing’ to ‘simultaneously,’ because, as Justice Wilson held, doing so would result in drawing arbitrary lines between when the assault ended and the murder began. She also held that a wider reading (one that includes a murder immediately following an assault) would be the one that “best expresses the policy considerations that underlie the provision”, i.e., more serious punishment (first degree) for more serious conduct.
Should Paré apply here?
We have the same disconnect with larger purposes and arbitrariness if we read 162.1 strictly — to apply only to real images.
One might argue that the purpose of 162.1 is to prevent not simply the non-consensual distribution of intimate images, but violations of a person’s sexual privacy or integrity through the sharing of intimate images. If one could circumvent the application of 162.1 by merely doctoring a real image of one’s partner nude before posting it online — allowing one to say “but it isn’t actually her body” — that would make little sense.
Put another way, the question is whether 162.1 makes it an offence to share intimate pictures only of the person him or herself — or also of what looks to be him or her. If the offence doesn’t include the latter, how do we distinguish between a grainy picture of you that is good enough to make out and a doctored picture of you that seems real enough to be convincing? Why would non-consensual distribution of the one be criminalized and not the other?
One reason might be that in the one case, a person consented to the creation of the image but not the distribution; in the other case, they consented to neither.
But the gravamen of the offence lies in the non-consensual distribution of an intimate image. Do we not find the same gravamen in the sharing of a deepfake? Is the culprit not trying to do the same thing: compromise the victim’s sexual integrity through exposure?
We might add that while section 162.1 clearly contemplates the distribution of intimate images a person consented to have taken of them, it doesn’t require this. The definition in 162.1(2) does say “intimate image means a visual recording of a person made by any means including…” Those means could include AI. So why must the recording depict only the person themselves? After all, every digital image is doctored to some degree by our devices.
Private law remedies?
Dunn’s forthcoming McGill paper notes that various provinces (aside from Ontario) have passed tort legislation making the non-consensual distribution of intimate images actionable without proof of damages. And as she points out, all are worded in ways that clearly capture deepfakes. For example, in BC’s act, intimate image “means a visual recording or visual simultaneous representation of an individual, whether or not the individual is identifiable and whether or not the image has been altered in any way, in which the individual is or is depicted as…” engaged in sexual activity, nude or “nearly nude.”
Manitoba’s act was amended in 2024 to be more explicit about deepfakes, adding as a defined term “fake intimate image”, which means “any type of visual recording … that in a reasonably convincing manner, falsely depicts an identifiable person (i) as being nude or exposing their genital organs, anal region or breasts, or (ii) engaging in explicit sexual activity.”
These provincial statutes set out various ways to try to have an image taken down or deleted once it has circulated: orders against platforms, third parties, and search engines. All of them are potentially helpful, but how helpful (or realistic) is unclear. The federal Online Harms Act in Bill C-63 (which just died on the order paper with the proroguing of Parliament) would have placed a host of obligations on platforms to prevent the circulation of NCII and to take such images down. I expect that bill will be reprised at some point.
A cursory search on Canlii for cases applying these statutes uncovers a few dozen decisions, mostly claims for monetary damages over threats to distribute NCII or its actual posting. The focus appears to be on money rather than removal of the images. And to my knowledge, none involve deepfakes.
It may be too early to assess whether tort law will be an effective tool for curbing the use of AI to create and share sexual deepfakes. But soon, I suspect, both tort and criminal law provisions will begin to be tested on this front.
Jan 5, 2025
This past November, the federal government ordered TikTok’s Canadian subsidiary to wind up its operations in Canada, though it didn’t ban the platform itself. The power to make this order is found in the Investment Canada Act, which allows the Minister of Innovation, Science, and Industry to recommend to the Governor in Council that a foreign company be wound up on the basis that allowing it to continue here “would be injurious to national security.”
In a press release in November, Industry Minister François-Philippe Champagne said only that “The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners.”
That TikTok’s having offices in Vancouver and Toronto would be injurious to our national security is something we’re being asked to take on faith. I mused about what the reasons could be here, but I could only speculate.
The latest chapter in this story is TikTok’s challenge of the wind-down order in Federal Court, filed in December. TikTok alleges that the Minister’s recommendations and the Governor in Council’s decision to issue the order involved procedural unfairness, were unreasonable, were driven by improper purposes, and were grossly disproportionate in their impact (affecting hundreds of employees and some 250,000 contracts with Canadian advertisers).
But at the very end of its filing, TikTok Canada asks for disclosure “of all materials in the possession of the Minister, the Public Safety Minister and the GIC” when the decisions to move forward with the order were made. This raises two questions.
Will the government have to disclose this material? And can TikTok Canada make a case for the order being improper in some way without the government having to disclose this material?
The government hasn’t responded yet, but we can get a sense of what is likely to unfold by looking at what happened in a recent case involving China Mobile Communications Group. What the government did there it is likely to do here.
In 2021, Minister Champagne recommended that China Mobile’s Canadian subsidiary be wound up and the Governor in Council issued that order. Notably, the preamble to the order offers more specifics as to why it was being issued (as revealed in a case discussed below). The main concerns were:
a) that China Mobile and its subsidiaries and affiliates may be subject to the influence or demands of, or control by, a foreign government;
b) that China Mobile and its subsidiaries and affiliates may disrupt or otherwise compromise Canadian critical telecommunications infrastructure; and
c) that China Mobile and its subsidiaries and affiliates may gain access to highly sensitive telecommunications data and personal information that could be used for non-commercial purposes such as military applications or espionage.
(To my knowledge, there was no equivalent to this in the order against TikTok.)
China Mobile challenged the order. It made some of the same arguments TikTok is making: that the security review of the company was motivated by improper purposes, that the final decision lacked an evidentiary basis, and that it was based on the wrong test (‘might be’ injurious rather than ‘would be’).
The company’s challenge to the removal order also contained a request that the government be made to disclose documents used in the decision to issue the order.
In response to the disclosure request, the government issued a certificate under section 39 of the Canada Evidence Act to assert cabinet confidentiality over the documents. China Mobile then challenged the validity of that certificate.
While it was waiting for a hearing on the certificate, the company sought a stay of the removal order. This entailed showing that it would suffer “irreparable harm” if a stay were not granted and that this harm would outweigh the harm to the public. The harms alleged here were the many job losses and lost contracts.
The Federal Court held in late 2021 that China Mobile would suffer irreparable harm without a stay of the removal order, but it wouldn’t outweigh the harm to the public in allowing them to remain in Canada. This is as close as a court in Canada appears to have come to assessing the substance of the security concerns motivating the removal of Chinese owned tech companies.
In his reasons (beginning at paragraph 88), Chief Justice Crampton held that the government had provided “some evidence to justify their concerns regarding CMI Canada’s facilitation of espionage and foreign interference activities in Canada by the People’s Republic of China.” This included various third-party threat assessments cited in the judgment. These do little more than repeat what are essentially speculative concerns, but I digress.
Then in early 2022 the Federal Court held that the assertion of cabinet confidentiality over other documents in this case was valid.
China Mobile appealed this ruling and, in late 2023, the Federal Court of Appeal upheld it. The government did all it needed to do under section 39 by attaching a schedule to the certificate describing the date of correspondence between the Ministers of Industry and Public Safety and the fact the documents at issue were (as the schedule put it) “used for or reflecting communications or discussions between ministers of the Crown on matters relating to the making of government decisions or the formulation of government policy.”
And that is where the trail ends for China Mobile, as far as I can tell (it closed operations in BC in early 2022). Without obtaining disclosure of the government’s reasons for having the concerns about national security set out in the preamble to the order, China Mobile couldn’t establish the impropriety (procedural unfairness, unreasonableness) of the decision to issue the order.
To put this another way: the fact that the government can rely on secret information, and can shield from judicial oversight both its reasons and the manner in which it arrived at the decision to make the order, does not mean the decision was unfair, made for improper purposes, and so on.
The whole framework in the Investment Canada Act is constructed so as to allow the government to issue an order directing a foreign company to leave Canada based on a belief that it would otherwise be injurious to national security — and the reasons for that belief can remain confidential.
The judge in the China Mobile trial court decision on disclosure comments on this conundrum directly:
The applicants, and the Court, may have nothing to refer to in the assessment of the reasonableness of the Order, other than the Order itself. I am not, however, persuaded that this evidentiary vacuum arises from an improper exercise of authority. Rather, it arises from the nature of the proceedings and the confidences claimed.
At the end of the day, on a challenge to these orders, the government need only show that certain steps were taken: the Industry Minister consulted the Public Safety Minister, a belief was formed, and recommendations were made to the Governor in Council. Again, we have to take it on faith that the belief was reasonable and the decision to order a company like TikTok to leave did not involve improper purposes.
I suspect that TikTok Canada’s challenge to the order is just a means of buying time — possibly until a change of policy around TikTok down south? Or is that too late? Time will tell.
Jan 2, 2025