Five ways to fix Bill C-2 – and better protect our privacy

As it inches toward a majority in Parliament, the Liberal government is signaling its intention to move ahead with the controversial parts of the Strong Borders Act it shelved back in October in response to strong opposition from the other parties.

I’ve written about the various privacy-invasive powers in the bill briefly here, and in more detail here.

Last week I had the pleasure of attending a roundtable with the Honourable Minister of Public Safety, Gary Anandasangaree, who asked for ideas about how to improve the bill.

Here are five:

Read more »


Do CSIS and Police Really Need More ‘Lawful Intercept’ Powers?

Earlier this month, when no other party would support the Liberals in passing Bill C-2—the ‘Strong Borders Act,’ with its controversial surveillance powers—the government shelved it.

More precisely, it split the contentious surveillance provisions from the customs and immigration measures meant to appease our neighbours to the south, and re-tabled the latter as Bill C-12.

But it didn’t withdraw C-2. The Minister of Public Safety insists that all of C-2 is still on the agenda.

Among the most concerning parts of C-2 that were temporarily shelved are the ‘lawful access’ provisions found in a new statute the bill would have enacted: the ‘Supporting Authorized Access to Information Act.’

As I’ve written earlier, this new law would have given the government the power to compel ‘electronic service providers’ like Shaw or Telus, or Apple and Google, to ‘install equipment’ or make technical modifications to give police and CSIS direct access to private data for real-time interception or seizure of stored communications. That’s your email, texts, and everything you have stored in iCloud, in case you were wondering.

Read more »


Will Canada’s new hate crime bill impact free speech online?

Last week, the Liberal government tabled Bill C-9, which contains three new criminal offences targeting hate, in response to the alarming and appalling rise in antisemitic violence in Canada over the past two years, including attacks on places of worship, schools, and community centres.

The new offences primarily capture physical acts of intimidation: blocking access to a synagogue, mosque, or temple, or promoting hatred by displaying flags or symbols of groups listed as terrorist entities.

But two of the offences will apply to speech online and raise questions for me about where they fit in the panoply of hate speech offences in Canada — and whether we’re likely to see further regulation of online speech this fall.

I thought I’d write this short post to situate the new offences within the Criminal Code’s existing hate speech provisions, highlight what they add to what we already have, and remind readers of 2021’s Bill C-36, which sought to revive a human rights provision making hate speech a form of actionable discrimination, since that idea may be coming back.

Read more »


Authorship After AI


A new article in AI Magazine draws an illuminating comparison between what AI is doing to writing and what photography did to art in the 1840s. It helps to make sense of a question many of us are thinking about more often: does increasing reliance on AI signal the end of writing?

The insights in this piece resonate with me, given the quantum leap in my own use of AI over the past few months.

I’m now making such frequent use of it — integrating it into my research, writing, and editing — that it has me wondering what’s really happening.

As I describe in a piece for the CBA’s National Magazine, I’ve been dipping in and out of Claude, ChatGPT, and Perplexity constantly — to get a quicker lay of the land on new topics, reword sentences, and tighten drafts. But the pace and intensity feel like a transformation as momentous as the shift from typewriter to computer, or from paper-based research to the internet.

To be clear, I’m not using AI to create texts. But as I rely on it more often to edit, I sometimes find myself questioning my claim to authorship. At what point does a suggestion — or a rewrite of a paragraph — mean it’s no longer me?

Read more »


When AI Turns Deadly: Are Model Makers Responsible?

This week, the parents of Adam Raine, a California teen who died by suicide in April after lengthy interactions with GPT-4o, filed a lawsuit against OpenAI and its CEO, Sam Altman. The case follows a suit brought in late 2024 by the parents of Sewell Setzer, a Florida teen who took his own life after engaging with a Character.AI chatbot impersonating Daenerys Targaryen from Game of Thrones.

In early August, ChatGPT was also implicated in a murder-suicide in Connecticut involving 56-year-old tech worker Stein-Erik Soelberg, who had a history of mental illness. Although the chatbot did not suggest that he murder his mother, it appears to have fueled the paranoid delusions that led him to kill her and then himself.

OpenAI and other companies have been quick to respond with blog posts and press releases outlining steps they are taking to mitigate risks from misuse of their models.

All of this raises a larger question left unanswered in Canada after the Artificial Intelligence and Data Act died on the order paper in early 2025, when the last Parliament ended: what guardrails exist in Canadian law to govern harmful uses of generative AI?

Read more »