When AI Turns Deadly: Are Model Makers Responsible?

This week, parents of Adam Raine, a California teen who died by suicide in April after lengthy interactions with GPT-4o, filed a lawsuit against OpenAI and its CEO, Sam Altman. The case follows a suit brought in late 2024 by the parents of a Florida teen, Sewell Setzer, who took his own life after engaging with a Character.AI chatbot impersonating Daenerys Targaryen from Game of Thrones.

In early August, ChatGPT was also implicated in a murder-suicide in Connecticut involving 56-year-old tech worker Stein-Erik Soelberg, who had a history of mental illness. Although the chatbot did not suggest that he murder his mother, it appears to have fueled the paranoid delusions that led him to kill her and then himself.

OpenAI and other companies have been quick to respond with blog posts and press releases outlining steps they are taking to mitigate risks from misuse of their models.

This raises a larger question left unanswered in Canada after the Artificial Intelligence and Data Act died on the order paper in early 2025, when the last Parliament ended: what guardrails exist in Canadian law to govern the harmful uses of generative AI?

Like the United States, Canada has no national or provincial legislation designed to impose liability on AI companies for harms caused by their products. The European Union passed an AI Act in 2024 that does impose liability for harmful AI systems.

But in both the EU law and the Canadian bill that was abandoned, there is a notable flaw in how liability is conceived.

I explored this in a paper I wrote in late 2023, surveying early reports of harmful uses of language models (a suicide in Belgium, help with bomb-making, and other cases).

My article garnered some interest on SSRN but appeared in print only this month. The core argument was this:

Both [the European and Canadian AI] bills are premised on the ability to quantify in advance and to a reasonable degree the nature and extent of the risk a system poses. This paper canvases evidence that raises doubt about whether providers or auditors have this ability. It argues that while providers can take measures to mitigate risk to some degree, remaining risks are substantial, but difficult to quantify, and may persist for the foreseeable future due to the intractable problem of novel methods of jailbreaking and limits to model interpretability.

The problem remains unresolved.

The only guardrails at the moment

The only mechanisms in Canada and the US for holding AI companies liable are laws on product liability, negligence, and wrongful death.

Parents in both the California and Florida cases are suing the model makers (OpenAI and Character.AI, respectively) for wrongful death, a statutory cause of action that allows family members of the deceased to sue for damages including funeral expenses, mental anguish, loss of future financial support, and loss of companionship. Plaintiffs must show that the defendant’s negligence or intentional misconduct caused the death.

Here, parents allege that chatbot makers were negligent in product design and failed to provide adequate warnings about risks.

Canadian law works in a similar way. Provinces allow family members to bring wrongful death suits where a wrongful act caused the death. Damage awards in Canada are much smaller than in the US and are mostly limited to quantifiable losses. But plaintiffs can likewise claim that a model maker was negligent in offering a harmful product, or that the product was defective or lacked adequate warnings.

At the heart of negligence and product liability is the same question: what steps should OpenAI, Anthropic, or Google reasonably have taken to avoid harm?

Put another way, in making chatbots available, companies clearly owe users a duty of care. The product carries risks, and harm to users is foreseeable.

The key question, though, is: what is the standard of care?

When can OpenAI and others be said to have done enough—or not enough—to avoid harm? If the standard is “reasonably safe” rather than “absolutely safe,” when is that threshold met? And can it even be met, given the nature of these systems?

No one knows. But OpenAI and others are taking—and publicizing—all the steps one might predict a tort lawyer would advise them to take.

OpenAI admits its risk-detection mechanisms work better in shorter conversations and degrade as conversations lengthen. It is working to improve performance in longer chats.

It is also improving detection across different types of harmful conversations, from suicidal ideation to criminal intent. It has announced plans for parental controls that would let parents monitor their child’s activity, and it is rolling out systems to route some conversations to human overseers, who can terminate the chat and lock the user out of further access.

Whether these steps will be deemed sufficient—enough to absolve OpenAI and others of liability—remains to be seen.

Much may depend on how a model was misused, what jailbreak was employed, and whether that misuse was foreseeable.

More broadly, it is worth keeping these risks in perspective. As tragic as these cases are, hundreds of millions of people use these tools daily, and many find them beneficial. But there are, inevitably, many ways to misuse them.