Ottawa Calls on OpenAI to Involve Canadian Specialists in Reviewing Flagged ChatGPT Conversations

Canadian experts need to assess ChatGPT conversations that have been flagged for signs that users intend to cause imminent harm to determine whether to alert law enforcement, federal AI Minister Evan Solomon told OpenAI chief executive Sam Altman in a virtual meeting on Wednesday.
How artificial-intelligence companies interact with law enforcement has become a major concern after revelations that California-based OpenAI did not tell Canadian authorities about conversations involving gun violence that 18-year-old Jesse Van Rootselaar had with ChatGPT months before fatally shooting eight people on Feb. 10 in Tumbler Ridge, B.C., and then killing herself.
In an interview, Mr. Solomon said that he told Mr. Altman that Canadian experts in mental health, law and privacy have to weigh in on such sensitive matters. “When a flag comes up in Canada, it is Canadians, the Canadian perspective, and not Americans, that are helping to determine the legal threshold and mental-health assessment,” he said.
OpenAI has said it involves experts in these assessments. Mr. Altman agreed to include Canadians, according to Mr. Solomon. “I asked him if that means that would happen in a Canadian office, and he’s taken that under advisement,” he said.
Mr. Solomon also requested that the Canadian Artificial Intelligence Safety Institute, a federally funded research body launched in 2024, review OpenAI’s safety and reporting protocols to provide an objective opinion, a request the company also agreed to. “We need some transparency,” Mr. Solomon said, “and we need to have a deeper assessment from our Canadian experts.”
OpenAI did not immediately respond to a request for comment.
Mr. Solomon did not say whether the government will introduce regulations for when AI companies report to law enforcement, which some experts have recommended.
Canada does not have overarching AI legislation, nor does it have a set of rules that apply specifically to chatbots, unlike some other jurisdictions. Some experts have said that forthcoming online harms legislation should cover chatbots as well as social-media platforms.
The Wall Street Journal reported in February that the Tumbler Ridge shooter’s ChatGPT conversations were automatically flagged and that a dozen employees debated whether to report the conversations to law enforcement. OpenAI leaders opted not to alert authorities but banned the account in June, 2025. The company has said that the conversations did not meet its threshold for reporting because it did not identify credible or imminent planning.
While the role of AI has become a national topic of conversation, there may have been other red flags associated with the Tumbler Ridge shooter. Police had visited her home multiple times, and officers seized firearms from the home two years ago, but someone in the family successfully petitioned for their return.
The chief coroner’s office in British Columbia will hold an inquest into the Tumbler Ridge shooting that will examine factors beyond AI, including the mental-health services provided to the shooter and her access to firearms.
Mr. Solomon said that he had other demands for Mr. Altman during the meeting, including that the company establish direct contacts with Canadian law enforcement and that it re-examine every ChatGPT conversation that had been flagged over the past 12 months. “He said they are already doing that, and he did not tell me if that has led to new announcements to law enforcement,” Mr. Solomon said.
The company sent a letter to government ministers last week saying that it had changed its reporting criteria to recognize that a user may pose an imminent and credible risk of harm even if conversations do not reference the timing or planning of violence. “Under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” the letter stated.
OpenAI also committed in the letter to establishing direct contacts with Canadian law enforcement.
Experts have said that without a regulatory framework, Canada will have trouble ensuring that OpenAI and other AI companies keep their commitments, and will have no means of enforcing compliance.
Mr. Solomon reiterated that the government is looking at new privacy and online harms legislation. “We’ll be coming forward with the framework that we think keeps Canadians safe,” he said, adding that he will be meeting with other AI companies about their safety and law enforcement reporting protocols.
This article was first reported by The Globe and Mail.