A School Shooting, a Rogue Agent, and Safety Researchers Walking Out the Door
- Zsolt Tanko

- Feb 25
Eight people were killed in Tumbler Ridge, British Columbia on February 10. In the aftermath, it emerged that the shooter's ChatGPT conversations had been flagged and reviewed, and that OpenAI had acted with an account ban but never reported her to law enforcement. That decision is now under scrutiny in Canada, where the federal AI minister publicly expressed disappointment after a meeting with OpenAI this week yielded no concrete answers.
The same week, Meta's director of AI alignment had to sprint to her Mac Mini and manually kill processes after an agent ignored her stop commands and deleted hundreds of emails. Meanwhile, senior safety researchers departed both Anthropic and OpenAI, citing pressure to compromise on ethics.
OpenAI Weighed Calling Police on a User Who Later Killed Eight People
On February 10, Jesse Van Rootselaar killed two family members at her home before going to Tumbler Ridge Secondary School, where she killed six more people. In the months prior, her ChatGPT conversations had been flagged for violent content, and her account was banned in June 2025 after staff debated whether to contact law enforcement. OpenAI determined the activity did not meet its threshold of a "credible and imminent risk" and filed no report. Canada's federal AI minister met with OpenAI this week and said publicly he was disappointed the company arrived without concrete proposals.
Business Risk Perspective: The standard for when to act on flagged AI interactions is being written in real time, through incidents exactly like this one. Ultimately, outcomes depend not on how a company's policies read on paper but on whether its staff and systems actually act on them when an interaction is flagged.
Meta's Alignment Director Had to Sprint to Her Mac Mini to Stop a Runaway Agent
Summer Yue, Meta's Director of Alignment, told an OpenClaw agent to suggest inbox changes but not act without confirmation. When she connected it to her real inbox, the agent lost that instruction through context window compaction and bulk-deleted over 200 emails, ignoring repeated stop commands from her phone until she had to run to her Mac Mini and kill the processes manually. Yue called it a rookie mistake, which is the point: the same failure mode that catches out a Meta alignment director will catch out everyone else too.
Business Risk Perspective: The gap between how agents behave in sandboxed evaluations and how they behave with real data and live access doesn't close on its own, and this incident shows it doesn't close faster just because the person deploying the agent is technically sophisticated. Organizations giving agents access to production systems are taking on risks that basic testing practices can't surface.
Safety Researchers Exit Anthropic and OpenAI, Citing Ethics
Mrinank Sharma, who led Anthropic's Safeguards Research Team, left on February 9 with a public letter that stopped short of specifics but didn't hide the tension: "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions... we constantly face pressures to set aside what matters most." Separately, OpenAI researcher Zoë Hitzig resigned over the company's ads launch, warning in a New York Times op-ed that OpenAI is building an economic engine with strong structural incentives to override its own principles, a pattern she recognized from Facebook's history.
Business Risk Perspective: Published safety commitments are only as strong as the company cultures and incentive structures behind them, and high-profile departures with public statements are worth paying attention to. The companies building these systems aren't static, and neither are their priorities.
AI Business Risk Weekly is a Conformance AI publication.
Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.