By Byron Kaye
SYDNEY, April 2 (Reuters) – People who show violent extremist tendencies on ChatGPT will be directed to human and chatbot‑based deradicalisation support through a new tool in development in New Zealand, the people behind it said.
The initiative is the latest attempt to address safety concerns in the face of a growing number of lawsuits accusing AI companies of failing to stop, and even enabling, violence.
OpenAI was threatened with intervention by the Canadian government in February after revealing that a person who carried out a deadly school shooting had been banned from the platform without authorities being informed.
ThroughLine, a startup hired in recent years by ChatGPT owner OpenAI and rivals Anthropic and Google to redirect users to crisis support when they are flagged as being at risk of self-harm, domestic violence or an eating disorder, is exploring ways to broaden its offering to include preventing violent extremism, its founder, former youth worker Elliot Taylor, said.
The company is in discussions with The Christchurch Call, an initiative to stamp out online hate formed after New Zealand's worst terrorist attack in 2019, under which the anti-extremism group would give guidance while ThroughLine develops the intervention chatbot, Taylor said.
“It’s something that we’d like to move toward and to do a better job of covering and then to be able to better support platforms,” Taylor said in an interview, adding that no timeframe has been set.
OpenAI confirmed the relationship with ThroughLine but declined to comment further. Anthropic and Google did not immediately respond to requests for comment.
Taylor’s firm, which he runs from his home in rural New Zealand, has become a go-to for AI firms with its constantly checked network of 1,600 helplines in 180 countries.
Once the AI detects signs of a potential mental health crisis, it routes the user to ThroughLine, which matches them with an available human-run service nearby.
But ThroughLine’s scope has been limited to specific categories, the founder said. The breadth of mental health struggles that people disclose online has exploded with the popularity of AI chatbots, and now includes dalliances with extremism, he added.
MORE CHATBOTS, MORE PROBLEMS
The anti-extremism tool would probably be a hybrid model combining a chatbot trained to respond to people who show signs of extremism and referrals to real-world mental health services, Taylor said.
“We’re not using the training data of a base LLM,” he said, referring to the generic datasets large language model platforms use to form coherent text. “We’re working with the correct experts.” The technology is currently being tested, but no date has been set for release.
Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hoped to roll the product out for moderators of gaming forums and for parents and caregivers who want to weed out extremism online.
A chatbot rerouting tool was “a good and necessary idea because it recognises that it’s not just content that is the problem, but relationship dynamics,” said Henry Fraser, an AI researcher at Queensland University of Technology.
The product’s success may depend on questions of “how good are follow-up mechanisms and how good are the structures and relationships that they direct people into at addressing the problem,” he said.
Taylor said follow-up features, including possible alerts to authorities about dangerous users, were still to be determined but would take into account any risk of triggering escalated behaviour.
He said people in distress tended to share things online that they were too embarrassed to say to a person, and governments risked compounding danger if they pressured platforms to cut off users who engaged in sensitive conversations.
Heightened moderation of militancy-related content by platforms under pressure from law enforcement has pushed sympathisers toward less regulated alternatives like Telegram, according to a 2025 study by New York University’s Stern Center for Business and Human Rights.
“If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support,” Taylor said.
(Reporting by Byron Kaye; Editing by Kate Mayberry)