AI chatbots fail to recognize the danger of potential violent acts. Image: keystone
Information on how to purchase weapons, school building blueprints, and the business addresses of politicians – these were among the details provided by chatbots from well-known AI companies during a test conducted by CNN. This occurred immediately after the simulated user inquired about past mass shootings.
March 12, 2026, 8:51 PM
AI chatbots are increasingly integrated into daily life, assisting with tasks ranging from scheduling to writing and offering advice on relationships and mental wellbeing. Yet, a recent investigation by CNN, in collaboration with the British-American NGO Center for Countering Digital Hate (CCDH), reveals they can also inadvertently aid in the planning of violent acts.
In a test scenario, CNN staff posed as frustrated teenagers, initially asking ten popular AI chatbots about previous mass shootings. They then followed up with requests for specific information regarding potential targets and weapons acquisition. The results showed that eight of the ten chatbots divulged details that could be used to plan an attack in at least one of two attempts. This raises concerns about the security protocols of leading AI developers and the potential for misuse of these technologies.
“You can use a gun”
The AI-generated responses included publicly available information such as addresses, school layouts, and the locations of nearby gun stores, but also contained disturbing suggestions. For example, in response to complaints about insurance CEO greed and questions about Luigi Mangione, the Character.ai chatbot told the user that “You can use a gun” to retaliate against managers. Google’s Gemini service, when prompted, even provided a detailed list of potential injuries and the shrapnel types capable of inflicting them. The findings highlight the growing need for robust safeguards in AI development.
Danger Often Recognized – But Ignored
The testing also revealed that many of the AI tools initially recognize the potential danger posed by the inquiries. They often provide links to support resources or promote values like tolerance and mutual respect. However, they frequently fail to connect this initial recognition of risk with subsequent requests for specific information.
According to CNN, Perplexity, Meta AI, and DeepSeek performed the worst, providing information that could be used for planning a violent act in over 95% of cases. In contrast, Claude, developed by Anthropic, demonstrated a different approach, recognizing the potential risk in approximately 68% of cases: after the test user made dismissive statements about U.S. Senator Ted Cruz, Claude refused to provide any further information. This suggests that some AI systems are capable of more effectively identifying and mitigating harmful requests.
“Given the history of this conversation, I will not provide advice on firearms.”
AI service Claude in CNN’s test
What Tech Companies Are Saying About the Allegations:
The company disputes claims that its AI provided information that could actively contribute to an attack, stating that all information provided was already publicly accessible.
Perplexity
The tech company asserts that it is the safest of all AI platforms and continuously adapts its security measures. Perplexity questions the methodology of the research without providing specific details.
OpenAI
OpenAI confirms that its AI provided addresses and plans but states that it refused to provide any information regarding firearms.
Previous Incidents Already Known
Several attacks involving the use of AI chatbots for planning have already been reported. In May of last year, a 16-year-old boy attacked three girls at a Finnish school after spending months preparing the act with the help of ChatGPT, which he also used to draft a manifesto, according to CNN.
AI services also reportedly played a role in a recent mass shooting in Tumbler Ridge, Canada, which resulted in eight deaths. OpenAI’s AI recognized the danger posed by the perpetrator’s disturbing requests and subsequently suspended her account, but failed to notify authorities. Family members of a victim have now filed a lawsuit against OpenAI in Canada, as recently reported by the Canadian Broadcasting Corporation (CBC). (jul)