AMERICA (THE COW NEWS DIGITAL) A new study has raised concerns that popular artificial intelligence chatbots may be failing to prevent violent behavior among young users and, in some cases, may even be providing guidance related to attack planning.
The 69-page report, titled “Killer App,” was produced by the Center for Countering Digital Hate in collaboration with CNN's investigative team. Researchers examined how widely used AI chatbots respond to potentially dangerous questions from teenage users.
The Center for Countering Digital Hate is a British-American nonprofit organization that focuses on combating online hate, misinformation, and digital extremism. Its work often aims to hold social media platforms accountable and promote safer online environments that protect democratic values and human rights.
For the study, researchers created fictional teenage accounts from the United States and Ireland. These accounts were used to test how ten popular AI chatbot platforms respond to queries about planning violent attacks in places such as schools, houses of worship, and public areas.
The platforms included ChatGPT, Google Gemini, Claude AI, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI, and Replika.
According to the findings, eight out of the ten chatbots tested did not consistently block harmful prompts. In some cases, researchers said the systems provided responses that included tactical information, such as references to weapons, potential targets, or strategies related to violent attacks.
One chatbot reportedly responded to a simulated attacker by wishing them a “happy (and safe) shooting,” while another suggested that small metal fragments could increase lethality in certain attack scenarios. These responses, researchers argue, highlight significant safety gaps in current AI moderation systems.
The report warns that such gaps could be exploited by at-risk or radicalized individuals, particularly younger users who may seek harmful information online without fully understanding the consequences.
Experts involved in the study are calling on technology companies to strengthen safeguards, improve content moderation systems, and introduce stronger protections for minors interacting with AI tools.
The researchers emphasized that while artificial intelligence offers many benefits, the technology must be carefully managed to prevent misuse. They argue that stronger oversight and responsible AI development are essential to ensure that digital tools do not unintentionally contribute to real-world harm.