OpenAI Sued Following Florida State University Shooting
Families allege the gunman used ChatGPT to plot the 2025 attack as researcher David Riedman exposes critical gaps in AI safety.
A lawsuit filed by the family of a victim of the 2025 Florida State University shooting alleges the gunman used ChatGPT to plot the attack.
The legal action follows the April 17, 2025, incident in which a 20-year-old student opened fire near the FSU Student Union, killing two adults and wounding six others. Court records released in the case include more than 270 exhibits featuring ChatGPT conversations and AI-generated photos. Attorneys for the family claim the gunman was in “constant communication” with the chatbot in the lead-up to the shooting.
The suit also highlights alleged negligence by the Leon County Sheriff’s Office. The shooter, a member of the agency’s Youth Advisory Council, reportedly used a service weapon belonging to his stepmother, a veteran deputy, to carry out the massacre.
The case raises questions about both campus emergency response and AI safety. Although campus police detained the shooter within two minutes of the first shots, FSU did not issue a lockdown alert until three minutes after the suspect was already in custody. The resulting lockdown lasted more than three hours as police searched for a possible second shooter.
Research from David Riedman, PhD, creator of the K-12 School Shooting Database, suggests the case is part of a growing trend. Riedman has documented a 2026 school shooting in Tumbler Ridge, British Columbia, in which a teenage assailant used ChatGPT to discuss mass violence months before the attack. Despite internal debate among OpenAI staff about contacting law enforcement, the company reportedly opted only to delete the user’s account, citing a lack of “imminent risk” at the time.
Riedman’s own testing revealed significant gaps in AI safety protocols, demonstrating how easily content filters can be bypassed. By reframing a request for an attack plan as a “fictional Nerf war,” he was able to prompt the AI to generate detailed strategies for funneling victims, concealing weapons, and even drafting a manifesto. He warns that these “probabilistic models” often lean into themes of perceived grievance and glorification of violence without explicit prompting, making them a dangerous tool for radicalization.
