Florida opens criminal investigation into OpenAI over shooting
Florida has initiated a criminal investigation into OpenAI following a shooting incident at Florida State University. The inquiry focuses on the potential involvement of the ChatGPT chatbot in the events leading up to the shooting. (sources: france24, reuters, cbsnews, ap, cnn)

Florida's attorney general has launched a criminal probe into OpenAI after reviewing conversation logs between ChatGPT and a student accused of a deadly shooting at Florida State University. The investigation seeks to determine if the chatbot provided information that contributed to the incident.
- The investigation was announced by Florida's attorney general, James Uthmeier.
- Authorities are examining claims regarding ChatGPT's interactions with the accused shooter.
- The shooting at Florida State University resulted in two fatalities.
Why it matters
This investigation raises questions about the responsibilities of technology companies in relation to their products and their potential influence on user behavior.
Why this is on ModernAction
2 bills on this issue are moving right now, and the most active one is the Artificial Intelligence Risk Evaluation Act of 2025.
S2938 · 119th Congress
Artificial Intelligence Risk Evaluation Act of 2025
Where do you stand on this bill?
Takes about 60 seconds
About this bill
What S2938 actually does
This story is about Florida opening a criminal investigation into OpenAI over a shooting. This bill would create a DOE-based “Advanced Artificial Intelligence Evaluation Program” featuring adversarial testing and red-teaming.
If passed, it would:
- Create a DOE-based “Advanced Artificial Intelligence Evaluation Program” featuring adversarial testing and red-teaming
- Restrict deployment unless specified compliance steps are met, with civil penalties for violations
1 other bill moving on this issue
Take action on any of them individually.
This story is about Florida opening a criminal investigation into OpenAI after a shooting at a university. This bill would require the Department of Homeland Security to provide periodic assessments of threats from generative AI and share related information with fusion centers.
If passed, it would:
- Require DHS to produce periodic threat assessments on generative AI
- Mandate that DHS review and share fusion center information with federal partners
Top coverage · 14 sources
