Florida opens criminal investigation into OpenAI regarding FSU shooting
The Florida attorney general has initiated a criminal probe into OpenAI's ChatGPT following a shooting incident at Florida State University. The investigation focuses on the chatbot's interactions with the accused shooter. (sources: cbsnews, ap, cnn, npr, arstechnica)

The Florida attorney general's office is investigating OpenAI's ChatGPT for its potential role in a shooting at Florida State University. This follows a review of conversation logs between the chatbot and the accused shooter.
- The attorney general's office is examining conversation logs between ChatGPT and the student accused of the shooting.
- The investigation was prompted by the chatbot's alleged advice to the shooter.
- OpenAI has stated that ChatGPT is not responsible for the actions of individuals.
Why it matters
This investigation raises questions about the responsibilities of technology companies in relation to user interactions and public safety.
Why this is on ModernAction
Two bills on this issue are moving right now, and the most active is the GUARD Act.
S3062 · 119th Congress
GUARD Act
About this bill
What S3062 actually does
This story concerns Florida's criminal probe into OpenAI over ChatGPT's alleged role in the FSU shooting. This bill would require age verification, mandate recurring disclosures that a user is interacting with an AI system, and restrict deceptive chatbot practices.
If passed, it would:
- Require age verification and recurring disclosures that a user is interacting with an AI system, not a human.
- Create penalties tied to designing or making available certain chatbots, including provisions addressing encouragement of harmful conduct.
1 other bill moving on this issue
Take action on any of them individually.
This story concerns Florida's criminal investigation into OpenAI over the FSU shooting. This bill would establish federal legal standards for advanced AI products.
If passed, it would:
- Establish federal legal standards for advanced AI products.
- Create a federal framework intended to make AI product-liability outcomes more predictable, which can shift incentives for developers.
Top coverage · 12 sources
