Florida initiates criminal investigation into OpenAI and ChatGPT
Florida's attorney general has opened a criminal investigation into OpenAI following a shooting incident at Florida State University. The inquiry focuses on the interactions between ChatGPT and the accused shooter. (sources: Reuters, CBS News, AP, CNN, NPR)

The attorney general's office is examining whether ChatGPT provided assistance to the suspect in the FSU shooting. This investigation follows a review of conversation logs between the chatbot and the accused.
- The investigation was launched after a review of conversation logs involving ChatGPT and the accused shooter.
- The shooting incident resulted in the deaths of two individuals at Florida State University.
- The attorney general's office is assessing the potential role of the chatbot in the events leading up to the shooting.
Why it matters
This investigation raises questions about the responsibilities of technology companies in relation to violent incidents involving their products.
Why this is on ModernAction
2 bills on this issue are moving right now — and the most active one is GUARD Act.
S3062 · 119th Congress
GUARD Act
Where do you stand on this bill?
About this bill
What S3062 actually does
This story covers Florida's criminal probe into OpenAI over ChatGPT's alleged role in the FSU shooting. S3062 would require age verification, mandate recurring disclosures that a user is interacting with an AI system, and restrict deceptive chatbot design.
If passed, it would:
- Require age verification and recurring disclosures that a user is interacting with an AI system (not a human).
- Create penalties tied to designing or making available certain chatbots.
1 other bill moving on this issue
Take action on any of them individually.
This story covers Florida's criminal investigation into OpenAI over the FSU shooting. This bill would establish federal legal standards for advanced AI products.
If passed, it would:
- Establish federal legal standards for advanced AI products.
- Create a federal framework intended to make AI product-liability outcomes more predictable, which can shift incentives.
Top coverage · 13 sources
