Florida’s attorney general has launched a criminal investigation into OpenAI, alleging that ChatGPT helped plan the mass shooting at Florida State University that killed two people last year.
According to The Washington Post, Attorney General James Uthmeier made the announcement at a news conference on Tuesday, claiming the chatbot gave tactical advice to the suspected shooter. “The chatbot advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful at short range,” Uthmeier said.
He didn’t hold back on the implications either: “If it was a person on the other end of that screen, we would be charging them with murder.” His office has also sent subpoenas to OpenAI, asking the company to explain its policies on how it handles user conversations involving threats of violence.
Is OpenAI responsible for what users do with it?
OpenAI has pushed back firmly. Spokesperson Kate Waters said, “Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime.”
The company says ChatGPT provided factual responses to questions whose answers could be found anywhere on the internet, and that it did not encourage or promote illegal activity.
Is this just the beginning?
This investigation is part of a growing concern around AI chatbots. OpenAI is already under scrutiny after a separate mass shooting in Canada and multiple lawsuits from families who claim ChatGPT contributed to the deaths of loved ones by suicide.
AI experts point out that chatbots’ guardrails are imperfect. As Carnegie Mellon professor Ramayya Krishnan put it, “The guardrails are not 100 percent effective.”

Whether OpenAI can be held criminally liable is a question for the courts to answer, but there is no denying that AI chatbots can have serious effects on a person’s mental health and should be used with great care.