
Police investigate the scene of a shooting near the student union at Florida State University on April 17, 2025 in Tallahassee, Florida. Two people were killed and five injured in the attack. Florida's attorney general is now investigating OpenAI because the alleged shooter used ChatGPT to help plan the attack.
Miguel J. Rodriguez Carrillo/Getty Pictures
Florida's attorney general is launching a criminal investigation into ChatGPT and its parent company OpenAI over claims that the accused gunman in a shooting at Florida State University last year consulted the AI chatbot before killing two people and injuring five more.
The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to come upon more people, according to an initial review of Ikner's chat logs.
“My prosecutors have looked at this and they’ve told me, if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising people on how to kill others.”
OpenAI spokesperson Kate Waters said in a written statement to NPR: "Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime." She said the company reached out to share information about the alleged shooter's account with law enforcement after the shooting and continues to cooperate with authorities.
Uthmeier's office is issuing subpoenas to OpenAI seeking records about its policies and internal training materials related to user threats of harm, and about how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is moving into uncharted territory and that he is unsure whether OpenAI bears criminal liability.
“We are going to look at who knew what, designed what, or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”
OpenAI's Waters said the chatbot "provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."
She continued: "ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise."
Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages were entered into evidence in the case.
The Florida investigation comes amid growing concerns over the role of AI chatbots in mass violence. Uthmeier had already announced a civil investigation into ChatGPT's role in the FSU shooting, which is ongoing, and attorneys for the family of one of the victims say they plan to sue OpenAI.
OpenAI is already facing a lawsuit from the family of a victim severely wounded in an attack in British Columbia in February 2026 that killed eight people and injured dozens more. The alleged shooter discussed gun violence scenarios with ChatGPT and had even been banned from the platform months before the shooting, but was able to evade detection and create another account, OpenAI told Canadian authorities.
The Wall Street Journal reported that OpenAI's internal systems flagged the account's posts and that staffers were alarmed enough to consider alerting law enforcement, but that the company decided not to. OpenAI has said it is making changes to "strengthen" its protocol for referring accounts to law enforcement in the aftermath of the Canadian shooting.
Lawsuits are also mounting against OpenAI and other makers of AI chatbots alleging they have contributed to mental health crises and suicides. (OpenAI has said the cases are "an incredibly heartbreaking situation" and that it is working with mental health experts to improve how ChatGPT responds to signs of mental or emotional distress.)
A wrongful death lawsuit filed against Google in March over the suicide of a Florida man accuses the company's Gemini chatbot of pushing the man to "stage a mass casualty attack near the Miami International Airport [and] commit violence against innocent strangers," according to court documents.
In response to that lawsuit, Google said: "Gemini is designed to not encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they're not perfect." The company added that in this particular case, Gemini had "referred the individual to a crisis hotline many times."
