OpenAI has announced plans to introduce parental controls for its AI chatbot, ChatGPT, a week after an American couple filed a wrongful-death lawsuit claiming the chatbot encouraged their teenage son to take his own life.
Matthew and Maria Raine have launched legal action in California, alleging that their 16-year-old son, Adam, was put on a path to self-destruction through prolonged conversations with ChatGPT.
According to court documents, during their final exchange on 11 April 2025, the chatbot helped Adam steal vodka and gave a technical assessment of a noose he had fashioned, confirming it “could potentially suspend a human.” Adam was found dead by suicide hours later.
In response, OpenAI said in a blog post that “Within the next month, parents will be able to… link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules.” Parents will also receive alerts “when the system detects their teen is in a moment of acute distress.”
Some observers, however, remain unconvinced. Attorney Melodi Dincer of The Tech Justice Law Project criticised the announcement as too vague, saying the measures appear “generic” and “really the bare minimum”, and questioned whether the changes will prove effective in the long term.
Experts have cautioned that while the new controls are a step forward, existing safeguards may falter during extended conversations, a weakness OpenAI itself acknowledges, admitting that its systems can “fall short” in prolonged interactions.
The company has promised further improvements over the coming three months, including redirecting “some sensitive conversations… to a reasoning model” that follows safety protocols more reliably.
Meanwhile, in July, OpenAI CEO Sam Altman sounded the alarm over a separate issue with ChatGPT: the lack of legal confidentiality.
Speaking on a podcast, Altman warned that conversations with ChatGPT, especially personal or sensitive ones, do not carry the legal protections that apply between a doctor and patient or a lawyer and client.
He said: “People talk about the most personal shit in their lives to ChatGPT. … And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. … And we haven’t figured that out yet for when you talk to ChatGPT.”
“If you go talk to ChatGPT about the most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that.”
Altman lamented this gap, adding: “I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever, and no one had to think about that even a year ago.”