ChatGPT has been making headlines again — and not all for good reasons. Earlier this year, controversy erupted after the tragic death of a teenager, which the family alleges was tied to the teen's interactions with the chatbot. The family has since filed a lawsuit against OpenAI, sparking widespread calls for stronger safety measures.
In response, OpenAI rolled out several new safeguards, including enhanced parental controls and expanded monitoring tools designed to keep users safe. But while well-intentioned, these updates triggered a wave of frustration among ChatGPT users. Many complained that the chatbot had become too restrictive — some even canceled their subscriptions, saying it no longer felt as open or useful as before.
Now, it looks like OpenAI is listening.
In a recent post on X, CEO Sam Altman addressed the growing criticism, explaining, “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful or enjoyable for many users, but given the seriousness of the issue, we wanted to get this right.”
Altman went on to say that, now that improved safeguards are in place and the company has learned from its earlier updates, ChatGPT will soon become less restrictive — at least for those who want that option.
What Does a “Less Restrictive” ChatGPT Mean?
Altman hasn’t shared full details yet, but he did hint at what’s to come. Starting in December, once new age-verification systems are fully live, OpenAI plans to introduce a version of ChatGPT that can handle mature topics, including creative writing that ventures into adult territory — provided the user’s account is verified as belonging to an adult.
That announcement sparked plenty of responses online, including one user who asked, “Why do age gates always have to lead to erotica?” Altman jokingly replied, “You won’t get it unless you ask for it.” In other words, ChatGPT won’t start generating adult content unless users explicitly request it.
Exactly how far OpenAI will relax its limits remains unclear. But the move signals that the company is trying to strike a careful balance — maintaining safety and responsibility while restoring the usefulness and personality that many longtime fans of ChatGPT have missed.