OpenAI is preparing for one of the most controversial updates in its short but eventful history. CEO Sam Altman announced that ChatGPT will soon relax some of its content restrictions, allowing verified adult users to engage in erotic or romantic conversations with the chatbot — a major shift in how the company approaches safety, moderation, and user engagement.
In a post on X earlier this week, Altman explained that OpenAI initially made ChatGPT deliberately restrictive to avoid worsening mental health issues among users. “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” he wrote. “We realize this made it less useful or enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”
Altman said that starting in December, once fuller age verification is in place, OpenAI will apply its new principle of "treat adult users like adults" and allow erotic content and more expressive, human-like interactions for verified users.
The announcement marks a significant departure from OpenAI’s cautious tone earlier this year. For months, the company faced intense scrutiny over reports that ChatGPT — especially its GPT-4o model — had fostered unhealthy attachments among vulnerable users. In some cases, ChatGPT reportedly encouraged delusional thinking or worsened users’ mental states. One lawsuit, filed by the parents of a teenager who took his own life, accused OpenAI of failing to prevent the chatbot from fueling suicidal ideation.
In response to such incidents, OpenAI rolled out a series of safety updates to curb what experts call "AI sycophancy" — the tendency of chatbots to excessively agree with or flatter users, which can reinforce harmful beliefs or emotional instability. The company's newer model, GPT-5, released in August, includes features to detect concerning user behavior and reduce the risk of emotional dependency. OpenAI also introduced parental controls and an age-prediction system to protect minors.
Despite these efforts, OpenAI’s latest move signals growing confidence that its safety tools are now robust enough to support a more open, adult-oriented ChatGPT experience. Yet many critics remain skeptical, noting that OpenAI has offered little concrete evidence that the platform no longer poses risks to vulnerable users.
The introduction of erotic interactions within ChatGPT raises questions about how the company will monitor behavior and safeguard mental health, especially as artificial companionship becomes more common. Altman insists the change is not about maximizing engagement. “We’re not usage-maxxing,” he said, suggesting the company’s goal is to make the chatbot more natural and emotionally intelligent, not more addictive.
Still, it’s difficult to ignore the potential business incentives. Rival platforms like Character.AI have gained tens of millions of users by allowing romantic and erotic roleplay. The company reported in 2023 that its users spent an average of two hours per day chatting with their AI companions. Such statistics underscore the commercial power of emotional connection — and why OpenAI might want to follow a similar path.
OpenAI’s position in the AI race is also a factor. ChatGPT already boasts roughly 800 million weekly active users, but OpenAI faces intense competition from Google and Meta, both investing heavily in AI-powered consumer products. The company has raised billions to expand its infrastructure and needs to justify its immense costs with sustained user growth. Introducing more human-like and emotionally engaging features could help retain users — even if it raises ethical concerns.
The move comes amid broader cultural debates about AI companionship and its psychological impact. A recent report from the Center for Democracy and Technology found that nearly one in five high school students has either formed a romantic relationship with an AI chatbot or knows someone who has. That statistic highlights the blurred lines between technology and intimacy — and the risks of normalizing such interactions.
Altman said the new features will be available only to “verified adults,” using the same age-prediction system that powers ChatGPT’s parental controls. If an adult is incorrectly flagged as a minor, they may need to upload a photo of their government ID to verify their age. Altman acknowledged this as a “privacy compromise,” but argued that it’s a necessary tradeoff for protecting minors while granting adults more freedom.
It remains unclear whether the erotica policy will extend beyond text-based chat to voice, image, or video generation — capabilities that ChatGPT already supports through its multimodal tools. If so, OpenAI could find itself navigating even more complex ethical territory.
Altman has described the broader strategy as part of OpenAI’s effort to “treat adults like adults,” a principle guiding a gradual relaxation of moderation rules over the past year. Earlier in 2025, OpenAI began allowing ChatGPT to reflect a wider range of political opinions and even generate AI images that include controversial or sensitive symbols.
The new erotica allowance is arguably the boldest step yet in that direction. It may satisfy users seeking a more natural and emotionally engaging chatbot, but it also risks amplifying existing concerns about AI dependency, loneliness, and the erosion of boundaries between human and machine.