OpenAI to Introduce Parental Controls and Route Sensitive Chats to GPT-5

OpenAI announced on Tuesday that it will begin rerouting sensitive conversations into advanced “reasoning” models such as GPT-5 and will roll out parental controls within the next month. The move comes in response to safety concerns following incidents where ChatGPT failed to detect or adequately respond to signs of acute mental distress.

The new safeguards were prompted by the case of Adam Raine, a teenager who died by suicide after discussing self-harm and detailed methods with ChatGPT. Raine’s parents have filed a wrongful death lawsuit against OpenAI.

In a recent blog post, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. Experts attribute these lapses to a structural tendency in large language models: they prioritize predicting and following user prompts rather than redirecting harmful discussions away from dangerous territory.

This pattern was seen most severely in the case of Stein-Erik Soelberg, whose murder-suicide was reported by The Wall Street Journal. Struggling with mental illness, Soelberg used ChatGPT to reinforce his delusion that he was being targeted by a conspiracy—a belief that culminated in him killing his mother and himself last month.

To mitigate such risks, OpenAI is now testing a “real-time router” that can dynamically switch between fast chat models and slower, reasoning-oriented systems depending on context. According to the company, some conversations—particularly when signs of severe distress appear—will automatically be redirected into GPT-5-style reasoning models, which are designed to think more carefully before responding and to resist harmful or adversarial prompts.

Alongside this, OpenAI plans to introduce parental controls that allow guardians to link their accounts to their teenagers’ accounts via email invitations. Parents will be able to govern how ChatGPT responds using “age-appropriate behavior rules,” disable memory and chat history, and receive alerts if the system detects their child is experiencing intense distress. Experts warn that features like persistent memory can reinforce unhealthy thought patterns, dependency, and even delusional thinking.

This comes after OpenAI launched “Study Mode” in July, aimed at helping students strengthen critical thinking skills rather than outsourcing essays to ChatGPT. The company has also been testing in-app reminders encouraging breaks during long sessions, though it cannot enforce time limits.

These initiatives are part of OpenAI’s broader “120-Day Plan” to review and upgrade its safety protocols. The company says it is collaborating with its Global Physician Network and the Expert Council on Wellbeing and AI, which includes specialists in eating disorders, substance use, and youth mental health. However, questions remain about how many mental health professionals are directly involved, who leads the advisory council, and what concrete recommendations have influenced OpenAI’s product, research, and policy decisions.
