OpenAI is doubling down on safeguarding minors in ChatGPT. The company announced a new “age prediction” feature designed to identify under-18 users and automatically apply content restrictions to keep them away from potentially harmful conversations.
AI’s influence on teens has been under the microscope for years. ChatGPT, like other chatbots, has faced criticism over its potential impact on teen mental health and over minors’ exposure to inappropriate content. Several teen suicides have reportedly been linked to interactions with the chatbot, and last year a bug allowed ChatGPT to generate erotica for minors, forcing OpenAI to respond quickly.
The new feature is part of a broader effort to protect young users while still giving adults full access to the platform.
OpenAI says the age prediction system analyzes a combination of behavioral and account-level signals, including:
- The user’s stated age
- How long the account has existed
- The times of day the account is typically active
If the algorithm flags an account as under 18, content filters are automatically applied. These filters already restrict discussions of sex, violence, and other sensitive topics.
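OpenAI hasn’t published how its model weighs these signals, but the overall flow — combine account signals into an under-18 prediction, then apply content filters to flagged accounts — can be sketched as a toy rule-based stand-in. Everything here (the signal fields, thresholds, and topic list) is a hypothetical illustration, not OpenAI’s actual system:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int                  # age the user entered at signup
    account_age_days: int            # how long the account has existed
    typical_active_hours: list[int]  # hours of day (0-23) the account is usually active

def predict_is_minor(signals: AccountSignals) -> bool:
    """Combine signals into a single under-18 flag.

    A real system would use a trained model; these rules are illustrative only.
    """
    # Stated age is the most direct signal: take it at face value if under 18.
    if signals.stated_age < 18:
        return True
    # Hypothetical heuristic: brand-new accounts active mostly in
    # after-school hours (15:00-22:00) lean toward the under-18 bucket.
    after_school = sum(1 for h in signals.typical_active_hours if 15 <= h <= 22)
    if signals.account_age_days < 30 and after_school > len(signals.typical_active_hours) / 2:
        return True
    return False

# Placeholder topic list standing in for the restricted categories
# the article mentions (sex, violence, other sensitive topics).
RESTRICTED_TOPICS = {"sex", "violence"}

def filters_for(signals: AccountSignals) -> set[str]:
    """Return the set of topic filters to enforce for this account."""
    return set(RESTRICTED_TOPICS) if predict_is_minor(signals) else set()
```

The key design point the article describes is that the filters attach automatically to the prediction: a flagged account gets the restricted experience without any manual review step, and only the reverification path (below) can lift it.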
For users mistakenly classified as underage, there’s a reverification option: they can submit a selfie through OpenAI’s identity-verification partner Persona to restore full adult access.
- Safer AI experiences for minors: reduces exposure to harmful content without banning underage users outright.
- Responsible AI practices: shows OpenAI acknowledging criticism and taking concrete steps to mitigate risks.
- Balancing privacy and protection: the system raises questions about automated age detection and the data OpenAI needs to collect to make it work.
OpenAI isn’t stopping teens from using ChatGPT — but it’s making sure the conversations they have are age-appropriate, using AI to enforce the rules.