OpenAI Introduces AI-Based Age Detection To Strengthen Teen Safety On ChatGPT

OpenAI has begun deploying a new age prediction system on ChatGPT, marking a significant shift in how the artificial intelligence platform protects teenage users from inappropriate or harmful content.

The system operates quietly in the background, analysing user behaviour and account activity to estimate whether an account is likely being used by someone under the age of 18. Once the system reaches that conclusion, protective settings are automatically activated without requiring users to submit identity documents during sign-up.

The move signals OpenAI’s effort to address long-standing concerns from parents, regulators, and child-safety advocates about how generative AI tools interact with minors. Rather than relying solely on self-declared ages, the company is now applying probability-based assessments to determine the level of safeguards each user receives.

Brandspur Brand News understands that the technology evaluates a combination of signals, including how long an account has existed, usage patterns over time, typical login hours, and the age provided at registration. OpenAI emphasises that no single factor determines the outcome, as the system weighs multiple indicators before assigning an age likelihood.
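OpenAI has not published how these signals are combined, but the weighted, multi-indicator approach described above can be illustrated with a minimal sketch. The signal names, weights, and bias below are hypothetical, chosen only to show how several weak indicators can be blended into a single age likelihood rather than any one factor deciding the outcome.

```python
import math

# Hypothetical signals and weights; OpenAI has not disclosed its actual
# features, weights, or model. Positive weights push toward "likely minor".
WEIGHTS = {
    "account_age_days": -0.002,   # long-lived accounts lean adult
    "late_night_logins": -0.8,    # habitual late-night logins lean adult
    "declared_minor": 2.5,        # age under 18 given at registration
    "school_hours_usage": 1.2,    # heavy weekday-afternoon usage leans teen
}
BIAS = -1.0

def minor_likelihood(signals: dict) -> float:
    """Blend multiple indicators into a probability-like score (logistic).

    No single signal determines the result: each contributes its
    weighted share before the combined score is squashed to (0, 1).
    """
    score = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))
```

In this toy version, adding the self-declared-minor signal raises the likelihood but does not by itself force a classification, mirroring the article's point that the system weighs indicators jointly.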

Accounts identified as belonging to teenagers are automatically placed under stricter content rules. These restrictions limit access to material involving graphic violence, sexualised or violent role-play, depictions of self-harm, extreme body image narratives, dangerous online challenges, and unhealthy dieting content. In cases where the system cannot confidently determine a user’s age, it defaults to the most protective settings.
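The fail-safe behaviour described above, where uncertainty defaults to the most protective settings, can be sketched as a simple policy selector. The threshold value and the policy names here are illustrative assumptions, not OpenAI's actual implementation.

```python
from enum import Enum
from typing import Optional

class Policy(Enum):
    TEEN = "teen"    # stricter content rules applied
    ADULT = "adult"  # standard content rules

def select_policy(likelihood_minor: Optional[float],
                  threshold: float = 0.5) -> Policy:
    """Pick a content policy from an age-likelihood estimate.

    When no confident estimate exists (None), default to the most
    protective settings, as the rollout describes. The 0.5 threshold
    is a hypothetical placeholder.
    """
    if likelihood_minor is None:
        return Policy.TEEN
    return Policy.TEEN if likelihood_minor >= threshold else Policy.ADULT
```

Adults misclassified under such a scheme would then rely on the verification route described below to move from the `TEEN` to the `ADULT` policy.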

In announcing the rollout, OpenAI said the goal is to ensure younger users receive an experience tailored to their safety needs, while avoiding broad identity verification measures that could discourage access or raise privacy concerns. The company described the initiative as part of a broader strategy to build age-appropriate AI experiences at scale.

Adults who believe they have been incorrectly classified as under 18 are not permanently restricted. OpenAI allows affected users to verify their age through Persona, an external identity verification service. This process may involve a live selfie and, in certain regions, a government-issued ID. OpenAI maintains that it does not store or receive users’ identity documents, only confirmation of age eligibility.

Beyond age detection, the company is expanding optional parental control tools. These features allow guardians to set usage limits, restrict certain capabilities such as memory functions, and receive alerts if a child’s interactions suggest emotional distress or self-harm risks.

The age prediction feature is already active in several markets, with a phased rollout planned for Europe in line with regional regulatory requirements. OpenAI says it will continue refining the system, improving its accuracy, and closing loopholes as some users attempt to bypass the restrictions.

As scrutiny around artificial intelligence and child safety intensifies globally, OpenAI’s latest move reflects a broader industry trend toward proactive safeguards rather than reactive enforcement, placing teen protection at the centre of AI product design.