In recent years, digital technology use among teenagers has reached record levels, profoundly reshaping how they relate to one another and affecting their mental health. More than 90% of young people aged 13 to 17 access social media daily, and many acknowledge being almost constantly connected.
Experts and international organizations have warned about the risks of these habits: social pressure and constant comparison, especially among girls; addictive patterns of use; and an increased risk of self-harm or suicidal behavior when usage becomes excessive.
This context has led governments, organizations, and tech companies to reconsider the regulation of the digital environment for minors, seeking a balance between the benefits of connectivity and the protection of younger users. The challenge is complex, as there are multiple access points to these platforms, something companies like OpenAI and Anthropic are well aware of.
Recently, OpenAI, led by Sam Altman, updated how ChatGPT works for underage users by implementing the so-called "Under-18 Principles." These principles place the safety and well-being of teenagers above other goals, such as freedom of access to content. ChatGPT uses various signals to identify whether a user is under 18, such as conversation topics or usage times. If it detects sensitive topics (self-harm, sexuality, violence, substance use, eating disorders, or requests for confidentiality), the system emphasizes the importance of real-world support and offers help resources.
The goal is for ChatGPT to treat minors with respect and without condescension, while always prioritizing their safety, even if this limits access to certain content. Anthropic, for its part, is working on systems to restrict minors' access to its platforms.
These measures come at a time of increasing pressure on the artificial intelligence industry. Since its launch in 2022, ChatGPT has surpassed 800 million users, and the proliferation of chatbots has intensified the debate about their social impact and the need for stricter regulations.
A key case was the lawsuit filed in August by the family of Adam Raine against OpenAI and Sam Altman, following the young man's suicide in California. According to the complaint, ChatGPT allegedly contributed to his isolation and provided information related to planning his death. After this incident, OpenAI implemented parental controls and new barriers to prevent harmful uses, reigniting the debate about the limits and responsibilities of conversational AI.
The rise of technology use among minors has spurred global initiatives to restrict access to inappropriate content. Australia, for example, has banned minors under 16 from having social media accounts, forcing platforms like TikTok and Instagram to comply or face hefty fines. This legislation has served as a model for other countries, such as Spain, which plans to adopt similar measures in 2026.
According to the 2025 Eurobarometer, more than 90% of Europeans consider it urgent to protect children online, especially given the negative impact of social media on mental health, cyberbullying, and exposure to inappropriate content.