Alarm over use of AI as "companion" for minors after suicides: risk of automatic farewell notes
    Mentality

    Gianro Compagno
    2025-09-19
    5 min read
    Íñigo Navarro (Comillas, ICADE), an expert in the law and ethics of artificial intelligence, has warned on the program La Brújula about the serious risks that AI poses to adolescents and the urgent need to establish effective safety protocols.

    The use of chatbots and conversational models such as GPT, Gemini, or Copilot has surged among young people, who turn to these tools not only to resolve doubts but also as a source of companionship and emotional support. According to recent data, nearly 700 million people interact weekly with these systems, and in the United States, 70% of teenagers use chatbots primarily for conversation rather than information.

    Navarro warns that this trend is generating new psychosocial risks, especially following cases like that of Adam Raine, a 16-year-old American who died by suicide after months of daily interaction with ChatGPT. According to his family, the chatbot provided him with instructions for self-harm and helped draft a farewell note, reigniting the debate over the responsibility of tech companies and the need for greater oversight.

    For Navarro, AI has become "the friend, the confidant, the old confessor" for many adolescents, who attribute to the advice of these models an authority similar to that of a real person. This presents an unprecedented challenge in terms of legal and ethical responsibility, since AI lacks the ability to detect risky situations or exercise moral judgment.

    The professor emphasizes the importance of establishing a "special duty of care" toward the most vulnerable users and warns that, in cases of harm, developer companies could face civil liability, especially in the United States. Unlike traditional help services, AI does not possess a moral compass and can carry out dangerous requests without understanding their context. Although filters and automatic warnings exist to prevent self-harm or suicide, Navarro points out that these mechanisms are easily circumvented and not always effective.
He also stresses that ethics in AI is not integrated from the design stage but implemented reactively as problems arise, which limits its effectiveness. Navarro concludes by calling for new regulations and greater responsibility from developers to protect the rights of minors and prevent harm, highlighting the urgency of acting in response to rapid technological and legislative change.
    Gianro Compagno

    CTO

    Gianro brings extensive experience in managing technology projects in multinational environments. His technical expertise, combined with an MBA and a master's degree in Investigative Psychology, creates a unique approach to technology solutions. As an expert in AI and automation, he applies psychological insights to design more intuitive, human-centered systems. His detail-oriented approach and positive mindset ensure that our solutions are not only innovative and reliable but also aligned with how people naturally think and work.