Artificial Intelligence (AI)
'ChatGPT Health' Debuts: Spanish Doctors Warn About Risks and Precautions
Paloma Firgaira
2026-01-17
5 min read
Every week, millions of people turn to ChatGPT with health-related questions. Among them is P., a professional in his 50s to 60s and an athlete in good health, who uses artificial intelligence for guidance on exercise routines, managing work stress, and home remedies such as digestive infusions. “It doesn’t replace the doctor, but it is a complementary source for psychology, motivation, and nutrition. I use it occasionally,” he says.
P. explains that when consulting OpenAI's AI, he describes his situation and context clearly in order to get precise answers. For example, in a toxic work environment, he lays out the dynamics and how they affect him emotionally, seeking strategies to cope. Why do so many users turn to ChatGPT? “For accessibility, privacy, and clarity of information,” P. responds, dismissing economic reasons and saying he has never felt at risk using the tool.
Pedro Moreno, a psychiatrist and member of the Spanish Society of Clinical Psychiatry, highlights the value of AI as an assistant available around the clock. “It can flag concerning symptoms and help identify the need for professional attention, especially given long waiting lists in Spain,” he notes.
Recently, OpenAI launched ChatGPT Health, a health-focused version with an open waiting list in Spain and support in Spanish. However, experts like Fernando Eiras from the Spanish Society of Intensive Medicine's Technology Assessment Working Group warn about the risks of blindly trusting AI. “It can provide incorrect information, as happened in a simulation about insulin doses. Medical supervision is essential,” he emphasizes.
The rise of AI has led tech companies to market their tools directly to doctors, going beyond traditional pharmaceutical representatives. Helena Bascuñana, president of the Spanish Society of Rehabilitation, warns of the dangers of using AI as a substitute for medical care without guarantees or supervision. “It may fail to contextualize adequately, exhibit biases, or violate privacy,” she explains. María García Gil, a pharmacist, adds that medical authority could be undermined if patients treat AI as a second opinion, although, used correctly, it can enrich consultations.
Despite ChatGPT Health's promises of security and privacy, such as encryption and temporary chats, the tool falls outside European health regulation. “It is not considered a medical product and does not guarantee compliance with European legislation on AI and data protection,” warns Eiras. The lack of traceability and transparency in its responses is another critical point, according to García Gil, who warns of potential misuse of personal data in the future.
The use of AI in health is particularly common in areas such as primary care, psychiatry, psychology, and rehabilitation, and for everyday complaints like pain, insomnia, anxiety, or digestive symptoms. Experts agree on the need to train healthcare professionals to capture the benefits and avoid the risks of AI. “Education is key for safe use,” concludes Moreno. P., for his part, is clear: “I would never leave a serious matter in the hands of ChatGPT.” The general recommendation is to always act with caution.
Source: diariodenavarra.es