Increase in Disease Inquiries on ChatGPT: Experts Warn About AI Reliability
Gianro Compagno
2026-02-04
5 min read
OpenAI launched ChatGPT Health on January 7, a specialized version of its well-known chatbot focused on health topics, from diet and allergen inquiries to symptoms and diseases. According to the company, health is one of the most common topics in ChatGPT, with 230 million weekly queries.
However, the arrival of this tool has raised concerns in the healthcare sector. While AI can be useful for patients seeking information or advice, its use carries significant risks. Various studies have assessed the accuracy of medical diagnoses produced by artificial intelligence, with highly variable results: accuracy barely reaches 50% in some cases and climbs to around 80% in others. In the hands of professionals, accuracy hovers around 75%, but it drops dramatically when the tool is used by untrained users, who may not detect serious errors.
Juan José Beúnza, an internist and director of the Machine Learning Health Group at the European University of Madrid, warns: “If you don’t have knowledge about what you’re consulting, you can’t exercise critical judgment or identify the AI’s hallucinations. Sometimes they are minor errors, but at other times they can lead to incorrect diagnoses or overlook serious issues.”
For now, ChatGPT Health is not available in Europe. The model is designed to integrate with services like Apple Health, Google Fit, or MyFitnessPal, using personal data to contextualize health, diet, or physical activity responses. It can even analyze previous user conversations to provide more personalized answers. OpenAI aims to make interactions more relevant, considering factors like relocations or lifestyle changes.
Beúnza acknowledges that this integration could improve response quality but warns about the risks of sharing medical information with AI: “There are no guarantees that the tool meets minimum security standards for storing clinical data. Each user is free to share what they want, but they should ask themselves if they really want to expose their privacy to an algorithm.”
Data management by platforms like ChatGPT, Gemini, or Copilot has raised concerns among cybersecurity experts. In Italy, OpenAI was fined 15 million euros in 2024 for the improper use of information obtained in conversations. “With this background, there’s no reason to fully trust either the tool’s operation or the company’s reliability regarding our data usage,” adds Beúnza.
Although OpenAI clarifies that ChatGPT Health is designed to complement, not replace, medical care, the reality is that diagnostic queries are common. Sandra N., 37, recounts: “Sometimes I get a strange bump or rash, and instead of panicking, I consult ChatGPT to calm myself down.” While she is aware that AI cannot provide a definitive diagnosis, she prefers to get a possible explanation rather than wait for a medical appointment.
Some users employ ChatGPT to prepare questions before visiting the doctor, a less problematic use. However, the slowness of the Spanish healthcare system leads many to seek quick answers from AI. Juan Pablo, 38, shares: “I woke up with pain while urinating, and the medical appointment was in 12 days. I consulted ChatGPT and went to the pharmacy with that information. It wasn’t ideal, but I had no other option.”
The search for reassurance is understandable, especially when access to healthcare is limited. But Beúnza warns: “We are trying to fix an imperfect system with an imperfect tool. OpenAI is aware of this problem and takes advantage of it, but its goal is to create business, not to resolve patients' uncertainty.”
The real issue lies in using ChatGPT Health for diagnoses. While it can be useful for consulting nutritional information or interpreting tests, its diagnostic function is the most problematic. Limiting AI responses might seem like a solution, but models are designed to satisfy users, not necessarily to tell the truth. Restricting its functions would go against the goal of increasing the user base.
For now, the implementation of ChatGPT Health in Spain and the rest of Europe faces significant legal obstacles. The EU General Data Protection Regulation imposes strict controls on the use and transfer of health data, requirements the application does not yet meet.