Artificial Intelligence (AI)
Risks of AI in Health: How Artificial Intelligence Can Compromise Your Clinical Data
Paloma Firgaira
2026-01-18
5 min read
Consulting an artificial intelligence about symptoms or medical results has become a common practice for millions of people. The promise of quick and understandable answers is appealing, but it hides significant risks. Sharing clinical data with these platforms can lead to medical errors and a deeper loss of privacy than many realize.
Today, medical consultations often begin long before stepping into a clinic. More and more users enter symptoms, upload test results, or seek second opinions from AI systems before contacting a professional. The immediacy and sense of control these tools offer can be addictive.
However, a key question arises: what happens to all that sensitive information we provide? Who stores it, processes it, and for what purposes? The rise of solutions like ChatGPT Health has reignited the debate about the intersection of technology, health, and privacy.
While big tech companies promise efficiency and clarity, experts in medicine, law, and cybersecurity warn about the risks of entrusting clinical data to systems that are not always prepared to handle it securely.
Currently, over 230 million people use ChatGPT weekly, and a growing share of them use it for health inquiries. OpenAI has launched ChatGPT Health, which lets users upload medical reports, test results, or data from wellness apps; its stated aim is to help users “better understand” clinical information, not to diagnose.
The problem is that the line between assistance and medical interpretation is blurry. Language models do not reason like a doctor, nor do they verify the accuracy of their responses; they simply generate plausible text. Several studies indicate that these systems get relevant medical recommendations wrong in up to one in five cases. In healthcare, that margin of error can have serious consequences: incorrect decisions, delayed care, or inappropriate treatments.
Beyond the quality of the responses, the main concern is data protection. Medical information is especially sensitive and enjoys strong legal protection in Europe, where the GDPR treats health data as a special category. Sharing it outside healthcare systems strips away the safeguards that hospitals and health centers provide, such as access controls and audit trails.
Data protection specialists warn that by entering their clinical history into private platforms, users lose real control over that information. Even when companies claim the data is encrypted or not used to train models, the risk of leaks is real, as cyberattacks on healthcare databases have already shown.
There is also concern about the commercial use of data. Some experts warn that this information can be valuable to insurers or third parties interested in profiling risks, adjusting prices, or making automated decisions. Often, users are unaware of the extent of what they are giving up.
Generative AI can not only make mistakes but do so with great conviction. Its responses are often confident and well-written, which can lead to excessive trust. In health, that trust can be dangerous, especially with complex symptoms, rare diseases, or mental health issues.
Recent research from MIT has shown that some models trained on medical records can "memorize" patient data even after anonymization, which could expose private information. Patients with rare diseases are especially vulnerable because they are easier to re-identify.
Most experts agree that AI can be an ally in healthcare, but within clear limits: as support for professionals, not as a substitute or a clinical confidant. Tools that reduce bureaucracy, help interpret results, or prioritize cases can improve care; the problem arises when delegating medical decisions and personal data to platforms outside the healthcare system becomes normalized.
Technology is advancing rapidly, and the temptation to use it for everything is strong. When it comes to our health, though, it is worth asking whether immediate convenience justifies the long-term risks.