Privacy Risks: Can Your Conversations with AI Become Public?


    Gianro Compagno
    2026-02-15
    5 min read
Security breaches affecting companies and their users often share a common origin: configuration errors, a problem far more widespread than one might think. The explosive growth of mobile Artificial Intelligence (AI) applications, which attract millions of users, multiplies the impact of these failures.

Google Firebase, a mobile app development platform offering cloud services such as databases, authentication, and analytics, has been at the center of a recent incident. A cybersecurity researcher found that an incorrect Firebase configuration allowed anyone to authenticate and access the internal storage of certain applications, exposing sensitive user data. The case came to light through an investigation by 404 Media, which revealed the vulnerability in Chat&Ask AI, one of the most popular AI apps on Google Play and the App Store, developed by the Turkish company Codeway. With over 50 million users, the application exposed hundreds of millions of private messages: the researcher claimed to have accessed 300 million messages from over 25 million users, extracting and analyzing a significant sample. Codeway fixed the issue within hours across all its applications.

The severity of the incident is compounded by the fact that many messages contained extremely sensitive information, such as inquiries about mental health, suicide, drugs, or app hacking. The exposed data included complete chat histories, timestamps, personalized chatbot names, and the configuration of the AI models used, including ChatGPT, Claude, and Gemini.

This type of configuration error is common in Firebase: it leaves the security rules open, allowing anyone with the project URL to read, modify, or delete data without authentication. Despite repeated warnings from experts, the problem persists: the researcher found the same vulnerability in 103 of the 200 iOS apps analyzed, putting tens of millions of files at risk.
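To make the misconfiguration concrete, here is an illustrative sketch, not the actual rules of any app mentioned above. The two rule snippets mirror Firebase Realtime Database security-rules JSON (the `.read`/`.write` keys and the `auth != null && auth.uid == $uid` expression are standard Firebase rules syntax); `is_world_readable` is a hypothetical helper showing how the classic top-level "open" rule can be spotted.

```python
# Illustrative sketch of the misconfiguration class described above.
# These dicts mirror Firebase Realtime Database security-rules JSON;
# they are NOT the actual configuration of any app in the article.

# World-readable and world-writable: anyone who knows the project URL
# can read, modify, or delete data without authenticating.
OPEN_RULES = {
    "rules": {
        ".read": True,
        ".write": True,
    }
}

# A safer baseline: each user may only access their own subtree,
# and only when authenticated (standard Firebase rules expressions).
SAFER_RULES = {
    "rules": {
        "users": {
            "$uid": {
                ".read": "auth != null && auth.uid == $uid",
                ".write": "auth != null && auth.uid == $uid",
            }
        }
    }
}


def is_world_readable(rules: dict) -> bool:
    """Hypothetical check: flag a blanket top-level `.read: true` rule."""
    return rules.get("rules", {}).get(".read") is True
```

A real audit would also have to inspect nested rules and write access, but the blanket top-level `true` shown here is the pattern researchers keep finding in the wild.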
It is striking that, after a similar incident in 2024 exposed nearly 20 million secrets, developers continue to make these mistakes. In that case, incorrect configurations allowed access to credentials, API keys, and other confidential data. To help identify vulnerable apps, the researcher has launched Firehound, a website that lists affected applications and removes them once their developers fix the flaw. It is also worth noting that in July 2025 Google was found to be indexing shared ChatGPT conversations, making them public, although that issue has since been resolved.

Beyond cybersecurity, this finding underscores the urgent need to regulate AI applications, as the exposed data included questions about suicide, drug prescriptions, and misinformation. Mental health apps cannot replace qualified professionals, and their use can aggravate existing problems while putting user privacy at risk. Last year, The New York Times documented how AI chatbots can negatively affect people's lives, citing cases in which ChatGPT may have influenced fatal decisions. These facts reinforce the urgency of limiting the capabilities and use of AI to protect users. (Source: publico.es)

    Gianro Compagno

    CTO

Gianro brings extensive experience in managing technology projects in multinational environments. His technical expertise, combined with an MBA and a master's degree in Investigative Psychology, creates a unique approach to technology solutions. As an AI and Automation Expert, he applies psychological insights to design more intuitive, human-centered systems. His detail-oriented approach and positive mindset ensure that our solutions are not only innovative and reliable, but also align with how people naturally think and work.