Inteligencia Artificial (IA)
Hidden Risks of Everyday Artificial Intelligence: How Safe Are Language Models?
Gianro Compagno
2026-02-02
5 min read
Reflect Before Acting: 4 Daily Keys to Protect Against Cyberattacks in the AI Era
The arrival of large language models (LLMs) has revolutionized the way we work, communicate, and develop new ideas. Tools like ChatGPT or Gemini, powered by advanced artificial intelligence, have transformed workflows, bringing efficiency and new perspectives. However, this advancement also brings new challenges and responsibilities regarding security.
LLMs, trained on vast amounts of data, not only generate surprisingly human-like texts but are also applied across multiple sectors. However, their widespread use can pose significant risks: from the leakage of sensitive information to the spread of false content, as well as non-compliance with regulations and a loss of trust in technology.
It is easy to forget that, although LLMs are convincing, they can make mistakes. The more we rely on them, the harder it becomes to question their responses. Therefore, it is essential to maintain a critical attitude and not assume that everything they generate is correct or safe.
Traditional cybersecurity was not designed for the challenges posed by LLMs. These models function as "black boxes," generating unpredictable and difficult-to-audit responses. This complicates the detection of threats such as instruction manipulation, data poisoning, or the exploitation of vulnerabilities through insecure APIs and plugins.
Cybercriminals exploit these weaknesses, using techniques such as overloading models with repetitive commands or accessing training data. However, the most common method remains large-scale phishing, as LLMs facilitate the creation of fraudulent messages that mimic legitimate communications to steal credentials or cause data breaches.
The integration of AI into everyday tools like Google Workspace or Microsoft 365 makes data protection and regulatory compliance more crucial than ever. Security must evolve at the pace of technology, identifying and correcting potential blind spots.
These risks are not hypothetical. For example, Samsung engineers introduced confidential information into ChatGPT for routine tasks, raising concerns about potential leaks of trade secrets. In response, the company limited the use of the tool and developed internal solutions. Another case is DeepSeek AI, whose model stores data on servers accessible by the Chinese government, raising privacy and security concerns.
To minimize risks, it is essential to share only necessary information and carefully review the responses generated by LLMs. On a technical level, it is recommended to implement access controls, customize security restrictions, and conduct periodic audits focused on AI-specific risks.
Security strategies must adapt, incorporating intelligent mechanisms that authenticate users, prevent unauthorized access, and continuously evaluate interactions. Only then can LLMs continue to drive innovation safely and reliably.
Source: 20minutos.es