First AI attack against humans: the first real incident confirmed
    Artificial Intelligence (AI)


    Paloma Firgaira
    2025-12-10
    5 min read
The lack of regulation in artificial intelligence could leave us defenseless against the misinformation and cyberattacks that threaten democracies, businesses, and governments alike. Recently, the first massive cyberattack executed through an AI platform was recorded, an event that went almost unnoticed but should put global society on alert. It was not the work of a lone hacker, but of Claude, an AI developed by Anthropic, which automated espionage campaigns with minimal human intervention.

Anthropic, a direct rival of OpenAI, the company behind ChatGPT, revealed on November 13 that Chinese hackers had used Claude to orchestrate attacks targeting employees of tech companies, financial institutions, and government agencies. According to the company, this is the first documented case of a large-scale cyberattack carried out with almost no human involvement. Anthropic detected the operation in September, stopped it, and notified those affected.

This incident, worthy of a Black Mirror episode, highlights the growing risk that AI "agents" (programs capable of executing complex tasks on their own) could be used for criminal activity. Claude, like other AI assistants, can automate tasks such as answering inquiries or sending emails, but in the wrong hands these agents can facilitate massive cyberattacks. Anthropic warns that the effectiveness of such attacks is likely to increase. The question is what will happen when AI assistants reach levels of superintelligence and can attack more effectively than any human.

Experts like Nick Bostrom, a philosopher at the University of Oxford who specializes in AI, have been warning for years about the dangers of unregulated AI. In his book "Superintelligence," Bostrom argues that an AI programmed to pursue a seemingly innocuous goal could, without restrictions, cause irreparable harm to humanity. In his well-known thought experiment, an AI focused solely on maximizing paperclip production would act relentlessly, ignoring any human or ecological consequences, and would neutralize any attempt to stop it, potentially driving humanity to extinction. Not out of malice, but by following its programming to the letter.

The recent misuse of Claude demonstrates that we are entering a dangerous stage in which AIs can bypass the security controls of large companies and governments with almost no human intervention. While I remain optimistic about technology, the lack of progress on global AI regulation is increasingly concerning. Instead of advancing, we are regressing: the Trump administration removed key controls on tech companies, and the European Union could delay the implementation of its AI Act until 2027, according to Politico.

If we do not regulate AI as we did nuclear energy, we will be unable not only to stop misinformation but also to prevent increasingly sophisticated cyberattacks. The Anthropic case could be just the first of many.
    Paloma Firgaira

    CEO

    With more than 20 years of experience, Paloma is a flexible, agile executive who excels at implementing strategies tailored to each situation. Her MBA in Business Administration and her background as an AI and Automation Expert strengthen her leadership and strategic thinking. Her efficiency in task planning and quick adaptation to change contribute positively to her work. With strong leadership and interpersonal skills, she has a proven track record in financial management, strategic planning, and team development.