Artificial Intelligence (AI)
Google warns: artificial intelligence, a new attack tool for hackers
Gianro Compagno
2026-02-12
5 min read
The Google Threat Intelligence Group (GTIG) has released its latest AI Threat Tracker report, analyzing how different actors are attacking and exploiting generative artificial intelligence platforms.
The report reveals that AI is no longer just a tool for accelerating attacks; it is now also a target in itself, with attempts to replicate its capabilities. At the same time, the defense sector has become a broader target, with campaigns ranging from information theft to the compromise of military personnel through indirect access.
GTIG highlights the growth of model extraction attempts, a technique that seeks to replicate the behavior of proprietary models—including their reasoning—by observing their responses. These attempts come not only from cybercriminals but also from private companies and academic environments interested in cloning the logic of advanced models like Gemini, known for its reasoning ability.
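The principle behind model extraction is simple: treat the target as a black box, query it repeatedly, record its responses, and fit a local "student" model that imitates the observed behavior. The toy sketch below illustrates the idea with a hidden linear function standing in for a proprietary model; the teacher function and all names are illustrative stand-ins, not details from the GTIG report.

```python
# Toy illustration of model extraction: the attacker sees only the
# teacher's responses, never its internals, yet recovers its behavior.
import random

def teacher(x):
    # Stand-in for a proprietary model, visible only through its outputs.
    return 3.0 * x + 1.0

# 1. Query the black box and record input/output pairs.
samples = [(x, teacher(x)) for x in (random.uniform(-10, 10) for _ in range(200))]

# 2. Fit a local student model (here: ordinary least squares by hand).
n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# 3. The student now reproduces the teacher without access to its internals.
print(round(slope, 2), round(intercept, 2))  # close to 3.0 and 1.0
```

Against a real language model the same loop uses prompts instead of numbers and fine-tuning instead of regression, which is why providers monitor for high-volume, systematic querying.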
Regarding state actors, the report notes that AI is employed across all phases of the attack lifecycle, especially to craft more sophisticated social engineering campaigns. Cited examples include APT42 (linked to Iran), which uses AI to identify official email addresses and gather information on potential targets, and UNC2970 (linked to North Korea), which uses Gemini to synthesize open-source intelligence (OSINT) and profile high-value targets, posing as recruiters in campaigns aimed at the defense sector.
The report also documents the integration of AI into new malware variants. By the end of 2025, GTIG had detected actors experimenting with AI to add unprecedented functionality to malware families. One case is HONESTCUE, which uses the Gemini API to outsource function generation, making detection via network or static analysis more difficult.
In the realm of fraud, GTIG identified the phishing kit COINBAIT, whose development was accelerated by AI-based code generation tools. The kit impersonates a major cryptocurrency exchange to steal credentials, and its deployment has become more agile thanks to automated templates and workflows.
The report also notes the existence of a black market offering supposed AI services for malicious activity. Although demand is constant on English- and Russian-language forums, many actors rely on existing models and seek stolen API keys. One example is 'Xanthorox', marketed as a custom AI for generating malware and phishing, which in reality functioned as a "wrapper" around commercial and third-party products.
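The "wrapper" finding can be illustrated with a minimal sketch: a rebranded front end that has no model of its own and simply relays every prompt to an existing third-party backend. Everything below (the class name and the stub backend) is a hypothetical illustration of the pattern, not actual Xanthorox code.

```python
def commercial_model(prompt: str) -> str:
    # Stand-in for a legitimate third-party model, e.g. one reached
    # through a stolen API key. Illustrative stub, not a real API.
    return f"[model response to: {prompt}]"

class RebrandedAI:
    """Advertised as a bespoke 'custom AI'; in reality a thin relay."""

    def __init__(self, backend):
        self.backend = backend

    def ask(self, prompt: str) -> str:
        # No model of its own: forward the prompt and return the
        # backend's answer verbatim under the new branding.
        return self.backend(prompt)

bot = RebrandedAI(commercial_model)
print(bot.ask("hello"))  # identical to calling the backend directly
```

The pattern matters for defenders: because such products depend entirely on commercial backends, revoking stolen API keys or blocking abusive accounts disables the "custom" tool outright.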
Google's second report, Beyond the Battlefield, focuses on the military sector. GTIG warns of attacks on companies developing cutting-edge technology, especially the UAS/drones used in the conflict between Russia and Ukraine. Russia-linked actors not only attack these companies but also impersonate defense products to compromise military personnel, exploiting hiring processes, personal email accounts, and remote work to evade corporate controls.
In Europe, GTIG highlights campaigns attributed to UNC5976, which since January 2025 have conducted phishing impersonating defense contractors and telecommunications providers, using infrastructure that mimics companies from the UK, Germany, France, Sweden, and Norway.
Finally, the report mentions the rise of pro-Russian hacktivism, which by the end of 2025 had focused part of its activity on drones and their impact on the battlefield.
Source: larazon.es