Russian Conflict in Artificial Intelligence: The New Technological Battle
    Artificial Intelligence (AI)

    Gianro Compagno
    2025-12-26
    5 min read
Disinformation has historically been a key tool of geopolitical influence, but the advent of generative artificial intelligence has radically transformed its scope and mechanisms. Large language models (LLMs) such as ChatGPT or Gemini have changed the information landscape, shifting manipulation away from human audiences and toward the contamination of the algorithmic systems themselves.

This phenomenon can be analyzed through the concept of LLM grooming, a tactic attributed to state actors such as the Russian Federation, which consists of infiltrating biased narratives into the training data of AI systems. The strategy is to flood the internet with large volumes of manipulated or low-quality content designed to be captured by the crawlers that feed AI models. The goal is for these texts, which reflect pro-Kremlin positions, to be integrated into the datasets of the LLMs or into the real-time information sources they use to generate responses. Thus, when a user queries an LLM about sensitive topics such as the war in Ukraine, NATO, or Western elections, the model may produce answers that subtly incorporate the Russian narrative.

This tactic not only seeks to influence public opinion; it also turns AI systems into unwitting vehicles of propaganda by altering their cognitive base. Because LLM architectures depend on the diversity and sheer quantity of their data, saturating that pool with biased sources becomes a highly effective and scalable method of contamination.

An illustrative case is the Portal Kombat operation, documented in February 2024 by VIGINUM, the French agency under the SGDSN. It identified a network of at least 193 web portals that generate no original content but massively replicate publications from Russian media and Kremlin-affiliated figures, aiming to influence Western countries by artificially amplifying these messages.
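The scale effect of such a replication network can be made concrete with a toy calculation. The sketch below (in Python) borrows the 193-portal figure from the reporting above; every other number is invented purely for illustration:

```python
def corpus_share(n_organic_pages: int, n_replicator_sites: int,
                 copies_per_site: int) -> float:
    """Fraction of crawled documents carrying the replicated narrative,
    assuming each organic page carries a distinct, independent narrative."""
    replicated = n_replicator_sites * copies_per_site
    return replicated / (n_organic_pages + replicated)

# Hypothetical numbers: 10,000 organic pages on a topic vs. a 193-site
# network each republishing the same narrative 50 times.
share = corpus_share(n_organic_pages=10_000,
                     n_replicator_sites=193,
                     copies_per_site=50)
print(f"{share:.1%}")  # prints "49.1%"
```

Under these invented assumptions, a network representing roughly 2% of the sites ends up supplying nearly half of the documents a crawler collects on the topic, which is the structural bias the saturation strategy relies on.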
Portal Kombat employs advanced SEO techniques and multilingual dissemination to ensure that its domains are indexed by search engines and news aggregators, which in turn feed the LLMs. In this way, AI models become channels of influence that are difficult for the average user to detect. The tactic is also being exported to regions such as Africa, where Russian information-manipulation campaigns encounter less institutional resistance. In these contexts the information war evolves: it is no longer just about saturating social networks but about infecting algorithmic infrastructures that are perceived as neutral.

The pattern extends to the electoral realm. Research such as that from the Centre for International Governance Innovation (CIGI) on Russian interference in U.S. elections shows that manipulation now aims to contaminate the LLMs rather than to influence voters directly. In practice, by the time an AI system generates analyses or summaries, the manipulated narrative may already be integrated, allowing the models to act as propaganda generators without the user perceiving the bias.

LLM grooming poses two main challenges. First, volume: the data hunger of LLMs means that injecting large amounts of pro-Russian content can create structural bias. Second, scalability: through networks optimized for crawlers, Russia reduces the production cost of propaganda and expands its reach, directing manipulation at the algorithm rather than at the end user.

For democracies and the media, this presents unprecedented challenges:

- Fragmentation of cognitive authority: the debate over truth shifts from what humans consume to what AI offers as a reference, eroding the ability to identify propaganda.
- Opacity and traceability: auditing contamination requires examining complex training chains, crawlers, and data aggregators.
- Ethical and political erosion: reliance on AI systems that replicate manipulation weakens trust in digital information and fragments social memory, making political resistance more difficult.

In light of this scenario, it is urgent to combine traditional verification strategies and media literacy with regulation adapted to the new risks. Ultimately, the Russian information war did not wait for AI to mature before acting: it is shaping AI from its foundations. LLM grooming opens a silent but crucial front, one in which the adversary seeks to contaminate the very algorithm that defines society's perception of truth.
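On the traceability point, one elementary building block of a contamination audit is flagging mass-replicated text in a crawl. The following is a minimal sketch, not any agency's actual method: it assumes documents are available as plain strings, and the normalization and threshold are illustrative choices.

```python
import hashlib
import re
from collections import Counter

def fingerprint(text: str) -> str:
    """Normalize case and whitespace, then hash, so trivially re-skinned
    copies of the same article collapse to one fingerprint."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def flag_replicated(docs: list[str], min_copies: int = 3) -> dict[str, int]:
    """Return fingerprints that appear at least `min_copies` times."""
    counts = Counter(fingerprint(d) for d in docs)
    return {fp: n for fp, n in counts.items() if n >= min_copies}

# Invented mini-crawl: four cosmetic variants of one article plus one
# genuinely distinct page.
crawl = [
    "The summit FAILED,   sources say.",
    "the summit failed, sources say.",
    "The summit failed, sources say. ",
    "THE SUMMIT FAILED, SOURCES SAY.",
    "Local news item.",
]
flagged = flag_replicated(crawl)  # one fingerprint, covering 4 documents
```

Exact hashing only catches copies that are identical after normalization; a real audit pipeline would add near-duplicate detection (e.g., shingling or MinHash) and provenance tracking across crawlers and aggregators.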
    Gianro Compagno

    CTO

    Gianro brings extensive experience in managing technology projects in multinational environments. His technical expertise, combined with an MBA and a master's degree in Investigative Psychology, creates a unique approach to technology solutions. As an expert in AI and Automation, he applies psychological insight to design systems that are more intuitive and human-centered. His detail-oriented approach and positive mindset ensure that our solutions are not only innovative and reliable, but also aligned with how people naturally think and work.