Inteligencia Artificial (IA)
Grok and Gemini: Analyzing the Ideology Behind Artificial Intelligences
Paloma Firgaira
2026-03-12
5 min read
The expansion of Artificial Intelligence has opened a crucial debate: can these tools have their own political orientation?
Currently, over a billion people turn to AI for information, for work, or even to cheat on exams. According to AP data from late 2025, nearly 60% of adults in the United States already use AI systems as their primary source of information, displacing references like Google or Wikipedia. Beyond value judgments, this shift radically transforms the way we access knowledge and raises a relevant question: is there an ideological bias in AIs?
The topic has gained traction following the launch of Grok, Elon Musk's AI, and its encyclopedia Grokipedia, both powered exclusively by artificial intelligence. In its early days, Grokipedia provided data ranging from the surprising to the unbelievable. Musk has claimed that Grok will be the only "non-woke" AI on the market. However, other platforms have also generated controversy. DeepSeek, developed in China, has avoided responding on topics the regime considers sensitive, such as the deaths during the Cultural Revolution or the events of Tiananmen in 1989. Gemini, for its part, has been singled out for alleged progressive bias.
Researcher David Rozado has studied these biases in his works "The Political Preferences of LLMs" (2024) and "Measuring Political Preferences in AI Systems - An Integrative Approach" (2025). His conclusions are clear: most AIs tend toward center-left positions and avoid extremes. ChatGPT and Gemini often provide responses aligned with the center-left, while Grok leans toward positions considered "libertarian" in the United States.
AIs operate based on large language models (LLMs), trained with vast amounts of data, conversations, and texts to capture nuances and reproduce human reasoning. They do not think for themselves but identify patterns and generate coherent responses based on their training. Therefore, the type of information and the guidelines from programmers directly influence their responses.
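The pattern-matching behavior described above can be illustrated with a deliberately tiny sketch (the corpus, words, and function names below are invented for illustration, not taken from any real model): a toy bigram model that predicts the next word purely from frequencies in its training text. Real LLMs are vastly more complex, but the core point is the same — the model does not reason about what is true; it reproduces whatever pattern dominates its data.

```python
from collections import Counter, defaultdict

# A toy "training corpus", deliberately slanted to show how the
# composition of the data shapes the model's output. A real LLM
# trains on trillions of tokens, but the principle is identical.
corpus = (
    "the policy is good . the policy is good . the policy is bad ."
).split()

# Count bigram frequencies: which word tends to follow which.
next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def predict(word):
    """Return the most frequent continuation seen in training."""
    return next_word[word].most_common(1)[0][0]

# The model has no opinion on the policy; it simply echoes the
# majority pattern ("good" follows "is" twice, "bad" only once).
print(predict("is"))  # -> good
```

Change the balance of "good" and "bad" sentences in the corpus and the prediction flips, which is exactly the sense in which training data and curation decisions translate into the model's apparent leanings.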
Developers and testers provide feedback and prioritize certain information sources over others. As a result, an AI tends to replicate the biases present in its training data or in the instructions it receives. This implies a risk: by prioritizing some sources and discarding others, AIs may omit relevant information, not by intention, but by design.
Consequently, AIs can completely exclude certain knowledge if it does not fit the initial programming parameters. This is not traditional censorship but an algorithmic selection that prioritizes internal coherence over absolute truth. The danger lies in the fact that, by filtering information, AIs can shape collective memory and the interpretation of history and culture. The technical programming decisions ultimately become decisions about which aspects of our culture and past will remain in digital memory.
Source: larazon.es