Artificial Intelligence (AI)
Silicon Valley Challenges the Pentagon in the Battle for Control of Artificial Intelligence
Gianro Compagno
2026-03-15
5 min read
In early March, Anthropic, one of Silicon Valley's most influential artificial intelligence startups, filed a lawsuit against the U.S. government after being excluded by the Department of Defense from several technology projects related to national security. The Pentagon classified it as a "supply chain security risk," a designation typically reserved for foreign companies susceptible to interference or sabotage.
However, Anthropic is an American firm, founded in California and backed by some of the world's most significant tech investors. Until recently, it even collaborated as a technology partner with the Pentagon. What has changed?
For decades, the U.S. military relied almost exclusively on traditional defense contractors such as Lockheed Martin, Raytheon, and Northrop Grumman, which manufacture fighter jets, submarines, satellites, and missile systems. Silicon Valley, meanwhile, focused on developing computers and software, and later mobile devices and digital services. But the evolution of warfare has transformed this landscape: today, military superiority largely depends on the ability to process and analyze vast amounts of data in real time, from satellite imagery to intercepted communications.
In light of this new reality, the Pentagon began seeking partnerships with tech companies capable of providing advanced data analysis tools. Palantir, founded in 2003 with the support of Peter Thiel, was a pioneer in this field, developing software capable of integrating and analyzing large databases, a solution especially valuable for military intelligence.
Currently, the public sector remains Palantir's main client: in 2024, over 54% of its revenue came from government contracts, many related to defense. In 2025, the company signed a data analysis contract with the U.S. military valued at around $10 billion over ten years.
Palantir represents the model of a tech company that openly collaborates with the state. However, not all companies in the sector share this vision. The new generation of AI companies, such as Anthropic, creator of the Claude models, has introduced a different perspective. In 2025, Anthropic signed a contract worth about $200 million with the Department of Defense to adapt its models for military uses, including the processing of classified information. At the same time, the company has insisted on establishing clear limits on how its systems can be used.
AI tools can analyze large volumes of data, identify complex patterns, and generate predictions. While for many private companies these capabilities serve to optimize processes, in the military context, they can become decision-support systems in operations. This has opened a debate about the ethical and operational limits of AI in defense.
According to various recent reports, the tension between Anthropic and federal authorities arose when the company attempted to impose restrictions on the use of its models in military projects. The Pentagon considers these limitations incompatible with defense needs. A senior Pentagon technology official described the situation as "absurd," criticizing the refusal of an American company to let its AI support national security missions. Anthropic, for its part, has taken the matter to court.
The case is now in the hands of federal courts but has already sparked an intense debate in Washington about who should decide the use of these technologies. For the Pentagon, the priority is clear: to incorporate AI as soon as possible to maintain military advantage. For some tech companies, the issue is much more complex.