The ethical crisis shaking Silicon Valley: a deep analysis
    Artificial Intelligence (AI)


    Paloma Firgaira
    2026-03-23
    5 min read
While Anthropic and the U.S. government face off in court over the military use of artificial intelligence, the Pentagon has struck a controversial deal with OpenAI, drawing intense criticism from the tech sector. The central question is whether a private company can impose limits on how the government uses its AI, and whether the administration is willing to respect those limits.

The conflict intensified following the dispute between Anthropic and the Trump administration over the use of AI in military operations. On February 23, Dario Amodei, CEO of Anthropic, met with Secretary of War Pete Hegseth to discuss the future military use of the company's AI models. According to Department of Defense sources cited by latribunadetoledo.es, Anthropic's Claude tool was reportedly used in operations such as the incursion into Venezuela to capture Nicolás Maduro and in actions in Iran, despite restrictions imposed by the Pentagon. Concerned about the potential use of its technology in autonomous weapons or mass surveillance, Anthropic demanded guarantees against these scenarios. The Pentagon, however, insisted on unrestricted access for any legal purpose.

Tensions escalated when Anthropic publicly stated that the use of AI for domestic mass surveillance is incompatible with democratic values and poses serious risks to fundamental freedoms. The Trump administration's response was immediate: it ordered federal agencies to stop using Anthropic's technology and classified the company as a "Risk to Supply Chain and National Security," blocking any commercial relationship with the military. This decision dealt a severe economic blow to Anthropic, which had signed a $200 million contract with the Armed Forces. In response, the company sued the U.S. government, claiming irreparable damages from the measures taken.
Meanwhile, the Pentagon moved forward with its plans and signed a deal with OpenAI to use its AI models on classified networks, leaving Anthropic out. Sam Altman, CEO of OpenAI, assured that the Pentagon would respect the principles of not using AI for mass surveillance and would guarantee human accountability in the use of force, even in autonomous systems.

The alliance nevertheless faced harsh criticism, especially from Anthropic. Amodei described OpenAI's agreement as "theater" and "lies," accusing the company of falsely presenting itself as a mediator. In addition, OpenAI's Robotics Director, Caitlin Kalinowski, resigned over ethical concerns about the collaboration with the Pentagon. Altman attempted to calm the situation by announcing a review of the agreement, but the controversy has reignited the debate over the ethical and legal limits of artificial intelligence in the military context, amid rapid technological advancement and mounting government pressure.

Source: latribunadetoledo.es

    Paloma Firgaira

    CEO

    With more than 20 years of experience, Paloma is a flexible and agile executive who excels at implementing strategies tailored to each situation. Her MBA in Business Administration and her experience as an AI and Automation Expert strengthen her leadership and strategic thinking. Her efficiency in task planning and her rapid adaptation to change contribute positively to her work. With strong leadership and interpersonal skills, she has a proven track record in financial management, strategic planning, and team development.