Artificial Intelligence (AI)
Mass resignations in AI: lessons and labor impact
Paloma Firgaira
2026-02-22
5 min read
Resignations at major artificial intelligence companies have evolved from simple internal moves to genuine public manifestos. Over the past two years, open letters, posts on X, and opinion articles have shaped a new genre: the resignation letter in AI. These missives, often filled with philosophical reflections and warnings about the future, reveal both the internal tensions of the industry and the ethical concerns of its protagonists.
This week, the phenomenon added new chapters. Researchers from xAI and OpenAI made their departures public, but the most striking letter came from Mrinank Sharma, who led the Safeguards team at Anthropic, one of the startups most recognized for its focus on safety. In a 778-word letter published on X, Sharma quoted poets like Rainer Maria Rilke and Mary Oliver, reflecting on the risks of "AI-assisted bioterrorism" and global "polycrisis." His farewell, steeped in melancholy, included the full text of William Stafford's poem "The Way It Is" and confessed his desire to dedicate himself to poetry and the "practice of brave speaking."
Although less dramatic than Sam Altman's brief dismissal as CEO of OpenAI in 2023, Sharma's letter illustrates the deep emotional bond researchers have with their work and teams. It also highlights recurring tensions: the struggle between research and the development of commercial products, the pressure to launch technologies without sufficient testing, and the sense of betrayal when values are compromised by economic interests.
Most who resign publicly in the AI sector come from safety and alignment areas, concerned about the weakening of safeguards under financial pressure. Few leave the field entirely; many migrate to other startups or think tanks, maintaining their influence in the industry.
The case of OpenAI is paradigmatic. After Altman's dismissal and rapid reinstatement, the company experienced a wave of departures. Ilya Sutskever, co-founder and leader of the superalignment team, left his position in May 2024, followed by Jan Leike, who denounced that safety had been sidelined in favor of products. Leike joined Anthropic shortly after, while Sutskever went on to found his own venture. Others, like Miles Brundage, warned that neither OpenAI nor any other lab is prepared for AGI, while acknowledging the toll of working in these environments.
Zoë Hitzig's resignation, following the announcement of advertising in ChatGPT, highlighted the risks of manipulation and loss of user autonomy, comparing the situation to the exploitation of personal data on social media.
At xAI, the departure of several founding members reflects the volatility of the sector, while at OpenAI, resignations have been motivated by internal disagreements and concerns about the company's direction. Steven Adler, a former safety researcher, expressed his fear about the pace of AI development and its potentially catastrophic consequences.
However, these letters rarely address the current impacts of AI: from energy consumption and mass surveillance to automation and the educational crisis. Warnings often focus on future threats, overlooking immediate issues affecting millions of people.
As William Stafford wrote in the poem quoted by Sharma, "Nothing you do can stop the unfolding of time." In the AI industry, the sense of inevitability and resignation seems to permeate even the most passionate acts of protest.