Oxford proposes that engineers sign an ethical commitment for the development of AI.
    Artificial Intelligence (AI)


    Paloma Firgaira
    2026-03-15
    5 min read
    Artificial intelligence is already present in sensitive areas such as warfare, surveillance, mental health, education, and critical technical tasks. In light of this advance, the Oxford Oath has emerged: an initiative proposing that AI professionals commit to ethical principles similar to the Hippocratic oath taken by doctors.

    Sara Lumbreras, an engineer and co-director of the Hana and Francisco José Ayala Chair of Science, Technology, and Religion at the Pontifical University of Comillas, is one of the advocates for this oath. In statements to La Vanguardia, Lumbreras emphasizes that in sectors with a significant impact on human life, the law is not enough: “Doctors and journalists do not act solely out of fear of sanctions, but out of internalized ethical principles. Public trust depends on this.”

    According to Lumbreras, AI has reached a level of influence that demands its own professional ethics. The Oxford Oath is not intended to be a law or a regulation, but a collective commitment to establish clear limits and shared expectations, even when technology allows for more. Ethics, Lumbreras warns, has become a competitive factor, which poses a risk: “If ethics is left solely in the hands of companies, it becomes marketing; if it depends only on governments, it responds to state interests; and if it relies only on the law, it always comes too late.”

    The urgency of this debate grows as AI advances. For example, Anthropic has focused its development on direct automation, with 79% of interactions in Claude Code aimed at this purpose, surpassing competitors like OpenAI's Codex in efficiency. The more AI is integrated into critical infrastructure, the clearer it becomes that ethics cannot be a mere adornment.

    Beyond military use, with contracts between the U.S. government and AI companies, Lumbreras warns about everyday risks: systems designed to please users and reinforce their beliefs can foster impulsivity or emotional dependence. OpenAI acknowledged in 2025 that an update to GPT-4o had made the model “too complacent,” amplifying these issues.

    The recent conflict between the Pentagon and Anthropic, whose contract the Trump administration rejected after the company demanded that its AI not be used in autonomous weapons or mass surveillance, illustrates the depth of this change. It is not an isolated case, but a manifestation of a structural transformation: AI is already part of the most delicate systems in society, and those who develop it can no longer limit themselves to being neutral engineers.

    Source: lavanguardia.com
    Paloma Firgaira

    CEO

    With more than 20 years of experience, Paloma is a flexible and agile executive who excels at implementing strategies tailored to each situation. Her MBA in Business Administration and her background as an AI and Automation Expert strengthen her leadership and strategic thinking. Her efficiency in task planning and rapid adaptation to change contribute positively to her work. With strong leadership and interpersonal skills, she has a proven track record in financial management, strategic planning, and team development.