Artificial Intelligence (AI)
Professions of Letters: Key to Humanizing Artificial Intelligence without a Moral Compass
Paloma Firgaira
2026-04-19
5 min read
ChatGPT, Gemini, and Claude have burst into the daily lives of billions of people, radically transforming the way we interact with technology. These advanced conversational AI models have become essential tools for users of all profiles, reshaping everything from how we search for information to how we manage complex conversations or adopt healthier habits.
However, the enormous potential of these AI assistants should not obscure an increasingly evident reality: their impact goes far beyond efficiency or convenience, deeply altering cognition, relationships, and social coexistence. The current challenge is to equip this technology with an ethical compass that ensures its responsible and beneficial use.
Currently, ChatGPT (OpenAI) has over 900 million weekly users, Gemini (Google) exceeds 750 million monthly users, and Copilot (Microsoft) reaches 33 million. While there are no exact figures for other platforms such as Grok (X) or NotebookLM (Google), it is undeniable that AI has permeated every area of life, with more and more people delegating everyday tasks to it.
From the Knowledge Society to the Algorithm Society
Artificial intelligence is redefining the pillars of society, moving us towards what some experts call the Algorithm Society. “Algorithms no longer just reinforce opinions or create echo chambers; they make crucial decisions for us,” says Juan Sebastián Fernández, a sociologist specializing in AI at the University of Almería.
Today, algorithms weigh in on mortgage approvals, medical diagnoses, judicial sentences, and hiring processes. The problem is that many of these decisions are made without adequate oversight, perpetuating biases and discrimination. “AI is self-programming and, in many cases, reproduces existing prejudices,” warns Fernández.
This rise of AI has polarized the scientific debate: while some warn of apocalyptic risks, others adopt an overly optimistic view. In this context, social sciences demand a central role to guide the development of AI towards collective well-being. “Social sciences have been marginalized in technological advances when they should be protagonists,” emphasizes José Serrano, a sociologist at the European University of the Canary Islands.
The Role of the Humanities in the Algorithmic Era
The Humanities and social sciences can and must intervene in the development of AI. Sociologists, philosophers, linguists, communicators, jurists, and designers are essential to humanize a technology that, at its core, remains a set of codes. The debate should focus on three axes: humanizing the technological agenda, educating the population on algorithms, and consolidating strong regulatory frameworks.
Automation, biases, privacy, and gender inequalities require philosophical, sociological, educational, and legal audits. Only then can deviations be detected and corrected, training AI to act with ethical criteria and for the benefit of society. “Social sciences have much to contribute,” says Fernández. Serrano adds: “The weight of AI will only increase, so we must govern it, not just develop it.”
Although ChatGPT, Gemini, and Grok can simulate human responses, they are in reality code-based systems, very different from the human brain, where emotions, ethics, and reasoning come into play.
Prioritizing Social Needs
It is essential that AI design prioritizes social needs, involving multidisciplinary teams in all phases of development. Additionally, it is crucial to educate the population to understand and question how algorithms work. “Understanding how they operate and generate biases is as important today as learning to read after the invention of the printing press,” highlights Fernández.
Education must adapt to this new context. “AI in the classroom forces a rethinking of the teacher's role, which should focus on teaching students to contrast and question information,” says Serrano. Uncritical use of AI can lead to a “cognitive debt,” where students accept machine-generated texts without questioning, affecting their learning and critical thinking.
A study conducted in the United States found that using language models like GPT-4o for writing tasks reduces neural activation and affects learning, memory, and creativity. “Delegating cognitive competencies to AI creates a profound problem,” concludes Fernández.
AI Regulation in Europe
Europe has taken a step forward with the Artificial Intelligence Act, but experts believe it is still insufficient. “It needs to be complemented with independent audits and appeal mechanisms for those affected by automated decisions,” insists Fernández.
The dominance of AI by large tech companies also poses an economic and social challenge: the concentration of wealth and resources in a few hands increases inequalities. “If limits are not established, this gap will only widen,” warns Fernández.