Technology has been an essential component in human evolution, from the earliest tools to today's digital systems. The relationship between the human and the technical has sparked deep debates about the impact of technology on our freedom and autonomy. Since the 19th century, these discussions have focused on how much technologies influence our lives and whether we are truly free in choosing how to use them. This article offers a critical analysis of three approaches to this issue, inspired by the chapter “Autonomy and Technology: from instrumentalism to technocomplexity” from the book Outonomy: Fleshing out the Concept of Autonomy Beyond the Individual (Springer, 2026).
The first approach considers technology as a mere instrument under human control. Phrases like “guns don’t kill, people kill” reflect the idea that technical artifacts are neutral and their value depends on how we use them. This view, rooted in classical Greece and solidified during the Enlightenment, holds that human autonomy is the basis of morality and freedom, and that technology is merely a means to achieve ends determined by human reason. Thus, ethical responsibility rests solely with those who design and use technology.
However, the industrialization of the 19th century challenged this perspective. Factory work demonstrated how technology could surpass and alienate the individual, turning people into cogs in a larger machine. Marx analyzed how technologies, controlled by the capitalist class, condition social and economic relationships, generating situations of exploitation and heteronomy for the majority. In this context, technology ceases to be a neutral tool and becomes a factor that determines the life and autonomy of workers.
In the 20th century, authors like Jacques Ellul delved into the idea of autonomous technology. In his work The Technological Society, Ellul argues that modern society is governed by a “technological rationality” oriented towards efficiency, which subordinates all aspects of life to its own logic. According to Ellul, technology becomes a self-sustaining and opaque system that absorbs human autonomy and transforms people into means for its own ends. This “techno-autonomism” suggests that technology can come to dominate and redefine human existence.
Today, the notion of technological autonomy is common, especially with the rise of artificial intelligence. There are discussions about “autonomous” systems capable of operating without human intervention, such as driverless vehicles. Transhumanist narratives, like those of Ray Kurzweil, envision a fusion between humans and AI, while others, like Nick Bostrom, warn about the risks of a superintelligence misaligned with human interests. These diverse visions converge in highlighting the growing independence of technology from human will.
Both positions, technology as an instrument and technology as an autonomous entity, prove insufficient to understand the complexity of the relationship between humans and technology. On one hand, reducing technology to a neutral means ignores its capacity to structure behaviors and social forms. On the other, attributing total autonomy to it obscures the possibility of intervening in and guiding its development. Since the late 20th century, therefore, the perspective of “technocomplexity” has emerged, recognizing the co-constitution of humans and technology and the need for new forms of ethical-political understanding and action.
Post-phenomenology, for example, argues that technology not only mediates but also shapes human intention. Authors like Don Ihde and Peter-Paul Verbeek show how different technologies affect our perception and action in diverse ways, from tools that integrate into our bodies to complex systems like AI, which challenge the distinction between human action and technical mediation. Thus, autonomy becomes a relational and situated issue, in which humans and technologies co-participate in the construction of reality.
Bruno Latour, in his work Reassembling the Social, proposes thinking in terms of networks of human and non-human actors, where autonomy is not exclusive to individuals but a result of interdependent relationships. This implies that technology is neither completely neutral nor entirely autonomous, but part of a common ecology that shapes our ways of life.
From this perspective, policies aimed at autonomy must include technology as a central element of political action. Following Langdon Winner, it is necessary to democratize technological design and management, promoting open and participatory models that respond to the needs of communities, especially those traditionally marginalized. Platforms like Decidim exemplify this approach by fostering citizen participation in technological development.
Technological design must be an inclusive and recursive process, in which communities actively intervene in the creation and adaptation of the technologies that affect them. As Sasha Costanza-Chock argues in Design Justice, this means transforming design into an exercise of justice and autonomy, rather than a top-down imposition.
Beyond design, it is essential to rethink the entire life cycle of technologies, from problem identification to recycling and repair, incorporating the perspective of distributed responsibility among designers, users, and institutions. Inspired by Hans Jonas, we must adopt an expanded “imperative of responsibility” that ensures the compatibility of technologies with human and non-human life, both present and future.
This transformation requires questioning not only technology but also the social and economic system in which it is embedded, especially capitalism, which tends to reduce everything to its exchange value. Only then can technology mediate more free and just ways of life. In a context of technopolitical capitalism, it is urgent to open new horizons that allow for more autonomous and responsible thinking and action in the complexity of our time.
Source: elsaltodiario.com