Ethical AI is not a brake. It is what makes it truly useful.
    Artificial Intelligence (AI)


    Gianro Compagno
    5 min read
    Last week, Dario Amodei, CEO of Anthropic, did something that very few tech leaders dare to do: say "no" to the Pentagon. It wasn't a whimsical "no." It wasn't a political gesture. It was a business and ethical decision with real consequences: the Trump administration ordered all federal agencies to stop using Anthropic's technology. Secretary of Defense Pete Hegseth labeled the company a "risk to the national security supply chain" — a designation typically reserved for foreign adversaries. All of this against an American company. For upholding two principles.

    What are those two "red lines" that Anthropic refused to cross? First: that its AI not be used for mass domestic surveillance of American citizens. Second: that its AI not feed fully autonomous weapons that select targets and fire without human intervention. That's it. Two limits. They didn't ask for anything extravagant. They didn't refuse to collaborate with the military — in fact, Anthropic was the first AI company to integrate its models into classified military networks. What they said was: "We will work with you, but there are two things we will not do." And for that, they were punished.

    I want to be transparent about something: I work with Claude, Anthropic's AI model, as a tool in my consulting firm, Gialoma Life Solutions. I use it professionally. And precisely for that reason, I feel I have the authority — and the responsibility — to speak on this matter. Because what Amodei has defended is not an abstract position. It is exactly what makes tools like Claude valuable for professionals and for companies like mine.

    AI is an extraordinarily powerful tool. At Gialoma, without artificial intelligence, our company simply would not exist as it does today. AI has given us growth opportunities, capabilities, and reach that were unthinkable a few years ago for a consulting firm of our size. It allows us to automate processes, create solutions for our clients, and compete in a market that was previously reserved for large corporations. But that same power demands responsibility. A powerful tool without ethical limits is not a tool — it is a risk.

    What concerns me about the dominant discourse is the false dichotomy that has emerged: either you deliver technology without restrictions, or you are an obstacle to national security. That narrative is dangerous.

    Amodei explained it clearly in his interview with CBS News: AI is advancing at an exponential rate. The computing power that fuels these models doubles every four months. Legislation, legal frameworks, and oversight mechanisms are not keeping pace. When technology outpaces the law, someone has to hold the line. And if Congress does not act — and Amodei openly acknowledges that it should be Congress that legislates these issues — then the companies that create this technology have a responsibility they cannot evade.

    Let's consider something concrete: today it is technically possible to buy massive datasets on citizens — locations, personal information, political affiliations — and analyze them with AI to build detailed profiles. This is legal. But is it acceptable? The judicial interpretation of the Fourth Amendment has not been updated to account for these capabilities. And until it is, who puts on the brakes?

    Some argue that a private company should not decide what the government can or cannot do with technology. It is a legitimate argument. Boeing manufactures planes for the military and does not tell the military how to use them. But Amodei responded to this accurately: AI is not a plane. It is a new, unpredictable technology, evolving exponentially. A general understands how a plane works. No one fully understands how an advanced AI model works — not even the people who build these models. And that is not a weakness. It is exactly why caution is necessary.

    We are not talking about stifling innovation. We are talking about the fact that innovation without ethics is not progress — it is recklessness. And in a field where mistakes can mean mass surveillance of innocent citizens or weapons that kill without human judgment, caution is not cowardice. It is responsibility.

    What struck me most about Amodei's statements was a simple phrase: "Disagreeing with the government is the most American thing in the world." And he is right. But I would add something more: demanding that the technology we use — technology that transforms us — be ethical, transparent, and serve people is not just American. It is human. It is universal.

    The response from the tech sector has been revealing. Hundreds of professionals from companies like OpenAI, IBM, Salesforce, and Slack signed an open letter asking Congress to investigate whether using these pressure tactics against an American company is appropriate. An OpenAI researcher publicly stated that blocking mass domestic surveillance is also his "personal red line." The debate has opened.

    From Gialoma, and from my daily experience with AI as a work tool, I want to say something I firmly believe: ethical AI is not a brake on innovation. It is what makes it truly useful. It is what allows professionals, companies, and citizens to trust these tools. It is what makes it possible for a consulting firm like ours to exist and grow. Without that trust, AI is just technology. With it, it is transformation.

    Dario Amodei did not only defend his company's principles. He defended the principles that make AI a positive force for all of us. And that deserves our support. The question is not whether AI should have limits. The question is whether we have the courage to maintain them when maintaining them comes at a cost. Anthropic has shown that it does.
    Gianro Compagno


    CTO

    Gianro brings extensive experience managing technology projects in multinational environments. His technical expertise, combined with an MBA and a master's degree in Investigative Psychology, creates a unique approach to technology solutions. As an AI and Automation Expert, he applies psychological insights to design more intuitive, human-centered systems. His detail-oriented approach and positive mindset ensure that our solutions are not only innovative and reliable, but also aligned with how people naturally think and work.