Artificial Intelligence (AI)
Fines of up to 35 million euros for violating the European AI law: everything about the AI Act that applies starting in August.
Paloma Firgaira
2026-03-16
5 min read
August 2, 2026 will mark a milestone for tech companies in Europe: that day, the European Union's Artificial Intelligence Regulation (AI Act) becomes fully applicable, establishing the world's most advanced legal framework for regulating AI, especially high-risk systems. Penalties for non-compliance can reach up to 35 million euros or 7% of global annual revenue.
The AI Act (EU Regulation 2024/1689) classifies AI systems based on the risk they pose. The most restrictive category includes prohibited systems, such as subliminal manipulation, social scoring, and real-time biometric identification for police purposes, banned since February 2025.
Next are high-risk systems that affect fundamental rights, access to essential services, or safety, such as recruitment algorithms, credit scoring, medical diagnosis, or public administrative tools. These systems must meet strict requirements: documented and updated risk management, traceability and quality of data, auditable technical records, and effective human oversight. Additionally, they must pass a conformity assessment, sometimes involving external bodies.
Lower levels include chatbots and content generators, subject to transparency obligations, and minimal risk systems, which have few regulatory demands.
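The four-tier risk taxonomy described above can be sketched as a simple classification. This is purely illustrative: the system names and their tier assignments below are assumptions based on the article's examples, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring; banned since February 2025
    HIGH_RISK = "high_risk"        # strict requirements plus conformity assessment
    LIMITED_RISK = "limited_risk"  # transparency obligations (chatbots, generators)
    MINIMAL_RISK = "minimal_risk"  # few regulatory demands

# Illustrative mapping of the article's examples (names are hypothetical):
EXAMPLES = {
    "social_scoring_system": RiskTier.PROHIBITED,
    "recruitment_algorithm": RiskTier.HIGH_RISK,
    "credit_scoring_model": RiskTier.HIGH_RISK,
    "customer_service_chatbot": RiskTier.LIMITED_RISK,
    "spam_filter": RiskTier.MINIMAL_RISK,
}
```

In practice, the tier a system falls into is a legal question settled against the Regulation's annexes, not a label a developer picks freely.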
The penalty regime of the AI Act is even stricter than the GDPR's. The most serious violations can result in fines of up to 35 million euros or 7% of global annual revenue, whichever is higher. Non-compliance with obligations for high-risk systems can lead to penalties of up to 15 million euros or 3%, and supplying incorrect information to authorities can cost up to 7.5 million euros or 1.5%.
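The tiered ceilings reduce to simple arithmetic. A minimal sketch, assuming the Regulation's rule that for companies the applicable maximum is the higher of the fixed amount and the turnover percentage, while for SMEs the lower of the two applies:

```python
def fine_ceiling(fixed_cap_eur: float, pct_of_turnover: float,
                 global_turnover_eur: float, sme: bool = False) -> float:
    """Maximum possible fine for one violation tier (illustrative only)."""
    pct_amount = pct_of_turnover * global_turnover_eur
    # Companies face whichever amount is higher; SMEs whichever is lower.
    return min(fixed_cap_eur, pct_amount) if sme else max(fixed_cap_eur, pct_amount)

# The three tiers described in the article: (fixed cap in EUR, % of turnover)
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

# Example: a company with 1 billion EUR in global annual turnover.
turnover = 1_000_000_000
ceiling = fine_ceiling(*TIERS["prohibited_practices"], turnover)
# 7% of 1 billion is 70 million, which exceeds the 35 million fixed cap,
# so the ceiling for this company is 70 million EUR.
```

The point the sketch makes concrete: for large companies the percentage, not the headline figure of 35 million, is often the binding limit.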
Although SMEs benefit from some proportionality in penalties, they are not exempt from liability for serious violations.
The impact goes beyond fines. The new AI Liability Directive makes it easier for affected parties to claim damages and establishes a presumption of causality: if the company does not demonstrate compliance with the AI Act, it is presumed that the damage was caused by its non-compliance. This places companies at a disadvantage in litigation, especially if they have not conducted the mandatory conformity assessment. Additionally, reputational damage can exceed the cost of the fine, in a context where technological ethics are increasingly relevant to consumers.
The first step for organizations is to inventory all AI systems in use or development, including those integrated from external providers. After identifying them, they must classify each system according to the AI Act and review contracts to ensure the necessary technical documentation is available. The conformity assessment for high-risk systems can take between three and twelve months, so it is advisable to start it as soon as possible. The regulation also requires ongoing governance: post-market surveillance, documentation updates, and periodic reviews of risk management.
The AI Act is directly applicable and has a deterrent penalty regime. In Spain, the Spanish Agency for AI Supervision (AESIA) is strengthening its inspection capacity. The time to prepare is now, before a hasty reaction leads to greater costs and risks for companies.
Source: 20minutos.es