Business and Companies
7 Key Requirements to Comply with the Artificial Intelligence Act in 2026
Paloma Firgaira
2026-02-13
5 min read
The adaptation process for the new European regulation on artificial intelligence is still underway. EU member states must incorporate Regulation (EU) 2024/1689 into their national legislation, and in Spain this will take shape through the Law for the Good Use and Governance of Artificial Intelligence, expected in August.
Knowmad Mood highlights that the EU AI Law introduces a pioneering regulatory framework, requiring companies, especially those developing or using high-risk AI systems, to ensure data quality, model transparency, human oversight, cybersecurity, and team training.
In this context, Knowmad Mood identifies seven key requirements that Spanish companies must meet by 2026. Among them is the traceability of data used in the training, validation, and operation of AI systems, ensuring its origin, quality, and use throughout the entire lifecycle. This involves mechanisms to ensure that data is representative, up-to-date, and free from biases, as well as the ability to reconstruct any automated decision.
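As an illustration, such traceability can start with a structured provenance record attached to every dataset and every automated decision, so that each outcome can later be reconstructed. This is a minimal sketch, not a prescribed format; all names and fields are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    """Provenance entry for one dataset used to train or validate a model."""
    name: str
    source: str         # where the data came from
    version: str
    sha256: str         # fingerprint of the exact file that was used
    bias_checked: bool  # passed a representativeness/bias review

def fingerprint(raw: bytes) -> str:
    """Content hash so a decision can be traced to the exact data behind it."""
    return hashlib.sha256(raw).hexdigest()

def log_decision(model_id: str, inputs: dict, output: str,
                 datasets: list[DatasetRecord]) -> str:
    """Serialize everything needed to reconstruct an automated decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "training_data": [asdict(d) for d in datasets],
    }
    return json.dumps(entry)

data = DatasetRecord("applicants_2025", "internal HR export", "v3",
                     fingerprint(b"...raw csv bytes..."), bias_checked=True)
entry = log_decision("credit-scorer-v1", {"income": 42000}, "approved", [data])
```

In practice these entries would go to an append-only audit store; the point is that origin, version, and quality checks travel with the data through the whole lifecycle.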
The law requires that AI models be understandable and auditable, not mere 'black boxes.' Organizations must have technical and functional documentation explaining the design of the models, the variables influencing their results, and the assumptions under which they operate, facilitating the work of both regulators and internal teams.
Cosmomedia emphasizes the importance of SMEs registering all AI tools they use, from writing assistants to data analysis software, identifying their purpose and risk level. This inventory is essential for the transparency required by the law.
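Such an inventory can be kept very simply; what matters is that every tool is listed with its purpose and risk tier. A hedged sketch, with hypothetical entries and risk labels loosely mirroring the regulation's tiers:

```python
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

# Inventory of every AI tool in use: what it is, what it is for, and its risk tier.
ai_inventory = [
    {"tool": "writing assistant",  "purpose": "draft marketing copy", "risk": RiskLevel.LIMITED},
    {"tool": "CV screening model", "purpose": "rank job applicants",  "risk": RiskLevel.HIGH},
]

# High-risk entries are the ones that trigger the heavier obligations.
high_risk = [e["tool"] for e in ai_inventory if e["risk"] is RiskLevel.HIGH]
```

Even a spreadsheet serves the same purpose; the structured form simply makes it easy to filter out which systems fall under the stricter high-risk requirements.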
Since many companies rely on external AI providers, it is crucial to request all technical documentation and usage instructions in order to verify regulatory compliance. The law requires providers to supply sufficient information for safe use.
Failure to comply with these obligations can result in fines of up to 35 million euros or 7% of global revenue in severe cases, such as lack of documentation or transparency. However, for SMEs and startups, the law provides proportional penalties to avoid jeopardizing their viability.
One of the pillars of the regulation is effective human oversight. It is not enough for a person to be present; there must be defined roles and clear procedures to intervene, correct, or annul automated decisions in case of errors or risks. In high-risk systems, such as personnel selection, credit analysis, or critical infrastructures, human oversight is mandatory, and total automation is not allowed.
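The "defined roles and clear procedures" point can be made concrete: in a high-risk workflow, the model only proposes an outcome, and a designated reviewer must confirm, correct, or annul it before it becomes final. A minimal sketch under that assumption (all names hypothetical):

```python
from typing import Callable

def decide(score: float, threshold: float,
           human_review: Callable[[str], str]) -> str:
    """The model proposes; a designated person decides.

    The reviewer callback can confirm the proposal, correct it, or annul it.
    Returning the proposal unreviewed (total automation) is not allowed here.
    """
    proposal = "approve" if score >= threshold else "reject"
    return human_review(proposal)

# A reviewer overriding a borderline automated rejection:
final = decide(0.48, 0.5, lambda proposal: "approve")
```

The design choice is that the human step is structural, not optional: there is no code path that finalizes a decision without passing through the reviewer.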
Companies must also continuously identify and assess the technical, ethical, legal, and reputational risks associated with their AI systems, implementing evaluation frameworks that consider potential biases, impacts on fundamental rights, and unintended consequences.
Cybersecurity is another fundamental aspect. Knowmad Mood warns that protection against attacks, data manipulation, or information leaks must be integrated from the design stage and throughout the operation of intelligent systems. According to Perforce, 60% of organizations have experienced data breaches in AI training environments, underscoring the need to strengthen security before production deployment.
Compliance with the AI Law does not end with the implementation of technology. Companies must continuously monitor the performance and behavior of their models, detecting deviations or misuse and taking preventive action against potential risks.
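Continuous monitoring often starts with something as simple as comparing a model's recent behavior against an established baseline and flagging deviations for review. A minimal sketch, with hypothetical names and an arbitrary tolerance:

```python
def drift_alert(baseline_rate: float, recent_outcomes: list[int],
                tolerance: float = 0.1) -> bool:
    """Flag the model for human review if its positive-outcome rate
    drifts from the historical baseline by more than the tolerance."""
    recent_rate = sum(recent_outcomes) / len(recent_outcomes)
    return abs(recent_rate - baseline_rate) > tolerance

# Baseline approval rate of 30%, but 8 of the last 10 decisions approved:
alert = drift_alert(0.30, [1, 1, 1, 0, 1, 1, 1, 1, 0, 1])
```

Real deployments would use richer statistics (input distributions, per-group error rates), but the pattern is the same: measure, compare against the baseline, and escalate to a person when behavior deviates.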
Finally, the regulation places special emphasis on team training. Companies must show that their professionals follow ongoing training and refresher programs that keep their competencies aligned with technological and regulatory developments. It is not enough to install software; staff must be trained in how it operates and in its potential biases, thereby minimizing the associated risks.