Uncertain Progress and Challenges in European AI Regulation

    Paloma Firgaira
    2026-01-11
    5 min read
The legal framework for artificial intelligence in Europe is at a crucial phase, with significant questions still to be resolved. The European Commission has proposed extending the deadlines for companies to adapt to the obligations for high-risk systems, giving them greater flexibility. August 2026 was initially set as the date for enforcing the rules governing AI uses that could cause serious harm to health, safety, or fundamental rights, such as biometrics, credit assessments, or exam grading. In November, however, as part of the Digital Omnibus package, Brussels proposed delaying implementation until December 2027, a postponement of up to sixteen months, provided that support tools for companies are guaranteed. This proposal still needs to be approved by the European Parliament and the Council.

Although the date may seem far off, implementing robust governance is a considerable challenge. Organizations that prepare in advance will avoid last-minute rushes and potential non-compliance, but many still have doubts about key aspects of the regulation, which complicates their adaptation. To ease this process, the Spanish Agency for AI Supervision (AESIA) has published 16 practical guides, drawn from its AI Sandbox, to help companies develop innovative and responsible systems and meet the requirements for high-risk tools.

Guillermo Hidalgo, counsel and head of cyber law at MAIO Legal, emphasizes that the Commission seeks to introduce flexibility into the application of the obligations, linking their enforcement to the availability of harmonized standards and support tools. Enforcement would therefore not be automatic in August 2026, but would follow a decision by the Commission, with a sixteen-month limit on the extension. This measure, however, is not yet definitive.
The market shows some confusion about the new regulation, which introduces a compliance framework based on risk levels, new roles (provider, importer, distributor, deployer), and technical evidence requirements such as risk management, data governance, and traceability, areas many companies have not yet internalized. In Spain, the uncertainty is exacerbated by the difficulty of classifying use cases and determining whether they count as high-risk, as well as by reliance on external providers. Many companies use AI integrated into third-party software and wonder whether, by acquiring it, they assume legal responsibilities. The lack of definitive standards and guidelines also creates what Hidalgo calls "operational uncertainty." In this context, AESIA's guides are particularly valuable, offering a practical approach aligned with the Sandbox.

For SMEs, using third-party AI is common, which means both the provider and the user assume responsibilities. The developer bears most of the obligations, but the user company must also comply with requirements on usage, oversight, incident management, and transparency. A critical point is that a company can become a provider if it substantially modifies the tool, rebrands it under its own name, or changes its purpose, especially if it places it in a high-risk context. A compliance-based approach to supplier management is therefore recommended: requiring guarantees, conducting audits, and maintaining an inventory of use cases and risk assessments.

César Alonso, director of Consulting at GlobalSuite Solutions, stresses that if an SME modifies the AI or changes its purpose, it assumes the responsibilities of a provider, both legal and technical. He also points to the regulatory fatigue companies experience when trying to align the AI Regulation with other rules such as the GDPR, NIS2, or DORA, which turns the challenge into one of integrated governance.
Víctor Morán, partner at Letslaw, notes that the regulation requires companies to self-diagnose along two dimensions: their role in the value chain and the risk classification of each use case. The main concerns are high-risk classification, the governance and documentation burden, and the impact on generative and general-purpose AI tools. Morán warns that the regulation could become a barrier to entry, especially for SMEs, since compliance requires inventorying uses, classifying risks, documenting processes, and demanding guarantees from providers, tasks that larger companies can manage more easily. He believes, however, that AESIA's guides help make compliance more accessible and practical.

Hidalgo points out that the regulation is risk-based: using an informational chatbot is not the same as applying AI to critical processes such as personnel selection or credit scoring. The biggest challenge for SMEs arises when they develop or integrate high-risk AI without sufficient support, since compliance demands advanced technical and documentation capabilities. He therefore recommends an incremental approach: address the essential aspects first, then move on to the more complex ones.

Alonso acknowledges that the law entails considerable effort, especially for organizations with fewer resources, given the need for technical documentation, event logging, and human oversight. He highlights, however, that the regulation provides support measures for SMEs, such as participation in regulatory sandboxes, and considers it less a barrier than a quality filter that can yield competitive advantages in trust and security. The main concern at present is the correct classification of use cases, along with the complexity of data governance, traceability, and transparency. Many companies fear they will not be able to justify to the regulator the decisions made by their AI systems, especially with an application timeline that advances relentlessly.
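The self-diagnosis described above, mapping each AI use case to a role in the value chain and a risk class, lends itself to a simple inventory. The following is a minimal, hypothetical sketch, not any official tool: the role and risk categories come from the article, and the example entries and the `substantially_modified` flag are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    DEPLOYER = "deployer"

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class UseCase:
    name: str
    role: Role
    risk: RiskLevel
    substantially_modified: bool = False  # modifying or repurposing a tool can shift the role

    def effective_role(self) -> Role:
        # Per the article: substantially modifying third-party AI, rebranding it,
        # or changing its purpose can make a user company assume provider obligations.
        if self.role is Role.DEPLOYER and self.substantially_modified:
            return Role.PROVIDER
        return self.role

# Hypothetical inventory entries
inventory = [
    UseCase("customer FAQ chatbot", Role.DEPLOYER, RiskLevel.MINIMAL),
    UseCase("CV screening for hiring", Role.DEPLOYER, RiskLevel.HIGH,
            substantially_modified=True),
]

# Use cases facing the heavier documentation and oversight duties
high_risk = [u.name for u in inventory if u.risk is RiskLevel.HIGH]
print(high_risk)
```

Even a lightweight record like this makes it easier to answer the two diagnostic questions per use case and to spot when a modification quietly turns a deployer into a provider.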
Although the regulation entered into force in 2024, the obligations for high-risk systems, including AI in biometrics, critical infrastructure, education, employment, and essential public services, are due to apply from August 2026. Thirty-six months after entry into force, sector-specific product safety obligations will follow. Companies must prepare now: technical and documentation adaptation takes time and cannot be left to the last minute. The countdown has begun. (Source: abc.es)
    Paloma Firgaira

    CEO

    With more than 20 years of experience, Paloma is a flexible and agile executive who excels at implementing strategies tailored to each situation. Her MBA in Business Administration and her background as an AI and Automation Expert strengthen her leadership and strategic thinking. Her efficiency in task planning and rapid adaptation to change contribute positively to her work. With strong leadership and interpersonal skills, she has a proven track record in financial management, strategic planning, and team development.
