Artificial Intelligence (AI)
Regulation of Marking and Labeling of Synthetic Content in the New AI Regulations
Paloma Firgaira
2026-03-06
5 min read
Contents
- The Crisis of Digital Authenticity
- Article 50 of the RIA: Structure and Objectives
- Technical Marking: Obligations for Providers (art. 50.2 RIA)
- Visible Labeling: Obligations for Deployers (art. 50.4 RIA)
- Informative Texts and Matters of Public Interest
- Code of Conduct for Transparency and General-Purpose AI Models
- Technical and Legal Challenges
- Conclusions
The Crisis of Digital Authenticity
The emergence of artificial intelligence (AI) in the generation and manipulation of content has radically transformed trust in digital information. The presumption of authenticity, already weakened before the advent of generative AI, has been further eroded by the ease with which images, videos, audio, and text indistinguishable from real material can be created. This situation affects public communication, markets, and democratic processes.
Regulation (EU) 2024/1689 on Artificial Intelligence (RIA), approved on June 13, 2024, addresses this challenge by imposing transparency obligations: while the creation of synthetic content is not prohibited, its artificial origin must be identifiable to users.
Article 50 of the RIA: Structure and Objectives
Article 50 of the RIA establishes a horizontal transparency framework applicable to all generative AI systems, regardless of their risk level. It distinguishes four main obligations: (i) transparency in conversational systems; (ii) machine-readable technical marking by providers; (iii) informing users of emotion recognition and biometric categorisation systems; and (iv) visible labeling of deepfakes and of artificially generated or manipulated texts.
This regulatory architecture aims to enable users to identify when they interact with AI or consume content generated by it, reducing the risk of deception and protecting the integrity of public debate.
Technical Marking: Obligations for Providers (art. 50.2 RIA)
Article 50.2 requires providers of generative AI to technically mark synthetic content (images, video, audio, text) in a way that is detectable by automated systems. No specific technology is mandated, but the marking must be robust and resistant to common manipulations. The burden of demonstrating technical effectiveness lies with the provider.
According to the Code of Conduct for Transparency, marking can be implemented through: (i) invisible digital watermarks; (ii) structured metadata recording the origin and modifications of the file; and (iii) fingerprinting techniques linking the content to the generating system. Combining several techniques is recommended to enhance detectability; no closed solution is imposed, but reasonable robustness against foreseeable manipulations is required.
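As a purely illustrative sketch of the structured-metadata approach, the following code binds a provenance record (generator identity, content hash, machine-readable AI-origin flag) to a piece of content and signs it. All names here (`PROVIDER_KEY`, `attach_provenance`, the record fields) are hypothetical assumptions for illustration, not the format prescribed by the RIA or by any industry standard such as C2PA.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider (illustrative only).
PROVIDER_KEY = b"provider-secret-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a provenance record binding the content hash to its origin."""
    record = {
        "generator": generator,                      # which AI system produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "synthetic": True,                           # machine-readable AI-origin flag
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check both the signature and that the content is unmodified."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...synthetic image bytes..."
record = attach_provenance(image, generator="example-model-v1")
print(verify_provenance(image, record))          # True: intact and signed
print(verify_provenance(image + b"x", record))   # False: content was altered
```

Note that this kind of mark travels alongside the file rather than inside it, which is precisely why the Code recommends combining it with in-band techniques such as watermarks.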
Visible Labeling: Obligations for Deployers (art. 50.4 RIA)
Article 50.4 requires deployers of AI systems that generate or manipulate images, audio, or video resembling real persons, objects, places, or events (deepfakes) to label the content visibly and understandably for the public. Unlike technical marking, labeling serves a communicative purpose: to warn the user about the artificial nature of the content.
This obligation must be applied proportionately, considering the medium and public expectations. If the content is artistic, creative, or satirical and is clearly identified as such, the obligation is adapted to avoid restricting freedom of expression while protecting the rights of affected individuals.
Informative Texts and Matters of Public Interest
Visible labeling also applies to texts generated or manipulated by AI that inform the public on matters of public interest, such as news items or institutional statements. Their artificial origin must be clearly indicated, unless the content has undergone editorial control and a natural or legal person assumes editorial responsibility for its publication, provided the text is not a deepfake likely to mislead. Competent authorities are exempt when acting within the framework of investigating or prosecuting crimes.
Code of Conduct for Transparency and General-Purpose AI Models
Adherence to the Code of Conduct for Transparency does not automatically guarantee compliance with the RIA, but it can serve as evidence before authorities. The Code proposes a multi-level strategy based on signed metadata, technical marks, and visible labeling to ensure the traceability of synthetic content.
Regarding general-purpose AI models (GPAI), the Code acknowledges that many systems subject to Article 50 are based on models developed under Articles 53 and 55 of the RIA, which require technical documentation, transparency about training data, and, for models with systemic risk, additional evaluation and mitigation measures. Although these articles do not directly impose marking, they are part of the regulatory framework that conditions content generation.
Technical and Legal Challenges
Compliance with Article 50 poses relevant challenges:
- Robustness of marking: Techniques like watermarks or metadata can be altered by compression, format changes, or editing, making it difficult to distinguish between legitimate modifications and deliberate removal of marks.
- Interoperability: The lack of a European technical standard creates uncertainty about compatibility between systems and cross-verification of marks.
- Difficulties in text: Unlike images or videos, text does not allow for the incorporation of imperceptible signals resistant to modifications, and current detection systems are not fully reliable.
- Scope of manipulated content: Article 50.4 does not precisely define what degree of intervention triggers the labeling obligation, requiring interpretation of the impact on user perception and potential to mislead.
Authorities will need to develop clear criteria to ensure legal certainty and proportionality in the application of these obligations.
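The robustness problem described above can be made concrete with a toy example. The sketch below hides a watermark in the least significant bit (LSB) of each pixel value, then simulates lossy re-encoding as coarse quantisation: the mark survives a faithful copy but not recompression. The pixel values, the LSB scheme, and the quantisation step are illustrative assumptions, not a real codec or a production watermarking method.

```python
# Toy demonstration of watermark fragility: a naive least-significant-bit
# (LSB) mark survives exact copying but not lossy re-encoding.

def embed_lsb(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    """Read the first n least significant bits back out."""
    return [p & 1 for p in pixels[:n]]

def lossy_recompress(pixels, step=4):
    """Simulate lossy compression by coarse quantisation of pixel values."""
    return [(p // step) * step for p in pixels]

watermark = [1, 0, 1, 1, 0, 1, 0, 0]
pixels = [120, 121, 119, 118, 122, 120, 117, 121]

marked = embed_lsb(pixels, watermark)
print(extract_lsb(marked, 8) == watermark)        # True: mark readable from a faithful copy

recompressed = lossy_recompress(marked)
print(extract_lsb(recompressed, 8) == watermark)  # False: quantisation erased the mark
```

This is exactly the legal difficulty noted above: an ordinary format conversion and a deliberate attempt to strip the mark can produce the same technical result, leaving authorities to distinguish them by context and intent.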
Conclusions
Article 50 of the RIA establishes an essential transparency regime for content generated or manipulated by AI, reinforcing trust in the digital environment. It clearly differentiates the obligations of the provider (technical marking) and the deployer (visible labeling), delineating responsibilities throughout the value chain.
Supervision will focus on the technical robustness of marking, the clarity of labeling, and the documentation that allows tracking of content. Transparency is no longer optional but becomes a legal requirement, key for compliance assessment and mitigation of liabilities.
Source: elderecho.com