The EU AI Act’s draft Code of Practice on marking and labelling of AI-generated content: what providers and deployers need to know

The Second Draft Code of Practice on the Marking and Labelling of AI-Generated Content (“Second Draft Code”), issued on 5 March 2026, matters because it is likely to become the main practical benchmark for complying with the EU AI Act’s transparency rules on AI-generated content, even though the legal obligations themselves sit in the Act, not the Code.

Article 50 of the EU AI Act (“the Act”), which sets out transparency obligations for providers and deployers of certain AI systems, becomes applicable on 2 August 2026. Ahead of that date, the European Commission has published the Second Draft Code to assist providers and deployers in demonstrating compliance with the specific marking, detection and labelling requirements under Article 50. It is the second iteration of this Code of Practice, the First Draft Code of Practice on Transparency of AI-Generated Content (“First Draft Code”) having been published on 17 December 2025. Compared to the First Draft Code, the Second Draft Code is more streamlined and simplified, offering greater flexibility and reducing the compliance burden. The European Commission has stated that it expects to finalise the Code in early June 2026, two months before the transparency rules become applicable.

This article explores the Second Draft Code (“the Code”), focussing on how it interacts with the EU AI Act and how companies can use the Code in practice. It first considers the legal status and practical significance of the Code under Article 50, before turning to the main compliance points for providers and deployers and the key legal and governance risks likely to arise in practice.

Understanding the Code of Practice in light of the EU AI Act

To begin with, it is necessary to address the status of the Code itself before turning to what it means in operational terms for those in scope.

Voluntary Code or important benchmark? 

Article 50 of the Act contains the legal obligations to which the Code refers, and while the Code is technically voluntary, it is likely to become an important benchmark. Once approved by the Commission, the finalised Code will provide a practical framework for providers and deployers to demonstrate compliance with their respective obligations under Article 50(2) and Article 50(4) and, in practice, Article 50(5) on the manner in which information is provided. Article 50(2) of the Act provides that those who develop AI systems and place them on the market (“providers”) must mark the outputs of their AI systems in a machine-readable format and ensure that those outputs are detectable as artificially generated or manipulated. Article 50(4) of the Act provides that those using AI systems that generate or manipulate image, audio or video content constituting a deepfake in the course of a professional activity (“deployers”) must label the content as having been artificially generated or manipulated. The same paragraph also applies to certain AI-generated or AI-manipulated text published for the purpose of informing the public on matters of public interest, subject to the statutory exception for human review or editorial control coupled with editorial responsibility. In the current absence of harmonised technical standards or detailed binding guidance, many organisations will therefore likely treat the Code as the clearest available benchmark for compliance.

Moreover, Article 50(7) of the Act specifically contemplates the drawing up of codes of practice at Union level and provides that, should the Code prove inadequate, the Commission may introduce common rules through implementing acts.

How deployers and providers can demonstrate compliance with the transparency rules 

The next question is how providers and deployers are expected to use the Code in practice. The analysis below first considers marking obligations on the provider side before turning to labelling obligations on the deployer side.

Marking

The Act sets out that providers are to mark AI-generated or AI-manipulated content in a manner which is effective, interoperable, robust and reliable. That obligation is expressly qualified by technical feasibility and must take into account the specificities and limitations of different types of content, implementation costs and the generally acknowledged state of the art. Importantly, where AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or its semantics, there is no obligation to mark the content.

Against that statutory backdrop, the Code suggests that marking will likely involve a multi-layered approach with at least two layers of machine-readable active marking. However, signatories may in future demonstrate compliance through an alternative approach if they can show that it meets the same standards. It is therefore more accurate to say that the Code favours a two-layered approach, rather than imposing two rigid technical measures in all cases.

The Code highlights two mandatory measures of marking. The first is the inclusion of digitally signed metadata: signatories are to record and embed, through metadata, information on whether the content is AI-generated or AI-manipulated, where the generated content format supports this. The second is imperceptible watermarking interwoven within the content, designed so that a fragment of the content suffices to detect the watermark. The Code also mentions fingerprinting and logging facilities as supplementary measures to address deficiencies in the effectiveness of those marking techniques; signatories are to use them taking into account potential trade-offs related to privacy and security, as well as scalability challenges and cost. That said, the second draft is more flexible than the first: fingerprinting and logging are presented as supplementary measures, not as universal baseline requirements.
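
By way of illustration only, the following minimal sketch shows one way a provider might attach digitally signed metadata to a generated output. It is a simplified example built on assumptions (the field names, the HMAC-based signature and the key handling are invented for the sketch); in practice, providers are more likely to rely on established provenance standards and dedicated watermarking tooling than on a hand-rolled scheme like this.

    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone

    # Hypothetical signing key held by the provider; a real deployment would use
    # an asymmetric key managed in a key-management system.
    SIGNING_KEY = b"provider-demo-key"

    def build_provenance_record(generator: str, content_bytes: bytes) -> dict:
        """Assemble a minimal machine-readable record stating that the content is
        AI-generated, bound to the content by a hash."""
        return {
            "ai_generated": True,
            "generator": generator,
            "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }

    def sign_record(record: dict) -> dict:
        """Attach an HMAC-SHA256 signature over the canonical JSON encoding."""
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return {"record": record, "signature": signature}

    # Example: mark a generated image, represented here by placeholder bytes.
    marked = sign_record(build_provenance_record("example-image-model", b"...image bytes..."))
    print(json.dumps(marked, indent=2))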

Importantly, the Code goes further than marking alone. It also contemplates detection mechanisms, including interfaces or tools made available free of charge so that deployers, end-users and other legitimate third parties can verify whether content has been generated or manipulated by the relevant AI system. For providers, therefore, compliance is not simply a question of embedding marks; it is also a question of making those marks meaningfully verifiable in practice.
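
Continuing the same illustrative assumptions as the sketch above, a free-of-charge verification tool could expose a check along the following lines, so that a deployer or other legitimate third party can confirm whether a piece of content matches its signed metadata. This is a sketch of the idea, not the tooling the Code actually requires; note that a signature check of this kind fails once the content is altered, which is one reason the Code pairs metadata with watermarking.

    import hashlib
    import hmac
    import json

    def verify_marked_content(marked: dict, content_bytes: bytes, signing_key: bytes) -> bool:
        """Return True if the signature is valid and the recorded hash matches the
        supplied content; otherwise the content cannot be confirmed as originating
        from this provider's AI system."""
        record = marked["record"]
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, marked["signature"]):
            return False  # metadata was tampered with or was not issued by this provider
        return record["content_sha256"] == hashlib.sha256(content_bytes).hexdigest()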

Labelling: deepfakes and beyond?

If marking is the main issue for providers, the key issue for deployers is labelling. It is therefore necessary to distinguish between deepfakes, public-interest text and creative works.

Under the Act, deployers must label deepfake content (images, audio or video). The Code contains different guidance for different modes of content, which can be summarised as follows (an illustrative sketch follows the list):

  • Real-time deepfake videos: signatories are to display an icon or an alternative label consistently throughout the exposure where feasible, or alternatively display a visual or audio disclaimer at the beginning and at regular intervals during the exposure.
  • Non-real-time videos: signatories are to disclose that the video contains deepfake content with an icon or label. For short videos, this will need to be indicated throughout.
  • Images: signatories are to display a distinguishable and prominent icon in the image.
  • Audio: signatories are to include a short disclaimer at the start of content shorter than 30 seconds; if longer, signatories will need to provide a repeated audio disclaimer at intermediate phases.
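
To make the pattern concrete, the short helper below encodes the guidance above as a simple decision function. It is illustrative only: the category names, the handling of the 30-second threshold and the return strings are simplifications made for the sketch, and it is the Code’s own wording, not a helper like this, that governs compliance.

    def deepfake_label_guidance(media: str, real_time: bool = False,
                                duration_seconds: float | None = None) -> str:
        """Summarise, in simplified form, the labelling approach suggested by the
        Second Draft Code for a given deepfake output."""
        if media == "video" and real_time:
            return ("Display an icon or alternative label consistently throughout the "
                    "exposure where feasible, or a visual/audio disclaimer at the start "
                    "and at regular intervals.")
        if media == "video":
            return ("Disclose the deepfake with an icon or label; for short videos, "
                    "indicate this throughout.")
        if media == "image":
            return "Display a distinguishable and prominent icon in the image."
        if media == "audio":
            if duration_seconds is not None and duration_seconds < 30:
                return "Include a short disclaimer at the start of the content."
            return "Provide a repeated audio disclaimer at intermediate phases."
        raise ValueError(f"Unsupported media type: {media}")

    print(deepfake_label_guidance("audio", duration_seconds=95))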

The above examples apply to deepfakes, but the Act also applies this obligation to text informing the public on matters of public interest and includes less stringent obligations for creative works. The key point is not that every deployer must adopt one identical design solution, but that disclosures need to be clear, prominent and appropriate for the relevant medium. That reflects Article 50(5), which requires the information to be provided in a clear and distinguishable manner and, at the latest, at the time of first interaction or exposure.

In relation to public-interest text, signatories are to place the icon or label in a consistent position, which could be in the headline of the text or in the colophon at the beginning of the text. Crucially, the Act provides that this obligation does not apply where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.

The Code details that signatories should have procedures and documentation which demonstrate that the AI-generated or AI-manipulated text published for the purposes of informing the public on matters of public interest has undergone human review prior to publication and that a natural or legal person holds editorial responsibility. The procedures should identify the natural or legal person with editorial responsibility (name, role and contact details) and include an overview of the concrete organisational measures as well as the human resources allocated to ensure that the signatory is compliant. The Code also states that signatories may optionally record additional information on how reviews are carried out and the type of involvement of AI. In practice, any organisation seeking to rely on that exception should be able to show who reviewed the content, what that review involved and who held editorial responsibility.
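
As a purely illustrative aid, the record structure below shows one way the documentation described above could be captured in a structured form. The field names are assumptions made for the example; the Code does not prescribe any particular format.

    from dataclasses import dataclass

    @dataclass
    class EditorialResponsibilityRecord:
        """Illustrative record supporting reliance on the human-review exception for
        public-interest text (field names are assumptions, not terms from the Code)."""
        publication_reference: str   # identifier of the published text
        responsible_person: str      # natural or legal person holding editorial responsibility
        responsible_role: str
        contact_details: str
        review_procedure: str        # overview of organisational measures and human resources
        reviewed_by: str             # who carried out the human review prior to publication
        review_date: str
        ai_involvement_notes: str = ""  # optional: how the review was done and the AI's role

    example = EditorialResponsibilityRecord(
        publication_reference="2026-06-public-interest-analysis",
        responsible_person="Example Media GmbH",
        responsible_role="Managing editor",
        contact_details="editorial@example-media.example",
        review_procedure="Two-step editorial review under the newsroom AI policy",
        reviewed_by="Senior editor, economics desk",
        review_date="2026-06-12",
        ai_involvement_notes="First draft generated with an LLM; facts and framing revised by the reviewer",
    )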

Finally, labelling rules also apply differently to content that is clearly artistic, satirical or fictional (“creative works”). For these creative works, the labelling requirement is more flexible: it must be done in a way that provides transparency without disrupting the viewer's enjoyment or the display of the work itself. The Code states that this means the disclosure will be placed in a non-intrusive yet clear and distinguishable position when a user first sees or hears the content. Examples given include an icon shown during the opening credits of a video or an icon integrated into a still image in a way that preserves the end-user's ability to discern the labelling. The important point here is that the exception is not to labelling altogether, but to the more intrusive form that might otherwise apply.

Using the Code in practice: compliance steps for providers and deployers

It is worth considering where organisations are most likely to go wrong, and what they should now be doing to reduce that exposure.

Risks

In practice, compliance risk for AI system providers and deployers will likely crystallise in (i) wrongly classifying the organisation’s role, (ii) treating transparency as a ‘mark-only’ or ‘label-only’ exercise, (iii) misapplying the simplified scope triggers, (iv) misreading the statutory exceptions related to human review and creative works, and (v) governance gaps and contractual misalignment.

  • Wrongly classifying the organisation’s role: under the Act, organisations must determine whether they are a provider or a deployer. The Code clarifies that system providers who integrate third-party models via API still bear their own full responsibility for marking outputs, even if they did not train the underlying model.
  • Treating transparency as a ‘mark-only’ or ‘label-only’ exercise: this could be a provider assuming that one layer of marking is enough, or a deployer believing that displaying a label to the end-user once is enough. It is clear that the Code demands a stricter set of actions which vary depending on the mode of content. On the provider side, that also includes the detection and verification dimension, not only the insertion of a marker.
  • Misapplying the simplified scope triggers: the Code has now removed the complex taxonomy distinguishing “AI-generated” content from “AI-assisted” content. The risk is now a binary legal one: whether the AI system performs a purely “assistive function for standard editing” or does not substantially alter the input data provided by the deployer or its semantics. Misclassifying a potential trigger could create a compliance gap.
  • Misreading the statutory exceptions: an organisation must understand that these exceptions are not absolute and still require action to be relied upon. For example, for the human review of public-interest text, the Code clarifies that documentation and policies are needed to demonstrate compliance. For creative works, a label is still required but the obligations are less stringent than for other deepfakes.
  • Governance gaps and contractual misalignment: organisations must ensure that they are discharging their transparency obligations in line with the Code’s emphasis on cooperation along the value chain. This is not only a technical issue. It is also a governance, process and contracting issue.

Compliance steps

Against that risk profile, the priority compliance task is to build a framework that addresses scope, controls, documentation and the allocation of responsibility.

In practical terms, to address these risks, organisations should:

  • Scope the relevant systems, outputs and roles: organisations need to know whether they are a provider or a deployer, as well as ensuring they are compliant if they use third-party AI systems. Additionally, organisations must know the relevant marking or labelling required for each mode of output (text, image, audio or video); a simple inventory structure is sketched after this list.
  • Implement both provider-side marking controls and deployer-side labelling controls: where a mark or label is mandated, organisations should ensure it substantially complies with the Code’s guidance, for example by making sure markings are effective, interoperable, robust and reliable. For providers, that should also include a plan for detection or verification mechanisms.
  • Map Article 50 trigger points and exceptions: organisations should have clear policies outlining what constitutes an AI system performing an assistive function for standard editing, as well as a robust public-interest text check, to ensure triggers are not missed.
  • Document human review and editorial responsibility where relied upon: this applies to the public-interest text exemption, where the name, role and contact details of the person holding editorial responsibility must be documented, along with the review procedure.
  • Allocate responsibilities contractually across the supply chain: downstream organisations should not assume that transparency obligations sit only with the original AI model provider or only with the final deployer. All organisations across the value chain should therefore have contracts, governance documents and internal approval processes that clearly allocate responsibilities and dependencies. That is particularly important where one party provides the model-level marking capability, another provides the downstream system, and another publishes or distributes the content.
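
As a final illustration, the sketch below shows how the scoping exercise referred to in the first step might be captured as a simple inventory entry tying each system to its role, output modality, controls and contractual owner. The structure and field names are assumptions made for the example rather than anything mandated by the Act or the Code.

    from dataclasses import dataclass

    @dataclass
    class Article50ScopeEntry:
        """Illustrative inventory row for an Article 50 scoping exercise."""
        system_name: str
        role: str                 # "provider" or "deployer"
        output_modality: str      # "text", "image", "audio" or "video"
        marking_controls: str     # e.g. signed metadata plus watermarking
        labelling_controls: str   # e.g. icon placement and disclaimers
        exception_relied_on: str  # e.g. "human review / editorial responsibility" or "none"
        contract_owner: str       # party contractually responsible for each control

    inventory = [
        Article50ScopeEntry(
            system_name="marketing-image-generator",
            role="deployer",
            output_modality="image",
            marking_controls="upstream provider's metadata and watermark",
            labelling_controls="prominent icon on published images",
            exception_relied_on="none",
            contract_owner="model provider (marking); marketing team (labelling)",
        ),
    ]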

Conclusion 

The Code is likely to shape how compliance with Article 50 is assessed in practice. Organisations should begin compliance preparations now: the final Code is set to become the practical benchmark for compliance with Article 50(2) and 50(4), read together with Article 50(5), and delaying remediation until the rules apply on 2 August 2026 is likely to cause greater operational disruption. For most organisations, that means addressing scope, governance, documentation, technical controls and contracts now, rather than waiting for the final text to appear.