This report separates three layers throughout: binding law in Regulation (EU) 2024/1689, draft Commission guidance published for consultation on 8 May 2026, and outside commentary from specialist firms and policy organisations.[1][3][7][10]
On 8 May 2026, the European Commission opened a consultation on draft guidelines for Article 50 AI Act transparency obligations; it did not change the law or move the applicability date. Article 50 transparency rules still become applicable on 2 August 2026.[8][7][3][9]
The practical effect of the 8 May package is interpretive. It gives companies a much clearer pre-August compliance map for four recurring Article 50 workstreams: direct interaction notices, provider-side machine-readable marking of synthetic outputs, deployer notices for emotion-recognition and biometric-categorisation systems, and deployer disclosures for deepfakes and certain public-interest text.[9][2][7][8][3]
For most companies, the main compliance mistake is treating Article 50 as a narrow deepfake rule. Binding law reaches ordinary chatbots and voice bots, generative systems that output text, image, audio, or video, deployers of emotion-recognition and biometric-categorisation systems, and publishers or platforms exposing people to deepfakes or certain AI-generated public-interest text.[1][9]
The law is already clear on one operational point that many product teams miss: notices under Article 50(1), (3) and (4) must be clear and distinguishable and must reach people at the latest at first interaction or first exposure. Buried terms of service or generic policy disclosures are weak candidates for compliance.[1][7][8]
The biggest unresolved issues before 2 August 2026 are not whether Article 50 exists, but how to apply its thresholds: when AI interaction is sufficiently “obvious,” where standard editing ends and substantial alteration begins, how to classify borderline deepfakes or AI-assisted manipulations, and how demanding the editorial-responsibility exception will be in practice.[1][2][3][4][5][6]
Binding law. The legal baseline remains Regulation (EU) 2024/1689. Article 50 is already enacted as part of the AI Act, but its transparency obligations become applicable on 2 August 2026.[1][9][7][8]
Draft Commission guidance. On 8 May 2026, the Commission published draft guidelines for stakeholder feedback ahead of adoption and opened a consultation that ran to 3 June 2026. The Commission describes those guidelines as practical guidance intended to clarify the scope of Article 50 obligations and assist authorities, providers, and deployers.[8][7][3]
What is not binding before 2 August 2026. The 8 May package is not itself law. Neither the draft guidelines nor the separate transparency code of practice changes Article 50’s text, reallocates statutory roles, or accelerates the applicability date.[3][7][10][8][2]
What the package does change in practice. It makes the Commission’s likely enforcement framing more visible before August 2026. Companies can now prepare against a public Commission interpretation that emphasises role-splitting between providers and deployers, first-exposure disclosure design, and a multi-technique approach to machine-readable marking rather than a single watermark.[2][9][7][10][8]
Outside commentary. Specialist lawyers broadly agree that the package matters because it turns a short statutory provision into concrete implementation questions. They are less aligned on how hard the future code of practice will bite for non-signatories: some treat it as a de facto benchmark, while others stress that alternative compliance routes remain possible and that adherence to the code would not itself guarantee compliance.[5][4][6]
| Layer | Status before 2 August 2026 | Why it matters operationally |
|---|---|---|
| AI Act Article 50 | Binding law, with applicability of these transparency duties from 2 August 2026 | Sets the actual triggers, role allocation, exceptions, and timing rules |
| 8 May 2026 draft guidelines | Nonbinding draft Commission guidance under consultation | Signals how the Commission is likely to read scope, exceptions, and compliance design |
| Transparency code of practice | Separate voluntary implementation tool | May become an important benchmark for marking and labelling practice, especially under Article 50(2) and (4) |
| Specialist legal and policy commentary | Nonbinding outside interpretation | Useful for edge cases and implementation strategy where primary sources stay abstract |
The Commission is also explicit that the Article 50 transparency workstream is separate from the GPAI-model workstream. That matters because some companies have already been working on GPAI obligations since 2 August 2025, but those obligations do not replace Article 50 duties for downstream systems or publication workflows.[2][9]
Article 50 creates five main operational trigger buckets, plus a separate high-risk deployer notice under Article 26(11). Two buckets are primarily provider-side; the rest are deployer-side.[1]
Binding law. Article 50(1) places the duty on the provider of an AI system intended to interact directly with natural persons. The system must be designed and developed so that the natural persons concerned are informed that they are interacting with AI, unless that is obvious to a reasonably well-informed, observant, and circumspect person in context.[1]
Draft guidance signal. The Commission’s public materials consistently restate this bucket as a user-facing notice duty for providers when people interact with AI systems.[7][9][2]
Operational reading. This bucket plainly catches customer-service chatbots, website assistants, in-app copilots, voice agents, and avatar interfaces. The main live question is not whether these products are in scope, but whether the provider can rely on the obviousness exception in a given interface.[1][2][11]
Outside commentary. Specialist analysis treats obviousness conservatively: an explicitly branded “AI assistant” is easier to defend than a support widget, human-like voice, or avatar embedded in an ordinary service flow. That is interpretation, not a Commission safe harbour.[4][2][3][11]
Binding law. Article 50(2) places a provider-side technical duty on providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video, or text content. Outputs must be marked in machine-readable format and be detectable as artificially generated or manipulated, using technical solutions that are effective, interoperable, robust, and reliable as far as technically feasible.[1]
Binding carve-outs. Article 50(2) does not apply to the extent the system performs an assistive function for standard editing or does not substantially alter the deployer’s input data or its semantics; it also contains a criminal-law carve-out.[1]
Draft guidance signal. The Commission’s FAQ and code-page materials frame this as a toolbox problem rather than a single-watermark rule. They refer to watermarks, metadata identifications, cryptographic methods proving origin and authenticity, logging methods, fingerprints, and other techniques.[2][10]
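To make that toolbox framing concrete, here is a minimal sketch of one such technique: embedding a machine-readable marker in PNG metadata. It assumes the Pillow library; the field names are illustrative, not a format prescribed by Article 50(2) or the draft guidance.

```python
# Minimal sketch: one technique from the Commission's "toolbox" framing,
# embedding a machine-readable AI-generation marker in PNG metadata.
# Field names below are illustrative, not a prescribed Article 50(2) format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_png_as_synthetic(in_path: str, out_path: str, generator_id: str) -> None:
    """Re-save a generated PNG with metadata declaring it AI-generated."""
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")     # hypothetical key
    meta.add_text("generator", generator_id)  # e.g. a system or model name
    image.save(out_path, pnginfo=meta)

def is_marked_synthetic(path: str) -> bool:
    """Check that the marker survives a load/save round trip."""
    return Image.open(path).text.get("ai-generated") == "true"
```

Metadata alone is fragile: it is stripped by many re-encoding and screenshot workflows, which is exactly why the Commission materials describe a multi-technique toolbox rather than a single method.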
Operational reading. This is the broadest Article 50 trigger. It reaches not only frontier model vendors, but also system providers offering branded text generators, coding assistants, image tools, audio tools, or video tools.[1][12][4]
Outside commentary. Specialist commentary strongly suggests that API wrappers and enterprise SaaS layers should not assume they are outside Article 50(2) merely because they use someone else’s model. That reading is plausible and commercially important, but the reviewed primary Commission sources do not state it expressly in those words.[4][2][7][3][1]
Binding law. Article 50(3) places the notice duty on deployers of emotion-recognition systems and biometric-categorisation systems. They must inform the natural persons exposed to the operation of the system, and any personal-data processing must comply with the GDPR or other applicable EU data-protection rules.[1]
Definitions matter. The AI Act defines an emotion-recognition system as one intended to identify or infer emotions or intentions from biometric data, and a biometric-categorisation system as one assigning natural persons to specific categories based on biometric data, subject to limitations and carve-outs in the Act and recitals.[1]
Operational reading. This bucket can catch systems used in retail, employment, venues, advertising, or platforms, not only classic security deployments. Companies that think of these tools as analytics or personalisation can still trigger Article 50(3) if the system function matches the legal definitions.[1]
Binding law. Article 50(4) places the duty on deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake. They must disclose that the content has been artificially created or manipulated.[1]
Binding definition. A deepfake is narrower than synthetic content generally. Under Article 3(60), it is AI-generated or manipulated image, audio, or video that resembles existing persons, objects, places, entities, or events and would falsely appear authentic or truthful.[1][10]
Operational reading. Not every synthetic image, clip, or voice output is automatically a legal deepfake. A system may trigger provider-side Article 50(2) marking for synthetic output without triggering deployer-side deepfake disclosure under Article 50(4).[1][10][2]
Binding law. Article 50(4), second subparagraph, requires deployers of AI systems that generate or manipulate text published with the purpose of informing the public on matters of public interest to disclose that the text has been artificially generated or manipulated.[1]
Binding exception. This disclosure duty does not apply where the text has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for publication.[1]
Operational reading. This trigger is narrower than many summaries suggest. It requires text, publication, and a purpose of informing the public on matters of public interest. Internal enterprise outputs, private messages, and most customer-specific drafts are much less likely to fall inside this subparagraph, even if Article 50(2) still applies upstream.[1]
Binding law. Article 26(11) provides that, without prejudice to Article 50, deployers of Annex III high-risk AI systems that make decisions or assist in making decisions related to natural persons must inform those persons that they are subject to the use of the high-risk AI system.[1]
Operational reading. The phrase “without prejudice to Article 50” supports a cumulative reading: this high-risk notice can stack with other Article 50 notices rather than replacing them. The reviewed Commission materials do not elaborate further on this interaction.[1][11][7][8]
| System or workflow | Most likely trigger | Main role | Core obligation |
|---|---|---|---|
| Website chatbot / in-app assistant / voice bot | Article 50(1) | Provider | Inform people they are interacting with AI unless obvious |
| Text / code / image / audio / video generator | Article 50(2) | Provider | Machine-readable marking and detectability of synthetic outputs |
| Emotion-recognition / biometric-categorisation deployment | Article 50(3) | Deployer | Inform exposed people; comply with data-protection law |
| Published deepfake video / image / audio | Article 50(4), first subparagraph | Deployer | Disclose that content was artificially created or manipulated |
| Published AI public-interest text | Article 50(4), second subparagraph | Deployer | Disclose AI generation/manipulation unless editorial exception applies |
| Annex III decision-support system about people | Article 26(11) | Deployer | Inform people they are subject to the high-risk AI system |
The cleanest way to operationalise Article 50 is to separate provider-side design or marking duties from deployer-side exposure or publication duties. A single company may sit in both roles across different workflows.[1][10]
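As a sketch of that separation, a compliance team could encode the trigger table above as a first-pass triage helper. The attribute names are hypothetical, and the output is a prompt for legal review, not a compliance determination.

```python
# Sketch: first-pass Article 50 triage for a single workflow.
# Attribute names are hypothetical; the output flags duties for legal
# review, it does not decide compliance.
from dataclasses import dataclass

@dataclass
class Workflow:
    role: str                      # "provider", "deployer", or "both"
    interacts_with_people: bool    # chatbot, voice bot, avatar
    generates_synthetic_output: bool
    emotion_or_biometric: bool
    publishes_deepfake: bool
    publishes_public_interest_text: bool

def article50_triage(w: Workflow) -> list[str]:
    duties = []
    provider = w.role in ("provider", "both")
    deployer = w.role in ("deployer", "both")
    if provider and w.interacts_with_people:
        duties.append("Art. 50(1): interaction notice (unless obvious)")
    if provider and w.generates_synthetic_output:
        duties.append("Art. 50(2): machine-readable marking")
    if deployer and w.emotion_or_biometric:
        duties.append("Art. 50(3): exposure notice + data-protection compliance")
    if deployer and w.publishes_deepfake:
        duties.append("Art. 50(4) subpara 1: deepfake disclosure")
    if deployer and w.publishes_public_interest_text:
        duties.append("Art. 50(4) subpara 2: text disclosure (check editorial exception)")
    return duties
```

Running one workflow through both roles makes the dual-role point mechanical: a product that generates content and also publishes it can accumulate duties from both sides of the split.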
Article 50(5) supplies the general timing rule for notices under paragraphs 1, 3, and 4: information must be clear and distinguishable and reach the relevant natural persons at the latest at the time of first interaction or first exposure. It must also conform to applicable accessibility requirements.[1]
The Commission’s public pages reinforce the same exposure-based logic, but they do not publicly prescribe detailed modality-by-modality mechanics for banners, overlays, spoken disclaimers, or replay persistence.[2][7][8][3][11]
Outside commentary fills some of that gap by pushing companies toward first-exposure label design: visible on-screen labels for video, clear interface notices for chat, and potentially audible treatment in audio contexts. Those details are useful operationally, but they are not yet visible as binding Commission prescriptions on the reviewed public record.[5][12][2][11]
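A minimal sketch of what first-interaction timing can look like in a chat backend follows, under the assumption that a session-scoped notice delivered before the first reply satisfies the “clear and distinguishable” standard for that interface; the notice text and structure are illustrative, not Commission-prescribed.

```python
# Sketch: enforcing the Article 50(5) "first interaction" timing in a chat
# backend. The disclosure is emitted before the first assistant reply,
# not buried in terms of service. Names and message format are illustrative.
AI_NOTICE = "You are chatting with an AI assistant, not a human."

class DisclosingChatSession:
    def __init__(self, generate_reply):
        self.generate_reply = generate_reply  # any callable: str -> str
        self.disclosed = False

    def respond(self, user_message: str) -> list[str]:
        messages = []
        if not self.disclosed:
            messages.append(AI_NOTICE)  # reaches the user at first interaction
            self.disclosed = True
        messages.append(self.generate_reply(user_message))
        return messages
```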
| Use case | Likely actor | What must happen | Timing |
|---|---|---|---|
| Website chatbot or in-app assistant | Provider | User is informed they are interacting with AI unless obvious | By first interaction |
| Customer-support voice bot | Provider | User is informed they are interacting with AI unless obvious | By first interaction |
| Text/image/audio/video generation tool | Provider | Output is machine-readably marked and detectable as AI-generated or manipulated | At output level; Article 50(2) is a technical marking duty |
| Retail or workplace emotion-recognition deployment | Deployer | Exposed people are informed of system operation | By first exposure |
| Published deepfake clip or image | Deployer | Audience is told content was artificially created or manipulated | By first exposure |
| Published AI public-interest text | Deployer | Audience is told text was artificially generated or manipulated, unless editorial exception applies | By first exposure |
Two operational consequences follow. First, buried terms of service, privacy notices, or creator attestations alone are poor substitutes for first-contact disclosure where Article 50(1), (3), or (4) applies.[1][5][12]
Second, provider-side machine-readable marking is not a substitute for deployer-side user-facing disclosure. A company that both generates content and publishes it may need both.[1][10][9]
Binding law draws a deliberate split. Article 50(2) covers provider-side marking of synthetic audio, image, video, and text outputs generally. Article 50(4) covers deployer-side disclosure only for the narrower category of deepfakes and for certain public-interest text.[1][10][2]
Commission framing follows that split. The Commission’s code page states that providers must ensure outputs of generative AI systems are marked in machine-readable format and detectable as artificially generated or manipulated, while deployers must disclose deepfakes and AI-generated or AI-manipulated public-interest text.[10][2][9]
The binding definition is effect-based and narrower than “anything AI-made.” A deepfake must be image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear authentic or truthful.[1][10]
That means two categories should not be collapsed. A synthetic landscape image or stylised generated video may raise Article 50(2) provider-marking issues without necessarily triggering Article 50(4) deepfake disclosure.[1]
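The distinction can be stated as two separate tests, sketched below. The booleans stand in for legal judgments a human must make; the code only shows that the second test is strictly narrower than the first.

```python
# Sketch: the Article 50(2) / 50(4) split as two separate tests.
# The booleans stand in for legal judgments a human must make.
def needs_provider_marking(is_ai_generated_or_manipulated: bool) -> bool:
    # Art. 50(2): synthetic audio/image/video/text generally (carve-outs aside)
    return is_ai_generated_or_manipulated

def needs_deepfake_disclosure(is_ai_generated_or_manipulated: bool,
                              resembles_real_subject: bool,
                              falsely_appears_authentic: bool) -> bool:
    # Art. 3(60) + Art. 50(4): the narrower, effect-based test
    return (is_ai_generated_or_manipulated
            and resembles_real_subject
            and falsely_appears_authentic)

# A stylised synthetic landscape: marking yes, deepfake disclosure no.
assert needs_provider_marking(True)
assert not needs_deepfake_disclosure(True, resembles_real_subject=False,
                                     falsely_appears_authentic=False)
```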
For deepfakes, the required legal disclosure is that the content has been artificially created or manipulated. Recital 134 adds that the disclosure should be clear and distinguishable, and Article 50(5) fixes the latest timing at first exposure.[1]
Outside commentary uses that timing rule to argue for on-content or on-surface labels rather than back-office compliance. That is a reasonable operational reading, but the reviewed Commission public pages do not yet specify exact player, overlay, caption, or audible formats.[5][12][2][3]
Article 50(4) does not exempt artistic and similar works entirely. Where deepfake content forms part of an evidently artistic, creative, satirical, fictional, or analogous work or programme, the duty is reduced to disclosing the existence of generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.[1]
The reviewed Commission public pages do not materially elaborate the mechanics of that softer disclosure route. Specialist commentary generally reads it as requiring a real but non-intrusive label.[3][2][4][13]
The Commission treats public-interest text as a separate category from audiovisual deepfakes. Deployers must disclose AI-generated or AI-manipulated text publications informing the public on matters of public interest unless the publication has undergone human review and is subject to editorial responsibility.[10][9][2]
The editorial exception has two cumulative elements in the binding text: human review or editorial control, and editorial responsibility for publication. Companies should not reduce that to a generic claim that “a human reviewed it.”[1][7]
The provider-side obligation applies across audio, image, video, and text outputs. The Commission’s public materials expressly frame the technical problem across all four output types, not just image watermarking.[10][1]
Specialist commentary broadly converges on a defence-in-depth approach for this provider-side duty: metadata or provenance signals, watermarking where feasible, fingerprinting or logging, and sometimes downstream verification tools.[12][4][6]
For text outputs, public primary sources are less specific than they are for the existence of the duty. Commentary suggests provenance records or certificates may be more practical than trying to watermark words in a way that survives editing and preserves utility.[10][2][4]
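A minimal sketch of that provenance-record idea: hash the output, record the generation context, and sign the record so it can be verified later. The scheme, key handling, and field names are assumptions for illustration, not a format from the Commission or the draft code.

```python
# Sketch of a provenance record for generated text: commentary suggests
# records like this may be more practical than watermarking the words.
# Scheme, key handling, and field names are illustrative only.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key management

def provenance_record(text: str, model_id: str) -> dict:
    record = {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "model": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def matches_record(text: str, record: dict) -> bool:
    """Later verification: does this exact text match the logged hash?"""
    return hashlib.sha256(text.encode("utf-8")).hexdigest() == record["sha256"]
```

Any post-generation edit breaks the hash match, so records of this kind prove what was generated rather than what was ultimately published; that trade-off is part of why commentary weighs them against the fragility of watermarking edited text.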
The binding standard is thin. Article 50(1) exempts providers only where AI interaction is obvious to a reasonably well-informed, observant, and circumspect natural person, taking account of the circumstances and context of use.[1]
The accessible Commission materials reviewed here do not publish a worked example set for chatbots, voice assistants, avatars, or embedded copilots. That leaves companies with a legal standard but no public Commission safe harbour.[2][7][3][11]
Article 50(2) excludes assistive functions for standard editing and systems that do not substantially alter the input data or its semantics. The legal test exists, but the public Commission materials reviewed here do not supply a concrete example bank for the difficult middle cases.[1][2][3]
This matters most for AI-assisted editing tools that materially change meaning, realism, or authenticity cues without looking like full generative systems.[1]
The AI Act clearly allocates some Article 50 duties to providers and others to deployers, but it does not expressly resolve every modern supply-chain case, especially API wrappers and branded enterprise SaaS layers.[1][3][7]
Specialist commentary reads many wrappers as likely providers for Article 50(2), but that remains interpretation rather than explicit Commission wording on the reviewed public record.[4][2][1]
The binding law gives one definition of deepfake, but it does not provide a more detailed taxonomy of fully AI-generated, AI-assisted, minimally edited, or context-shifting manipulations.[1][10][2]
Outside commentary is exploring that taxonomy more aggressively than the public Commission materials. Some specialists see operational value in distinguishing fully AI-generated from AI-assisted content, while WITNESS warns that overly complex labels may confuse users and fail when content moves across platforms.[4][13]
The binding text requires two things: human review or editorial control, and editorial responsibility for publication. It does not say how extensive that review must be or what records are sufficient.[1][2][7]
Specialist commentary expects a documented editorial workflow with identified responsible persons rather than a token human glance. That is plausible, but it should be presented as interpretation of draft implementation materials, not as enacted law.[4][5][1]
The Commission describes the code as a voluntary tool that can help demonstrate compliance, especially for Article 50(2) and (4). That supports calling it influential, but not binding.[2][8][10]
Experts diverge on how far that influence goes in practice. Some see the future code as a strong benchmark for regulators and courts; others are more cautious and emphasise that compliance can still be shown by alternative routes.[5][4][6]
The Commission is explicit that the Article 50 guidelines and the transparency code of practice are separate instruments with different jobs. The guidelines cover Article 50 as a whole and clarify scope, legal definitions, exceptions, and horizontal issues. The code is a practical implementation tool focused on Article 50(2) and Article 50(4).[2][3][10]
The timing is also distinct. The draft guidelines were opened for consultation on 8 May 2026; the Commission expected the final transparency code in June 2026; Article 50 transparency duties still become applicable on 2 August 2026.[7][8][10][9]
The code is not the GPAI code. The Commission’s own FAQ says Article 50 transparency obligations are complementary to the transparency rules applicable to GPAI models under Articles 53 and 55, and that the transparency code addresses output marking and transparency toward persons exposed to outputs rather than model-level documentation duties.[2][9]
That separation matters because some companies sit in both regimes. A model provider that also offers downstream applications or publishes synthetic media may need separate workstreams for GPAI obligations, provider-side Article 50 obligations, and deployer-side Article 50 disclosures.[9][2][4][14]
The most source-grounded way to describe the future code before August 2026 is this: it is voluntary, likely to matter a great deal in practice, and most likely to influence the technical and procedural expectations around machine-readable marking and deepfake/public-interest-text labelling.[8][2][10][5][4][6]
| Track | Subject matter | Status/timing | Operational implication |
|---|---|---|---|
| Article 50 AI Act text | Binding transparency obligations | Applicable from 2 August 2026 | Set your legal minimum requirements from here |
| 8 May 2026 draft guidelines | Scope, definitions, exceptions, horizontal issues across Article 50 | Draft under consultation from 8 May 2026 | Use to anticipate likely Commission interpretation |
| Transparency code of practice | Practical implementation for Article 50(2) and (4) | Voluntary; final version expected June 2026 | Likely benchmark for marking and labelling design |
| GPAI obligations / GPAI code | Model-level duties for GPAI providers | Separate track, with GPAI obligations already applicable from 2 August 2025 | Do not assume GPAI work satisfies Article 50 |