For companies operating in Europe, the binding baseline is still the AI Act timetable in Article 113: the Regulation entered into force on 1 August 2024; prohibited practices and AI literacy applied from 2 February 2025; governance rules and obligations for providers of general-purpose AI models applied from 2 August 2025; most of the Act applies from 2 August 2026; and the later 2 August 2027 date is a narrower carve-out for Article 6(1) and corresponding obligations for certain product-safety high-risk systems.[6][3][7]
The practical implication is that 2 August 2026 remains the operational deadline most companies should plan against unless and until the law is amended, even though the Commission has publicly acknowledged standards delays and proposed a different sequencing approach in COM(2025) 836.[1][3][5][4]
In the past week, the official picture changed mainly in supervisory posture, not in black-letter law: the 27 April 2026 AI Office update on the Signatory Taskforce for the GPAI Code of Practice said GPAI rules have applied since 2 August 2025 and that, ahead of August 2026 enforcement, the AI Office is preparing compliance assessments and actively engaging with providers.[2]
No official item identified in the 27 April to 3 May 2026 window adopted the Digital Omnibus proposal, issued an implementing or delegated act changing the 2026 deadlines, or otherwise amended Regulation (EU) 2024/1689.[4][5][3]
This report is therefore anchored on the statute as it stands, while treating recent Commission and AI Office materials as interpretive, procedural, or signaling inputs that affect how urgently companies should assemble evidence, contracts, and operating controls this quarter.[1][2][3]
The most consequential official update in the last week is procedural and supervisory, not legislative. The AI Office's 27 April 2026 Signatory Taskforce update states that GPAI rules have applied since 2 August 2025 and that the AI Office is preparing compliance assessments ahead of August 2026 enforcement.[2]
For GPAI providers, that update changes the immediate posture from "track the code process" to "assume regulator-ready file review is near."[2][10][11]
The Commission's public messaging on the standards gap remains important, but it is still a readiness signal tied to a pending proposal, not a live extension of the law. The official FAQ says harmonised standards were not delivered on the requested timeline and that the Commission therefore proposed in the Digital Omnibus to link high-risk obligations to support measures such as standards, common specifications, or guidelines.[1][5]
That means legal, product, and procurement teams should separate three categories of incoming information: binding change, interpretive guidance, and enforcement signaling. The last week produced the third category, not the first.[2][3][4]
The 30 April 2026 Apply AI sectoral event listed on the AI Office page is better read as industrial-policy signaling than as AI Act implementation relief. It does not alter Article 113 timing or the live GPAI obligations.[8][9][2][3]
| When | Status | What is in force or due | What that means operationally |
|---|---|---|---|
| Since 2 February 2025 | Live | Prohibited AI practices and AI literacy obligations apply. The Commission published prohibited-practices guidelines on 4 February 2025 and AI literacy FAQ materials explaining Article 4 in practice.[6][14][13] | Every company already using AI in Europe should have screened out prohibited use cases and documented a risk-based AI literacy programme; this is not a 2026 task.[6][13] |
| Since 2 August 2025 | Live | Governance rules and obligations for providers of general-purpose AI models apply. GPAI providers must maintain technical documentation, provide downstream documentation, maintain a copyright policy, and publish a sufficiently detailed training-content summary; systemic-risk providers have additional notification, risk-management, incident-reporting, and cybersecurity duties.[6][10][11][12] | Model providers with EU exposure are already in live compliance mode, and enterprise groups should test whether fine-tuning, rebranding, or major modification could make them the provider of a new model.[10][11][12] |
| By 2 August 2026 | Next major go-live | The Act becomes generally applicable, including Annex III high-risk obligations and Article 50 transparency obligations, unless a later specific rule applies. The Commission FAQ says Article 50 guidance will be issued before those obligations become applicable.[1][6] | Most companies should treat August 2026 as the date by which classification, conformity evidence, deployer procedures, transparency implementation, database registration where required, and incident workflows must be usable in production.[1][6] |
| From 2 August 2027 | Later, but narrow | The longer transition applies to high-risk AI systems that are high-risk because they are embedded in regulated products under Article 6(1) and corresponding obligations.[6][7][3] | That 2027 date is a specific carve-out, not a general excuse to postpone Annex III, transparency, GPAI, or AI literacy work.[6] |
The planning problem for 2026 is therefore asymmetrical: GPAI obligations are already live; prohibited practices and AI literacy are already live; Annex III high-risk and Article 50 obligations are the next broad compliance cliff; and the 2027 extension helps only a narrower product-regulation subset.[6][1][3]
For GPAI providers, the central compliance split is between documentation for authorities and documentation for downstream providers. The Commission's provider-guidelines FAQ says authority-facing technical documentation covers architecture; the training process; training, testing, and validation data; computational resources; and energy consumption, while downstream documentation must support integration and downstream compliance by covering intended tasks, acceptable use, input and output specifications, integration requirements, and data-provenance and curation information.[11]
A model card on its own is therefore unlikely to be enough. GPAI providers should now maintain at least five controlled evidence sets: authority-facing technical documentation, downstream-provider documentation, copyright-policy evidence, the public training-content summary using the Commission template, and, where systemic risk is plausible, notification and incident-response materials.[11][16][10]
If a provider may cross the systemic-risk threshold, the timing is especially unforgiving: the GPAI Q&A says the provider must notify the Commission without delay and in any event within two weeks after the threshold is met or becomes known to be met.[10]
For Annex III high-risk providers, Article 11 and Annex IV are the spine of the file. Technical documentation must exist before the system is placed on the market or put into service, must be kept up to date, and must be sufficient for authorities to assess compliance.[7][1]
The most important operational consequence is that intended purpose, foreseeable misuse, data governance, testing, human oversight design, logging, post-market monitoring, and pre-determined post-deployment changes need named owners now, not in July 2026.[3][1]
If engineering expects post-deployment learning, retraining, calibration, or threshold changes, those changes should be scoped into the initial conformity file where possible. Article 43(4) makes pre-determined changes materially easier to defend than undocumented ones.[7]
Deployers cannot treat provider compliance as a substitute for deployer compliance. Article 26 requires use in accordance with instructions for use, monitoring, human oversight, and representative input data where the deployer controls inputs; Article 27 adds a fundamental-rights impact assessment for certain deployers before first use.[1][7]
Waiting for the Commission's FRIA template is the wrong control design. A deployer dossier should already capture provider instructions and limitations, local business context, affected-person pathways, input-data controls, human-review steps, log-access method, and incident escalation.[1][15]
Importers and distributors remain easy to overlook because their obligations look administrative until something goes wrong. Under the Act, they have conformity gatekeeping, document-retention, cooperation, and corrective-action duties, and if they rebrand or substantially modify the system they can become the provider.[7][6]
For procurement and channel teams, the control question is basic but important: who verifies that the declaration of conformity, instructions for use, certificate where relevant, and authority-contact chain actually exist before resale or deployment?[6][7]
Most large enterprises buying third-party AI will be deployers first, but some will drift into provider status through rebranding, substantial modification, a changed intended purpose, or model modifications significant enough to create a new GPAI provider. That role drift, not whether the vendor markets the tool as "AI-assisted" or "GenAI," is the core commercial risk.[7][10][11]
Procurement should therefore ask three separate questions before signature: what role are we on day one, what documentation do we need from the vendor to meet our own duties, and what customization rights could later shift our role or invalidate the vendor's compliance assumptions.[11][7][10]
Incident handling needs a joined-up workflow across support, security, product, legal, and compliance. Article 26(5) requires a deployer that identifies a serious incident to inform the provider first, then the importer or distributor and the relevant authorities; Article 73 then puts providers on short reporting clocks: 15 days generally, 2 days for certain critical or widespread incidents, and 10 days in the event of death.[7][1]
Before the end of the quarter, companies should define one intake channel for complaints and anomalies, one triage team with authority to classify possible serious incidents, and one evidence-hold checklist covering prompts or inputs, outputs, model or system version, configuration, user actions, override actions, and relevant logs.[1][7][17]
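As an illustration only, the reporting clocks above can be encoded directly into a triage tool so the deadline is computed at classification time rather than looked up under pressure. This is a minimal sketch, not legal advice: the class names and the mapping of categories to day counts are simplifying assumptions, and the actual Article 73 triggers and start points should be confirmed against the Regulation's text.

```python
from datetime import date, timedelta
from enum import Enum

class IncidentClass(Enum):
    # Illustrative categories loosely mapped to Article 73 reporting clocks
    GENERAL = 15      # serious incident: report within 15 days
    WIDESPREAD = 2    # certain critical or widespread incidents: within 2 days
    DEATH = 10        # death of a person: within 10 days

def reporting_deadline(awareness_date: date, cls: IncidentClass) -> date:
    """Latest report date, counted from when the provider becomes aware."""
    return awareness_date + timedelta(days=cls.value)

# Example: a widespread incident classified on 1 June 2026
print(reporting_deadline(date(2026, 6, 1), IncidentClass.WIDESPREAD))
```

A triage team could attach this computed date to the evidence-hold checklist so that the hold, the classification memo, and the regulatory clock all reference the same record.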
The standards gap for high-risk systems remains unresolved. The Commission's FAQ says harmonised standards were not delivered on the requested August 2025 timeline and that the work is still ongoing; the legal consequence is that presumption-of-conformity pathways remain incomplete even though the statutory 2 August 2026 date has not moved.[1][3]
Operationally, companies should design controls that can later be mapped to standards or common specifications instead of waiting for the standards to tell them what to build.[1][5]
The practical effect of the GPAI Code of Practice is still limited. Official materials describe it as a voluntary adequacy tool, not a statutory safe harbour, so signatory status should not replace article-by-article compliance mapping against Articles 53 and 55.[19][20][7]
Substantial modification remains one of the hardest live issues. The statutory concept is clear at a high level, but difficult in practice for retraining, pooled-data updates, local adapters, threshold tuning, or continuing-learning systems. Bird & Bird is useful here because it treats those engineering changes as real compliance trigger points rather than edge-case hypotheticals.[7][18]
The statutory anchor remains whether the change was foreseen in the initial conformity assessment and whether it affects compliance or intended purpose.[7]
Downstream reliance on vendor information remains structurally awkward. The Act assumes information will flow downstream, but it does not remove the commercial tension between compliance transparency and protection of trade secrets. Freshfields is directionally right that contracts need to solve more of this than the statute does.[11][1][17]
Enforcement readiness looks uneven. The AI Office is openly preparing GPAI compliance assessments, while Commission materials still say high-risk classification guidance, Article 50 guidance, FRIA templates, substantial-modification guidance, and value-chain guidance are under preparation. GPAI providers therefore face sharper immediate supervisory pressure than many Annex III providers, even though both should plan to the statute.[2][1][15]
| Function | Decisions this quarter | Evidence to demand or produce |
|---|---|---|
| Legal | Reset contract templates around role allocation, information support, modification control, and incident cooperation; track the Omnibus as a contingency, but do not draft to a delayed timeline that is not yet law.[5][3][17] | Produce clause libraries for regulatory status, documentation support, change control, log access, FRIA cooperation, and incident notification assistance.[17][11][1] |
| Product | Freeze role mapping and intended-purpose statements for each in-scope AI product, and identify every foreseeable update path that could change compliance assumptions or trigger substantial-modification analysis.[7][1] | Produce or update intended-purpose records, limitations packs, human-oversight design notes, logging architecture, post-market monitoring plan, and pre-determined change logs.[1][7] |
| Procurement | Make AI purchasing a gated process: no vendor goes live without a classification memo, role statement, documentation package, log-access terms, and change-control terms.[1][11][17] | Demand a vendor evidence pack covering classification, instructions and limitations, logging, unilateral-update rules, incident support, and downstream documentation where GPAI is upstream.[11][1] |
| Compliance / model risk | Build one control stack spanning inventory, role mapping, risk triage, documentation repository, human-oversight controls, change management, and incident escalation; do not treat AI Act work as a freestanding policy exercise.[21][1] | Produce an AI inventory with intended purpose, legal role, use context, high-risk/GPAI/transparency flags, business owner, and documentation location; run at least one incident tabletop before quarter end.[21][1] |
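The inventory the compliance row calls for can be as simple as one record per system with the fields named above. The sketch below is illustrative only: the field names, example values, and repository path are assumptions, not a prescribed schema, and should be adapted to the organisation's own register design.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryRecord:
    # Field names mirror the inventory fields suggested in the table above;
    # all are illustrative and should be adapted to the local register design.
    system_name: str
    intended_purpose: str
    legal_role: str                      # e.g. "provider", "deployer", "importer"
    use_context: str
    high_risk: bool = False              # Annex III or Article 6(1) flag
    gpai_upstream: bool = False          # a GPAI model sits in the supply chain
    transparency_obligations: bool = False  # Article 50 flag
    business_owner: str = ""
    documentation_location: str = ""

# Hypothetical example entry
record = AIInventoryRecord(
    system_name="cv-screening-tool",
    intended_purpose="Rank job applications for recruiter review",
    legal_role="deployer",
    use_context="HR recruitment across EU entities",
    high_risk=True,  # employment use cases appear in Annex III
    business_owner="HR Operations",
    documentation_location="grc/ai-register/cv-screening",
)
```

Keeping the register as structured records rather than free-text rows makes the quarterly tasks above (role mapping, risk triage, incident tabletop scoping) queryable instead of manual.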
The shortest accurate summary for senior owners is this: if you are a GPAI provider, prepare for scrutiny now; if you are a high-risk provider or deployer, build the evidence file and operating workflow now; if you buy third-party AI, make procurement and contracting carry more of the compliance load now.[2][1][11][3]