The EU AI Act’s Second Deadline Just Created a Vendor Problem Nobody Planned For

EU AI Act Phase 2 enforcement makes GPAI transparency requirements binding. If your LLM vendor can't document training data provenance, energy consumption, and downstream risk, their compliance problem becomes your compliance problem.
On August 2, 2025, the EU AI Act’s general-purpose AI obligations took effect. The requirements sound straightforward: disclose training data summaries, document model capabilities, and comply with copyright rules. But for enterprises that buy AI rather than build it, enforcement created an unexpected problem: if your LLM vendor can’t produce the documentation you need for compliance, you’re the one holding the regulatory risk.

The EU AI Act’s first enforcement deadline on February 2, 2025, targeted the obvious: banned AI practices like social scoring and manipulative systems. Most enterprises found that deadline manageable because the prohibited categories, while broad, were at least identifiable. You could audit your AI deployments and determine whether any fell into the “unacceptable risk” category.

The August 2, 2025 deadline is a different kind of challenge. It doesn’t ask enterprises to stop doing something. It asks them to prove something - specifically, to prove that the general-purpose AI models they use meet transparency, documentation, and copyright compliance standards that most enterprises never thought to verify before signing their vendor contracts.

What the GPAI obligations actually require

The obligations that took effect on August 2, 2025 apply to providers of general-purpose AI models - the companies that develop and distribute large language models, multimodal models, and other foundation AI systems. Under the AI Act’s framework, these providers must now comply with several specific requirements.

First, technical documentation. Providers must maintain detailed records of their model’s development, training methodology, and evaluation results. The documentation must make the model’s creation process traceable - not just for regulators, but for downstream businesses that integrate the model into their products and services.

Second, training data transparency. Providers must publish a summary of the content used to train their models, following a template published by the European Commission on July 24, 2025. This includes data types, sources, and preprocessing methods. The summary must be detailed enough to be meaningful without requiring disclosure of trade secrets.

Third, copyright compliance. Providers must implement policies to identify and respect copyright holders who have opted out of having their works used for AI training under the EU’s Copyright Directive. They must use “state-of-the-art technologies” to identify and exclude protected content.

Fourth, for models classified as posing systemic risk - generally the most powerful frontier models - additional requirements apply, including adversarial testing, incident reporting, and cybersecurity protections.

The European Commission’s guidelines on GPAI provider obligations, published in July 2025, clarified which organizations qualify as “providers.” The key determination is whether an AI model “displays significant generality and is capable of competently performing a wide range of distinct tasks.” Training compute serves as one metric: the Commission’s guidelines use roughly 10^23 floating-point operations (FLOPs) of training compute as an indicative criterion for identifying GPAI models.
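For orientation, the compute criterion can be approximated with a common scaling heuristic of about six FLOPs per parameter per training token. This is a minimal sketch, not part of the Act: the 6 × parameters × tokens rule of thumb and the function names below are illustrative assumptions, and the 10^23 figure is the Commission’s indicative threshold, not a bright-line legal test.

```python
# Rule-of-thumb estimate of training compute, to see roughly where a model
# falls relative to the Commission's indicative GPAI threshold of 1e23 FLOPs.
# The 6 * parameters * tokens approximation is a common scaling heuristic,
# not a definition from the AI Act or the guidelines.

GPAI_INDICATIVE_THRESHOLD_FLOPS = 1e23  # Commission guidelines, July 2025


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens


def likely_gpai(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated compute meets the indicative GPAI criterion."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= GPAI_INDICATIVE_THRESHOLD_FLOPS


# A hypothetical 7B-parameter model trained on 2T tokens lands just below
# the indicative threshold (~8.4e22 FLOPs); a 70B model on the same data
# lands well above it.
small_model = estimate_training_flops(7e9, 2e12)
large_model = estimate_training_flops(70e9, 2e12)
```

The point of the sketch is only that the criterion is estimable from public model cards, which makes it usable in a procurement checklist even before a provider confirms its own classification.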

The compliance gap nobody planned for

The obligations themselves are directed at model providers - the Anthropics, OpenAIs, Googles, and Metas of the world. Major providers moved quickly. Within weeks of the Code of Practice’s publication on July 10, 2025, dozens of major AI companies signed on as voluntary signatories, including Amazon, Google, Microsoft, OpenAI, and Anthropic. The optics were good. The substance was more complicated.

Here’s the gap that emerged: the AI Act creates obligations for model providers, but the compliance consequences flow downstream to deployers. If you’re an enterprise running customer-facing applications on a commercial LLM, you need your model provider to supply specific documentation - training data summaries, capability assessments, risk evaluations - to demonstrate your own compliance posture. If the provider can’t or won’t produce that documentation in the format regulators expect, you have a compliance problem that no amount of internal governance can solve.

Kuan Hon, a partner at law firm Latham & Watkins specializing in AI regulation, noted in a detailed analysis of the GPAI obligations that the Code of Practice “leaves several questions unanswered,” including the specific criteria for reporting serious incidents. The analysis highlighted that compliance with the Code “does not exclude the imposition of fines” - meaning that even signatories could face enforcement actions.

The practical problem for enterprise CIOs and procurement teams is this: most vendor contracts signed before mid-2025 don’t include provisions for EU AI Act compliance documentation. The training data transparency templates didn’t exist when those contracts were negotiated. The Commission’s guidelines on what qualifies as a GPAI model weren’t published until weeks before the August deadline. Enterprises are now in the position of retroactively requesting documentation that their contracts don’t entitle them to receive.

The Code of Practice: voluntary but consequential

The GPAI Code of Practice, finalized on July 10, 2025 and endorsed by the European Commission and AI Board on August 1, was designed to solve the “how do we actually comply” question. It’s structured around three chapters: transparency, copyright, and safety and security. Providers that sign the Code benefit from a presumption of compliance with the AI Act’s GPAI requirements.

But the Code is voluntary. And the consequences of not signing are revealing.

The Commission’s AI Office stated explicitly that providers who choose not to adhere to the Code will face “a larger number of requests for information and requests for access” from regulators. Non-signatories must independently demonstrate compliance through other means. The Code also factors into penalty assessments - the AI Office will consider Code adherence when determining fines.

Cédric O, who served as France’s Secretary of State for the Digital Transition and previously helped shape EU digital policy, has noted that voluntary codes of practice in EU regulation tend to become de facto mandatory over time: the presumption-of-compliance mechanism creates a regulatory advantage for signatories that non-signatories eventually cannot afford to forgo.

For enterprise buyers, the practical implication is that your AI vendor’s relationship with the Code of Practice directly affects your compliance exposure. A vendor that signed the Code and produces the required documentation gives you a cleaner compliance posture than one that didn’t sign and can’t produce equivalent evidence. This isn’t a technical evaluation criterion. It’s a procurement criterion. And most procurement frameworks haven’t been updated to include it.

The enforcement gap - and why it won’t last

There’s a nuance in the August 2025 deadline that some enterprises are treating as breathing room. While the GPAI obligations took effect on August 2, 2025, the European Commission’s full enforcement powers don’t apply until August 2, 2026. During the interim year, the AI Office can investigate potential violations through “qualified alerts” from its scientific panel, but the full enforcement toolkit - including the ability to impose fines of up to 3% of global annual turnover or €15 million - doesn’t activate until 2026.

DLA Piper’s analysis of the enforcement timeline noted that the penalty regime under Article 99 only requires Member States to establish their enforcement measures by August 2025, not to begin active enforcement. Many Member States were still designating their national competent authorities as the deadline passed. The enforcement infrastructure was, in many cases, still being built.

But treating the enforcement gap as a reason to delay compliance is a mistake that enterprise leaders have made before, most memorably with GDPR. The organizations that waited until enforcement began found themselves scrambling to implement controls under regulatory scrutiny. The ones that treated the gap as preparation time were in dramatically better positions.

Additionally, providers of GPAI models that were already on the market before August 2, 2025, have until August 2, 2027 to bring their models into full compliance. This grace period means that the LLMs your enterprise adopted in 2023 and 2024 are not yet required to have complete documentation. But models placed on the market after August 2, 2025, must comply immediately.

The practical question for CIOs: do you know which of your AI models were placed on the market before or after August 2, 2025? Do you know whether your vendor signed the Code of Practice? Do you have contractual rights to the documentation you need?

The fine-tuning question

One of the more complex aspects of the GPAI obligations involves model modification. When does fine-tuning an existing model make you a provider?

The Commission’s guidelines address this directly: a modifier or fine-tuner becomes a GPAI provider in their own right only if “the modification leads to a significant change in the model.” The indicative threshold is using one-third of the original model’s training compute, or one-third of 10^23 FLOPs if the original compute is unknown.

In practice, most enterprise fine-tuning falls well below this threshold. But the determination is not always obvious, particularly for organizations that have done extensive customization of open-source models. If you’ve taken an open-source foundation model and fine-tuned it significantly for your domain, you may have inadvertently become a GPAI provider with your own compliance obligations.

David Wright, a partner at Arnold & Porter who advises on EU AI regulation, highlighted in a detailed analysis that even organizations not modifying models need to maintain “a complete inventory of the systems they use” and “ensure that prohibited applications are not used.” The compliance burden extends beyond model providers to anyone in the AI value chain.

What to do about it

The GPAI obligations create specific procurement and governance actions that enterprise leaders should be taking now - not when enforcement begins in 2026.

Audit your AI model inventory. Create a complete list of every GPAI model your organization uses, including models embedded within vendor platforms. For each, document: the provider, the model version, the date it was placed on the EU market, and whether the provider signed the Code of Practice.

Update procurement requirements. Add EU AI Act compliance documentation to your vendor evaluation criteria. New contracts should include provisions requiring providers to supply training data summaries, technical documentation, and copyright compliance evidence. Existing contracts should be reviewed for amendment opportunities.

Assess your fine-tuning exposure. If your organization has fine-tuned or significantly modified any open-source GPAI model, evaluate whether the modification exceeds the Commission’s threshold for becoming a provider. If you’re close to the threshold or uncertain, get a legal assessment before the enforcement powers take effect.

Map the documentation chain. For each GPAI model in your inventory, determine whether the provider can produce the documentation you need to demonstrate downstream compliance. If they can’t, you have a gap that needs to be closed - either by switching providers, obtaining contractual commitments, or building your own compliance evidence.

Don’t wait for enforcement to signal seriousness. The Commission’s AI Office is already conducting informal engagements with GPAI providers. The scientific panel can issue qualified alerts. The organizations that will be in the strongest position when full enforcement begins in August 2026 are the ones that used the interim period to build their compliance infrastructure, not the ones that treated it as a holiday.
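The inventory and gap-mapping steps above can be sketched as a single record type. This is a minimal illustration, not a compliance tool: the field names, the gap messages, and the rule that legacy models get the 2027 grace period while post-August-2025 models owe documentation now are simplifications of the Act's actual requirements, and should be adapted to your own procurement and GRC tooling.

```python
# Illustrative inventory record for the audit and gap-mapping steps.
# Field names and gap labels are hypothetical.
from dataclasses import dataclass
from datetime import date

CUTOFF = date(2025, 8, 2)           # GPAI obligations take effect
LEGACY_DEADLINE = date(2027, 8, 2)  # grace period for models already on the market


@dataclass
class GpaiModelRecord:
    provider: str
    model_version: str
    placed_on_eu_market: date
    code_of_practice_signatory: bool
    has_training_data_summary: bool
    has_technical_documentation: bool

    def compliance_gaps(self) -> list[str]:
        """Flag documentation gaps, taking the 2027 grace period into account."""
        gaps = []
        if self.placed_on_eu_market >= CUTOFF:
            if not self.has_training_data_summary:
                gaps.append("missing training data summary (due now)")
            if not self.has_technical_documentation:
                gaps.append("missing technical documentation (due now)")
        elif not (self.has_training_data_summary and self.has_technical_documentation):
            gaps.append(f"legacy model: documentation due by {LEGACY_DEADLINE}")
        if not self.code_of_practice_signatory:
            gaps.append("provider is not a Code of Practice signatory")
        return gaps
```

Even a spreadsheet version of this record answers the three questions posed earlier: market-placement date, Code of Practice status, and documentation availability.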

The vendor dependency problem gets bigger

The August 2025 GPAI deadline exposes a structural challenge that will define enterprise AI governance for the next several years: the organizations deploying AI systems are dependent on model providers for compliance documentation they cannot generate themselves. You can audit your own deployments. You can govern your own use cases. But you cannot produce a training data summary for a model you didn’t train. You cannot document the evaluation methodology for a model whose internals you’ve never seen.

This is a new kind of vendor dependency. It’s not about uptime or feature availability. It’s about regulatory exposure that your vendor controls and you can’t mitigate independently.

The enterprises that navigate this well will be the ones that treat AI vendor compliance posture as a first-class procurement criterion - as important as security certifications, data processing agreements, and SLA guarantees. The ones that don’t will discover, somewhere around August 2026, that their AI compliance story has a vendor-shaped hole in it.