The EU AI Act Is Now Law, And Here’s the Compliance Timeline That Should Scare You

The EU AI Act's enforcement timeline is tighter than most enterprises realize. Prohibited AI practices take effect first, high-risk obligations follow, and the penalty structure mirrors GDPR. The compliance window is already shrinking.

On July 12, 2024, the EU AI Act was published in the Official Journal of the European Union. Twenty days later, on August 1, 2024, it entered into force. The most comprehensive AI regulation in history now has legal authority, enforceable fines, and a phased compliance timeline that most enterprises haven’t read carefully enough to understand what’s actually coming.

Here’s the timeline problem nobody wants to talk about: the first enforcement deadline for prohibited AI systems is February 2, 2025. That’s six months from the date this regulation entered into force. Six months to identify whether any of your AI deployments fall into a category that’s about to become illegal in every EU member state.

Most organizations I’ve spoken with are still operating under the assumption that they have until 2026 or 2027 to worry about EU AI Act compliance. That assumption is going to be expensive.

The phased timeline trap

The EU AI Act uses a staggered enforcement schedule that creates a false sense of available time. The European Commission laid out the phases with enough breathing room between each stage that it’s easy to assume the hard deadlines are years away. They’re not.

February 2, 2025: Prohibitions on unacceptable-risk AI systems take effect. This includes social scoring systems, real-time remote biometric identification in public spaces (with narrow exceptions), and AI systems that manipulate human behavior in ways that cause harm. If you’re running anything that resembles these categories, you have six months to decommission it.

August 2, 2025: Obligations for general-purpose AI (GPAI) models kick in. This hits every organization using foundation models like GPT-4, Claude, or Gemini in production. Transparency requirements, technical documentation, copyright compliance, and energy consumption reporting all become mandatory.

August 2, 2026: High-risk AI system requirements become enforceable. This is the big one. Any AI system used in employment decisions, credit scoring, law enforcement, immigration, education, or critical infrastructure must meet rigorous requirements for risk management, data governance, transparency, human oversight, and accuracy.

August 2, 2027: The final major phase. High-risk requirements extend to AI systems that are safety components of products covered by existing EU product legislation (Annex I), and general-purpose AI models placed on the market before August 2, 2025 must be brought into compliance. From there, the full framework is in force.

The fines for non-compliance scale with organizational size: up to €35 million or 7% of global annual turnover, whichever is higher, for violations involving prohibited practices; €15 million or 3% for violations of other obligations; and €7.5 million or 1% for supplying incorrect information. For large enterprises, these are GDPR-level penalties applied to a much broader range of AI activities.
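
For a sense of scale, here’s a minimal Python sketch of that arithmetic; the function name and tier labels are mine, not the Act’s:

```python
def max_penalty_eur(global_turnover_eur: float, tier: str) -> float:
    """Fine ceiling per violation tier: the higher of the fixed
    amount and the percentage of global annual turnover."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_eur, pct = tiers[tier]
    return max(fixed_eur, pct * global_turnover_eur)

# For a €20B-turnover enterprise, the percentage dominates:
# 7% of €20B is €1.4B, dwarfing the €35M fixed amount.
print(f"€{max_penalty_eur(20e9, 'prohibited_practice'):,.0f}")
```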

Why classification is the first crisis

The compliance challenge that hits earliest and hardest is classification. Before you can determine what the EU AI Act requires of your organization, you need to know which of your AI systems fall into which risk categories. And most enterprises don’t have a complete inventory of their AI deployments, let alone a classification of each deployment against the Act’s four-tier risk framework.

The Act defines four categories: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). The boundaries between these categories turn on the purpose and context of each deployment, and drawing them requires legal analysis, not just a quick scan of your AI project portfolio.

Consider an AI system that helps screen job applications. If it simply sorts resumes by keyword, it might be minimal risk. If it scores candidates and influences hiring decisions, it’s high risk under Annex III of the Act. The difference between those two determinations carries the full weight of the high-risk compliance framework: conformity assessments, quality management systems, technical documentation, human oversight mechanisms, and ongoing monitoring requirements.
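
To make that boundary concrete, here’s a deliberately oversimplified sketch. The boolean flags stand in for what is, in practice, a legal judgment against Annex III and Article 6:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, oversight, monitoring"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific requirements"

def classify_screening_tool(scores_candidates: bool,
                            influences_hiring: bool) -> RiskTier:
    # Grossly simplified: the real determination is a legal analysis
    # of the deployment, not two boolean flags.
    if scores_candidates and influences_hiring:
        return RiskTier.HIGH   # Annex III: employment decisions
    return RiskTier.MINIMAL    # e.g. a plain keyword sorter

print(classify_screening_tool(scores_candidates=False, influences_hiring=False))
print(classify_screening_tool(scores_candidates=True, influences_hiring=True))
```

The point isn’t that classification can be automated. It’s that your inventory needs to record the decision-influencing role of each system, because that role, not the underlying technology, drives the tier.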

The extraterritorial reach of the Act compounds the problem. Like GDPR before it, the EU AI Act applies to any organization that places AI systems on the EU market or whose AI systems affect people within the EU, regardless of where the organization is headquartered. An American company using AI to process the applications of EU-based job candidates is subject to the Act’s requirements, even if the AI system runs on servers in Virginia.

The GPAI surprise for August 2025

General-purpose AI model obligations arriving in August 2025 represent a compliance challenge that many organizations haven’t fully registered. If you’re using any foundation model in production, the Act imposes obligations on both the model provider and the deployer.

Model providers must maintain technical documentation, comply with EU copyright law, and publish sufficiently detailed summaries of the content used for training. Providers of models classified as posing “systemic risk” face additional requirements: adversarial testing, incident monitoring and reporting, cybersecurity protections, and energy efficiency documentation.

But here’s what catches deployers off guard: the transparency obligations cascade downstream. Under the Act’s transparency provisions (Article 50), if you’re deploying a general-purpose AI model in customer-facing applications, you must ensure that users know they’re interacting with AI. Content generated by AI systems must be marked as machine-generated. Deepfake content must be labeled. These aren’t suggestions. They’re enforceable requirements with fines attached, and while Article 50 itself applies from August 2026, the provider-side documentation deployers will build those mechanisms on arrives with the GPAI obligations in August 2025.

Most organizations I work with have deployed GPT-4 or similar models into chatbots, customer support systems, document processing pipelines, and internal knowledge bases without any of these transparency mechanisms. The August 2025 deadline gives them just twelve months from the Act’s entry into force to stand up the documentation side, and barely a year more for labeling and disclosure systems that don’t exist in their current architectures.
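
A deployer-side fix, in miniature, is to attach provenance at the point of generation so every downstream surface can render a disclosure. A hedged sketch with placeholder names throughout; nothing here is a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    """Model output plus the provenance a deployer needs to render
    disclosures and mark content as machine-generated."""
    text: str
    model_id: str
    machine_generated: bool = True
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label(raw_model_output: str, model_id: str) -> LabeledOutput:
    # Wrap the raw completion so chat UIs, emails, and PDF exports
    # all see the same machine-generated flag downstream.
    return LabeledOutput(text=raw_model_output, model_id=model_id)

out = label("Here is a summary of your claim...", "example-foundation-model")
print(f"[AI-generated by {out.model_id}] {out.text}")
```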

The documentation burden for high-risk systems

For organizations with AI systems classified as high-risk, the compliance burden starting in August 2026 is substantial enough to require dedicated resources and multi-year planning that should have already started.

Article 9 of the Act requires a risk management system that operates throughout the AI system’s entire lifecycle. Not a one-time assessment. An ongoing, iterative process that identifies risks, estimates their severity, and implements mitigation measures that are documented, tested, and updated.

Article 10 requires that training, validation, and testing data meet specific quality criteria. Data must be relevant, representative, and, to the best extent possible, free of errors and complete. Data governance practices must address collection, preparation, and bias examination. For organizations that assembled training datasets without this level of documentation, retroactive compliance will be painful and potentially impossible.
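
For a flavor of what documented data governance means in practice, here’s a toy completeness-and-plausibility check. The field names are hypothetical, and a real pipeline would add representativeness and bias analyses on top:

```python
import math

def dataset_quality_report(rows: list[dict], required_fields: list[str]) -> dict:
    """Toy completeness and plausibility check over a tabular
    training set; the kind of evidence Article 10 implies."""
    total = len(rows)
    incomplete = sum(
        1 for r in rows
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    implausible = sum(
        1 for r in rows
        if isinstance(r.get("age"), (int, float)) and not 0 <= r["age"] <= 120
    )
    return {
        "rows": total,
        "incomplete_rows": incomplete,
        "implausible_age": implausible,
        "completeness": 1 - incomplete / total if total else math.nan,
    }

sample = [
    {"age": 34, "income": 52_000},
    {"age": None, "income": 48_000},  # incomplete record
    {"age": 240, "income": 61_000},   # obvious entry error
]
print(dataset_quality_report(sample, required_fields=["age", "income"]))
```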

Article 14 mandates human oversight mechanisms designed to prevent or minimize risks. These aren’t generic oversight committees. They’re technical controls that allow human operators to understand the AI system’s capabilities and limitations, correctly interpret outputs, decide not to use the system or override its output, and intervene or stop the system when necessary.
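
In code terms, the requirement is a decision point the human actually controls, not a log entry after the fact. A minimal sketch of that shape; the types and function are illustrative, not from the Act:

```python
from enum import Enum, auto

class OperatorAction(Enum):
    ACCEPT = auto()
    OVERRIDE = auto()
    STOP = auto()

def oversight_gate(model_output: str,
                   operator_action: OperatorAction,
                   operator_value: str | None = None) -> str | None:
    """A human decision point between model output and effect:
    accept it, substitute a human decision, or halt the system."""
    if operator_action is OperatorAction.ACCEPT:
        return model_output
    if operator_action is OperatorAction.OVERRIDE:
        return operator_value  # the human decision wins
    raise SystemExit("operator halted the AI system")  # STOP fails safe

print(oversight_gate("approve", OperatorAction.ACCEPT))
print(oversight_gate("deny", OperatorAction.OVERRIDE, "route to manual review"))
```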

Each of these requirements generates documentation, process changes, technical modifications, and ongoing governance obligations. Multiplied across every high-risk AI system in a large enterprise, the compliance work is comparable to GDPR implementation. And GDPR took most organizations three to five years to achieve meaningful compliance, despite having a two-year implementation window.

The supply chain dimension

The EU AI Act doesn’t just regulate the organizations that build and deploy AI systems. It regulates the supply chain that supports them. If your organization uses AI components from third-party providers, you need assurance that those components meet the Act’s requirements. If they don’t, the liability falls on you as the deployer.

This creates a compliance cascade that extends through every vendor, cloud provider, and API service in your AI stack. The foundation model provider must meet GPAI requirements. The fine-tuning service must document its data practices. The hosting provider must implement appropriate cybersecurity measures. And you, as the entity that puts the system into production, must verify that every link in that chain meets the applicable requirements.
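
Here’s what that mapping can look like as a record, with invented provider names and paths. The useful property is that missing evidence becomes visible as data:

```python
from dataclasses import dataclass, field

@dataclass
class SupplyChainEntry:
    layer: str      # e.g. "foundation model", "fine-tuning", "hosting"
    provider: str
    evidence: str   # where the compliance documentation lives, if anywhere

@dataclass
class AISystemRecord:
    name: str
    chain: list[SupplyChainEntry] = field(default_factory=list)

support_bot = AISystemRecord(
    name="customer-support-bot",
    chain=[
        SupplyChainEntry("foundation model", "ExampleModelCo", "gpai-docs/model-card.pdf"),
        SupplyChainEntry("fine-tuning", "in-house", "data-governance/ft-2024.md"),
        SupplyChainEntry("hosting", "ExampleCloud", ""),  # gap: nothing on file yet
    ],
)

gaps = [entry.layer for entry in support_bot.chain if not entry.evidence]
print("undocumented layers:", gaps or "none")
```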

For organizations that have built AI systems using combinations of open-source models, third-party APIs, and proprietary data, mapping this supply chain and documenting compliance at each layer is a project that takes months, not weeks. Starting that project after the enforcement deadlines have passed is starting too late.

Five actions that shouldn’t wait until 2025

First, complete an AI system inventory. Every AI deployment, every model, every automated decision-making system. Include shadow AI, third-party embedded AI, and AI components in SaaS products you’ve procured. You can’t classify what you haven’t catalogued. (A minimal record format is sketched after this list.)

Second, classify each system against the Act’s risk tiers. Work with legal counsel who understand both the regulatory text and your specific AI implementations. The boundaries between risk categories require interpretation, and getting classification wrong means either over-investing in compliance for minimal-risk systems or under-investing for high-risk ones.

Third, identify any prohibited systems immediately. If anything in your portfolio resembles the prohibited categories, start decommission planning now. The February 2025 deadline leaves no room for extended transition periods.

Fourth, appoint an AI compliance officer or equivalent role. Someone needs to own this. Not the CISO, not the CTO, not the general counsel on top of their existing responsibilities. A dedicated owner who understands both the technical architecture and the regulatory requirements.

Fifth, map your AI supply chain. For every AI system, document the providers, models, data sources, hosting infrastructure, and decision-making processes involved. You’ll need this for every high-risk conformity assessment, and assembling it under deadline pressure guarantees gaps.
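
And here’s the minimal inventory record promised above, tying the five actions together. Every field name is illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    system: str
    owner: str                       # action four: a named owner
    source: str                      # "in-house", "vendor", "SaaS-embedded"
    risk_tier: str = "unclassified"  # action two: filled in by legal review
    suppliers: list[str] = field(default_factory=list)  # action five

inventory = [
    InventoryEntry("resume-screening-model", "jane.doe", "vendor",
                   risk_tier="high", suppliers=["ExampleModelCo"]),
    InventoryEntry("helpdesk-chatbot", "ai-compliance-office", "SaaS-embedded"),
]

# Unclassified entries block every downstream compliance step.
todo = [e.system for e in inventory if e.risk_tier == "unclassified"]
print("needs classification:", todo)
```

Even a spreadsheet with these columns beats starting a conformity assessment from zero.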

GDPR taught us what happens when you wait

The organizations that treated GDPR as a future problem spent 2018 in crisis mode, scrambling to implement consent mechanisms, data processing agreements, and privacy impact assessments that should have been planned years earlier. The fines came. The operational disruptions came. And the competitive advantage went to the organizations that had prepared early enough to treat compliance as a strategic capability rather than a last-minute obligation.

The EU AI Act offers the same lesson with higher stakes. The AI systems affected are more numerous, more technically complex, and more deeply embedded in business operations than the data processing activities GDPR regulated. The compliance requirements are more technically demanding. And the enforcement timeline is, for some categories, already running.

The organizations that start classification and compliance planning now will be the ones that can deploy AI confidently in 2026 and 2027, knowing their systems meet regulatory requirements. The organizations that wait will discover that “we have until 2026” was never the right way to read a timeline that started on August 1, 2024.