The EU AI Act Passed, And Your Compliance Team Is Already Behind

The EU AI Act is now law, with enforcement timelines that most compliance teams haven't internalized. The challenge isn't understanding the regulation. It's mapping AI systems to risk categories when most enterprises don't have a complete inventory.

On March 13, 2024, the European Parliament voted 523 to 46 to adopt the Artificial Intelligence Act, the world’s first comprehensive AI regulation. The EU Council formally approved the legislation on May 21, 2024. It was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024.

By that point, most enterprises operating in the EU had done exactly nothing to prepare. And the phased implementation timeline, designed to give organizations time to adapt, was creating a dangerous illusion of breathing room that obscured a simple, urgent fact: the first enforcement deadline was less than seven months away.

Prohibited AI systems, including emotion recognition in workplaces and educational settings, social scoring, and certain forms of predictive policing, had to be eliminated by February 2, 2025. Not evaluated. Not flagged for future review. Eliminated. Organizations that failed to comply faced fines of up to €35 million or 7% of global annual turnover, whichever was higher.

That penalty structure should have triggered an immediate compliance audit across every enterprise deploying AI in European markets. For the vast majority, it didn’t.

The phased timeline is a trap

The EU AI Act’s implementation follows a graduated schedule that extends through August 2027. The design intent was reasonable: give organizations time to adapt, starting with the most dangerous AI applications and progressively expanding to cover the full risk spectrum.

The timeline breaks down as follows. Six months after entry into force (February 2, 2025), prohibited AI systems must be discontinued and AI literacy obligations begin. Twelve months in (August 2, 2025), general-purpose AI transparency requirements take effect. Twenty-four months in (August 2, 2026), the full Act becomes applicable for most operators, including high-risk AI system requirements. Thirty-six months in (August 2, 2027), rules for high-risk AI systems embedded in regulated products (medical devices, aviation, automotive) take full effect.
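To make the schedule concrete for internal tracking, here is a minimal sketch of those milestones as a deadline lookup. The dates come straight from the timeline above; the labels and function name are illustrative.

```python
from datetime import date

# Applicability milestones described above, counted from entry into force (August 1, 2024).
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Prohibited systems discontinued; AI literacy obligations begin",
    date(2025, 8, 2): "General-purpose AI transparency requirements take effect",
    date(2026, 8, 2): "Full Act applicable to most operators, including high-risk requirements",
    date(2027, 8, 2): "High-risk AI embedded in regulated products (medical, aviation, automotive)",
}

def upcoming_deadlines(today: date) -> list[tuple[date, str]]:
    """Return the milestones that have not yet passed, soonest first."""
    return sorted((d, label) for d, label in AI_ACT_MILESTONES.items() if d >= today)
```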

Read that timeline quickly, and it sounds manageable. Three years to prepare. Plenty of time.

Now read it carefully. The first deadline was six months after entry into force. The second was twelve. And every deadline requires compliance steps that should have started months or years before the deadline itself. You cannot classify your AI deployments the week before the deadline and call yourself compliant. Classification requires inventory, assessment, documentation, legal review, and organizational sign-off. For a large enterprise with dozens or hundreds of AI systems, many of which were deployed with minimal documentation, this is a multi-quarter project.

The phased timeline doesn’t give enterprises three years. It gives them a cascading set of deadlines, each of which requires preparation that should have begun before the previous deadline hit.

Most companies haven’t classified their AI deployments

This is the foundational problem, and it’s staggering in its simplicity. Before an organization can comply with the EU AI Act, it must know what AI it’s deploying. Not in the abstract: specifically. Every model, every system, every third-party AI service integrated into its products or operations.

The Act classifies AI systems into four risk categories: unacceptable (prohibited), high-risk, limited risk, and minimal risk. Each category carries different obligations. Prohibited systems must be eliminated. High-risk systems require conformity assessments, technical documentation, logging, human oversight, accuracy and robustness requirements, and cybersecurity obligations. Limited-risk systems require transparency disclosures. Minimal-risk systems are largely unregulated.
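As a rough sketch of how those four tiers and their headline obligations might be encoded in an internal inventory tool, the following is a simplified paraphrase, not legal text; the names and obligation summaries are mine.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified paraphrase of the headline obligations per tier, for internal triage only.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["discontinue use"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "logging",
        "human oversight",
        "accuracy and robustness",
        "cybersecurity",
    ],
    RiskTier.LIMITED: ["transparency disclosure"],
    RiskTier.MINIMAL: [],  # largely unregulated
}
```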

The classification exercise sounds straightforward. It isn’t. Many enterprises have AI embedded in systems they didn’t build and don’t fully understand. A customer service platform purchased from a vendor may use AI for routing, sentiment analysis, and response suggestion. An HR tool may use AI for resume screening, candidate scoring, or performance evaluation. An operations system may use AI for demand forecasting, anomaly detection, or resource allocation.

Each of these embedded AI systems must be classified independently. The classification depends on the specific use case, the data processed, the decisions influenced, and the human oversight mechanisms in place. A sentiment analysis system used for customer feedback is minimal risk. The same technology used for employee emotion monitoring in the workplace is prohibited.
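The sentiment analysis example can be made concrete with a toy classifier. This is an illustration of how context, not technology, drives the tier, and emphatically not a legal determination; the context labels are hypothetical.

```python
def classify_sentiment_system(context: str) -> str:
    """Toy illustration: the same underlying model lands in different tiers
    depending on where and on whom it is deployed."""
    if context in {"workplace_emotion_monitoring", "classroom_emotion_monitoring"}:
        return "prohibited"      # emotion recognition at work or in education is banned
    if context in {"recruitment_screening", "credit_scoring"}:
        return "high-risk"       # Annex III areas
    if context == "customer_feedback_analysis":
        return "minimal"         # routine product-feedback analytics
    return "unclassified"        # escalate for manual review
```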

I’ve seen enterprise environments where the IT team can’t produce a complete list of AI-enabled tools deployed across the organization. Not because they’re incompetent, but because the procurement and deployment of AI tools have been decentralized, rapid, and largely undocumented. Shadow AI (employees using unauthorized AI tools) makes the problem worse. You cannot classify what you don’t know exists.

The supply chain problem nobody is discussing

The EU AI Act doesn’t just regulate the AI systems you build. It regulates the AI systems you deploy, including those built by third parties.

Article 26 establishes that deployers of high-risk AI systems have specific obligations including ensuring human oversight, monitoring system performance, maintaining logs, and reporting serious incidents to competent authorities. This means that if you purchase an AI-powered recruitment tool, a credit scoring system, or a biometric identification solution, you are responsible for ensuring that tool complies with the Act, regardless of what the vendor tells you about their own compliance.

For enterprises with extensive vendor ecosystems, this creates a compliance surface that scales with procurement, not with internal development. Every AI-enabled vendor relationship becomes a compliance dependency. Every contract needs AI-specific terms. Every procurement decision needs a risk classification review.
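One way to make that procurement dependency operational is to gate contract signature on a completed classification review. A minimal sketch follows; the record type and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProcurementRequest:
    vendor: str
    product: str
    uses_ai: bool
    risk_tier: Optional[str] = None      # set by the classification review
    ai_contract_terms: bool = False      # AI-specific obligations added to the contract

def ready_to_sign(req: ProcurementRequest) -> bool:
    """Block signature on AI-enabled purchases until they are classified and papered."""
    if not req.uses_ai:
        return True
    return req.risk_tier is not None and req.ai_contract_terms
```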

The organizations that are farthest behind are those that assumed compliance was their vendors’ problem. The Act makes clear that it’s everyone’s problem. The provider has obligations. The deployer has obligations. The importer and distributor have obligations. The accountability chain doesn’t stop at the contract boundary.

What the GDPR precedent teaches

The EU AI Act is explicitly modeled on GDPR’s enforcement architecture, and the GDPR precedent offers instructive lessons about what happens when enterprises underestimate EU regulatory timelines.

GDPR was adopted in April 2016 with a two-year implementation period before enforcement began in May 2018. Despite those two years of advance notice, most organizations were not compliant on enforcement day. A cottage industry of GDPR compliance consultants, tools, and services sprang up. Enforcement was initially slow, then accelerated dramatically. By 2024, cumulative GDPR fines exceeded €4 billion.

The AI Act’s penalty structure is significantly more aggressive than GDPR’s. GDPR’s maximum fine is €20 million or 4% of global turnover. The AI Act’s maximum is €35 million or 7% of global turnover. The regulators clearly learned from GDPR that the penalty needed to be large enough to command attention even from the largest multinationals.
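The “whichever is higher” mechanics are worth internalizing. As a quick illustration with a hypothetical €2 billion global annual turnover:

```python
def gdpr_ceiling(turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * turnover_eur)

def ai_act_ceiling(turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * turnover_eur)

# Hypothetical multinational with €2B global annual turnover:
print(gdpr_ceiling(2_000_000_000))    # 80,000,000  -> the 4% prong binds
print(ai_act_ceiling(2_000_000_000))  # 140,000,000 -> the 7% prong binds
```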

But the more relevant lesson from GDPR is about institutional inertia. Two years of advance notice wasn’t enough for most enterprises to achieve compliance. The AI Act’s obligations are, in many respects, more complex than GDPR’s: they require technical assessments of AI system behavior, not just data processing documentation. If GDPR compliance took most organizations two to three years after enforcement began, AI Act compliance could take longer.

The organizations that handled GDPR well were those that treated it as a multi-year program, not a deadline-driven project. The same approach is required for the AI Act, and the clock started in August 2024.

The extraterritorial reach catches everyone

Like GDPR, the EU AI Act has extraterritorial applicability. If your AI system produces output that is used within the EU, the Act applies to you, regardless of where your company is headquartered or where the AI system is operated.

This means that a U.S. company deploying an AI-powered customer service tool that serves European customers is within scope. A Japanese manufacturer using AI quality control systems in EU-based factories is within scope. An Indian IT services firm operating AI tools on behalf of EU clients is within scope.

The practical implication is that the AI Act is not a “European problem.” It is a global compliance requirement for any organization with EU market exposure. And since most enterprise AI systems are deployed globally rather than per-region, achieving compliance requires changes that affect the entire organization, not just the EU operations.

A practitioner’s honest assessment

I build AI systems for enterprise deployment, and I’m going to say something that compliance consultants won’t: the AI Act’s requirements, while well-intentioned, are going to expose just how little documentation most enterprise AI deployments have.

The Act requires technical documentation for high-risk AI systems that includes the system’s design specifications, development methodology, data governance practices, performance metrics, and risk management procedures. Most enterprise AI deployments, especially those built rapidly during the 2023 generative AI surge, have minimal documentation. The model was trained, it worked in testing, it was deployed, and the documentation consists of a Confluence page from nine months ago that three people can find.

Meeting the Act’s documentation requirements isn’t a legal exercise. It’s an engineering exercise that requires going back to systems that are already in production and reconstructing the design rationale, data lineage, and performance baselines that should have been documented during development. This is expensive, time-consuming, and often technically difficult, especially when the team that built the system has moved on to other projects.
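A rough sketch of what a retro-documentation effort has to reconstruct per system, paraphrasing the categories above; the field names are illustrative, not the Act's annex wording.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    system_name: str
    design_specification: str       # intended purpose, architecture, model choices
    development_methodology: str    # how the system was built, trained, validated
    data_governance: str            # data sources, lineage, quality controls
    performance_metrics: dict[str, float] = field(default_factory=dict)  # accuracy/robustness baselines
    risk_management: list[str] = field(default_factory=list)             # identified risks and mitigations
    open_gaps: list[str] = field(default_factory=list)                   # what could not be reconstructed
```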

Through my involvement with standards bodies including CoSAI and IETF working groups, I see the gap between the Act’s requirements and enterprise reality. The standards community is working to create frameworks that make compliance tractable. But frameworks don’t write documentation, classify systems, or audit supply chains. Humans do. And most organizations haven’t allocated the humans.

The five-step triage that should have started in March 2024

For any enterprise that hasn’t started AI Act compliance work, here is the triage that should have begun immediately after the March 2024 vote.

Step one: conduct an AI inventory. Not a survey asking business units whether they use AI, but an actual technical audit that identifies every AI-enabled system, tool, and service in the organization. Include vendor products, internal tools, and shadow AI. The inventory should capture what each system does, what data it processes, what decisions it influences, and who deployed it.
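A minimal schema for the inventory rows that step one should produce; the fields mirror the questions above and the names are illustrative.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InventoryEntry:
    system_name: str
    owner: str                      # who deployed or procured it
    vendor: Optional[str]           # None for internally built systems
    purpose: str                    # what the system does
    data_processed: list[str] = field(default_factory=list)   # e.g. CVs, customer messages
    decisions_influenced: str = ""  # e.g. candidate shortlisting
    shadow_ai: bool = False         # discovered outside sanctioned procurement
```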

Step two: classify every system against the Act’s risk categories. Prohibited, high-risk, limited, minimal. This requires reading the Act’s Annex III (high-risk systems list) carefully and applying it to your specific use cases. A general-purpose AI tool that touches recruitment, credit scoring, insurance underwriting, or law enforcement operations is almost certainly high-risk.
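A first-pass screen against the Annex III areas named above can be automated, with the caveat that the real annex is longer and the final call is a legal judgment, not a string match; the area list here is abbreviated.

```python
# Abbreviated; the actual Annex III list is longer and more precise.
HIGH_RISK_AREAS = {"recruitment", "credit_scoring", "insurance_underwriting", "law_enforcement"}

def needs_high_risk_workstream(use_area: str) -> bool:
    """First-pass screen: does this use case touch an Annex III area listed above?"""
    return use_area in HIGH_RISK_AREAS
```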

Step three: identify prohibited systems and begin elimination. If your organization uses AI for workplace emotion recognition, social scoring of employees, or biometric categorization based on sensitive characteristics, you have a prohibited system that must be discontinued. The February 2025 deadline has already passed: if you’re still running these systems, you’re already in violation.
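A similar screen can flag the prohibited practices named above during the inventory pass; the practice labels paraphrase the Act and are not exhaustive.

```python
# Paraphrased and non-exhaustive; labels are illustrative.
PROHIBITED_PRACTICES = {
    "workplace_emotion_recognition",
    "educational_emotion_recognition",
    "social_scoring",
    "biometric_categorisation_sensitive_traits",
}

def must_be_discontinued(declared_practices: set[str]) -> set[str]:
    """Return the subset of a system's declared practices that are banned outright."""
    return declared_practices & PROHIBITED_PRACTICES
```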

Step four: begin high-risk documentation for systems due by August 2026. This is the largest work stream and requires technical, legal, and operational collaboration. Start with the AI systems that have the highest organizational risk, those touching HR, finance, customer scoring, or safety-critical operations.

Step five: map your AI supply chain. For every third-party AI tool you deploy, determine the provider’s compliance status. If the provider cannot demonstrate compliance, begin contingency planning. Your compliance depends on theirs, but your liability is your own.
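Step five ultimately produces a vendor-by-vendor view of compliance evidence. A minimal sketch, assuming a simple record of risk tier and provider evidence; statuses and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorAITool:
    vendor: str
    tool: str
    risk_tier: str                      # e.g. "high", "limited", "minimal"
    provider_evidence: Optional[str]    # reference to conformity documentation, or None

def needs_contingency_plan(item: VendorAITool) -> bool:
    """High-risk vendor tools without provider evidence need a fallback plan."""
    return item.risk_tier == "high" and item.provider_evidence is None
```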

The comfortable fiction of “we have time”

The EU AI Act’s passage in March 2024 marked the beginning of a compliance obligation that will reshape how enterprises build, deploy, and procure AI systems globally. The phased timeline is not a gift. It’s a series of approaching deadlines, each dependent on work that most organizations haven’t started.

The enterprises that treat the AI Act as a 2026 or 2027 problem will discover, as their predecessors did with GDPR, that compliance is a capability that takes years to build and cannot be purchased from a consultant in Q4 of the deadline year.

The Parliament voted 523 to 46. The margin wasn’t close. The direction isn’t ambiguous. The only question is whether your organization’s response matches the urgency that the law’s architects clearly intended.