EU AI Act’s First Enforcement Deadline Just Passed, and Most Companies Aren’t Even Close
February 2, 2025 was supposed to be a wake-up call. Six months after the EU AI Act entered into force on August 1, 2024, the first enforcement deadline arrived. As of that date, AI systems classified as posing an “unacceptable risk” became illegal across the European Union. Fines for deploying prohibited systems: up to €35 million or 7% of global annual turnover, whichever is higher.
The prohibited categories were specific. Social scoring systems that evaluate individuals based on their social behavior or personal characteristics. AI designed for subliminal manipulation or exploitation of vulnerable groups. Real-time remote biometric identification in public spaces for law enforcement purposes (with narrow exceptions). Emotion recognition systems in workplaces and educational institutions. And AI systems that create or expand facial recognition databases through untargeted scraping.
These aren’t hypothetical use cases. Emotion recognition in hiring processes, customer sentiment analysis using biometric data, AI-powered surveillance: all of these had active deployments in enterprises operating in EU markets. As of February 2, those deployments became potential regulatory violations carrying penalties that would make GDPR fines look modest.
The phased timeline creates a false sense of comfort
The EU AI Act doesn’t arrive all at once. It unfolds across a carefully structured timeline:
February 2, 2025 was the first milestone: prohibited systems banned. August 2, 2025 brings obligations for general-purpose AI (GPAI) models, including transparency requirements and training data disclosure. August 2, 2026 is when the heavyweight rules hit: high-risk AI systems must meet conformity assessment requirements, register in the EU database, implement risk management systems, and ensure human oversight. August 2, 2027 extends the rules to high-risk AI embedded in products covered by existing EU product legislation and gives GPAI models already on the market before August 2025 until that date to comply.
That phased approach is methodical and reasonable from a regulatory design perspective. From an enterprise compliance perspective, it’s creating a dangerous illusion. Executives see “August 2026” for high-risk system requirements and mentally shelve the problem for another year. But the classification work, the risk assessments, the documentation, the conformity processes, all of that takes months to complete. Organizations that start their high-risk compliance work in early 2026 will not be ready by August 2026.
The European Commission published guidelines on prohibited AI practices on February 4, 2025, two days after the first deadline. The timing was telling. Even the regulator was still clarifying what “prohibited” meant as the prohibition took effect.
What most enterprises got wrong about February 2
The February 2 deadline seemed narrow enough to ignore. Most enterprise AI deployments aren’t social scoring systems or subliminal manipulation tools. But the prohibited categories are broader than they appear on first reading.
Consider emotion recognition. The Act prohibits “AI systems that infer emotions of a natural person in the areas of workplace and education institutions.” A company using AI-powered video interview platforms that analyze candidate facial expressions, voice tone, or body language to assess suitability is potentially deploying a prohibited system. Workforce analytics tools that monitor employee engagement through behavioral signals could fall into the same category.
Or consider the prohibition on AI that exploits vulnerabilities of specific groups. A marketing AI that targets elderly consumers with urgency-driven messaging, or an insurance pricing model that identifies and exploits cognitive biases in specific demographic groups, could trigger this provision.
The question that many compliance teams hadn’t answered by February 2 was simple: do we have any AI systems that fall into prohibited categories? Not “do we think we have them”: do we know? The answer requires a comprehensive AI inventory, which most organizations haven’t completed.
Trilateral Research’s compliance analysis was blunt about the readiness gap: organizations that have not met the February and August 2025 deadlines are currently non-compliant and face potential enforcement action.
The GPAI deadline nobody is talking about
While the February prohibited-systems deadline got the headlines, the August 2, 2025 deadline for general-purpose AI models may actually be more consequential for enterprise technology teams.
Under the GPAI provisions, providers of general-purpose AI models must publish sufficiently detailed summaries of the content used for training, create a policy for complying with EU copyright law, draw up and make publicly available technical documentation of the model, and provide information and documentation to downstream providers who integrate the GPAI model into their own AI systems.
For organizations building products on top of foundation models from OpenAI, Anthropic, Google, Meta, or others, this creates a chain of compliance obligations. The foundation model provider must meet GPAI requirements. But the organization building an application on that model inherits obligations too, particularly around transparency and documentation.
If you’re deploying a customer-facing AI system built on GPT-4 or Claude, you need to understand what documentation OpenAI or Anthropic will provide under the GPAI rules, and whether that documentation is sufficient for your own compliance obligations. If it isn’t, you have a gap that needs to be addressed before August 2025.
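To make that gap check concrete, here is a minimal sketch in Python. The item names are shorthand for the GPAI obligations summarized above, not official Article 53 terminology, and a real checklist would be drawn up with counsel:

```python
# Minimal sketch of a GPAI documentation gap check. Item names are shorthand
# for the obligations summarized above, not official Article 53 terminology.

REQUIRED_GPAI_DOCS = {
    "training_content_summary",  # sufficiently detailed summary of training content
    "copyright_policy",          # policy for complying with EU copyright law
    "technical_documentation",   # technical documentation of the model
    "downstream_provider_info",  # information for downstream integrators
}

def documentation_gaps(provided: set[str]) -> set[str]:
    """Return the required items a foundation model provider has not supplied."""
    return REQUIRED_GPAI_DOCS - provided

# Example: a provider has published a copyright policy and technical docs,
# but nothing on training content or downstream documentation.
gaps = documentation_gaps({"copyright_policy", "technical_documentation"})
print(sorted(gaps))  # ['downstream_provider_info', 'training_content_summary']
```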
High-risk is where the real compliance burden lives
The August 2, 2026 deadline for high-risk AI systems is where the Act’s impact will be felt most acutely. High-risk categories include AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice and democratic processes.
For enterprise technology teams, the employment category alone is enormous. AI systems used for recruiting, screening, evaluating candidates, making promotion decisions, monitoring employee performance, and allocating tasks all fall under high-risk classification. The conformity requirements are substantial: risk management systems, data governance, technical documentation, record-keeping, transparency to users, human oversight, accuracy and robustness, and cybersecurity measures.
Each high-risk AI system must undergo a conformity assessment, either self-assessed or through a notified body, depending on the category, and be registered in the EU’s public database before being placed on the market or put into service.
The compliance infrastructure for this doesn’t exist at most organizations today. Building it requires understanding which of your AI systems are high-risk (classification), documenting how each system works (technical documentation), implementing monitoring and oversight mechanisms (risk management), and establishing processes for ongoing compliance (governance).
That’s an 18-month project for most enterprises, which means the work should have started in early 2025 at the latest. Organizations starting in 2026 are already behind.
Extraterritorial reach: the GDPR playbook
The EU AI Act applies to any organization placing AI systems on the EU market or whose AI systems affect people within the EU, regardless of where the organization is based. This is the same extraterritorial scope that made GDPR a global compliance requirement, and enterprises should expect the same pattern to play out.
A US-headquartered company whose AI screens a job applicant pool that includes EU residents needs to comply. A Singapore-based company deploying customer service AI that serves EU customers needs to comply. A technology vendor selling AI-powered products to EU-based enterprises needs to ensure those products meet the applicable requirements.
The enforcement structure is also GDPR-inspired but more complex. The EU’s AI Office, based in Brussels, enforces obligations related to GPAI models. Each EU Member State must designate at least one national competent authority to enforce the Act’s other provisions. By August 2, 2025, these national authorities must be designated and operational.
The fine structure is tiered by violation type. Prohibited AI systems: up to €35 million or 7% of global annual turnover. Non-compliance with other provisions: up to €15 million or 3% of turnover. Supplying incorrect information: up to €7.5 million or 1% of turnover. For smaller enterprises, the fine is the lower of the specified amount and the percentage-based calculation, a carve-out designed to prevent disproportionate impact on SMEs.
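To put numbers on those tiers, here is a back-of-the-envelope sketch in Python. The caps and percentages mirror the Act’s text as summarized above; the function itself is illustrative, not legal advice, and actual penalties are set case by case by regulators:

```python
# Back-of-the-envelope sketch of the tiered fine caps described above.
# The amounts mirror the Act's text; the function is illustrative only.

TIERS = {
    "prohibited_system": (35_000_000, 0.07),     # €35M or 7% of global turnover
    "other_noncompliance": (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information": (7_500_000, 0.01),  # €7.5M or 1%
}

def max_fine(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine for a violation tier.

    Large enterprises face the higher of the fixed cap and the turnover
    percentage; SMEs face the lower of the two.
    """
    fixed_cap, pct = TIERS[violation]
    pct_cap = pct * global_turnover_eur
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# A €2B-turnover enterprise deploying a prohibited system: max(€35M, €140M) = €140M.
print(f"€{max_fine('prohibited_system', 2_000_000_000):,.0f}")
# A €50M-turnover SME with the same violation: min(€35M, €3.5M) = €3.5M.
print(f"€{max_fine('prohibited_system', 50_000_000, is_sme=True):,.0f}")
```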
What enterprises should have started doing yesterday
The gap between where most organizations are and where they need to be is significant. Having evaluated hundreds of industry award submissions in my work as a judge for organizations like the Globee Awards, I can tell you that the vast majority of enterprise AI deployments I’ve reviewed would struggle to produce the technical documentation the AI Act requires, let alone meet the full conformity assessment requirements for high-risk systems.
The starting point is an AI inventory. You cannot comply with a regulation that classifies AI systems by risk level if you don’t know what AI systems you have. This includes not just the systems you built intentionally, but the AI capabilities embedded in vendor products, the tools employees adopted independently, and the AI components in third-party integrations.
From that inventory, classify each system against the Act’s risk categories. Prohibited, high-risk, limited-risk, or minimal-risk. For anything in the prohibited category, stop deployment immediately. For high-risk systems, begin the conformity assessment process now, not in 2026.
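Here is a minimal sketch of that inventory-and-triage step, assuming a simple in-house record format. The field names and follow-up actions are illustrative; only the four risk tiers come from the Act:

```python
# Minimal sketch of an AI inventory record and a risk-tier triage step.
# Record fields and follow-up actions are illustrative, not prescribed.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    owner: str       # accountable team or individual
    source: str      # built in-house, vendor-embedded, or shadow IT
    purpose: str
    risk_tier: RiskTier

def triage(record: AISystemRecord) -> str:
    """Map a classified system to the next compliance action."""
    if record.risk_tier is RiskTier.PROHIBITED:
        return f"{record.name}: stop deployment immediately"
    if record.risk_tier is RiskTier.HIGH:
        return f"{record.name}: begin conformity assessment now"
    if record.risk_tier is RiskTier.LIMITED:
        return f"{record.name}: apply transparency obligations"
    return f"{record.name}: monitor for classification changes"

inventory = [
    AISystemRecord("video-interview-scoring", "HR", "vendor-embedded",
                   "emotion inference in hiring", RiskTier.PROHIBITED),
    AISystemRecord("resume-screener", "HR", "in-house",
                   "candidate screening", RiskTier.HIGH),
]
for record in inventory:
    print(triage(record))
```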
Map your supply chain obligations. If you’re using foundation models from external providers, document which models you depend on, what GPAI documentation the providers will supply, and where gaps exist. Engage your model providers early on this; they’re building their own compliance programs and may have timelines that affect yours.
Build your documentation infrastructure. Every high-risk AI system will need technical documentation covering the system’s purpose, how it works, its training data, its testing results, its risk management measures, and its human oversight mechanisms. If that documentation doesn’t exist today, producing it retroactively is substantially harder than building it alongside the system.
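One way to build that documentation alongside the system is a per-system record with a completeness check, sketched below. The fields track the documentation areas listed above; the structure is illustrative, not the Act’s Annex IV template:

```python
# Sketch of a documentation completeness check for one high-risk system.
# Field names track the areas described above; the structure is illustrative.

from dataclasses import dataclass, fields

@dataclass
class TechnicalDocumentation:
    purpose: str | None = None             # intended purpose of the system
    system_description: str | None = None  # how the system works
    training_data: str | None = None       # data sources, provenance, governance
    testing_results: str | None = None     # accuracy and robustness evidence
    risk_management: str | None = None     # identified risks and mitigations
    human_oversight: str | None = None     # oversight mechanisms and escalation

def missing_sections(doc: TechnicalDocumentation) -> list[str]:
    """List the documentation sections still unwritten."""
    return [f.name for f in fields(doc) if getattr(doc, f.name) is None]

doc = TechnicalDocumentation(purpose="Screen job applications",
                             system_description="Gradient-boosted ranking model")
print(missing_sections(doc))
# ['training_data', 'testing_results', 'risk_management', 'human_oversight']
```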
Establish a governance function. The AI Act doesn’t just require one-time compliance: it requires ongoing monitoring, reporting, and updating. Someone in your organization needs to own AI Act compliance as a continuous responsibility, not a project.
The cost of being late
The EU AI Act’s phased timeline is not a gift of extra time. It’s a structured compliance program that requires work at every stage. Organizations that treated February 2, 2025 as a distant concern are already non-compliant if they have prohibited systems in their environment. Organizations that treat August 2026 as a future problem will face the same reckoning, except that with high-risk systems the compliance burden is orders of magnitude heavier.
The GDPR precedent is instructive. When GDPR enforcement began in May 2018, many organizations scrambled through last-minute compliance efforts that were incomplete and expensive. The organizations that had treated the two-year transition period as a structured implementation timeline were ready. The ones that waited until the final six months were not.
The AI Act is following the same pattern, except the technology it regulates is evolving faster than GDPR’s subject matter did. The AI systems you have today may not be the AI systems you have in August 2026. The models will change. The capabilities will expand. The risk profiles will shift. A compliance program that’s static, that assesses your AI portfolio once and files the paperwork, will be outdated before the ink is dry.
The February 2 deadline was the easy one. Prohibited systems are relatively rare in mainstream enterprise deployments. The deadlines ahead, GPAI transparency in August 2025 and high-risk conformity in August 2026, will test whether enterprises treated the phased timeline as an opportunity to prepare or as an invitation to procrastinate.
The fines suggest the EU is betting on the latter.