The EU Just Voted on an AI Law That Was Obsolete Before the Ink Dried
On June 14, 2023, the European Parliament voted 499-28 to adopt its negotiating position on the Artificial Intelligence Act, the most comprehensive attempt at AI regulation the world had ever seen. The vote was celebrated as a landmark moment. Major media outlets ran headlines about Europe leading on AI governance. Legislators congratulated themselves on moving faster than Washington or Beijing.
There was one problem. The law they voted on was already outdated.
The European Commission had proposed the AI Act on April 21, 2021, nineteen months before ChatGPT existed and two years before generative AI became the most disruptive technology of the decade. The original draft was designed for a world of narrow AI: hiring algorithms that discriminated, medical devices that misdiagnosed, facial recognition systems that surveilled. It categorized AI systems by risk level and imposed obligations on providers of “high-risk” applications.
Then ChatGPT launched in November 2022 and rendered the entire conceptual framework obsolete. The EU spent the next seven months scrambling to staple generative AI provisions onto a law designed for a fundamentally different technology. On June 14, they voted on the result. What emerged was a regulatory framework trying to govern the future while still structured around the past.
The two-year drafting gap
To understand why the AI Act arrived obsolete, you need to understand what the world looked like in April 2021. GPT-3 had been released in June 2020 but was accessible only through a limited API. Stable Diffusion did not exist. DALL-E was a research demo. The idea that 100 million people would be pasting corporate source code and strategy documents into a chatbot within two years was science fiction.
The Commission’s 2021 proposal reflected that reality. Its risk-based framework sorted AI applications into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. The unacceptable-risk category banned social scoring, subliminal manipulation, and certain biometric surveillance uses. The high-risk category focused on AI systems used in critical infrastructure, education, employment, law enforcement, and migration.
None of these categories contemplated a technology that could generate text, images, code, and video from natural language prompts. None addressed foundation models trained on the entire internet. None anticipated that the primary AI risk enterprises would face in 2023 was employees leaking trade secrets into a chatbot, not a misprogrammed hiring algorithm.
The Council of the European Union reached its negotiating position in December 2022, days after ChatGPT launched and weeks before it reached 100 million users. The Council introduced the concept of “general-purpose AI” systems, a category that acknowledged foundation models but did not fully address the implications of generative AI for data privacy, intellectual property, and enterprise security. The definition was broad: AI systems that “can be used in, or adapted to, a wide range of applications for which it was not intentionally and specifically designed.”
Then Parliament took over. Between January and June 2023, MEPs processed roughly 3,000 amendments to the Commission’s text. The bulk of the political focus during this period was on foundation models and generative AI, technologies that had not existed when the law was proposed.
The foundation model fix
Parliament’s solution to the ChatGPT problem was to create a new category within the existing framework: foundation models, defined as AI models “trained on broad data at scale, designed for generality of output, and can be adapted to a wide range of distinctive tasks.” These were distinguished from general-purpose AI systems and intended to capture the specific capabilities and risks of large language models, image generators, and multimodal systems.
Under Parliament’s position, providers of foundation models would be required to demonstrate risk identification and mitigation before development was complete; produce extensive technical documentation; implement data governance measures including examination of data sources and potential biases; ensure appropriate levels of performance, predictability, safety, and cybersecurity; and register their models in an EU-wide database.
For generative AI systems specifically, Parliament added transparency requirements: content generated by AI must be labeled as such, models must be designed to prevent generation of illegal content, and providers must publish summaries of copyrighted material used in training.
On paper, this looked comprehensive. In practice, it was legislation trying to regulate a technology that was evolving faster than legislative language could be drafted.
The timing problem
Here is the timeline that illustrates the structural challenge:
April 2021: Commission proposes the AI Act, designed around narrow-AI risk categories.
November 2022: ChatGPT launches and reaches one million users in five days.
December 2022: Council adopts its position, incorporating “general-purpose AI” provisions.
March 2023: Samsung engineers leak semiconductor source code into ChatGPT.
March 2023: Italy’s Garante bans ChatGPT using the existing GDPR and extracts compliance in 29 days.
April 2023: The EDPB launches a ChatGPT task force, coordinating enforcement across member states.
June 2023: Parliament votes 499-28 on its amended position.
Notice the asymmetry. While the legislative process was still debating how to define foundation models, Italy’s data protection authority had already used existing law to force OpenAI into concrete behavioral changes. The Garante imposed its ban on March 31. By April 28, OpenAI had implemented privacy notices, opt-out mechanisms, and begun work on age verification. Twenty-nine days from enforcement action to compliance.
The AI Act, by contrast, would not enter into force until August 1, 2024, more than three years after it was proposed. Most of its provisions would not become applicable until 2025 and 2026. The prohibition on unacceptable-risk AI practices would apply six months after entry into force. Obligations for general-purpose AI models would apply after twelve months. High-risk system requirements would apply after twenty-four months.
By the time the AI Act’s foundation model provisions become fully enforceable, the technology will have gone through multiple generations of capability advancement. GPT-4 was already released in March 2023, three months before Parliament’s vote. GPT-5 development was underway. Claude, Gemini, and dozens of other foundation models were being built and deployed globally. The regulatory target was moving faster than the regulation.
What the AI Act actually regulates
The fundamental tension in the AI Act is between its risk-based structure, which was sound when applied to narrow AI, and the reality of general-purpose systems that resist categorization.
A hiring algorithm that discriminates based on race fits neatly into a “high-risk” category with clear obligations: bias testing, human oversight, transparency requirements. A foundation model that can write hiring criteria, generate discriminatory content, assist with surveillance, and also help a child with homework does not fit neatly into any risk category because its risk depends entirely on how it is used.
Parliament’s solution, creating a separate obligations layer for foundation models regardless of end use, was the correct instinct. But the implementation exposed the limits of ex-ante regulation for technology that advances discontinuously.
Consider the copyright transparency requirement. Parliament’s position required providers of generative AI to publish summaries of copyrighted training data. This sounds straightforward. In practice, foundation models are trained on datasets containing billions of documents scraped from the internet. OpenAI has never published a complete accounting of GPT-4’s training data. It is unclear whether a meaningful “summary” of billions of copyrighted works is even a coherent concept, and if it is, what enforcement of that requirement would look like when the training has already occurred.
Similarly, the requirement that foundation models demonstrate “appropriate levels of performance, predictability, safety and cybersecurity” presupposes that these properties can be measured and verified before deployment. For narrow AI systems, this is possible; you can test a medical diagnostic tool against known outcomes. For a foundation model that can generate arbitrary text, defining what “appropriate performance” means is an unsolved research problem.
The enforcement asymmetry
The most striking lesson from the first half of 2023 was the contrast between regulation that exists and regulation that is coming.
The GDPR was enacted in 2016, became applicable in 2018, and was used to discipline OpenAI in 2023, within 29 days of the enforcement action. The Italian Garante identified specific violations (lack of legal basis, missing transparency notices, no age verification), imposed a specific remedy (processing ban), and extracted specific compliance measures (privacy notices, opt-out mechanisms, age verification plans). By December 2024, a €15 million fine followed.
The AI Act was proposed in 2021, amended extensively in 2022 and 2023, and will not be fully enforceable until 2026. When it becomes applicable, enforcement will require newly established bodies, including an AI Office within the European Commission, to develop implementing guidance, establish compliance standards, and build enforcement capacity from scratch.
This is not an argument against the AI Act. Purpose-built AI regulation addresses risks that data protection law does not cover: the systemic risks of high-impact foundation models, the need for conformity assessments before deployment, prohibitions on specific high-risk uses. These are important governance mechanisms.
But the timing gap creates a regulatory vacuum that existing law fills imperfectly. For the five-plus years between the AI Act’s proposal and its full applicability, the GDPR was the only enforceable AI regulation in Europe. Italy proved it worked. The question is whether the AI Act, when fully operational, will work as well, or whether the technology will have moved so far beyond its June 2023 assumptions that another round of amendments will be needed before the ink dries.
What the practitioner actually needs
I build AI systems at enterprise scale. The regulation my team operates under today is not the AI Act; it is the GDPR, supplemented by sector-specific rules. When I make architectural decisions about data processing, consent mechanisms, transparency requirements, and data subject rights, I am applying legal frameworks that exist now, not frameworks that will exist in 2026.
This creates a practical challenge that every enterprise architect building AI systems should understand. The AI Act will eventually impose obligations on high-risk AI systems, foundation model providers, and deployers of general-purpose AI. But the timeline means that enterprises cannot wait for AI Act guidance to make design decisions today. Systems being built in 2023, 2024, and 2025 must be designed to comply with both existing law and anticipated future requirements, without certain knowledge of what those future requirements will look like in their final, enforceable form.
The organizations that navigate this well are treating the AI Act’s principles as architectural guidance while building compliance on the GDPR’s enforceable requirements. Data minimization, purpose limitation, transparency, accuracy, and data subject rights are not AI-specific concepts; they are data protection obligations that apply to every AI system processing personal data of European residents, right now, regardless of whether the AI Act exists.
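To make that concrete, here is a minimal sketch, assuming a simple text pipeline, of a pre-processing gate that applies data minimization and purpose limitation before anything reaches a model. The regex patterns and the purpose check are illustrative placeholders, not a complete PII detector or a legal-basis engine.

```python
import re

# Illustrative pre-processing gate (placeholder patterns, not a complete PII detector):
# apply GDPR-style data minimization and purpose limitation before text reaches a model.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(text: str) -> str:
    """Redact obvious personal identifiers that the stated purpose does not need."""
    text = EMAIL.sub("[email-redacted]", text)
    text = PHONE.sub("[phone-redacted]", text)
    return text

def prepare_prompt(raw: str, purpose: str, allowed_purposes: set[str]) -> str:
    # Purpose limitation: refuse processing outside the documented purposes.
    if purpose not in allowed_purposes:
        raise ValueError(f"Purpose '{purpose}' has no documented legal basis")
    return minimize(raw)

print(prepare_prompt("Contact jane.doe@example.com about the invoice",
                     purpose="support_summary",
                     allowed_purposes={"support_summary"}))
```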
Five principles for regulation-proof AI architecture
For enterprise leaders making AI deployment decisions in the gap between existing law and future regulation, five architectural principles reduce the risk of building something that either violates current rules or requires expensive retrofitting when new rules arrive.
First, build data provenance tracking from day one. Both the GDPR and the AI Act require you to know what data your system was trained on, where it came from, and whether you have a legal basis for using it. Retrofitting provenance tracking into an existing system is far more expensive than designing it in.
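A minimal sketch of what day-one provenance tracking can look like, assuming an append-only ledger written at ingestion time; the record fields and the ProvenanceLedger class are hypothetical illustrations, not a schema prescribed by the GDPR or the AI Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ProvenanceRecord:
    """One record per dataset ingested into a training or fine-tuning run."""
    dataset_id: str
    source_uri: str               # where the data came from
    license: str                  # e.g. "CC-BY-4.0", "proprietary", "scraped-unknown"
    legal_basis: str              # GDPR Article 6 basis, if personal data is present
    contains_personal_data: bool
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ProvenanceLedger:
    """Append-only ledger written at ingestion time, not reconstructed later."""
    def __init__(self, path: str):
        self.path = path

    def record(self, rec: ProvenanceRecord) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

# Usage: log every dataset the moment it enters the pipeline.
ledger = ProvenanceLedger("provenance.jsonl")
ledger.record(ProvenanceRecord(
    dataset_id="support-tickets-2024q1",
    source_uri="s3://internal-data/support/tickets/2024q1",
    license="proprietary",
    legal_basis="legitimate_interest",
    contains_personal_data=True,
))
```

The design choice that matters is that the ledger is written when data enters the pipeline, not reconstructed later under regulatory pressure.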
Second, implement modular transparency. The AI Act’s transparency requirements for foundation models will likely evolve as the technology evolves. Build your disclosure mechanisms as modular components that can be updated without redesigning your entire system. Do not hardcode a specific compliance format; build the infrastructure to generate whatever format regulators eventually require.
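One way to keep disclosure modular, sketched below on the assumption that the facts about a model can be produced independently of any reporting template: the content lives in one structure, and each output format is a pluggable renderer. The registry pattern and field names are illustrative, not a format any regulator has specified.

```python
import json
from typing import Callable, Dict

# The facts about the model are produced once, independent of any reporting format.
disclosure = {
    "model_name": "internal-assistant-v2",
    "provider": "ExampleCorp",
    "training_data_summary": "Licensed corpora and internal documents; see provenance ledger.",
    "known_limitations": ["May produce inaccurate text", "Not evaluated for medical use"],
}

# Renderers are pluggable; a new regulatory template is a new function, not a redesign.
RENDERERS: Dict[str, Callable[[dict], str]] = {}

def renderer(fmt: str):
    def register(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        RENDERERS[fmt] = fn
        return fn
    return register

@renderer("json")
def to_json(d: dict) -> str:
    return json.dumps(d, indent=2)

@renderer("plain_text")
def to_plain_text(d: dict) -> str:
    return "\n".join(f"{k}: {v}" for k, v in d.items())

def publish(fmt: str) -> str:
    return RENDERERS[fmt](disclosure)

print(publish("plain_text"))
```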
Third, design for auditability. Both existing GDPR enforcement and anticipated AI Act enforcement require the ability to demonstrate compliance. Log your model’s decisions. Document your training data selection criteria. Record your risk assessments. If you cannot reconstruct why your system behaved a certain way, you cannot defend that behavior to a regulator.
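A hedged sketch of that kind of decision logging: each model call appends one record linking the model version, the documented purpose, summaries rather than raw personal data, and a pointer to the relevant risk assessment. All field names and identifiers here are hypothetical.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(log_path: str, *, model_version: str, purpose: str,
                 input_summary: str, output_summary: str,
                 risk_assessment_ref: str) -> str:
    """Append one audit record per model decision and return its ID."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "purpose": purpose,                          # ties the call to a documented use case
        "input_summary": input_summary,              # a hash or summary, not raw personal data
        "output_summary": output_summary,
        "risk_assessment_ref": risk_assessment_ref,  # pointer to the written risk assessment
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Usage: every model call goes through a logged path.
record_id = log_decision(
    "audit.jsonl",
    model_version="assistant-v2.3",
    purpose="cv_screening_summary",
    input_summary="sha256 digest of the submitted CV text",
    output_summary="three-paragraph summary, no ranking produced",
    risk_assessment_ref="RA-2025-014",
)
```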
Fourth, separate purpose from capability. The AI Act’s risk categories depend on intended use, not underlying capability. A foundation model is not inherently high-risk, but its deployment in a high-risk context triggers obligations. Design your systems so that purpose-specific deployments can be independently assessed, documented, and controlled without modifying the underlying model.
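A minimal sketch of that separation: one shared model, several purpose-specific deployments, each carrying its own risk tier, oversight flag, and assessment reference, and a gate that refuses any purpose that has not been assessed. The tier names loosely echo the Act’s categories, but the mapping shown is an assumption for illustration, not the law’s text.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass(frozen=True)
class Deployment:
    """A purpose-specific deployment of a shared foundation model."""
    purpose: str
    risk_tier: RiskTier
    human_oversight_required: bool
    assessment_ref: str  # link to the risk/conformity assessment for this use

# The capability (the model) is one artifact; each purpose is assessed separately.
DEPLOYMENTS = {
    "internal_code_assistant": Deployment(
        purpose="internal_code_assistant",
        risk_tier=RiskTier.MINIMAL,
        human_oversight_required=False,
        assessment_ref="RA-2025-002",
    ),
    "cv_screening_summary": Deployment(
        purpose="cv_screening_summary",
        risk_tier=RiskTier.HIGH,  # employment context triggers high-risk obligations
        human_oversight_required=True,
        assessment_ref="RA-2025-014",
    ),
}

def check_gate(purpose: str) -> Deployment:
    """Allow a call only through an assessed, permitted, purpose-specific deployment."""
    dep = DEPLOYMENTS.get(purpose)
    if dep is None:
        raise PermissionError(f"No assessed deployment exists for purpose '{purpose}'")
    if dep.risk_tier is RiskTier.PROHIBITED:
        raise PermissionError(f"Purpose '{purpose}' is not permitted")
    return dep
```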
Fifth, assume the regulation will be stricter than you expect. Every version of the AI Act has been more restrictive than the previous one. The Commission’s 2021 proposal was expanded by the Council in 2022, expanded again by Parliament in 2023, and the final trilogue compromise in December 2023 included provisions for “high-impact” foundation models with systemic risk that neither the original proposal nor the Parliament’s position had fully anticipated. Build to the strictest plausible interpretation, not the most convenient one.
The legislative paradox
The European Parliament’s 499-28 vote on June 14, 2023, was historically significant. It represented the world’s most ambitious attempt to create a comprehensive legal framework for artificial intelligence. The legislators who crafted it were thoughtful, informed, and responsive to the rapid changes in the technology landscape.
And yet the law they voted on was designed for a world that no longer existed. The April 2021 proposal addressed narrow AI risks. The June 2023 amendments addressed foundation models and generative AI. By the time the Act becomes fully enforceable in 2026, the technology will have advanced through multiple generations, potentially including autonomous agents, multi-agent systems, and AI architectures that do not map to any category the current law contemplates.
This is not a failure of the legislation. It is an inherent limitation of ex-ante technology regulation. Laws take years to draft, negotiate, and implement. Technologies evolve in months. The gap between legislative intent and technological reality is not a bug in the process. It is the process.
The enterprises that understand this paradox will build AI systems that comply with current law, anticipate future regulation, and adapt to whatever framework actually arrives. The enterprises that wait for regulatory certainty before acting will discover that certainty never comes, and that existing law applies whether you were paying attention to it or not.
Italy proved that in 29 days.