Italy Just Proved GDPR Is the Only AI Law That Actually Works

While governments race to draft AI-specific legislation, Italy used existing GDPR authority to force OpenAI into compliance changes. The lesson: the most effective AI regulation may already exist. It just wasn't designed with AI in mind.

On March 31, 2023, Italy’s data protection authority, the Garante per la protezione dei dati personali, did something no other regulator on the planet had done: it ordered OpenAI to immediately stop processing Italian users’ personal data through ChatGPT. Not a warning. Not a request for comments. Not a proposed rulemaking. An emergency ban, effective immediately, backed by the threat of fines of up to €20 million or 4% of global annual turnover, whichever is higher.

Twenty-nine days later, ChatGPT was back online in Italy, but only after OpenAI had implemented privacy notices, opt-out mechanisms for data training, age verification controls, and transparency disclosures that it had not offered to users anywhere else in the world. By December 2024, the Garante would fine OpenAI €15 million for the original violations. While politicians in Washington, Brussels, and Beijing debated what AI regulation should look like, Italy had already used existing law to force the most powerful AI company in the world to change its behavior.

The episode revealed something the technology industry did not want to acknowledge: the only AI regulation that works is the one that already exists.

What the Garante actually found

The Garante’s action was not triggered by abstract concern about artificial intelligence. It was triggered by a concrete data breach on March 20, 2023. A bug in an open-source library used by ChatGPT exposed users’ chat titles to other users, and, more critically, revealed partial payment information for approximately 1.2% of ChatGPT Plus subscribers who were active during a specific nine-hour window.

That breach prompted the Garante to look more closely. What it found was a stack of GDPR violations that had nothing to do with the breach itself:

OpenAI had no legal basis for processing personal data to train its algorithms. Under GDPR Article 6, every processing activity requires a lawful basis: consent, contractual necessity, legitimate interest, or one of several other enumerated grounds. OpenAI had not identified which basis applied to scraping the internet for training data that inevitably contained personal information about millions of Europeans.

OpenAI had failed to provide required transparency information. Articles 13 and 14 of the GDPR require controllers to inform data subjects about how their data is being processed. OpenAI had provided no such notice to the people whose data had been scraped for model training: people who were not ChatGPT users and had never interacted with OpenAI at all.

ChatGPT produced inaccurate information about real people. The GDPR’s accuracy principle requires that personal data be kept accurate and up to date. When ChatGPT generated false biographical information about identifiable individuals, it was processing inaccurate personal data with no mechanism for correction.

There was no age verification. ChatGPT’s terms of service required users to be at least 13, but no technical mechanism enforced that restriction.

OpenAI had failed to notify the Garante of the March 20 breach within the 72-hour window required by GDPR Article 33.

Each of these was a straightforward application of existing GDPR provisions. The Garante did not need new legislation. It did not need to invent novel regulatory theories. It applied the rules that had been on the books since 2018 and found that OpenAI was not complying with any of them.

The conventional wisdom about AI regulation

The timing of the Italian action matters because of what was happening everywhere else. In April 2023, the dominant narrative about AI regulation was that existing laws were inadequate and that purpose-built AI legislation was necessary.

The EU had been working on its AI Act since April 2021, with the European Parliament scrambling to rewrite major sections after ChatGPT’s November 2022 launch rendered the original draft, focused on narrow AI applications like hiring algorithms and medical devices, largely obsolete. In the United States, the conversation was even more speculative, with various proposed frameworks circulating but no binding federal legislation on the horizon.

The assumption embedded in this discourse was that regulators lacked the tools to address generative AI. Companies and their lobbyists argued that AI was fundamentally different from previous technologies and required new regulatory frameworks designed from scratch. The standard line from industry was: give us time to develop responsible AI practices, and give regulators time to develop appropriate rules.

Dessislava Savova, head of the Continental Europe Tech Group at Clifford Chance, captured the significance of the Italian action in a Politico interview: “This is a wake-up call. It will trigger a dialogue in Europe and it will accelerate a position being taken by other regulators.” And in a Reuters interview, she added: “The points they raise are fundamental and show that GDPR does offer tools for the regulators to be involved and engaged into shaping the future of AI.”

That assessment proved precisely correct.

Thirty days that changed AI governance

The Garante gave OpenAI 20 days to comply with its demands. What followed was a concentrated demonstration of regulatory leverage that AI-specific laws have yet to replicate.

On April 11, 2023, the Garante published a detailed set of conditions that OpenAI had to meet before the ban would be lifted. The requirements were specific and technical: implement a privacy notice for both users and non-users; provide opt-out mechanisms for data training; establish age verification; remove references to “performance of a contract” as a legal basis for training data processing, since the Garante had narrowed the acceptable options to consent or legitimate interest.

Andrea Tuninetti Ferrari, a counsel at Clifford Chance, noted in an analysis of the Garante’s order that “generative AI presents a new set of challenges to the application of fundamental privacy principles and rights, such as accuracy and rectification of data.”

By April 28, OpenAI had implemented enough of the Garante’s requirements to lift the ban. The company deployed a privacy notice on its website addressing both users and non-users. It created an opt-out mechanism allowing Europeans to prevent their data from being used for algorithm training. It began work on age-verification systems. And critically, it applied many of these changes not just to Italian users but across Europe.

The timeline speaks for itself. From ban to behavioral change: 29 days. No legislation was drafted. No committee hearings were held. No multi-year rulemaking process was initiated. An existing regulation with existing enforcement powers produced concrete changes in the behavior of the world’s most visible AI company in under a month.

The cascade effect

The Garante did not act in isolation. On April 13, 2023, just two weeks after the Italian ban, the European Data Protection Board established a dedicated ChatGPT task force to coordinate investigations and enforcement actions across EU member states. Data protection authorities in France, Germany, Spain, and Ireland opened their own inquiries within weeks. The Canadian Privacy Commissioner launched an investigation on April 4.

None of these regulators needed new legislation. They all used existing data protection law.

The EDPB task force published its first substantive report in May 2024, establishing preliminary positions on web scraping for training data, transparency obligations, and data accuracy requirements that would apply not just to OpenAI but to any company building large language models on personal data. The report was explicit that ChatGPT was being held to the GDPR’s existing data accuracy principle: hallucinations about real people were not just an inconvenience but a potential legal violation.

Then came the fine. In December 2024, the Garante issued a €15 million penalty against OpenAI for the original violations. The fine addressed OpenAI’s failure to identify a legal basis for processing before launching ChatGPT, its transparency violations, and its failure to report the March 2023 breach. OpenAI called the fine “disproportionate,” noting it was nearly 20 times the company’s Italian revenue during the relevant period. The Garante also ordered OpenAI to conduct a six-month public information campaign across Italian media to inform citizens of their right to opt out of data training.

Why AI-specific laws have not caught up

While Italy was extracting compliance from OpenAI in April 2023, the European Parliament was finalizing its negotiating position on the AI Act. On June 14, 2023, the Parliament voted 499-28 to adopt its position. The law had been proposed in April 2021: before ChatGPT existed, before generative AI was a consumer product, before anyone had scraped the internet at scale to train a foundation model.

The original draft focused on narrow AI use cases: biometric surveillance, hiring algorithms, credit scoring systems. When ChatGPT changed the landscape, Parliament members rewrote significant portions to address foundation models and generative AI. The final version of the AI Act would not enter into force until August 2024, with most provisions not becoming applicable until 2025 and 2026.

The contrast is stark. GDPR: ban to compliance in 29 days. AI Act: proposal to applicability, three to five years.

This is not an argument that AI-specific regulation is unnecessary. The EU AI Act addresses important issues that GDPR does not: risk classifications for AI systems, conformity assessments, prohibitions on specific uses. But the Italian episode proved that the most important function of regulation is not its cleverness or its comprehensiveness. It is its enforceability. And in April 2023, the only enforceable AI regulation in the world was a data protection law that had been written for a pre-AI era.

The practitioner’s lesson

I architect AI-powered systems that process enterprise data at scale. The Italy situation forced me to rethink something basic about how we build these systems.

The conventional approach to AI compliance in early 2023 was to wait: wait for the AI Act, wait for NIST to finalize its AI Risk Management Framework, wait for industry standards to mature. The Italy episode demonstrated that waiting was not a viable strategy. Regulators were not going to politely wait for purpose-built legislation when existing laws were being violated.

For enterprise leaders, this meant that GDPR compliance was not a checkbox to complete after building an AI system. It was an architectural constraint that had to be incorporated from the design phase. If your AI system processes personal data of European residents, and virtually every enterprise system does, then GDPR requirements for legal basis, transparency, accuracy, and data subject rights applied from day one. Not from the day the AI Act became applicable. From day one.

The organizations that understood this in April 2023 built their AI systems with data minimization, purpose limitation, and transparency baked into the architecture. The organizations that treated GDPR as irrelevant to AI are still scrambling to retrofit compliance into systems that were not designed for it.

What enterprise leaders should do Monday morning

The Italy precedent established five operational principles that every enterprise deploying AI should implement immediately:

First, identify your legal basis for every AI processing activity before deployment. The Garante fined OpenAI specifically because it had not identified a lawful basis for processing personal data before launching ChatGPT. Under GDPR, this assessment must happen before processing begins, not after a regulator asks.
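In practice, this pre-deployment gate can be as simple as refusing to ship any processing activity that lacks a documented Article 6 basis. A minimal sketch of that idea (the activity names and the reduced set of bases shown here are illustrative, not OpenAI's actual records):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LawfulBasis(Enum):
    """Article 6(1) GDPR lawful bases (subset shown for illustration)."""
    CONSENT = "consent"
    CONTRACT = "performance of a contract"
    LEGITIMATE_INTEREST = "legitimate interest"

@dataclass
class ProcessingActivity:
    """One entry in a record of processing activities."""
    name: str
    purpose: str
    lawful_basis: Optional[LawfulBasis]  # None = assessment not yet done

def predeployment_check(activities: list) -> list:
    """Return names of activities with no documented lawful basis:
    these must not go live until the assessment is complete."""
    return [a.name for a in activities if a.lawful_basis is None]
```

The design point is that the lawful-basis field is mandatory metadata on every activity, so the question "which basis applies?" has to be answered at design time rather than during an enforcement action.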

Second, build transparency mechanisms for non-users. One of the Garante’s most consequential findings was that OpenAI owed transparency obligations not just to ChatGPT users, but to the people whose data was scraped from the internet for training. If your AI system is trained on data that includes personal information about individuals who never opted in, you have transparency obligations to those individuals under Articles 13 and 14.
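One way to make those non-user obligations tractable is to track provenance metadata for every training source, so an Article 14 notice can actually state which categories of personal data were collected and from where. A hypothetical sketch (the source names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class TrainingSource:
    """Provenance record for one training-data source."""
    name: str
    contains_personal_data: bool
    data_categories: list  # e.g. ["names", "biographies"]

def article14_disclosures(sources: list) -> dict:
    """Aggregate what an Article 14 notice must be able to state:
    which sources held personal data, and which categories of
    personal data (Article 14(1)(d)) were collected."""
    pd_sources = [s for s in sources if s.contains_personal_data]
    return {
        "sources": [s.name for s in pd_sources],
        "categories": sorted({c for s in pd_sources for c in s.data_categories}),
    }
```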

Third, implement data subject rights infrastructure for AI outputs. When ChatGPT generates inaccurate information about a real person, that person has a right to correction under the GDPR. Your AI system needs a mechanism to receive, process, and respond to these requests. If it cannot correct or delete inaccurate personal data in its outputs, you have an accuracy problem that GDPR treats as a legal violation.
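A sketch of what such infrastructure might look like: a request record that tracks the Article 12(3) response deadline (one month, approximated here as 30 days), plus an output-suppression filter as a stopgap for when correcting the model itself is impractical. All names are illustrative, and real suppression would need more than substring matching:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RectificationRequest:
    """One Article 16 rectification request against AI output."""
    subject_name: str
    claimed_inaccuracy: str
    received_at: datetime
    resolved: bool = False

    @property
    def response_deadline(self) -> datetime:
        # Article 12(3): respond without undue delay, at the latest
        # within one month (approximated here as 30 days).
        return self.received_at + timedelta(days=30)

class OutputSuppressionFilter:
    """Stopgap when the model cannot yet be corrected: block
    generated output that mentions a subject under rectification."""
    def __init__(self) -> None:
        self._suppressed = set()

    def suppress(self, subject_name: str) -> None:
        self._suppressed.add(subject_name.lower())

    def allowed(self, output_text: str) -> bool:
        text = output_text.lower()
        return not any(name in text for name in self._suppressed)
```

Notably, output suppression is roughly what OpenAI itself resorted to for some accuracy complaints, since retraining a foundation model to "correct" one biography is not feasible on request timescales.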

Fourth, report AI-related data breaches within 72 hours. The Garante specifically cited OpenAI’s failure to notify the March 2023 breach within the required timeframe. AI systems that process personal data are subject to the same breach notification requirements as every other data processing system. If your AI model leaks training data or exposes user inputs, the clock starts immediately.
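The 72-hour clock is trivial to encode, which is exactly why missing it looks bad to a regulator. A minimal sketch:

```python
from datetime import datetime, timedelta

# GDPR Article 33(1): notify the supervisory authority within 72 hours
# of becoming aware of a personal data breach.
BREACH_NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest moment the supervisory authority can be notified."""
    return aware_at + BREACH_NOTIFICATION_WINDOW

def is_overdue(aware_at: datetime, now: datetime) -> bool:
    """True once the 72-hour window has lapsed without notification."""
    return now > notification_deadline(aware_at)
```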

Fifth, do not treat AI regulation as a future problem. The Italy precedent proved that existing data protection law is enforceable against AI systems today. If your legal team is advising you to wait for AI-specific legislation before implementing compliance measures, your legal team is giving you bad advice.

The regulation that already existed

The technology industry spent 2023 arguing that AI regulation needed to be invented. Italy proved it had already been invented: adopted in 2016, applicable since 2018, and sitting on the shelf waiting to be enforced.

The GDPR was not written for artificial intelligence. It was written for personal data processing, and AI systems process personal data. That made every GDPR obligation immediately applicable, every enforcement mechanism immediately available, and every penalty schedule immediately relevant.

Twenty-nine days from emergency ban to restored service with new privacy controls. Three years and counting from AI Act proposal to full applicability. The numbers tell the story of which regulatory approach actually changes behavior.

When an Italian regulator proved that existing law could discipline the world’s most powerful AI company in under a month, the argument for waiting on purpose-built AI legislation stopped being a strategic position. It became an excuse.