Three AI Governance Frameworks in One Week, and Zero Actionable Compliance Requirements
On October 30, 2023, President Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It was the most comprehensive AI directive any government had issued: 100-plus pages directing federal agencies to address everything from privacy and civil rights to national security and workforce disruption. Bruce Reed, White House Deputy Chief of Staff, called it “the strongest set of actions any government in the world has ever taken on AI safety, security and trust.”
The same day, the G7 released its Hiroshima AI Process International Code of Conduct, 11 voluntary principles for organizations developing advanced AI systems. Two days later, on November 1-2, the UK’s AI Safety Summit at Bletchley Park produced the Bletchley Declaration, signed by 28 countries including the United States, China, and the European Union.
Three governance frameworks. 48 hours. Global coverage. Headlines everywhere.
Here is the part that didn’t make the headlines: none of them contained a single enforceable compliance requirement for private sector enterprises. Not one. If you were a CISO at a Fortune 500 company on November 3, 2023, reading the morning coverage of three simultaneous governance initiatives, you could have gone back to sleep. Nothing about your compliance obligations had changed.
What the executive order actually said
The Biden Executive Order was the weightiest of the three, and the most widely misunderstood. Media coverage framed it as sweeping AI regulation. In practice, it was a directive from the executive branch to the executive branch.
The order directed federal agencies to develop guidelines, create standards, run pilots, and report back on timelines ranging from 90 days to a year. The Department of Commerce was tasked with developing AI safety standards. The National Institute of Standards and Technology was directed to create red-teaming guidelines. The Department of Homeland Security was told to assess AI risks in critical infrastructure. HHS was instructed to evaluate AI in healthcare.
The order invoked Defense Production Act authorities to require companies developing “dual-use foundation models” (models with capabilities that could pose serious risks to national security) to share safety test results with the federal government. This was the most aggressive private-sector obligation in the document, and it applied to a narrow set of frontier model developers, not to the thousands of enterprises deploying AI in production.
For a typical enterprise (a bank, a manufacturer, a retailer, a healthcare system), the executive order created no new compliance requirements. No reporting obligations. No audit mandates. No penalties for non-compliance. It was an ambitious policy document that directed government agencies to think about AI governance. It did not direct enterprises to do anything.
Clifford Chance’s analysis of the order noted the absence of binding obligations on the private sector. The Brookings Institution published an assessment arguing that while the order was “a good start,” it was insufficient precisely because it lacked enforcement mechanisms.
The voluntary frameworks
If the executive order was a directive to government agencies, the G7 Code of Conduct and the Bletchley Declaration were something even softer: voluntary commitments with no enforcement mechanism at all.
The G7 Hiroshima AI Process produced 11 principles for “organizations developing advanced AI systems.” The principles included commitments to identify and mitigate risks, implement appropriate data governance, invest in security, and publish transparency reports. Each principle was stated at a level of generality that made compliance essentially self-defined. What constitutes “appropriate” data governance? What qualifies as “adequate” security investment? The document didn’t say.
The Bletchley Declaration, signed by representatives of 28 nations at the historic code-breaking site, was even more explicitly aspirational. The signatories agreed that AI posed risks, that international cooperation was necessary, and that further summits would be convened. They committed to establishing AI Safety Institutes (the UK and US versions launched in subsequent months) and to an international network of AI safety researchers.
Notably, the Bletchley Declaration included both the United States and China, which was presented as a diplomatic achievement. It was. But diplomatic achievements and enforceable compliance standards are different products with different utility to a CISO trying to build an AI governance program.
Anthropic CEO Dario Amodei presented the company’s Responsible Scaling Policy at Bletchley, a voluntary, self-imposed framework for managing the risks of increasingly capable AI models. This was, in some ways, the most honest contribution of the summit: a private company acknowledging that government frameworks didn’t yet provide meaningful guardrails and committing to its own.
The governance theater problem
The problem with three simultaneous governance frameworks was not that they were wrong. Each contained reasonable principles. The problem was the signal they sent to enterprise leaders.
The signal was: governance is happening. Regulations are coming. Smart executives will wait for the rules to be finalized before building internal programs.
This was a rational interpretation. If the White House, the G7, and 28 national governments were all converging on AI governance frameworks, it seemed reasonable to expect that enforceable standards would follow soon. Why invest in building a proprietary governance framework when a global standard might arrive in six months?
But the enforceable standards didn’t arrive in six months. The executive order’s agency-directed timelines stretched through 2024. The Bletchley follow-up summits occurred (Seoul in May 2024, Paris in February 2025) but produced additional declarations rather than regulations. The G7 principles remained voluntary. The EU AI Act, the closest thing to enforceable AI regulation, wouldn’t see its provisions become applicable until 2025 and 2026.
Enterprises that waited for external governance to tell them what to do lost 18 months. They deployed AI systems without formal risk taxonomies, without accountability structures, without incident response playbooks. They built AI capabilities faster than they built AI governance. And when the incidents started accelerating (the 56% year-over-year increase that Stanford HAI would later document), they had no framework to measure, manage, or report them.
The Italy precedent everyone ignored
The most instructive lesson from October 2023 wasn’t in any of the three frameworks. It had happened six months earlier.
On March 31, 2023, Italy’s data protection authority, the Garante, banned ChatGPT from the Italian market. The ban was not based on new AI regulation. It was based on GDPR, existing law, already enforceable, with real penalties attached. The Garante identified specific violations: lack of a legal basis for data processing, no age verification, no transparency about data use for model training.
By April 28, 28 days later, OpenAI had implemented the required changes: privacy notices, data processing disclosures, opt-out mechanisms, and the beginnings of age verification. The enforcement action produced concrete behavioral change in less than a month.
Compare this to the executive order’s timeline. The order was signed October 30. The agencies were given 90 to 365 days to develop guidance. The guidance, once developed, would inform standards. The standards, once finalized, would inform enforcement. The total cycle from directive to enforceable action was measured in years.
The lesson was clear: existing law with enforcement mechanisms produced faster results than new frameworks without them. GDPR, CCPA, SEC disclosure requirements, FTC enforcement actions. The regulatory tools that could actually compel enterprise behavior change already existed. They just hadn’t been consistently applied to AI contexts.
This lesson was reinforced in the months that followed. When the FTC investigated companies for deceptive AI practices, it didn’t use the Biden executive order as its authority: it used Section 5 of the FTC Act, which prohibits unfair or deceptive trade practices, a law from 1914. When the SEC required companies to disclose material cybersecurity incidents, including AI-related ones, it used existing securities regulations. When state attorneys general pursued companies for biased AI systems, they used existing civil rights law.
The existing regulatory infrastructure was imperfect for AI, but it was enforceable. The three October frameworks were elegant, but they were aspirational. For a CISO building a governance program, the distinction between enforceable and aspirational is the distinction between a compliance requirement and a reading list.
What enterprises needed instead
I was building AI governance into production systems while these frameworks were being announced, and what struck me was the disconnect between what the frameworks addressed and what I actually needed.
What I needed was specific. I needed a risk classification for LLM outputs that connected to our existing data classification scheme. I needed a decision matrix for when human review was required before an AI-generated action executed. I needed audit logging requirements that captured not just what the model produced but what input triggered it. I needed accountability definitions that specified who owned the outcome when an AI system made a decision that crossed departmental boundaries.
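To make that concrete, here is a minimal sketch in Python of the kind of audit record and review rule I mean. Everything in it, the classification tiers, the field names, the review threshold, is an illustrative assumption, not something any of the three frameworks prescribes:

```python
from dataclasses import dataclass
from enum import Enum


class DataClass(Enum):
    """Hypothetical data classification tiers; substitute your own scheme."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4


@dataclass
class AIAuditRecord:
    """One log entry per model invocation: captures the input that
    triggered the output, not just the output itself."""
    model_id: str            # which model and version produced the output
    prompt: str              # the triggering input
    output: str              # what the model produced
    data_class: DataClass    # highest classification of data in the prompt
    accountable_owner: str   # a named person, not a team alias
    human_reviewed: bool     # was a human in the loop before execution?


def requires_human_review(data_class: DataClass, autonomous: bool) -> bool:
    """Toy decision rule: review is mandatory when the system acts
    autonomously or the prompt touches confidential-or-above data."""
    return autonomous or data_class.value >= DataClass.CONFIDENTIAL.value
```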
None of the three frameworks addressed any of this. They operated at the altitude of principles: AI should be safe, trustworthy, transparent, fair, accountable. These are fine principles. They are also useless for engineering decisions. “AI should be trustworthy” does not tell me whether a model’s summarization of a customer complaint should be reviewed by a human before the response is sent. “AI should be accountable” does not tell me whether the product team or the security team owns the risk when a chatbot hallucinates a product capability.
Through my work with CoSAI and IETF AGNTCY, I’ve watched the standards community begin to fill these operational gaps, but the work is slow, and it began from a standing start because the high-profile frameworks consumed all the oxygen in the governance conversation. Enterprises pointed to the executive order and the Bletchley Declaration as evidence that governance was “being handled.” It wasn’t.
The standards community’s work differs from the October frameworks in a fundamental way: it starts with engineering requirements, not principles. When CoSAI members discuss AI security governance, the conversation begins with “what specific controls need to exist in production systems” rather than “what values should AI systems embody.” When IETF working groups discuss agent interoperability, the starting point is protocol specifications with defined security properties, not aspirational commitments to “appropriate safeguards.”
This bottom-up approach is slower to produce headlines but faster to produce implementable guidance. The challenge is that enterprise leaders, understandably, pay more attention to presidential executive orders than to IETF draft specifications. The result is a governance gap: the operational guidance exists (or is being built), but it isn’t reaching the executives who authorize budgets. The high-profile frameworks reach executives, but they don’t contain operational guidance. The two halves haven’t connected yet.
For the practitioner trying to build a governance program in late 2023, this disconnection was deeply frustrating. You could wave the executive order at your leadership team to justify the existence of an AI governance initiative. But when leadership asked “what specifically should we do?”, the order had no answer. The answer was in the growing ecosystem of operational frameworks (NIST AI RMF, the OWASP LLM Top 10, emerging CoSAI guidance) that nobody in the C-suite had read.
What to do about it Monday morning
The governance frameworks provided political cover. They did not provide operational guidance. Here is what operational guidance looks like:
Stop treating the Biden Executive Order as a compliance framework. It directed federal agencies. It did not direct your enterprise. If your AI governance program references EO 14110 as its foundation, you have a press release, not a program. Use NIST AI RMF as operational scaffolding instead; it’s imperfect, but it’s the closest thing to a concrete framework that maps to enterprise operations.
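For orientation, the RMF’s four core functions (Govern, Map, Measure, Manage) translate into concrete control categories. A minimal sketch, with hypothetical controls standing in for your own:

```python
# Illustrative mapping of the four NIST AI RMF 1.0 core functions to
# example enterprise controls. The function names come from the RMF;
# the controls are hypothetical placeholders for your own.
NIST_AI_RMF_CONTROLS = {
    "GOVERN": ["named accountable executive", "AI acceptable-use policy"],
    "MAP": ["AI inventory audit", "risk tier assignment per deployment"],
    "MEASURE": ["red-team findings", "incident counts", "drift monitoring"],
    "MANAGE": ["human-review gates", "AI incident response playbook"],
}
```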
Build an internal AI risk classification that is enforceable today. Don’t wait for external standards to tell you how to classify AI risks. Define three tiers: AI systems that inform human decisions (low risk), AI systems that take actions reviewable by humans (medium risk), and AI systems that take autonomous actions affecting customers, finances, or security (high risk). Map every AI deployment to a tier. Require proportional controls for each tier.
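A sketch of what that tiering might look like in code, with hypothetical proportional controls standing in for whatever your environment actually requires:

```python
from enum import Enum


class AIRiskTier(Enum):
    """The three tiers from the text; classify every deployment into one."""
    INFORMS_HUMANS = "low"       # outputs only inform a human decision
    HUMAN_REVIEWABLE = "medium"  # takes actions a human reviews before effect
    AUTONOMOUS = "high"          # acts on customers, finances, or security


# Hypothetical proportional controls per tier; substitute your own.
TIER_CONTROLS = {
    AIRiskTier.INFORMS_HUMANS: ["audit logging", "output labeling"],
    AIRiskTier.HUMAN_REVIEWABLE: ["audit logging", "pre-execution review queue"],
    AIRiskTier.AUTONOMOUS: ["audit logging", "kill switch",
                            "incident playbook", "periodic red-teaming"],
}
```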
Assign accountability for AI governance to a specific executive. Not a committee. Not a working group. A named person with budget authority and reporting obligations. Committees produce reports. Accountable executives produce programs. If the title is “Chief AI Officer,” fine. If it’s “VP of Engineering who also owns AI risk,” fine. The title matters less than the accountability.
Conduct an AI inventory audit. You cannot govern what you haven’t mapped. Identify every AI tool, model, API, and integration in your environment, sanctioned and unsanctioned. For each, document: what data it accesses, what actions it can take, who is accountable for its behavior, and what the failure mode looks like. The audit will reveal gaps that no external framework will tell you about, because no external framework knows your environment.
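A minimal sketch of one inventory entry; the field names are mine, not any standard’s, and the point is that every field answers a question your current tooling probably can’t:

```python
from dataclasses import dataclass


@dataclass
class AIInventoryEntry:
    """One row per AI tool, model, API, or integration. Fields mirror
    the questions above; all names are illustrative."""
    name: str                    # e.g. "support-chatbot" (hypothetical)
    sanctioned: bool             # shadow AI gets an entry too
    data_accessed: list[str]     # datasets and classifications it touches
    actions_possible: list[str]  # what it can do, not what it should do
    accountable_owner: str       # a named person with budget authority
    failure_mode: str            # what "gone wrong" looks like here
```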
Create an AI incident response playbook. The three governance frameworks collectively devoted thousands of words to AI principles and zero paragraphs to what happens when an AI system in your environment causes harm. You need a playbook that covers: containment procedures for shared AI services, forensics approaches for non-deterministic systems, regulatory notification triggers for AI-specific incidents, and communication templates for board and customer notification.
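A skeleton of such a playbook, with placeholder procedures to replace with your own:

```python
# Skeleton of an AI incident playbook mirroring the four components above.
# Every entry is a placeholder, not a prescribed procedure.
AI_INCIDENT_PLAYBOOK = {
    "containment": [
        "revoke credentials for the affected AI service",
        "disable the single integration, not the whole platform, where possible",
    ],
    "forensics": [
        "preserve prompt/output audit logs; non-deterministic systems can't be replayed",
        "snapshot model version, system prompt, and retrieval sources",
    ],
    "notification_triggers": [
        "customer data exposed in model output",
        "autonomous action affected customers, finances, or security",
    ],
    "communications": [
        "board notification template",
        "customer notification template",
    ],
}
```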
The week of October 30, 2023 produced declarations, principles, and directives. It did not produce governance. That is still your job.