The Framework Nobody Asked For: Why NIST’s AI Risk Management Framework Was Already Obsolete

NIST did important work with the AI RMF 1.0. However, the AI landscape was moving at a speed that no standards development process, no matter how well-executed, could match.

On January 26, 2023, the National Institute of Standards and Technology did something that should have been a landmark moment for enterprise AI governance. After two years of workshops, public comment periods, and cross-sector collaboration, NIST released its AI Risk Management Framework 1.0: the first comprehensive federal guide for managing AI risk in the United States, and a voluntary, sector-agnostic blueprint for trustworthy AI.

Five days later, ChatGPT crossed 100 million monthly active users. And the framework was already outdated.

Elham Tabassi, who led NIST’s AI RMF development as Chief of Staff of the Information Technology Laboratory, described the framework’s ambition at the January 26 launch event: “The bigger goal of the AI RMF is to cultivate trust in technology, is to operationalize values in the technology, design and build technologies that are reflective of the values of our society.” She called for “operationalizing the AI RMF” through “test beds, benchmarks, and standards.”

Those were the right goals. But the people who needed AI governance most urgently were not building test beds. They were copying source code into ChatGPT.

A framework designed for a world that vanished overnight

Here is the uncomfortable truth about the NIST AI RMF: it was not wrong. The four core functions it proposes (Govern, Map, Measure, and Manage) represent a sound organizational approach to AI risk. But the framework was built for a world where enterprises controlled their AI deployments, where they knew where AI was being used, and where they could methodically assess risk before systems went into production.

ChatGPT destroyed all three of those assumptions in about sixty days.

The tool launched on November 30, 2022. By January, a UBS analysis reported by Reuters estimated it had reached 100 million monthly active users, the fastest adoption of any consumer application in history. TikTok took nine months to reach that number. Instagram needed two and a half years. ChatGPT did it before the people NIST’s framework was written for had even read it.

The speed mattered because NIST’s framework rests on an assumption that AI adoption follows a governed lifecycle. You design a system. You assess its risks. You put controls in place. Then you deploy it. The four functions presuppose that someone in the organization is making deliberate decisions about AI at each stage.

What actually happened in the first quarter of 2023 was something entirely different. Employees across every industry started using ChatGPT for work on their own, without waiting for IT, without consulting legal, and without any governance framework at all.

The data was already leaking

Cyberhaven Labs, a data security firm that tracks data movement across 1.6 million workers, provided the first hard numbers on the problem. By March 2023, 8.2% of employees at companies using Cyberhaven’s products had used ChatGPT in the workplace. Of those, 6.5% had pasted company data directly into the tool. On March 1 alone, Cyberhaven detected 3,381 attempts to paste corporate data into ChatGPT per 100,000 employees.

The numbers kept climbing. By June 2023, Cyberhaven’s data showed that 10.8% of employees had tried ChatGPT at work and 8.6% had pasted company data into it. Of that data, 11% was classified as confidential. And 4.7% of employees had pasted specifically sensitive material, including source code, client information, and strategic documents.

Perhaps the most striking finding: less than one percent of employees, roughly 0.9%, were responsible for 80% of the data leaking into ChatGPT. The traditional security perimeter was not breached by sophisticated attackers. It was walked through by a handful of employees who just wanted to get their work done faster.

The NIST AI RMF’s Govern function talks about cultivating “a risk-aware organizational culture.” That is a fine aspiration. But culture building takes months or years. The ChatGPT data exfiltration problem took weeks to materialize, and the framework had no answer for it.

Samsung proved the framework’s assumptions were wrong

If anyone needed proof that policies and warnings were insufficient, Samsung provided it in spectacular fashion.

In early April 2023, less than three months after the NIST AI RMF shipped, three separate employees at Samsung’s semiconductor division pasted confidential company information into ChatGPT. One engineer copied source code from a proprietary semiconductor database and asked ChatGPT to check for errors. Another uploaded code for identifying defective equipment and requested optimization suggestions. A third converted a recording of an internal company meeting to text and fed it to ChatGPT to generate meeting notes.

Samsung had only lifted its internal ban on ChatGPT three weeks earlier. Leadership had already warned employees to be careful with sensitive data. Those warnings did not matter. Samsung’s emergency response was to cap each employee’s ChatGPT prompts at 1,024 bytes, a band-aid that addressed the symptom without touching the cause.

Think about what happened here. A multinational technology company, one with deep expertise in semiconductor security, could not prevent its own engineers from copying proprietary code into a third-party AI system. The NIST AI RMF’s Govern function talks about establishing “clear roles and responsibilities for AI risk management.” Samsung had clear roles. It had explicit warnings. It had policies. None of that stopped the leaks.

Samsung was not alone. JPMorgan and Verizon blocked ChatGPT access for employees entirely. Amazon’s lawyers warned employees not to share confidential information with the tool. Walmart issued internal memos about the risk. But blocking and warning were reactive measures against a problem that had already embedded itself in daily workflows.

The Forrester assessment was damning

Brandon Purcell, a principal analyst at Forrester, published a detailed assessment of the NIST AI RMF in March 2023 that identified problems more fundamental than timing.

“The conflicts of interest are evident,” Purcell wrote, noting that cross-community collaboration had brought “expertise and special interests together, leading to contradictions in the framework.” Some assertions in the document, he observed, were “technically true” but “may be disingenuous and inappropriately bring public arguments from areas such as the social media and advertising sectors front and center into a neutral guide.”

The framework’s treatment of measurement drew particular criticism. It acknowledges that measurement of AI risk will in some cases be “implausible,” while simultaneously insisting that mapping and measurement are critical competencies. Enterprises reading that guidance could reasonably conclude that the whole exercise was circular. As Purcell noted, “Enterprises may determine this as a gating factor for AI governance progress or innovation overall.”

Perhaps most damning was the absence of data governance. “Data governance does not have an explicit reference in the NIST framework, and data stewards are missing from the list of roles,” Purcell wrote. For a framework meant to guide trustworthy AI, the omission was hard to explain. Forrester’s own research had found that when chief data officers champion AI governance, they actively build on existing data governance practices. NIST’s framework offered no pathway for that integration.

The framework also suffered from a differentiation problem. Its list of AI governance considerations reads much like existing enterprise risk management guidance. The Palo Alto Networks cybersecurity team noted that “some organizations, particularly those with limited resources or AI expertise, may find it challenging to translate the framework’s principles into specific, actionable steps.” That is a polite way of saying the framework tells you what to think about without telling you what to do.

Three structural problems that no update can fix

I spend my days architecting AI-powered systems that serve large enterprise user bases. From that vantage point, the NIST AI RMF has three structural problems that make it inadequate for the world that emerged in early 2023.

The first is its assumption of enterprise-controlled AI. The framework’s lifecycle model starts with design and development. But the vast majority of enterprise AI risk in 2023 did not come from systems the enterprise designed. It came from employees using third-party tools that were never part of any formal deployment. ChatGPT is not an enterprise application. It is a consumer product that employees brought to work. The framework has no mechanism for managing AI tools that enter the organization from the bottom up rather than the top down.

This is not a minor gap. It is a fundamental category error. The framework treats AI risk as something that originates inside the organization’s development process. In reality, by early 2023, the primary source of AI risk was external tools being consumed by employees without any organizational involvement in their selection, configuration, or governance.

The second problem is that the framework is voluntary and toothless. NIST explicitly says the AI RMF is “intended for voluntary use.” There is no enforcement mechanism. No penalties for ignoring it. No regulatory body checking compliance. The Epstein Becker Green law firm noted in its analysis that while the framework is not legally binding, “industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments.” That was a generous reading. Most organizations in early 2023 were not performing AI risk assessments at all. They were too busy trying to figure out whether to block ChatGPT or embrace it.

The third problem is cadence. NIST planned to review the framework for updates no later than 2028. Five years. In AI years, that is a geological epoch. Between January and December 2023 alone, the landscape shifted multiple times. GPT-4 launched in March. Google released Bard. Microsoft embedded AI into Bing and then Office. Anthropic raised hundreds of millions in funding. Meta released Llama 2 as open source. By the time NIST gets around to a formal revision, the framework will have been overtaken by at least two or three more generations of AI capability.

To be fair, NIST did respond faster than the five-year schedule. In July 2024, NIST released NIST AI 600-1, a Generative AI Profile for the AI RMF, pursuant to President Biden’s Executive Order 14110. That profile addressed some of the generative AI-specific risks. But it arrived 18 months after ChatGPT’s enterprise explosion. In technology governance, 18 months of unaddressed risk is not a minor delay. It is an entire threat generation.

What enterprises actually needed in January 2023

The gap between what NIST published and what organizations actually needed was not a matter of refinement. It required a fundamentally different approach. Here is what I would have told any CISO or CIO holding the NIST AI RMF in early 2023 and wondering what to do with it.

Map your actual AI usage, not your intended AI usage. The NIST framework talks about mapping AI risks in context. Fine. But the first step was figuring out which AI tools your employees were already using, which ones they were pasting data into, and what categories of data were flowing out. This was a data loss prevention problem before it was a governance problem. Most organizations could not answer the basic question: how many of our employees used ChatGPT this week? JPMorgan reportedly could not determine “how many employees were using the chatbot or for what functions they were using it” even before blocking it.
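As a concrete starting point, here is a minimal sketch of that discovery step: counting requests to known generative AI domains in a web proxy log. The log format, the domain list, and the file name are all assumptions for illustration; adapt them to whatever your proxy actually emits.

```python
# Minimal sketch: count employee requests to known generative AI domains
# from a web proxy log. Assumes a CSV log with "user" and "domain" columns;
# the domain list and file name are illustrative, not exhaustive.
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "bard.google.com", "claude.ai"}

def ai_usage_report(log_path: str) -> Counter:
    """Return a count of AI-tool requests per user."""
    usage: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                usage[row["user"]] += 1
    return usage

if __name__ == "__main__":
    # Top 20 heaviest users of AI tools in this logging period.
    for user, hits in ai_usage_report("proxy_log.csv").most_common(20):
        print(f"{user}\t{hits}")
```

Even a crude report like this answers the question most organizations could not: who is using these tools, and how often. The harder question, what data is leaving through them, is where DLP tooling comes in.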

Stop waiting for frameworks. Build internal AI governance now. The NIST AI RMF is a reference document, not a playbook. Treat it that way. Pull the pieces that are useful, specifically the Govern function’s emphasis on organizational accountability, and build your own policy. Every week spent waiting for external guidance was another week of unmonitored data flowing into third-party AI systems.

Assign AI risk ownership to a person, not a committee. The fastest way to ensure nothing happens is to assign responsibility to a committee. Identify one senior leader, someone with both technical depth and organizational authority, who owns AI risk. That person needs to be able to make policy decisions without convening a working group. When source code is leaking into ChatGPT on a Tuesday afternoon, a committee that meets on Thursday mornings is not going to help.

Create an internal AI risk taxonomy that accounts for generative AI. The NIST framework’s risk categories were designed for predictive and classification AI. They did not adequately address the unique risks of generative AI: data leakage through prompts, hallucinated outputs used for decision-making, intellectual property exposure through training data, and the creation of content that looks authoritative but is not. Your risk taxonomy needs to reflect the tools your organization is actually using, not the tools a standards body imagined you might use.
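To make that concrete, here is one way to sketch such a taxonomy as structured data. The categories mirror the generative AI risks listed above; the fields, the owner address, and the controls are hypothetical placeholders, not NIST guidance.

```python
# A sketch of a generative-AI-aware risk taxonomy as structured data.
# Categories mirror the risks named above; owners and controls are
# hypothetical placeholders for illustration.
from dataclasses import dataclass, field
from enum import Enum

class GenAIRisk(Enum):
    PROMPT_DATA_LEAKAGE = "Sensitive data leaves the organization via prompts"
    HALLUCINATED_OUTPUT = "Fabricated output is used for decision-making"
    IP_TRAINING_EXPOSURE = "Intellectual property is absorbed into training data"
    AUTHORITATIVE_FICTION = "Generated content looks authoritative but is not"

@dataclass
class RiskEntry:
    risk: GenAIRisk
    tools_in_scope: list[str]     # the tools employees actually use
    owner: str                    # one accountable person, not a committee
    controls: list[str] = field(default_factory=list)

registry = [
    RiskEntry(
        risk=GenAIRisk.PROMPT_DATA_LEAKAGE,
        tools_in_scope=["ChatGPT"],
        owner="ciso@example.com",  # hypothetical owner
        controls=["DLP paste monitoring", "domain-level alerting"],
    ),
]
```

The point of expressing the taxonomy as data rather than as a PDF is that it can then drive the monitoring and alerting described below.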

Treat AI governance as a real-time problem, not a document problem. Policies are necessary but insufficient. When an employee pastes source code into ChatGPT, a policy document stored in SharePoint does not help. You need technical controls: DLP tools that monitor AI tool usage, access policies that restrict sensitive data flows, and real-time alerting when confidential information moves toward external AI services. Governance without enforcement is aspiration, not protection.
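For illustration, here is a toy version of that last control: a pattern check that flags an outbound prompt before it reaches an external AI service. The patterns are assumptions made for the sketch; real DLP products rely on content classifiers and data fingerprinting, not a handful of regexes.

```python
# Toy DLP check: flag an outbound prompt that appears to contain
# confidential material before it reaches an external AI service.
# Patterns are illustrative; production DLP uses classifiers and
# fingerprinting rather than simple regexes.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}"),
    "source_code": re.compile(r"\bdef |\bclass |#include|\bimport "),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

# Example: the kind of paste that burned Samsung's engineers.
hits = flag_prompt("CONFIDENTIAL - check this for errors: def defect_scan(x):")
if hits:
    print(f"Alert: prompt matched {hits}")  # block, alert, or route for review
```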

What the NIST AI RMF got right

It would be unfair to end without acknowledging what NIST got right. The framework established a shared vocabulary for AI risk that did not previously exist in a coherent form. Its four-function structure (Govern, Map, Measure, Manage) provides a useful mental model even if its specific guidance lagged reality. And by releasing the framework as a voluntary, flexible tool rather than a rigid mandate, NIST made it adaptable to organizations of different sizes and maturity levels.

TIME Magazine later named Elham Tabassi to its list of the 100 most influential people in AI, recognizing her leadership in developing the framework. That recognition was earned. The AI RMF was an important first step. The problem was that the world needed a running start.

NIST did important work with the AI RMF 1.0. It created a structure that future regulation could reference, and both the Biden Executive Order on AI and the EU AI Act drew on its concepts. Those contributions matter. But the gap between what the framework offered and what enterprises needed in January 2023 was not something that better drafting could have solved. The AI landscape was moving at a speed that no standards development process, no matter how well-executed, could match.

The organizations that waited for the framework to catch up found that their data, their code, and their competitive advantage had already left the building. One ChatGPT prompt at a time.