AI Agent Standards Won’t Save You: Here’s What Will
I work in two rooms that rarely speak to each other.
In one, I’m helping create standards for AI agent identity and authorization through OASIS CoSAI and the IETF working group on AI agent identity. In this room, we spend our time debating protocol specifications, identity delegation models, and interoperability frameworks. Our discussions are methodical, detailed, and measured in months.
The other room is where I built a multi-agent AI system used by over 200,000 users. When an incident occurs in my production environment, it typically surfaces within minutes. No one asks “what should the standards say?” They ask, “What should we do now?”
On February 17th, 2026, NIST announced the launch of the AI Agent Standards Initiative, the first U.S. government framework for autonomous agents. Not AI models. Autonomous agents. The kind of agent that holds credentials, accesses enterprise systems, and operates without a human in the loop.
Twenty-one days later, the other room caught fire.
OpenClaw revealed 135,000 instances in 82 countries and 820 malicious skills in its marketplace. Microsoft fixed a CVSS 8.8 SSRF vulnerability in its Azure MCP Server that exposed managed identity tokens. And a simple XSS in Excel was weaponized to perform zero-click data exfiltration via the Copilot Agent. Twenty-one days. That’s it.
Production deployment is accelerating faster than the standards community can keep up. Enterprise leaders need to understand that gap.
Why the new standard matters, and why most leaders missed it
What sets the NIST AI Agent Standards Initiative apart from all prior AI frameworks is that it addresses agents that act, not models that generate. The EU AI Act, the Biden Executive Order, and the original AI RMF all dealt with the behavior of AI models. This one deals with what happens after you give an AI model an API, credentials, and the authority to act.
Enterprise leaders should care about the NIST AI Agent Standards Initiative because of the NCCoE concept paper titled “Software and AI Agent Identity and Authorization.” The paper examines how traditional identity standards apply to continuously operating agents versus those that exist in bounded sessions. Comments on the NCCoE concept paper close on April 2nd, 2026. Why does this matter? NIST is building the initiative on OAuth and SAML rather than inventing a new standard, so the resulting guidance will be actionable in production environments, not abstract principles that hang on a wall.
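Because the initiative builds on OAuth rather than inventing something new, delegated agent authority can be expressed with mechanisms that already exist. Here is a minimal sketch of an OAuth 2.0 Token Exchange request (RFC 8693), the kind of existing building block the guidance can lean on; the token values, scope name, and audience URL are hypothetical placeholders:

```python
import urllib.parse

# Sketch of an RFC 8693 token-exchange request body: the agent trades
# the human's token (subject) plus its own token (actor) for a narrowly
# scoped credential that records who it is acting on behalf of.
token_exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<alice's access token>",   # the human whose authority is delegated
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "actor_token": "<agent's own token>",        # the agent doing the acting
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "scope": "calendar:schedule",                # hypothetical, narrowly scoped
    "audience": "https://calendar.example.com",  # hypothetical resource
}

body = urllib.parse.urlencode(token_exchange_request)
```

The resulting token carries both identities, which is exactly what makes the “who authorized this?” question answerable later.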
From the perspective of the standards community, whoever shows up shapes the outcome. The organizations that comment during the April comment period will directly influence what is included in the guidance document. All others will be compared against a standard they never touched.
Conventional wisdom is already wrong
A common theme throughout many Enterprise Security publications and presentations, including on this website, is that controlling AI agents requires assigning each AI agent a unique, independent identity and managing it with the same discipline used to manage human identities.
Sounds reasonable. It’s wrong for most enterprise deployments.
If an AI agent schedules a meeting on your behalf, it shouldn’t be acting as itself. It should act under the delegated authority of your identity: scoped to that specific purpose, time-limited, and revocable. The distinction between these two models is critical. It determines whether you can accurately answer the first question that regulators, legal counsel, and incident investigators will ask: Who authorized this action?
If you give an AI agent its own set of credentials, you create accountability gaps. An agent could exfiltrate sensitive information, and when the audit trail is followed, nobody’s name appears on it. If the same agent instead carries delegated human authority, the audit trail links back to an identifiable person and records the scope and timing of every action performed.
This is not a theoretical concern. Currently, only 22 percent of teams treat agents as first-class identity subjects, and nearly half (45.6 percent) still rely on shared API keys to authenticate agents to other agents. But giving each agent its own credentials misses the point: you need delegation chains that keep humans on the hook while letting agents accomplish their tasks.
How ready are organizations?
The data is bad.
Only 29 percent of organizations surveyed by Cisco felt prepared to secure their agentic AI deployments. The AIUC-1 Consortium reported that only 21 percent of organizations have visibility into the permissions their agents execute. Gravitee reported that 88 percent of respondents experienced either a confirmed or suspected AI-related security incident in the last year.
However, the statistic that should keep you awake at night is that only 14.4 percent of AI agents are deployed with full security approval. The remaining 85.6 percent of AI agents are deployed without full security sign-off. These AI agents are deployed via side doors, such as personal installations, team experimentation, and shadow deployments. CISOs did not approve the 135,000+ instances of OpenClaw; employees installed a helpful tool and then moved on.
None of this is new. SaaS adoption followed the same curve a decade ago; BYOD before that. Tools appear before policies do, and the breach follows the adoption curve.
Three things to do before the standards arrive
Decide your position on delegated authority for agent identities.
Don’t wait for NIST to determine whether agents should possess their own independent identities or carry delegated human authority. Decide on a delegation model today. Document the model. Enforce the model. The NCCoE Concept Paper is working through the very same issues. Regardless of what you decide today, you can change your mind later. However, failing to decide at all will result in an arduous and costly retrofit.
Treat agent actions like financial transactions.
Every agent action that touches production data should result in an immutable log entry identifying:
Who authorized the action
What authority was delegated
What occurred
If you can’t reconstruct the chain of authorization following an incident, you have created a liability exposure that no compliance framework will cover after the fact. Create the audit trail now. You’ll need it.
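One way to make such a trail tamper-evident is a hash chain: each entry includes the hash of the previous one, so altering any record after the fact breaks verification. This is a minimal sketch under that assumption, not a production logging system; the field names mirror the three questions above:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log where each entry hashes its predecessor,
    so tampering with any record breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, authorized_by: str, delegated_scope: str, action: str) -> dict:
        entry = {
            "authorized_by": authorized_by,      # who authorized the action
            "delegated_scope": delegated_scope,  # what authority was delegated
            "action": action,                    # what occurred
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        # Hash the entry body (before attaching its own hash).
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if prev != e["hash"]:
                return False
        return True

log = AuditLog()
log.record("alice@example.com", "calendar:schedule", "created meeting")
log.record("alice@example.com", "calendar:schedule", "sent invite")
assert log.verify()
log.entries[0]["action"] = "deleted mailbox"  # simulate tampering
assert not log.verify()
```

In production you would append to write-once storage rather than an in-memory list, but the chain-of-authorization structure is the point: every record names a human, a scope, and an action.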
Participate in the comment period.
Comments for the NCCoE Concept Paper close on April 2nd, 2026. CAISI Open Forums for Healthcare, Finance, and Education require attendance request submissions by March 20th, 2026. This may seem bureaucratic. It isn’t. The AI RMF went from Voluntary Guidance to appearing in procurement requirements and litigation in 18 months. What is published this year will appear in your vendor questionnaires next year.
The gap between the two rooms
Standards will eventually arrive.
But the AI RMF took 18 months for organizations to begin incorporating its recommendations into procurement processes. OpenClaw took only 18 days to expose 135,000+ instances. The organizations that treat this as a speed problem, not a standards problem, will be the ones left standing when the framework eventually arrives.
Fire codes matter. But you don’t wait for the fire code while the building burns.