The Death of the Service Account: Why Google and CoSAI Say AI Agents Need Human Identity
Service accounts are one of those infrastructure patterns so deeply embedded in enterprise IT that nobody questions them anymore. Need a system to call an API? Create a service account. Need an automation to access a database? Service account. Need an AI agent to interact with your CRM, ticketing system, and knowledge base? Service account.
That last use case is the one that breaks the model. And in January 2025, two of the most influential voices in AI security said so explicitly.
Google Cloud’s Secure AI Framework (SAIF) guidance recommended that organizations stop using service accounts for AI agents entirely. The language was direct: “Any actions taken by AI agents on a user’s behalf should be properly controlled and permissioned, and agents should be instructed to propagate the actual user’s identity and permissions to every backend tool they touch.”
The Coalition for Secure AI (CoSAI) published a strategic update in the same period that reinforced the message, calling for organizations to “define new identity and access paradigms for AI agents”, language that deliberately positioned service accounts as the old paradigm being replaced.
These weren’t suggestions from mid-level security researchers. This was coordinated guidance from Google and from a coalition whose members include Microsoft, Amazon, and dozens of other major technology companies. The service account model for AI agents was being deprecated in real time.
Why service accounts worked, until they didn’t
The logic behind service accounts is simple and, for traditional automation, sound. A batch job that runs every night to reconcile financial records doesn’t have a “user.” A cron job that cleans up log files doesn’t act on anyone’s behalf. These are system-level operations, and giving them a system-level identity makes perfect sense.
Service accounts also simplified permissions management. Instead of propagating individual user credentials through automated workflows, you create one account with the necessary access, and the automation runs under that identity. It’s clean. It’s auditable at the system level. It’s been the standard pattern for decades.
The problem emerges when the automated system isn’t performing a fixed, predetermined action but is instead acting as a proxy for a human user making dynamic decisions. An AI agent that helps an employee search the company knowledge base, draft an email, update a customer record, and schedule a meeting is not performing a system operation. It’s performing a user operation, with the user’s intent, affecting the user’s data, in the user’s context.
When that agent operates under a service account, the identity chain breaks. The backend systems see the service account’s identity, not the user’s. Permission checks evaluate the service account’s access, not the user’s. Audit logs record the service account’s actions, not the user’s. And because the service account typically carries broad permissions, granted so the agent can serve many different users with different needs, every user effectively operates with escalated privileges.
ISACA’s analysis of the problem framed it in terms of what breaks: “Traditional IAM frameworks (OAuth 2.0, OpenID Connect, SAML) were designed for a deterministic era where applications follow predefined logic, make predictable API calls, and operate within tightly scoped permissions.” AI agents violate every one of those assumptions.
The shared-credential problem at scale
Consider a concrete scenario. An enterprise deploys an AI assistant that helps employees across multiple departments. The assistant can search internal documents, update CRM records, submit expense reports, and schedule meetings. It runs under a single service account with access to all these systems.
An engineer in the hardware division asks the assistant to find a document. Under the service account model, the assistant searches the entire document corpus with the service account’s permissions, not the engineer’s. If the service account has access to legal documents, HR records, or executive communications that the engineer shouldn’t see, the assistant can surface that content. The service account doesn’t know or care about the user’s actual authorization scope.
This isn’t a theoretical risk. It’s the default behavior of most enterprise AI agent deployments today. The service account’s broad permissions become a de facto privilege escalation for every user the agent serves. And because the agent is autonomous, making decisions about which tools to invoke and which data to access based on the user’s natural language request, the exposure surface is dynamic and unpredictable.
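The privilege-escalation mechanics of that scenario can be sketched in a few lines. This is an illustrative toy, not any real system: the ACL table, principal names, and the `search` helper are all hypothetical, chosen only to show how evaluating permissions against the service account instead of the requesting user widens what the agent can surface.

```python
# Hypothetical document ACLs: each entry lists the principals allowed to read it.
# The service account was granted access to everything so the agent can serve
# many departments, which is exactly the problem.
DOC_ACL = {
    "hw-design-spec.pdf": {"eng-hardware", "svc-ai-assistant"},
    "exec-comp-review.xlsx": {"executives", "svc-ai-assistant"},
}

def can_read(principal: str, doc: str) -> bool:
    return principal in DOC_ACL[doc]

def search(acting_identity: str) -> list[str]:
    """Return every document the acting identity is permitted to see."""
    return [doc for doc in DOC_ACL if can_read(acting_identity, doc)]

# Service-account model: the agent searches under its own broad identity,
# so the hardware engineer's query can surface executive documents.
as_service_account = search("svc-ai-assistant")

# Identity propagation: the same query under the engineer's own identity
# returns only what the engineer is actually authorized to read.
as_engineer = search("eng-hardware")
```

The gap between the two result sets is the de facto privilege escalation described above: it exists for every user the agent serves, on every request.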
Google’s Vertex AI documentation made the design choice explicit, offering developers two modes: agents that use service accounts (the legacy pattern) and agents that use the end user’s identity. The documentation was careful to explain when each is appropriate, but the direction was clear: user identity propagation is the future, service accounts are the past.
What identity propagation actually requires
Replacing service accounts with user identity propagation sounds straightforward. In practice, it requires changes at every layer of the stack.
The Google SAIF guidance outlined three core components: front-end authentication (verifying the user is who they claim to be), identity propagation (carrying that verified identity through every hop in the agent’s workflow), and authorization (checking the user’s permissions at each backend system the agent touches).
The implementation mechanism is OAuth 2.0’s On-Behalf-Of (OBO) flow combined with OpenID Connect identity tokens. When a user authenticates to an application that includes an AI agent, the application obtains an access token scoped to that user. When the agent needs to call a backend service, it exchanges the user’s token for a new token scoped to the specific backend, maintaining the user’s identity and permissions at each hop.
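The per-hop token exchange at the heart of an OBO flow can be sketched as a request builder. This is a minimal illustration of the standardized token exchange grant (RFC 8693); the scope string and token value are placeholders, and a real deployment would also attach client credentials and POST the body to the identity provider’s token endpoint, details that vary by provider.

```python
# Sketch of the token exchange request an agent sends to trade the user's
# access token for a new token scoped to one specific backend (RFC 8693).
def build_obo_exchange(user_access_token: str, backend_scope: str) -> dict:
    """Build the form body POSTed to the IdP's token endpoint."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The user's token is the "subject": its identity claims carry forward.
        "subject_token": user_access_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Narrow scope for the one backend this hop will call (placeholder value).
        "scope": backend_scope,
        # In practice the agent also authenticates itself here with client_id
        # and a client secret or assertion, so both identities travel together.
    }

body = build_obo_exchange("eyJ...user-token", "crm.read crm.write")
# The IdP's response is a fresh access token that still asserts the user's
# identity but is audience-restricted to the CRM backend for this hop.
```

Repeating this exchange at each hop is what keeps the user’s identity and permissions intact through the whole chain, rather than authenticating once at the edge.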
This is not new technology. OBO flows have been available in Microsoft Entra ID (formerly Azure AD), Google Cloud IAM, and other identity platforms for years. The challenge is that most enterprise agent architectures weren’t designed to use them. The agent frameworks, middleware layers, and backend integrations all need to be modified to pass identity tokens through the entire chain rather than authenticating once at the edge with a service account.
For organizations with dozens of backend systems, that’s not a configuration change. It’s an architecture project.
The compliance dimension
The governance implications of service accounts for AI agents extend beyond security into regulatory compliance. Regulatory regimes including SOX, GDPR, and HIPAA all require a traceable identity behind actions that modify regulated data.
When an AI agent updates a patient record in a healthcare system, HIPAA requires knowing which user initiated that action. A service account entry in the audit log doesn’t satisfy that requirement. When an agent modifies financial records, SOX requires traceable authorization. A service account doesn’t provide it.
The ISACA analysis went further, arguing that traditional IAM frameworks assume a “single authenticated principal”: one user, one session, one set of permissions. AI agents break this assumption because they operate in delegation chains: User A asks Agent B to invoke Tool C, which calls Service D. Each hop in that chain needs to carry the original user’s identity and evaluate permissions accordingly.
The proposed solution was a framework called Agent Relationship-Based Identity and Authorization, or ARIA, which tracks delegation relationships as cryptographically verifiable entities. Each agent action carries a chain of custody that traces back to the human who initiated it. It’s more complex than a service account. It’s also the only model that satisfies both security and compliance requirements for autonomous systems.
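The chain-of-custody idea behind that delegation model can be illustrated with a hash chain. To be clear, this is my own simplification in the spirit of the ARIA description, not the published framework: the HMAC scheme, key handling, and record fields are all assumptions made for the sketch.

```python
# Illustrative delegation chain: each hop's signature covers the previous
# hop's signature, so tampering anywhere upstream invalidates every hop
# downstream. The signing scheme here is a simplification for demonstration.
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # in practice, per-principal keys from a KMS

def extend_chain(chain: list[dict], actor: str, action: str) -> list[dict]:
    """Append one hop to the delegation chain and sign it."""
    prev_sig = chain[-1]["sig"] if chain else ""
    payload = f"{prev_sig}|{actor}|{action}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return chain + [{"actor": actor, "action": action, "sig": sig}]

# User A asks Agent B to invoke Tool C, which calls Service D.
chain = extend_chain([], "user:alice", "request:update-record")
chain = extend_chain(chain, "agent:assistant", "invoke:crm-tool")
chain = extend_chain(chain, "tool:crm", "call:records-service")

# The first hop of every chain is the human who initiated the action,
# which is exactly what a service-account audit trail cannot provide.
assert chain[0]["actor"] == "user:alice"
```

The point of the structure is that an auditor holding only the final hop can walk the chain back, verifying each signature, to the originating user.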
The shadow agent problem
There’s a complicating factor that makes the transition from service accounts to identity propagation even more urgent: shadow AI agents. Across the industry, developers and business users are deploying AI agents using service accounts they create themselves, outside the purview of identity governance teams.
A developer who builds an internal Slack bot that uses an AI model to answer questions about company documentation will typically create a service account for that bot. The bot runs with whatever permissions the developer granted. Nobody in IAM knows it exists. Nobody in security has reviewed its access scope. And if the developer leaves the company, the service account persists: an orphaned identity with active permissions, connected to an autonomous system.
This pattern is already widespread. The CoSAI strategic update’s call to “define new identity and access paradigms” was as much about establishing visibility into existing agent deployments as it was about designing future ones. You can’t govern what you can’t see. And most enterprises can’t see their AI agents.
What Monday morning looks like
The Google/CoSAI guidance crystallized a transition that many enterprises will need to make over the next 12-24 months. From my experience building AI-powered systems serving hundreds of thousands of users, I can tell you the transition is neither trivial nor optional. It requires a systematic approach.
Start with an audit. Inventory every AI agent, assistant, and automated AI workflow in your environment. For each one, document the identity model: service account, user identity propagation, or unknown. The “unknown” category will be larger than you expect.
Then prioritize by risk. Agents that access regulated data (financial records, patient information, personally identifiable information) need identity propagation first. Agents that perform read-only operations on non-sensitive data can be migrated later.
Implement OAuth 2.0 On-Behalf-Of flows for all agent-to-API connections. This is the technical foundation. If your identity provider doesn’t support OBO flows, that’s a gap that needs immediate attention.
Configure your SIEM to flag any AI-initiated action that lacks a linked human identity. This creates visibility into the gap between where you are and where you need to be. Ban new service accounts for AI agent use cases going forward. Grandfather existing deployments with a migration timeline, but stop the bleeding immediately.
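The detection rule described above can be expressed as a simple log filter. The event schema here (`actor_type`, `on_behalf_of`) is hypothetical; map the predicate onto whatever fields your SIEM actually ingests.

```python
# Sketch of a SIEM-style rule: flag any AI-initiated event that carries
# no linked human identity. Field names are illustrative assumptions.
def flag_unattributed_agent_actions(events: list[dict]) -> list[dict]:
    """Return AI-initiated events with no human identity attached."""
    return [
        e for e in events
        if e.get("actor_type") == "ai_agent" and not e.get("on_behalf_of")
    ]

events = [
    {"actor_type": "ai_agent", "action": "crm.update", "on_behalf_of": "alice"},
    {"actor_type": "ai_agent", "action": "doc.search", "on_behalf_of": None},
    {"actor_type": "human", "action": "login", "on_behalf_of": None},
]

# Only the agent action with no propagated user identity is flagged;
# human-initiated events and properly attributed agent events pass.
flagged = flag_unattributed_agent_actions(events)
```

The volume of flagged events becomes a direct, trackable metric for migration progress: it should trend to zero as service accounts are retired.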
The service account model served enterprise automation well for two decades. It wasn’t wrong: it was designed for a world where automated systems performed deterministic, predefined actions under system-level identity. AI agents operate in a fundamentally different model: dynamic, user-proxied, and autonomous. The identity framework needs to match.
Google and CoSAI didn’t kill the service account out of academic preference. They killed it because the alternative (anonymous, overprivileged bot accounts making autonomous decisions on behalf of unidentified users) is a compliance violation, a security exposure, and an audit nightmare all wrapped in one.