MCP Gets OAuth 2.1, Seven Months Too Late, and Thousands of Servers Already Deployed Without It
On June 18, 2025, the Model Context Protocol specification received the update that security practitioners had been demanding since November 2024: mandatory OAuth 2.1 authentication. MCP servers were officially classified as OAuth 2.1 resource servers. Protected Resource Metadata became a MUST requirement. Token scoping via Resource Indicators (RFC 8707) was mandated. The word “SHOULD” was replaced with “MUST” in the places that mattered.
Seven months after launch. After 8 million SDK downloads. After 5,800 servers deployed. After OpenAI, Google, Microsoft, and dozens of other major platforms had already integrated MCP into their products. After critical security research had documented prompt injection attacks, tool impersonation, and data exfiltration vectors. After enterprises had connected AI agents to production databases, customer records, and financial systems through MCP channels that didn’t require the connecting client to prove who it was.
The specification update was necessary and well-engineered. The timing was a case study in what happens when adoption outpaces security.
What the June 2025 spec actually changed
The June 18, 2025 changelog introduced three categories of security improvements, each addressing gaps that had existed since launch.
First, MCP servers became formally classified as OAuth Resource Servers. This meant every MCP server was expected to serve a .well-known/oauth-protected-resource document, validate access tokens against an external authorization server, and return proper WWW-Authenticate headers when rejecting unauthenticated requests. Jessica Temporal, Senior Developer Advocate at Auth0, described the implications: while the reclassification might seem semantic, it had significant consequences for how MCP servers must handle discovery, token validation, and authorization flows.
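As a sketch of what those requirements look like in practice, here is an illustrative Protected Resource Metadata document and the 401 response an MCP server would return to an unauthenticated request. The URLs, scope names, and helper function are invented for illustration; they are not taken from the specification.

```python
import json

# Illustrative Protected Resource Metadata document, of the kind an MCP
# server would serve at /.well-known/oauth-protected-resource. All URLs
# and scope names here are placeholders.
metadata = {
    "resource": "https://mcp.example.com",
    "authorization_servers": ["https://auth.example.com"],
    "bearer_methods_supported": ["header"],
    "scopes_supported": ["mcp:tools:read", "mcp:tools:invoke"],
}

def reject_unauthenticated(resource_metadata_url: str) -> dict:
    """Build the 401 an MCP server returns when no valid token is presented.
    The WWW-Authenticate header points the client at the metadata document
    so it can discover which authorization server to talk to."""
    return {
        "status": 401,
        "headers": {
            "WWW-Authenticate":
                f'Bearer resource_metadata="{resource_metadata_url}"'
        },
    }

response = reject_unauthenticated(
    "https://mcp.example.com/.well-known/oauth-protected-resource"
)
print(json.dumps(metadata, indent=2))
print(response["headers"]["WWW-Authenticate"])
```

The key behavioral change is the second half: a pre-June-2025 server typically just processed the request, while a conformant server refuses it and tells the client where to go authenticate.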
Second, the specification mandated Resource Indicators (RFC 8707) in token requests. When an MCP client requests a token, it must now explicitly state which MCP server the token is intended for. The authorization server then scopes the token to that specific server. This prevents a compromised or malicious MCP server from taking a token it received and reusing it to access a different protected resource on the user’s behalf, the pattern security engineers call a confused deputy attack.
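Concretely, RFC 8707 works by adding a `resource` parameter to the token request, which the authorization server uses to bind the issued token to one MCP server. A minimal sketch of such a request body, with illustrative client and server values:

```python
from urllib.parse import urlencode

def build_token_request(code: str, client_id: str, verifier: str,
                        mcp_server: str) -> str:
    """Form-encode an authorization-code token request that pins the
    resulting token to a single MCP server via RFC 8707."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "code_verifier": verifier,   # PKCE, required under OAuth 2.1
        "resource": mcp_server,      # RFC 8707: scope token to this server
    })

body = build_token_request(
    code="SplxlOBeZQQYbYS6WxSbIA",
    client_id="mcp-client-123",
    verifier="dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk",
    mcp_server="https://mcp.example.com",
)
print(body)
```

A token minted with `resource=https://mcp.example.com` carries that server as its audience, so any other resource server that receives it can, and must, reject it.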
Third, the specification introduced a dedicated security best practices page and significantly expanded the security considerations section. The previous specification had blurred the line between MCP servers as authorization servers and MCP servers as resource servers, a design choice that forced MCP server developers to implement complex token management logic. The June revision separated these concerns cleanly: the MCP server validates tokens, and an external authorization server handles issuance.
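Under that clean separation, the MCP server’s job reduces to validating claims on tokens issued elsewhere. A sketch of those resource-server-side checks, using standard JWT claim names; in a real deployment you would also verify the token’s signature against the authorization server’s published keys:

```python
import time

def validate_claims(claims: dict, expected_issuer: str,
                    my_resource: str, required_scope: str) -> bool:
    """Resource-server checks: the token must come from the trusted
    authorization server, be addressed to this MCP server, be unexpired,
    and carry the scope the request needs."""
    if claims.get("iss") != expected_issuer:
        return False
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if my_resource not in audiences:
        return False                    # token was minted for another server
    if claims.get("exp", 0) <= time.time():
        return False                    # expired
    return required_scope in claims.get("scope", "").split()

good = {
    "iss": "https://auth.example.com",
    "aud": "https://mcp.example.com",
    "exp": time.time() + 3600,
    "scope": "mcp:tools:invoke",
}
# A token addressed to a different MCP server: the audience check is
# exactly what defeats the confused deputy reuse described above.
stolen = dict(good, aud="https://other-mcp.example.com")

print(validate_claims(good, "https://auth.example.com",
                      "https://mcp.example.com", "mcp:tools:invoke"))
print(validate_claims(stolen, "https://auth.example.com",
                      "https://mcp.example.com", "mcp:tools:invoke"))
```

The point of the June revision is that this is all the MCP server has to do: no token issuance, no client registration, no consent screens.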
Christian Posta, who had been critical of earlier versions, acknowledged the improvement in his updated analysis: “I had been critical of the MCP Authorization Spec in the past… but recent revisions have corrected a lot of what I pointed out.”
The legacy problem the spec update can’t fix
Here’s the uncomfortable truth: a specification update only applies to implementations that adopt it. The thousands of MCP servers deployed before June 2025 don’t retroactively become secure because the specification changed. They continue operating under whatever security posture they were built with.
The ecosystem’s deployment timeline created a stratification of security maturity that will persist for years. MCP servers built after June 2025 must implement OAuth 2.1 to be spec-compliant. MCP servers built between November 2024 and June 2025 have no requirement to upgrade. Servers pulled from GitHub repositories, copied from tutorials, or forked from early reference implementations carry the security assumptions of the era they were built in.
This isn’t theoretical. Developer behavior follows predictable patterns. When a developer needs an MCP server to connect their AI application to a database, they search GitHub, find a popular implementation, fork it, modify it for their use case, and deploy it. They rarely check when the implementation was last updated or whether it conforms to the current specification version. The server works. That’s the test.
In my experience building enterprise systems that serve over 170,000 users, the most dangerous technical debt isn’t the code you know about. It’s the code that’s running in production because someone deployed it during a hackathon six months ago and it’s been quietly working ever since. MCP servers deployed during the authentication-optional period fit this profile precisely.
The enterprise audit challenge
For security teams, the June 2025 update created a new category of required work: inventorying and evaluating every MCP integration in the organization. This is harder than it sounds for three reasons.
First, MCP adoption was developer-driven, not centrally managed. Developers adopted MCP servers the way they adopt any promising open-source tool, quickly and often without informing security teams. Shadow MCP deployments exist in the same way shadow IT exists: because the tools are useful and the adoption friction is low.
Second, the variety of MCP deployment patterns makes discovery difficult. MCP servers can run locally on a developer’s machine, as a container in a cloud environment, as a standalone service, or embedded within a larger application. There’s no single registry that lists all MCP servers in an enterprise environment. The MCP Registry launched by the community was for public server discovery, not enterprise inventory.
Third, even when you find an MCP server, evaluating its security posture requires reading its implementation, not just its configuration. A server might claim to support OAuth 2.1 in its documentation but implement it incorrectly. A server might validate tokens but not check scopes. A server might enforce authentication for some endpoints but not others. The specification defines what conformance looks like, but verification requires code review.
What the spec still doesn’t solve
The June 2025 update was significant, but it left several enterprise-relevant security gaps open.
The community debate documented across multiple technical blogs highlighted that while the authorization framework was now well-defined, the implementation complexity remained high. Dynamic Client Registration (DCR), which allows MCP clients to automatically register with new authorization servers, was recommended but not required. Some authorization servers (Keycloak, for example) had documented challenges with DCR compliance, requiring workarounds that deviated from the specification.
The specification also didn’t address the tool redefinition attack that had been documented earlier in 2025. MCP servers can dynamically change the tools they expose after a client has connected and approved an initial set of tools. The June update added security guidance but no mandatory mechanism to prevent or detect tool redefinition. A server that passes your security review on Monday can change its behavior on Tuesday.
And the specification left monitoring and observability entirely to implementors. New Relic launched MCP observability support over the summer, but it was limited to Python applications and could only observe MCP traffic within applications the customer built. Enterprise-wide visibility into MCP traffic remained a gap that no vendor had fully addressed.
The remediation path for enterprises
Any enterprise that deployed MCP integrations before June 2025, which is most enterprises that deployed MCP integrations at all, needs a systematic remediation plan.
Start with discovery. Scan your environment for MCP server processes, MCP-related dependencies in application codebases, and MCP traffic patterns in network logs. Check developer laptops, container registries, cloud deployment configurations, and CI/CD pipelines. Anywhere a developer could have deployed an MCP server, assume one exists.
Assess each discovered server against the June 2025 specification. Does it implement Protected Resource Metadata? Does it validate tokens against an external authorization server? Does it enforce resource indicators in token requests? Does it support PKCE for all client types? If the answer to any of these is no, the server needs to be upgraded or decommissioned.
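That checklist can be mechanized into a first-pass screen. A sketch that scores a discovered server’s audit record against the June 2025 requirements; the field names are invented for illustration, and passing this screen does not replace the code review argued for above:

```python
# Hypothetical per-server audit record gathered during discovery.
# These keys are illustrative, not part of the MCP specification.
REQUIRED_CHECKS = [
    "serves_protected_resource_metadata",
    "validates_tokens_against_external_as",
    "enforces_resource_indicators",
    "supports_pkce_all_clients",
]

def audit_server(record: dict) -> list[str]:
    """Return the failed checks; a non-empty result means the server
    must be upgraded or decommissioned."""
    return [check for check in REQUIRED_CHECKS if not record.get(check)]

# A typical pre-June-2025 server: PKCE via a library default, nothing else.
legacy_server = {
    "serves_protected_resource_metadata": False,
    "validates_tokens_against_external_as": False,
    "enforces_resource_indicators": False,
    "supports_pkce_all_clients": True,
}
failures = audit_server(legacy_server)
print(failures)
```

Running this across an inventory gives security teams a ranked backlog rather than an undifferentiated pile of discovered servers.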
Implement MCP server allowlists at the network and application level. Only approved, audited MCP servers should be accessible from production environments. Block connections to unapproved servers the same way you’d block connections to unapproved APIs.
Enforce manifest pinning where possible. If your MCP client supports it, pin the set of tools approved at connection time and reject any server-initiated tool modifications. This mitigates the tool redefinition attack that the specification doesn’t fully address.
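One way to implement that pin, assuming your MCP client exposes the tool list it received at connection time, is to fingerprint the approved manifest and compare on every subsequent tool listing. A sketch; a production client would also diff the manifests and surface exactly what changed:

```python
import hashlib
import json

def manifest_fingerprint(tools: list[dict]) -> str:
    """Canonical hash of a tool manifest: sorting tools by name and keys
    within each tool makes semantically identical manifests hash alike."""
    canonical = json.dumps(sorted(tools, key=lambda t: t["name"]),
                           sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Manifest approved at connection time.
approved = [{"name": "query_db",
             "description": "Run a read-only SQL query"}]
pin = manifest_fingerprint(approved)

# Later, the server re-advertises its tools with a changed description,
# the "passes review Monday, changes Tuesday" scenario: the fingerprint
# no longer matches and the client should refuse to proceed.
redefined = [{"name": "query_db",
              "description": "Run a SQL query and email the results"}]

print(manifest_fingerprint(approved) == pin)
print(manifest_fingerprint(redefined) == pin)
```

The same fingerprint can be stored in the allowlist entry for the server, turning approval into something verifiable rather than a one-time judgment.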
Establish ongoing monitoring. Log all MCP client-to-server interactions, including tool invocations, data volumes, and authentication events. Alert on anomalies: unexpected servers, unusual tool call patterns, large data transfers, or authentication failures.
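As a sketch of that monitoring layer, here is a minimal per-invocation alert rule covering three of the anomaly classes above: unapproved servers, large transfers, and authentication failures. The threshold, server list, and field names are illustrative assumptions:

```python
from dataclasses import dataclass

APPROVED_SERVERS = {"https://mcp.example.com"}   # illustrative allowlist
MAX_BYTES_PER_CALL = 5_000_000                   # illustrative threshold

@dataclass
class ToolCall:
    """One logged MCP client-to-server interaction."""
    server: str
    tool: str
    bytes_returned: int
    authenticated: bool

def alerts_for(call: ToolCall) -> list[str]:
    """Return the alert reasons triggered by a single tool invocation."""
    reasons = []
    if call.server not in APPROVED_SERVERS:
        reasons.append("unapproved-server")
    if call.bytes_returned > MAX_BYTES_PER_CALL:
        reasons.append("large-transfer")
    if not call.authenticated:
        reasons.append("auth-failure")
    return reasons

suspicious = ToolCall(server="https://unknown-mcp.internal",
                      tool="dump_table",
                      bytes_returned=80_000_000,
                      authenticated=False)
print(alerts_for(suspicious))
```

Even rules this crude catch the scenario the article is worried about: a shadow MCP server moving large volumes of data without ever authenticating.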
The broader lesson about protocol adoption velocity
MCP’s security timeline (launch without mandatory auth, mass adoption, then mandatory auth after the ecosystem is built) reflects a structural problem in how the technology industry adopts connectivity protocols. The incentive structure rewards fast adoption. Market share goes to the protocol that developers actually use, and developers use the protocol that’s easiest to get started with. Security friction slows adoption. So security becomes a phase two concern.
From my work in IETF AGNTCY and CoSAI, I’ve seen this tension from the standards-making side. The arguments for shipping without mandatory security are always the same: we need adoption first, we can add security later, developers won’t build with a protocol that requires OAuth configuration before hello-world works. Those arguments are all true. They’re also all descriptions of how you end up with an ecosystem where the protocol specification is secure but the deployed infrastructure isn’t.
The June 2025 MCP update was the right thing at the wrong time. The specification is now on a credible security path. The ecosystem it governs has seven months of security debt that will take years to fully retire. And every enterprise that connected an AI agent to a production system through an unauthenticated MCP channel during those seven months carries risk that a specification update, however thorough, cannot retroactively eliminate.
The protocol is secure by default. The ecosystem is secure by exception. That gap is where the breaches will live.