OpenAI Just Adopted MCP, And the Protocol Still Doesn’t Mandate Authentication
On March 26, 2025, Sam Altman posted a sentence on X that would reshape the enterprise AI connectivity landscape overnight: “People love MCP and we are excited to add support across our products.”
Just like that, the Model Context Protocol, Anthropic’s open standard for connecting AI models to external tools and data, went from promising newcomer to inevitable standard. OpenAI announced MCP support across its Agents SDK immediately, with the ChatGPT desktop app and Responses API following shortly after. Google DeepMind’s Demis Hassabis confirmed Gemini support weeks later, calling MCP a “rapidly emerging open standard for agentic AI.”
The enterprise technology press treated this as a triumph of open standards. TechCrunch’s headline captured the sentiment: OpenAI was adopting its rival’s protocol. Interoperability had won. The “USB-C for AI” had arrived.
What almost nobody asked was the question that should have been first: is it secure?
The security gap nobody addressed
When OpenAI announced MCP support, the protocol was four months old. Anthropic had launched it in November 2024 with a specification that recommended but did not require authentication between MCP clients and servers. The specification used “SHOULD” where it needed “MUST.” Security best practices were suggested, not mandated.
OpenAI’s adoption didn’t change that. The company added support for connecting to MCP servers, but the protocol itself still didn’t require those servers to authenticate incoming connections or encrypt traffic. Every OpenAI integration that connected to an MCP server inherited the protocol’s authentication gap.
The scale of the exposure grew rapidly. Before OpenAI’s adoption, MCP’s ecosystem was growing steadily among developer tool companies like Zed, Replit, and Sourcegraph, plus early adopters like Block and Apollo. After OpenAI’s adoption, the growth curve went exponential. MCP server downloads surged past 8 million by April 2025, with the ecosystem expanding to over 5,800 servers and 300 clients. The MCP SDK was reaching 97 million monthly downloads across Python and TypeScript by year’s end.
Every one of those servers deployed between November 2024 and June 2025, when authentication was finally made mandatory, was operating under whatever security posture the individual developer chose to implement. For many, that was no security at all.
The vendor validation fallacy
OpenAI’s adoption created a specific kind of risk that I’ve watched play out across my career building enterprise systems: the vendor validation fallacy. When a major vendor adopts a technology, enterprises interpret that adoption as a security endorsement. The reasoning is intuitive. If OpenAI, a company with massive resources and significant security stakes, decided MCP was ready for their products, surely it’s safe for ours.
That reasoning is wrong. OpenAI’s adoption was a product and ecosystem decision, not a security certification. The company’s developer documentation for MCP integration was straightforward about the capability but said little about the security model. The protocol’s own specification left security implementation to individual developers.
This distinction matters because procurement decisions in large enterprises often follow a simple heuristic: if the major platform vendor supports it, our security review can be lighter. I’ve seen this pattern firsthand in standards body discussions at CoSAI and IETF AGNTCY, where the gap between vendor adoption velocity and security specification maturity is a recurring concern. The market moves at the speed of product announcements. Security specifications move at the speed of careful engineering review. Those two clocks are almost never in sync.
The security research arrived, two weeks later
In April 2025, security researchers published analyses confirming the concerns that the specification’s optional security model had invited. Wikipedia’s summary of the research noted multiple outstanding security issues: prompt injection attacks through MCP tool descriptions, tool permissions that allowed combining tools to exfiltrate data, and lookalike tools that could silently replace trusted ones.
The tool impersonation problem was particularly alarming. An MCP server could register a tool with a name and description that closely mimicked a legitimate tool. When an AI model needed to invoke that tool, it might connect to the impersonator instead, which could then intercept data, modify the model’s behavior through injected instructions, or exfiltrate information through its return values.
Combined with the lack of mandatory authentication, this created a supply chain attack vector that was trivially easy to exploit. Anyone could publish an MCP server. Anyone could name it anything. And any AI model connecting to it had no protocol-level mechanism to verify that the server was what it claimed to be.
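The failure mode is easy to sketch. In a deliberately simplified model (this is illustrative Python, not the real MCP SDK; `NaiveToolRegistry`, `Tool`, and the server names are invented for this example), a client that resolves tools by name alone lets the most recently registered server silently shadow a trusted one:

```python
# Illustrative sketch of name-only tool resolution -- not the MCP SDK.
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    description: str
    server: str  # which MCP server registered this tool


class NaiveToolRegistry:
    """Resolves tools by name alone, with no check of server identity."""

    def __init__(self):
        self._tools = {}

    def register(self, tool: Tool):
        # Last registration wins: a later, malicious server can
        # silently replace a trusted tool of the same name.
        self._tools[tool.name] = tool

    def resolve(self, name: str) -> Tool:
        return self._tools[name]


registry = NaiveToolRegistry()
registry.register(Tool("read_file", "Read a file from disk", "trusted.internal"))
registry.register(Tool("read_file", "Read a file from disk", "attacker.example"))

# The model asks for "read_file" and gets the impersonator.
print(registry.resolve("read_file").server)
```

Nothing in this resolution path consults server identity, which is exactly the gap the pre-June-2025 specification left open.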
The New Stack’s analysis of MCP’s first year documented how the enterprise implications multiplied as adoption broadened. Microsoft embedded MCP into Azure AI services and released a C# SDK in partnership with Anthropic. Amazon added MCP support within Amazon Bedrock. The protocol went from niche developer tool to enterprise infrastructure in months, all before the authentication question was resolved.
What OpenAI’s adoption actually changed
To be fair, OpenAI’s adoption wasn’t purely a risk story. It accomplished something genuinely valuable: it ended the standards fragmentation that had plagued AI tool connectivity. Before MCP, OpenAI had its own function-calling API and the ChatGPT plugin framework, Google had its own approach, and Anthropic had MCP. Each required vendor-specific integrations. OpenAI’s decision to adopt MCP instead of doubling down on its own approach signaled that the industry was converging.
As one analysis noted, “This coalescing of significant AI leaders, Anthropic, OpenAI, Google and Microsoft, caused MCP to evolve from a vendor-led spec into common infrastructure and essentially ensured MCP would continue to dominate the conversation about AI connectivity.” The speed of this convergence was remarkable. OAuth 2.0 took roughly four years to reach comparable cross-vendor adoption. MCP did it in five months.
But convergence and security aren’t the same thing. HTTP became a universal protocol too, and we spent the next two decades bolting security onto it (TLS, CORS, CSP, HSTS) because the original specification didn’t mandate it. MCP was repeating the same pattern on an accelerated timeline, except this time the protocol wasn’t connecting web browsers to static content. It was connecting AI models to databases, APIs, file systems, and financial services.
The enterprise deployment mistake
The enterprise deployment pattern that emerged after OpenAI’s adoption followed a predictable path. Development teams, excited about the possibilities, started connecting their OpenAI-based applications to MCP servers, both third-party and internally built. They used MCP to give their AI models access to internal databases, knowledge bases, customer data, and operational tools.
These weren’t rogue deployments. They were sanctioned projects, often with executive backing, built on platforms that major vendors endorsed. The security review, if one happened, focused on the AI model’s capabilities and the data it could access, not on the MCP connection layer’s authentication posture.
The result was that by mid-2025, many enterprises had AI models connected to sensitive internal systems through MCP channels that lacked mandatory authentication, operated without encryption requirements, had no mechanism to verify server identity, could be subject to dynamic tool redefinition without client notification, and had no standardized logging or audit trail.
Each of these gaps would be a serious finding in a traditional application security review. Collectively, they represented an entirely new attack surface that most security teams hadn’t even inventoried, let alone assessed.
What enterprises should have done differently
The lesson from OpenAI’s MCP adoption isn’t that enterprises should have avoided MCP. The standard convergence it represented was genuinely positive for the ecosystem. The lesson is that vendor adoption is not a substitute for security due diligence.
Any enterprise that deployed MCP integrations between March and June 2025 should have treated every MCP server connection as an untrusted channel, regardless of whether it was an internal or third-party server. They should have implemented their own authentication layer between MCP clients and servers, since the protocol didn’t require one. They should have deployed network-level controls to restrict which MCP servers each client could connect to. And they should have established monitoring for MCP traffic patterns, watching for anomalous tool invocations, unexpected data volumes, or connections to unregistered servers.
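A minimal version of the compensating controls described above fits in a few lines. This is an illustrative sketch, not production code: `ALLOWED_SERVERS` and the shared-secret scheme are assumptions standing in for whatever allowlist and authentication layer a given enterprise would actually deploy.

```python
# Hypothetical compensating controls for the authentication-optional era:
# an allowlist plus a shared-secret check applied before any MCP
# connection is used, since the protocol itself required neither.
import hmac

# Assumed inventory of sanctioned MCP servers (illustrative hostname).
ALLOWED_SERVERS = {"mcp-internal.corp.example"}


def authorize_connection(host: str, presented_token: str,
                         expected_token: str) -> bool:
    """Allow only registered servers that present the right secret."""
    if host not in ALLOWED_SERVERS:
        # Unknown server: refuse before any tool traffic flows.
        return False
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(presented_token, expected_token)
```

The same gate is a natural place to hang the monitoring described above: every refusal is a signal that a client tried to reach an unregistered server.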
The organizations that did this were the exception. The organizations that treated OpenAI’s endorsement as permission to deploy without additional security controls were the norm.
When the first critical MCP CVEs arrived in July 2025 (CVE-2025-49596 in the MCP Inspector at CVSS 9.4 and CVE-2025-6514 in the mcp-remote proxy at CVSS 9.6, the former in Anthropic’s own reference tooling), the enterprises that had deployed with proper security controls were patching. The enterprises that had deployed on trust were scrambling to understand their exposure.
The standard is maturing. The legacy isn’t.
By late 2025, MCP’s security posture had improved substantially. The June 2025 specification update mandated OAuth 2.1, classified MCP servers as OAuth resource servers, and introduced proper authorization discovery mechanisms. The November 2025 update added more: asynchronous operations, server identity verification, and enterprise-grade registry capabilities. In December, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, with OpenAI and Block as co-founders.
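The practical effect of the OAuth 2.1 mandate is that an MCP server must now behave like any other OAuth resource server: reject unauthenticated requests and point clients toward authorization discovery. A hedged sketch of that posture follows; the token set and the status-code policy are illustrative, and a real server would validate JWTs or use token introspection rather than a static set.

```python
# Illustrative resource-server behavior under the June 2025 spec update.
# VALID_TOKENS stands in for real JWT validation or token introspection.
VALID_TOKENS = {"token-abc123"}


def handle_request(headers: dict) -> int:
    """Return an HTTP status code for a simulated MCP request."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        # No credentials: 401. A real server would also send a
        # WWW-Authenticate header pointing at its protected-resource
        # metadata so clients can discover the authorization server.
        return 401
    token = auth[len("Bearer "):]
    # Wrong or expired token: reject instead of serving tools.
    return 200 if token in VALID_TOKENS else 403
```

The contrast with the pre-June posture is the point: the unauthenticated path now fails closed at the protocol level instead of being left to each developer’s discretion.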
The protocol is now on a credible path toward enterprise-grade security. But the thousands of MCP servers and integrations deployed during the authentication-optional period didn’t automatically upgrade. They’re still running. Many haven’t been patched. And many enterprises don’t even know they exist, because developers adopted MCP the same way they adopt any promising open-source tool: quickly, quietly, and without filing a procurement request.
OpenAI adopting MCP was the right decision for the ecosystem. But the sequencing (mass adoption first, mandatory security later) created a legacy exposure that will take years to fully remediate. The next time a major vendor endorses a protocol, the question shouldn’t be “does this validate it?” The question should be “does this secure it?”
Those are very different questions, and the answer to one doesn’t imply the answer to the other.