Anthropic Just Released the ‘USB-C for AI’, And It Ships Without Authentication
On November 25, 2024, Anthropic published a brief blog post announcing the Model Context Protocol, or MCP. The framing was appealing: MCP would be the “USB-C for AI,” a universal connector that would standardize how AI systems interact with external data sources, tools, and services. Instead of building custom integrations for every combination of AI model and enterprise tool, developers would implement MCP once and unlock an entire ecosystem.
The idea was elegant. The execution had a hole in it large enough to drive an entire supply chain attack through. MCP shipped without mandatory authentication between client and server.
The problem nobody flagged at launch
Before MCP, connecting an AI model to an external system meant building a bespoke integration. If you had 10 AI applications and 100 tools, you potentially needed 1,000 different integrations. MCP reduced that to a linear equation: each application implements the MCP client protocol once, each tool implements the MCP server protocol once, and they communicate using JSON-RPC 2.0. The protocol drew inspiration from the Language Server Protocol, which had successfully standardized how development tools interact with programming languages.
The architecture was straightforward: MCP servers expose data and tools from external systems, MCP clients are AI applications that connect to those servers, and the protocol handles the communication primitives between them. Anthropic launched with SDKs for Python and TypeScript and pre-built reference server implementations for systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.
Early adopters moved fast. Block and Apollo began integrating MCP into their workflows. Development tool companies like Zed, Replit, Codeium, and Sourcegraph started building MCP support. The momentum was real.
But buried in the specification’s security section was a telling choice of words. The protocol stated that implementors “SHOULD” follow security best practices, not “MUST.” Authentication, authorization, and encryption were recommended but not required. The protocol that was supposed to make AI connectivity safe and standardized had made security entirely optional.
Why optional security defaults to no security
Anyone who has spent time in enterprise systems architecture knows what happens when security is optional: it doesn’t happen. Developers under deadline pressure will skip the optional step. Pre-built reference implementations become production deployments. And the “we’ll add security later” promise turns into technical debt that never gets repaid.
This isn’t a theoretical concern. The Palo Alto Networks security overview of MCP published in June 2025 documented exactly what happened. MCP servers require credentials, API keys, database connection strings, authentication tokens, to connect to external systems. Those credentials are stored somewhere. The MCP server itself often requests broad permission scopes to provide flexible functionality. And the same server might require permissions to multiple external services simultaneously.
The attack surface is obvious. An MCP server connecting to your Salesforce instance, your Postgres database, and your GitHub repositories holds credentials for all three. If the server is compromised, or if it was malicious from the start, those credentials are exposed. And because MCP servers can be downloaded from the internet by any developer, supply chain attacks become trivial.
Satyajith Mundakkal, Global CTO of Hexaware Technologies, captured the tension in comments on MCP’s first anniversary: “AI is only as good as the data it can reach safely. That’s where MCP is proving essential: a standardized, governed way for assistants and agents to securely discover, request, and use enterprise data and tools.”
But then the caveat: “If done poorly, MCP becomes integration sprawl and a bigger attack surface. The lesson from year one is clear: pair MCP with strong identity, RBAC, and observability from day one.”
From day one. Not from month six, after 13,000 servers have already been deployed.
The scale of the exposure
The adoption numbers tell the story of how fast the exposure grew. By April 2025, MCP server downloads had grown from roughly 100,000 at launch to over 8 million. The ecosystem expanded to over 5,800 MCP servers and 300 MCP clients. Major deployments appeared at Block, Bloomberg, Amazon, and hundreds of Fortune 500 companies.
All of this growth happened before MCP mandated any form of authentication. Every one of those servers deployed between November 2024 and June 2025, when OAuth 2.1 was finally made mandatory, operated under whatever authentication the individual developer chose to implement. In practice, that often meant no authentication at all.
I’ve watched this pattern repeat across my career building enterprise systems. When you give developers a fast path that skips security, they’ll take it, not because they don’t care about security, but because the deadline is tomorrow and the security layer can always be added next sprint. Except next sprint never comes.
The rug pull attack, a new vector nobody anticipated
The security implications went beyond missing authentication. In April 2025, security researchers from Invariant Labs and others published analysis that identified attack vectors specific to MCP’s design that didn’t exist in traditional API integrations.
The most alarming was what became known as the “rug pull” attack. MCP servers can dynamically redefine the tools they expose after initial approval. A tool you approved on Monday, say, a database query function, could silently transform into a data exfiltration function by Tuesday. The tool definition changes server-side, and unless the client is explicitly checking for redefinition, the agent continues executing with the modified tool.
This is fundamentally different from a traditional API integration. When you approve an API endpoint, its behavior is defined by the server’s code, but the contract is relatively static. MCP introduced a dynamic contract where the server controls the tool definitions and can change them at will. An MCP server could also embed hidden instructions in tool descriptions, a technique called “line jumping”, that could alter agent behavior without the user’s knowledge.
The MCP specification itself acknowledged the risk, noting that “descriptions of tool behavior such as annotations should be considered untrusted, unless obtained from a trusted server.” But the protocol provided no mechanism for determining which servers were trusted. That determination was left entirely to the implementor.
The prompt injection multiplier
MCP didn’t just create its own attack vectors, it amplified existing ones. The protocol’s purpose is to give AI models access to external data and tools. That means data retrieved through MCP servers flows directly into the model’s context window. If an attacker can influence the data that an MCP server returns, they can perform prompt injection attacks that alter the agent’s behavior.
Researchers documented scenarios where a compromised MCP server containing malicious promptscould instruct a coding agent to write insecure code or modify database records without user permission. The Cato Networks research team later documented a “Living off AI” attack where a malicious support ticket submitted by an external user could inject harmful instructions when an internal user triggered an AI action through MCP. The AI executes with the internal user’s permissions, but the instructions come from the attacker.
Combined with missing authentication, these vectors created a perfect storm. An unauthenticated MCP server could serve poisoned tool definitions to any connecting client, and those definitions could include embedded instructions that alter agent behavior in ways invisible to the user.
What the specification should have required
The security posture MCP shipped with in November 2024 stands in stark contrast to what the protocol eventually adopted. The anniversary update in November 2025 added stricter security requirements for local server installations, default authorization scopes, enterprise identity-provider policy controls, and OAuth client-credentials flow for machine-to-machine authentication.
These are the controls that should have been in the initial specification. Not as SHOULD recommendations, but as MUST requirements. The argument against mandatory security at launch is always the same: it slows adoption. Developers won’t build with a protocol that requires them to implement an OAuth flow before they can test a hello-world example. And there’s truth to that. Friction kills adoption.
But the alternative, 8 million downloads and thousands of production deployments without mandatory authentication, creates a different kind of friction. The kind that shows up as CVEs, breach disclosures, and emergency patching campaigns.
By December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation under the Linux Foundation, co-founded with Block and OpenAI. The governance changed. The protocol continued to mature. But the ecosystem’s security debt remained.
Lessons for the next universal standard
MCP’s trajectory offers a lesson that extends well beyond a single protocol. As the enterprise technology industry builds the infrastructure for agentic AI, security decisions made at the protocol level will compound across millions of deployments. Getting those decisions wrong at launch creates a legacy exposure that no amount of after-the-fact patching can fully remediate.
From my work in the IETF AGNTCY working group and CoSAI, I’ve seen the standards-making process struggle with this same tension between adoption speed and security rigor. The pressure to ship fast and iterate is enormous, especially when competitors are shipping their own approaches. But the cost of security debt at the protocol level is categorically different from the cost at the application level. Protocol-level debt scales with the ecosystem.
The immediate takeaway for any enterprise evaluating MCP in late 2024 was straightforward: do not deploy MCP servers in production without adding your own authentication layer. Inventory any MCP integrations already in use, because developers adopt open-source tools faster than security teams can evaluate them. Implement tool-call allowlists to restrict which tools each MCP client can invoke. Add monitoring for MCP server-to-client communication patterns. And maintain a healthy skepticism about the security maturity of any MCP server you didn’t build yourself.
The “USB-C for AI” analogy was clever marketing. But USB-C came with defined electrical safety standards from day one. MCP didn’t.