Industry Commentary

MCP as Public Infrastructure: What Changes When the Protocol Isn't Anyone's

By John Jansen · 6 min read


The Model Context Protocol crossed 97 million installs and is moving under Linux Foundation governance. That second fact matters more than the first. Install counts measure adoption; governance structure determines what the protocol becomes over the next decade.

For most of the last two years, teams building agent systems have been playing a familiar game: pick a vendor protocol, hedge against lock-in, write adapters, wait for the market to consolidate. OpenAI's function calling, Anthropic's tool use, LangChain's abstractions, various RAG frameworks — each had its own shape, and integrating across them meant writing glue code that was half translation layer, half prayer.

MCP moving to a foundation changes the question. It's no longer "which protocol wins?" It's "how do we build on a standard nobody controls?"

The shift from vendor protocol to public protocol

When a protocol belongs to a vendor, the architectural calculus is defensive. You assume the surface will change to serve commercial interests. You wrap it. You build abstraction layers so you can swap the vendor later. You treat the protocol as a liability that happens to be useful.

When a protocol becomes public infrastructure, the calculus inverts. The protocol becomes the stable thing — the most stable thing in your stack, often more stable than your own internal APIs. You stop wrapping it defensively and start building directly against it. The wrapping layer is where bugs live and velocity dies; removing it is a real win.

HTTP is the reference case. Nobody writes an HTTP abstraction layer anymore. You use the standard. The standard outlasts your framework, your language runtime, probably your company. MCP isn't there yet, but foundation governance is the structural precondition for getting there.

The practical consequence: if you're currently building MCP behind an abstraction layer so you can "swap it out later," stop. The abstraction is the risk now, not the protocol.

What agent architecture looks like when the protocol is a given

Once MCP is assumed infrastructure, the interesting design work moves up the stack. Three shifts become visible.

Servers become the unit of composition, not code. In a pre-MCP world, adding a capability to an agent meant writing a tool function, registering it, handling its errors, versioning it alongside your agent code. In an MCP-native world, capabilities are servers — separately deployed, separately versioned, discoverable at runtime. This is closer to how microservices work than how SDKs work. Your agent isn't a monolith that imports tools; it's a client that discovers and binds to servers. The mental model is less "library" and more "service mesh for context."

This has real implications for how teams organize. The team that owns your CRM doesn't ship a Python package for your agent team to import. They ship an MCP server. The contract is the protocol, not the language binding. That's a healthier boundary.
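The "client that discovers and binds to servers" model can be made concrete with a minimal sketch. This is not the real MCP SDK API — `CapabilityServer`, `bind`, and `call` are hypothetical names chosen to show the shape of the boundary: the agent imports no tool code; it binds to servers at runtime.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CapabilityServer:
    # A separately deployed capability: named, described by the tools it exposes.
    name: str
    tools: dict[str, Callable] = field(default_factory=dict)

class Agent:
    # The agent is a client, not a monolith: it holds bindings, not tool code.
    def __init__(self) -> None:
        self.bound: dict[str, CapabilityServer] = {}

    def bind(self, server: CapabilityServer) -> None:
        self.bound[server.name] = server

    def call(self, server_name: str, tool: str, **kwargs):
        return self.bound[server_name].tools[tool](**kwargs)

# The CRM team ships a server, not a Python package for the agent team to import.
crm = CapabilityServer("crm", {"lookup": lambda email: {"email": email, "tier": "pro"}})
agent = Agent()
agent.bind(crm)
result = agent.call("crm", "lookup", email="a@example.com")
```

The contract between the teams is the `call` surface, not the language the server happens to be written in.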

Capability discovery becomes a first-class concern. When you have five tools, you hardcode them. When you have five hundred MCP servers available across your organization, you need actual discovery — a registry, a permissions model, a way for agents to reason about which servers to attach for a given task. Most teams haven't built this yet because they haven't needed to. They will. The registry problem is going to eat a surprising amount of engineering time over the next two years, and the teams that solve it well will ship agents that feel categorically more capable.
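A minimal sketch of what "actual discovery" means in practice, assuming a registry that filters by a permissions model. All names here are illustrative, not part of any real registry standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerRecord:
    # The minimum a registry entry needs: a name, the capabilities the
    # server advertises, and which roles may attach to it.
    name: str
    capabilities: frozenset
    allowed_roles: frozenset

class Registry:
    def __init__(self):
        self._records = []

    def register(self, record: ServerRecord) -> None:
        self._records.append(record)

    def discover(self, capability: str, role: str) -> list:
        # Discovery is a permission-filtered query, not a hardcoded import list.
        return [r.name for r in self._records
                if capability in r.capabilities and role in r.allowed_roles]

registry = Registry()
registry.register(ServerRecord("crm", frozenset({"customer.lookup"}),
                               frozenset({"support-agent"})))
registry.register(ServerRecord("payments", frozenset({"invoice.read"}),
                               frozenset({"finance-agent"})))

visible = registry.discover("customer.lookup", role="support-agent")
```

The hard engineering is everything this sketch elides: keeping records fresh, scoring relevance when many servers match, and letting the agent reason about which of the matches to actually attach.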

Trust boundaries get redrawn around the protocol. An MCP server is an execution surface. If your agent can call any registered server, you've effectively extended your trust boundary to include everyone who publishes one. This is the same problem package ecosystems have — npm, PyPI, the supply chain attacks that follow — but with a worse blast radius because agents execute autonomously. Signing, attestation, capability-scoped permissions, audit logs of which server was called with what arguments: these stop being nice-to-haves. Foundation governance helps here because it creates a credible venue for security standards to emerge. It doesn't solve the problem.
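The "stop being nice-to-haves" list can be sketched as a call gate: every server call is checked against capability-scoped permissions and recorded in an audit log, allowed or not. Signing and attestation are out of scope here; the class and scope format are hypothetical:

```python
import time

class GatedClient:
    # Every call is checked against granted scopes and logged with its
    # server, tool, and arguments — whether or not it was allowed.
    def __init__(self, granted):
        self.granted = set(granted)
        self.audit_log = []

    def call(self, server, tool, fn, **kwargs):
        scope = f"{server}:{tool}"
        allowed = scope in self.granted
        self.audit_log.append({"ts": time.time(), "scope": scope,
                               "args": kwargs, "allowed": allowed})
        if not allowed:
            raise PermissionError(f"agent lacks scope {scope}")
        return fn(**kwargs)

client = GatedClient(granted={"crm:lookup"})
ok = client.call("crm", "lookup", lambda email: email.upper(), email="a@example.com")
try:
    client.call("payments", "refund", lambda amount: amount, amount=100)
    denied = False
except PermissionError:
    denied = True
```

Note that the denied call still lands in the audit log — refusals are often the most interesting entries when you are reconstructing what an autonomous agent tried to do.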

The architectural patterns that start to make sense

A few patterns become more attractive under the "MCP is infrastructure" assumption.

Thin agents, thick servers. Put the logic in MCP servers, not in the agent. The agent becomes an orchestration loop — plan, call, observe, repeat — and most of the capability lives in servers that can be developed, tested, and deployed independently. This inverts the common pattern of agents-as-applications with tools bolted on. It also means your agent code gets dramatically smaller and easier to reason about.

Server-per-domain, not server-per-integration. The temptation is to build one MCP server per external system: a Salesforce server, a Stripe server, a Postgres server. The better pattern is usually server-per-domain: a "customer" server that happens to talk to Salesforce and Stripe and your data warehouse behind the scenes. Agents shouldn't need to know that customer data is spread across three systems. The server's job is to hide that.

Local-first where possible. MCP runs locally as happily as it runs over the network. For latency-sensitive loops — code execution, filesystem operations, anything in a tight agent cycle — local servers are meaningfully better. The protocol doesn't force a deployment model, and treating everything as remote is a mistake we've seen teams make.

Versioned capability contracts. Because servers are separately deployed, they'll drift from agent expectations. Version your capability schemas explicitly. Agents should be able to ask a server what version it speaks and adapt, or refuse. This is basic API hygiene, but it's easy to skip when you're moving fast.
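The "adapt or refuse" behavior can be sketched as explicit version negotiation. The major-version-only semantics here are a deliberate simplification, and the function name is illustrative:

```python
def negotiate(agent_supported, server_version):
    # Ask the server what schema version it speaks; adapt if we support
    # that major version, refuse loudly if we don't.
    major = int(server_version.split(".")[0])
    if major not in agent_supported:
        raise RuntimeError(f"unsupported capability schema v{server_version}")
    return major

chosen = negotiate({1, 2}, "2.3")
try:
    negotiate({1, 2}, "3.0")
    refused = False
except RuntimeError:
    refused = True
```

The refusal path matters as much as the happy path: a server that has drifted past what the agent understands should fail at bind time, not mid-task.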

What foundation governance doesn't fix

Foundation governance is necessary but not sufficient. It gives MCP a credible future, but it doesn't give you a good agent. Several hard problems remain squarely on implementers.

The semantics of tool composition — how an agent decides which server to call, in what order, with what arguments — is still a research problem. MCP standardizes the wire format, not the reasoning. Your agent is only as good as the model driving it and the prompts and scaffolding around it.

Observability across server calls is underdeveloped. When an agent makes twelve MCP calls to complete a task, understanding what happened — and debugging when it goes wrong — requires tooling that mostly doesn't exist yet. This is where we're spending a lot of our own time.
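A sketch of the missing layer: record one span per server call so a twelve-call task can be reconstructed after the fact. The field names are illustrative, not a real tracing standard:

```python
import time
import uuid

class CallTracer:
    # One span per call — server, tool, arguments, timing, and outcome —
    # captured whether the call succeeds or raises.
    def __init__(self):
        self.spans = []

    def traced(self, server, tool, fn, **kwargs):
        span = {"id": uuid.uuid4().hex, "server": server, "tool": tool,
                "args": kwargs, "start": time.monotonic()}
        try:
            result = fn(**kwargs)
            span["ok"] = True
            return result
        except Exception as exc:
            span["ok"], span["error"] = False, repr(exc)
            raise
        finally:
            span["end"] = time.monotonic()
            self.spans.append(span)

tracer = CallTracer()
value = tracer.traced("math", "double", lambda x: 2 * x, x=21)
try:
    tracer.traced("math", "fail", lambda: 1 / 0)
except ZeroDivisionError:
    pass
```

Failed calls are recorded too — in practice the failure spans, with their arguments attached, are the ones you reach for first when an agent run goes sideways.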

And the economics of running hundreds of MCP servers in production — cold starts, connection pooling, resource isolation — are going to surface operational problems that nobody's hit at scale yet.

The honest read

MCP becoming public infrastructure is the most important architectural shift in agent systems this year, and it's being under-discussed because install counts make better headlines than governance structures.

The teams that will build durable agent systems over the next few years are the ones treating MCP the way they treat HTTP: as a given, as infrastructure, as something to build on rather than around. That means fewer abstraction layers, more investment in server composition and discovery, and a real engineering answer to the trust problem that an open ecosystem creates.

The protocol question is settled. The architecture question is wide open, and it's the one worth working on.

Want to discuss this?

We write about what we're actually working on. If this is relevant to something you're building, we'd love to hear about it.