“It’s just JSON-RPC with extra steps.” That’s what the senior engineer said, glancing at the Model Context Protocol documentation for thirty seconds before closing the tab. I understood the dismissal. Another day, another protocol. We’ve all seen enough wrappers around HTTP to recognize the pattern - take REST, add some ceremony, declare victory. Except MCP isn’t wrapping APIs. It’s inverting the entire relationship between intelligence and capability.
The confusion is understandable because MCP uses familiar transport mechanisms. Yes, there’s JSON-RPC. Yes, there are request-response patterns. Yes, it looks like every integration layer you’ve built before. But watching MCP and seeing an API is like watching neurons fire and seeing electrical switches. The mechanism isn’t the meaning.
Traditional APIs express imperative relationships. Call this endpoint, get this data. Execute this function, receive this result. The caller knows what they want and how to get it. The integration is a contract written in endpoints and parameters, fixed at design time. When you write a Stripe integration, you’re not discovering what Stripe can do - you’re following a map someone drew years ago.
MCP expresses something fundamentally different: capability discovery. An agent connecting through MCP doesn’t receive a list of endpoints to call. It receives a space of possibilities to explore. The protocol doesn’t just transport requests; it negotiates understanding. When Claude Code connects to your file system through MCP, it’s not executing pre-programmed file operations. It’s discovering what files mean in your context, what patterns matter in your workflow, what capabilities emerge from the combination of tools available.
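The discovery handshake can be made concrete. The sketch below shows the shape of MCP's `tools/list` exchange - the method name and the `name`/`description`/`inputSchema` fields follow the MCP specification, while the server and its `read_file` tool are hypothetical stand-ins:

```python
import json

# A minimal sketch of an MCP server answering a discovery request.
# The agent learns *what* is possible (names, descriptions, schemas),
# not a fixed list of endpoints to call.
def handle_tools_list(request: dict) -> dict:
    """Answer a tools/list request with the capabilities this server offers."""
    assert request["method"] == "tools/list"
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {
            "tools": [
                {
                    "name": "read_file",
                    "description": "Read a UTF-8 text file from the workspace",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                }
            ]
        },
    }

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
response = handle_tools_list(request)
print(json.dumps(response["result"], indent=2))
```

Because the description and schema travel with the tool, the agent can reason about when and why to use it - nothing on the client side was written against this server in advance.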
The industry understood this distinction faster than expected. March 2025: OpenAI adopts MCP across ChatGPT, their Agents SDK, and the Responses API. April 2025: Demis Hassabis calls it “rapidly becoming an open standard for the AI agentic era” while announcing Gemini support. Microsoft doesn’t just integrate - they build an entire C# SDK and wire MCP into Copilot Studio as the default bridge to external knowledge. These aren’t companies adding another protocol to their stack. They’re recognizing that MCP solves a problem APIs never could: how do you let intelligence discover what tools can do, rather than telling it what endpoints to call?
The difference becomes visceral when you watch it work. Traditional API: “Call create_user with these parameters.” MCP conversation: “I see you have user management capabilities. Given what we’re trying to accomplish, I could create a user, modify permissions, or perhaps there’s a batch operation that would better serve our needs. Let me understand the constraints.” The agent isn’t following a script. It’s reasoning about tools.
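The invocation half looks deceptively similar to an RPC call - the difference is that the tool name and arguments below are chosen at runtime from what discovery revealed, not hardcoded at design time. The `tools/call` method follows the MCP specification; the `create_user` tool and its arguments are the hypothetical example from above:

```python
# Sketch of building a tools/call request for a tool the agent
# discovered moments ago - the name is data, not a compile-time symbol.
def make_tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC tools/call request against a discovered tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

call = make_tool_call(2, "create_user", {"email": "dev@example.com"})
```

The same function serves every tool on every server - the agent's reasoning picks the name; the protocol carries it.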
This inversion changes everything downstream. API integrations accumulate like scar tissue - each new service requiring custom code, specific error handling, unique retry logic. MCP servers compose like thoughts. Connect a file system server, a database server, and a browser automation server, and watch capabilities emerge that none of them individually express. The agent discovers workflows that cross server boundaries, solutions that require tool combinations no one explicitly programmed. Intelligence flows wherever it's needed, not through predetermined channels but through discovered pathways.
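That composition has a simple client-side mechanic: merge the tool catalogs from every connected server into one capability space. This is a hedged sketch, not the actual implementation of any client - the server names and tools are hypothetical, and real clients typically namespace tools to avoid collisions, as done here:

```python
# Merge the tool catalogs of several MCP servers into one registry,
# keyed by a namespaced name, so the agent plans across all of them
# and calls can be routed back to the right server.
def merge_catalogs(catalogs: dict) -> dict:
    """Map 'server/tool' names to (server, tool) pairs for routing."""
    merged = {}
    for server, tools in catalogs.items():
        for tool in tools:
            merged[f"{server}/{tool['name']}"] = (server, tool)
    return merged

catalogs = {
    "filesystem": [{"name": "read_file"}],
    "database": [{"name": "run_query"}],
    "browser": [{"name": "open_page"}],
}
registry = merge_catalogs(catalogs)
# The agent now sees one space of capabilities spanning three servers.
```

No server knows the others exist; the cross-boundary workflows emerge entirely on the agent's side of the protocol.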
The philosophical shift runs deeper. APIs assume the caller knows what they need. MCP assumes intelligence will figure out what’s possible. APIs optimize for predictable operations. MCP optimizes for emergent behavior. APIs deprecate versions and break integrations. MCP servers evolve capabilities while maintaining protocol compatibility.
The protocol itself keeps evolving without breaking this philosophy. OAuth 2.1 for secure agent-server communication. Streamable HTTP Transport keeping connections open for real-time bidirectional flow. Not features for features’ sake, but infrastructure for what MCP really enables: persistent conversations between intelligence and capability, not request-response transactions between client and server.
I watched this play out last week. A colleague needed to analyze performance metrics across three different systems - logs in Datadog, traces in Honeycomb, business metrics in our data warehouse. The traditional approach would involve three API integrations, custom correlation logic, probably a week of work. With MCP, they connected three servers and asked Claude Code to find performance anomalies. The agent discovered it could correlate timestamps across systems, identify patterns that spanned tool boundaries, even suggest metric relationships we hadn’t considered. No one programmed that workflow. It emerged from capability composition.
The inversion goes deeper than tools calling functions. MCP’s Sampling capability - what many call the protocol’s most revolutionary feature - lets tools themselves request AI completions without needing their own API keys. Imagine a database analysis tool that encounters an anomaly. With traditional APIs, it returns raw data. With MCP Sampling, that same tool could ask the connected AI: “This pattern looks unusual - what might explain it?” The tool becomes intelligent through the protocol, not through hardcoded logic. A debugging server that can hypothesize about root causes. A monitoring system that can narrate what it sees. Tools that think, not just execute.
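The direction of that request is the remarkable part: it flows from server to client. The sketch below shows the shape of such a message - the `sampling/createMessage` method and the message structure follow the MCP specification, while the anomaly text and token budget are invented for illustration:

```python
# Sketch of a tool asking the *connected client's* model to reason
# about something it observed. Note the inversion: this JSON-RPC
# request is sent by the server, and the client (with the user's
# approval) runs the completion - the tool needs no API key of its own.
def make_sampling_request(request_id: int, anomaly: str) -> dict:
    """Build a sampling/createMessage request about an observed anomaly."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {
                    "role": "user",
                    "content": {
                        "type": "text",
                        "text": f"This pattern looks unusual: {anomaly}. "
                                "What might explain it?",
                    },
                }
            ],
            "maxTokens": 300,
        },
    }

req = make_sampling_request(5, "p99 latency doubled while traffic fell")
```

A traditional API has no channel for this at all - the "server" here is the one asking the question.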
This is intelligence on tap at the protocol level. Not just humans accessing AI when needed, but tools themselves opening the cognitive faucet when they encounter something requiring reasoning. The server doesn’t store intelligence or run models - it taps into the flow through MCP’s bidirectional channel. VS Code already supports this fully as of June 2025. Claude Code will follow. The capability exists in the protocol, waiting to transform every connected tool into something that can reason about its own operations. No API could enable this recursive intelligence. It requires the trust relationship and capability negotiation that only a protocol designed for cognitive partnership could provide.
The technical implications cascade from this inversion. Error handling becomes semantic rather than syntactic. Instead of catching HTTP 500s, agents understand capability failures and route around them. Rate limiting becomes a negotiation rather than a brick wall. Resource constraints become part of the conversation rather than hidden failures.
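The spec supports this directly: a tool failure travels in-band as a result with `isError: true` and readable content, not as a transport-level error. The routing logic below is a hypothetical sketch of what an agent might do with that information - the fallback tool name is invented:

```python
# Sketch of semantic error handling. Per the MCP spec, a failed tool
# call still returns a JSON-RPC *result* - with isError set and a
# human-readable explanation - so the agent can read why it failed
# and route around it, rather than catching an opaque HTTP 500.
def route_result(response: dict, fallback_tools: list) -> str:
    result = response.get("result", {})
    if result.get("isError"):
        # The agent sees the reason and can pick another capability.
        return f"retry with {fallback_tools[0]}" if fallback_tools else "give up"
    return "done"

failed = {"jsonrpc": "2.0", "id": 3, "result": {
    "isError": True,
    "content": [{"type": "text", "text": "rate limit: retry after 30s"}],
}}
```

Because the failure is data the model can read, "rate limit: retry after 30s" becomes part of the conversation - exactly the negotiation the paragraph above describes.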
But the real power isn’t in any single technical improvement. It’s in how MCP changes the economics of tool integration. Every new MCP server immediately becomes available to every MCP-compatible agent. Not through individual integrations, but through protocol understanding. The cost of connecting new capabilities drops from days to seconds. The testing burden shifts from integration verification to capability expression.
This is why dismissing MCP as a glorified API misses the point entirely. APIs are contracts between systems. MCP is a language for capability expression. APIs assume fixed relationships. MCP assumes dynamic discovery. APIs connect functions. MCP connects intelligence to tools.
The senior engineer who dismissed it after thirty seconds eventually came back. Three hours later, they were building their own MCP server. Not because they needed another API, but because they finally understood - MCP isn’t about making API calls easier. It’s about making capability composition possible. The protocol that looks like plumbing but acts like philosophy.
They weren’t alone. Thousands of MCP server repositories have appeared on GitHub in just months. Block, Apollo, Replit, Codeium, Sourcegraph - each adding MCP support not as a checkbox feature but as fundamental infrastructure. The ecosystem isn’t growing; it’s exploding. Because once you understand that tools can describe themselves semantically, that capabilities can be discovered at runtime, that integrations can compose without coordination - you can’t go back to hardcoding endpoint URLs.
We’re still early in understanding what this means. Every week, someone publishes a new MCP server that enables capabilities we didn’t know we needed. Browser automation that composes with file systems. Databases that negotiate with web scrapers. Development tools that discover how to use each other. The semantic layer isn’t just transporting data; it’s enabling tools to understand what they can become together.
The comparison to APIs will persist because it’s the closest mental model we have. But MCP isn’t an evolution of APIs any more than HTTP was an evolution of FTP. It’s a different thing entirely, solving a different problem, enabling different futures. The engineers who grasp this early will build systems that seem like magic to those still thinking in endpoints and parameters.
P.S. The best evidence that MCP isn’t just an API? Try explaining an API integration to a non-technical person. Now watch them immediately understand when Claude Code uses MCP to automate their workflow. The simplicity isn’t abstraction - it’s the natural outcome when tools stop requiring imperative instructions and start accepting declarative intentions.