Opinion

MCP vs. CLI: The Protocol War

Industry trend · Apr 6, 2026

On March 11, 2026, Denis Yarats took the stage at the Ask 2026 conference and said something that caused a minor earthquake in the AI developer community. The Perplexity CTO announced that the company was deprioritizing the Model Context Protocol internally, walking back toward traditional REST APIs and command-line interfaces. The audience barely reacted. Social media exploded.

Within hours, Y Combinator CEO Garry Tan had quote-tweeted the news with three words: "MCP sucks honestly." His complaints were specific — MCP eats too much context window, authentication is broken, and he'd built a CLI wrapper in 30 minutes to replace it. By the time Hacker News finished chewing on it, a protocol that had been heralded as the universal standard for AI tool integration looked suddenly mortal.

What followed was a debate that revealed something deeper than a preference for one tool over another. The MCP-vs-CLI argument cuts to the heart of what AI agents actually need — and whether the industry has been building the right things.


The problem that started it all

MCP's original promise was elegant. Instead of every AI platform building custom connectors for every tool — bespoke auth, inconsistent data formats, constant maintenance — a standardized protocol would let any MCP client talk to any MCP server. Write it once, use it everywhere. By early 2026, the ecosystem had grown explosively, with server counts doubling weekly.

Then people started running it in production.

The core complaint, articulated by Yarats and amplified by Tan, is context window bloat. MCP loads all tool descriptions in full at session start, before the agent has done anything. Three servers — GitHub, Playwright, and an IDE integration — on a 200K-token model can consume 143K tokens in tool definitions alone. The agent starts out with more than 70% of its context window already spent.

Metric                                                   Figure
Context consumed by tool schemas (3-server MCP setup)    ~72%
MCP token inflation vs. CLI                              4–32×
Tool selection accuracy drop as schema count grows       43% → 14%

Researchers call this last effect "context rot." The data is stark: the more unrelated tool definitions packed into context, the weaker the model's focus on what actually matters for the current task. This isn't a bug — it's a protocol-level design choice. MCP lines up every tool at the door and asks the model to pick. The cost scales with every tool you add.
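The arithmetic behind those figures is easy to sketch. In the toy model below, the per-server token counts are illustrative assumptions chosen only to sum to the reported 143K figure, and the 1K on-demand cost is likewise an assumption; the point is the shape of the cost curve, not the exact numbers.

```python
# Toy model of MCP-style upfront schema loading vs. CLI-style
# on-demand discovery. Per-server token counts are illustrative
# assumptions chosen to sum to the ~143K figure reported above.
CONTEXT_WINDOW = 200_000

schema_tokens = {
    "github": 55_000,      # hypothetical
    "playwright": 48_000,  # hypothetical
    "ide": 40_000,         # hypothetical
}

# MCP loads every tool description at session start.
upfront = sum(schema_tokens.values())

# A CLI-style agent pays only for the tools it actually touches,
# e.g. one --help invocation (~1K tokens, also an assumption).
on_demand = 1_000

print(f"upfront:   {upfront} tokens "
      f"({upfront / CONTEXT_WINDOW:.1%} of window)")
print(f"on-demand: {on_demand} tokens "
      f"({on_demand / CONTEXT_WINDOW:.1%} of window)")
```

Note that the upfront cost grows with every server added, whether or not the current task touches it; the on-demand cost grows only with the tools actually used.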

Tan's solution — and, by implication, his preference for how agents should work — was a CLI wrapper. Build a tool that speaks the language the model already knows, skip the protocol layer, and get on with it. Perplexity's Yarats went further, launching an Agent API that absorbs tool complexity internally so developers don't have to manage it at all.

"MCP sucks honestly."

— Garry Tan, CEO of Y Combinator, March 2026


Why CLI has genuine advantages

The case for CLI-based agent tool integration isn't just about what MCP does wrong. CLI has real structural advantages rooted in how language models were trained.

Models have seen enormous amounts of terminal text during training — commands, outputs, errors, scripts, man pages. Tools like git, curl, grep, jq, and psql are part of the model's native vocabulary. An agent can run git --help to discover capabilities on demand, without preloading parameter definitions. Context cost is pay-as-you-go rather than upfront. For these well-known tools, the model can often one-shot the invocation without any additional instruction.

CLI also provides a natural progressive disclosure model. An agent runs --help, explores subcommands as needed, and expands its understanding of a tool organically. This mirrors how a skilled developer uses the terminal — not by memorizing every flag, but by knowing how to ask.
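That pay-as-you-go pattern can be sketched in a few lines of Python. This uses the Python interpreter itself as a stand-in for an arbitrary CLI tool, since tools like git may not be installed wherever this runs; the `discover` helper is an illustrative name, not any particular agent framework's API.

```python
import subprocess
import sys

def discover(command: list[str]) -> str:
    """Run `<command> --help` and return its output.

    This is the progressive-disclosure move: the agent spends context
    on a tool's documentation only at the moment it needs that tool,
    instead of preloading every schema at session start.
    """
    result = subprocess.run(
        command + ["--help"],
        capture_output=True,
        text=True,
    )
    # Some tools print help to stderr; take whichever stream has it.
    return result.stdout or result.stderr

# The Python interpreter stands in for git/curl/jq here.
help_text = discover([sys.executable])
print(help_text.splitlines()[0])  # first line of the usage text
```

The same helper works for subcommands (`["git", "remote"]` and so on), so the agent's picture of a tool deepens only along the paths it actually explores.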

ByteDance saw this clearly. Its Lark/Feishu team open-sourced an official CLI with over 200 commands across 11 business domains, bundled with 19 built-in agent skills. Google shipped gws for Google Workspace. The CLI-plus-skills pattern is quickly becoming the enterprise default, not the alternative.


Why MCP defenders aren't wrong either

The backlash against MCP prompted a sharp counter-reaction from engineers who think the discourse is muddled. Charles Chen's widely shared post "MCP is Dead; Long Live MCP" put it bluntly: he compared MCP critics to influencers spreading health misinformation, accusing them of "simple ignorance" about where and why the protocol matters.

His core argument is that the context bloat problem is real but solvable — and that critics are confusing a naive MCP implementation with MCP itself. A single-tool MCP architecture, where the agent exposes one tool that accepts code in a familiar language like Python or JavaScript, sidesteps the schema explosion entirely. The agent writes in a language it was trained on, maintains state across calls, and composes operations into reusable scripts.
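That single-tool architecture can be sketched as a toy. This is an illustration of the idea, not the MCP SDK: the names `CodeTool`, `execute_code`, and `result` are hypothetical. One tool accepts Python source, executes it against a namespace that persists across calls, and returns whatever the snippet assigns to `result`.

```python
# Toy single-tool "code mode": instead of exposing N tool schemas,
# the agent gets one tool that accepts Python source. All names here
# are illustrative, not part of the MCP spec.

class CodeTool:
    def __init__(self):
        # State persists across calls, so the agent can build on
        # earlier work instead of re-deriving it each turn.
        self.namespace: dict = {}

    def execute_code(self, source: str):
        """Run agent-written Python and return the `result` variable."""
        exec(source, self.namespace)
        return self.namespace.get("result")

tool = CodeTool()
tool.execute_code("rows = [1, 2, 3]")            # first call: set up state
total = tool.execute_code("result = sum(rows)")  # later call: reuses `rows`
print(total)  # 6
```

A production version would sandbox the execution, but the schema-side point stands: the model sees one tool definition regardless of how many operations it composes behind it.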

"CLI tools were designed for humans, not agents. What a human can easily infer becomes a major source of failure for an AI agent."

— Runlayer Engineering Blog

The auth argument is similarly contested. Tan complained that MCP authentication is broken. MCP defenders point out that it uses OAuth 2.0 with scoped, revocable tokens — strictly more secure than the long-lived API keys baked into CLI config files. CLI tools grant an agent effectively full user access with no mechanism to limit specific commands or track request sequences. An untrusted input can prompt-inject a CLI-based agent, and a compromised agent can run any command, with potentially disastrous consequences.
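The security difference is visible in miniature. Below is a hypothetical scope check of the kind OAuth-style tokens enable; the `Token` and `authorize` names are illustrative. A long-lived API key in a CLI config file has no equivalent knob: it is either present and all-powerful, or absent.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    """Hypothetical scoped, revocable credential (illustrative only)."""
    scopes: set = field(default_factory=set)
    revoked: bool = False

def authorize(token: Token, required_scope: str) -> bool:
    # Every request is checked against both revocation and scope,
    # so a compromised agent is limited to what the token grants.
    return not token.revoked and required_scope in token.scopes

read_only = Token(scopes={"repo:read"})

assert authorize(read_only, "repo:read")       # allowed
assert not authorize(read_only, "repo:write")  # denied: out of scope

read_only.revoked = True
assert not authorize(read_only, "repo:read")   # denied: revoked
```

Revocation is the key property: when a CLI agent leaks its API key, the blast radius is everything that key can do until someone notices; a scoped token can be killed server-side the moment something looks wrong.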

There's also a team dynamics argument that gets less attention. What works for Garry Tan individually — tinkering with a CLI wrapper in 30 minutes — doesn't necessarily translate to an engineering org with diverse experience levels shipping production software at scale. MCP's structured interfaces enforce consistency, enable telemetry, and make org-wide tool use auditable. Chen's conclusion is pointed: "For that, MCP is currently the right tool for orgs and enterprises."


The nuance the debate keeps missing

The sharpest critique of the MCP discourse is that it keeps treating this as a binary. It isn't. The CLI-vs-MCP argument dissolves almost immediately when you ask what kind of agent you're building and for whom.

For well-known, heavily documented tools that models already understand deeply — git, docker, curl — CLI is almost always the better choice. The model needs no additional context, the overhead is minimal, and the risk of misuse is contained by the tool's own conventions. For internal or bespoke CLIs, that logic inverts: the model has no training familiarity with the tool, requiring documentation overhead that erases the token advantage.

For complex third-party integrations — Salesforce, Notion, a proprietary database — MCP's standardized discovery layer solves a real problem. It's the difference between asking an agent to figure out a REST API from scratch and handing it a structured interface that tells it exactly what's available and how to call it.

OpenClaw, the open-source personal agent platform that went viral in early 2026, illustrates the hybrid approach in practice: CLI-based skills for execution-heavy local tasks, MCP servers for structured third-party integrations, with the agent deciding which path to take based on the job at hand. It's a useful anecdote precisely because it didn't choose sides — it just shipped what worked.

Perplexity's position is also more nuanced than the initial wave of coverage suggested. The company still runs an MCP server for developers who want it. What Yarats walked away from wasn't MCP categorically — it was MCP as the default for every use case, which was probably never the right call anyway.


What actually matters

The MCP-vs-CLI debate has been useful in the way that most heated technical debates are useful: it forced practitioners to articulate their actual requirements instead of defaulting to the protocol with the most institutional backing. MCP had that backing from Anthropic and a rapidly growing ecosystem. That backing didn't exempt it from the scrutiny every widely adopted standard eventually faces.

What the debate reveals is that the agent tooling stack is still genuinely unsettled. Context efficiency matters, but so do security, interoperability, and team-scale engineering discipline. The answer isn't to pick the winner — it's to understand which tool fits which layer of the stack, and resist the pull toward a universal standard that solves all of them at once.

If Garry Tan can build a CLI wrapper in 30 minutes and ship something that works, that's a point in CLI's favor. If an enterprise org needs auditable, governed, cross-client tool access, that's a point for MCP. Both of those things can be true. The real question isn't which protocol wins. It's whether engineers are clear-eyed enough about their own constraints to make the right call — rather than following whoever tweets most confidently about it.