Two Architectures, One Problem
OpenCode hit the top of Hacker News this week with over a thousand points. That is not unusual for a developer tool launch, but the conversation around it was revealing. The comments were not about whether AI coding agents work — that question is settled. They were about which kind of AI coding agent to bet on.
The answer is splitting in two.
On one side: vertically integrated stacks. Claude Code bundles Anthropic's model with a Bun-compiled binary, MCP integrations, and (since the Astral acquisition) indirect ownership of Python's most popular toolchain. Codex bundles OpenAI's models with a Rust CLI, Promptfoo for evaluation, and now the Astral team building tooling. These are closed architectures where the model, the agent, and the surrounding infrastructure are designed to work together.
On the other side: model-agnostic agents. OpenCode supports 75+ model providers through Models.dev, including local models via LM Studio. Aider treats Git as a first-class citizen and works with any API-compatible backend. These tools decouple the agent interface from the model behind it. You bring your own subscription — ChatGPT Plus, GitHub Copilot, Anthropic API, or a local Llama running on your own hardware.
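The decoupling these tools rely on can be sketched as a thin abstraction layer: agent logic written against a provider interface rather than a specific API. The sketch below is illustrative only — the class and method names are invented for the example, not OpenCode's or Aider's actual code.

```python
from dataclasses import dataclass
from typing import Callable

# A provider is just a name plus a completion function. A real agent
# would wrap HTTP clients for each backend; stubs keep this self-contained.
@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]

class Agent:
    """Agent logic depends only on the Provider interface,
    so the backend can be swapped without touching the agent."""
    def __init__(self, provider: Provider):
        self.provider = provider

    def run(self, task: str) -> str:
        return f"[{self.provider.name}] {self.provider.complete(task)}"

# Swapping backends is a one-line change:
local = Provider("local-llama", lambda p: f"echo: {p}")
hosted = Provider("hosted-api", lambda p: f"reply: {p}")

agent = Agent(local)
print(agent.run("refactor main.py"))  # served by the local model
agent.provider = hosted               # switch providers, same workflow
print(agent.run("refactor main.py"))  # served by the hosted model
```

The agent's workflow, prompts, and integrations live above the interface; only the `Provider` binding changes when you move between a hosted API and local hardware.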
Both architectures are succeeding. Claude Code is nearly as widespread as GitHub Copilot was three years ago. OpenCode has 120,000 GitHub stars and five million monthly developers. The AI coding tools market is worth an estimated $12.8 billion in 2026, up from $5.1 billion in 2024. There is room for both. But the choice between them is more consequential than most teams realise.
What Vertical Integration Buys You
The closed stacks have a genuine advantage: deep integration between the model and the tooling around it.
When Claude Code calls a tool, it is calling into infrastructure that Anthropic controls end to end. The model knows the tool protocol. The agent knows the model's strengths and weaknesses. The binary is compiled with Bun, which Anthropic owns and can optimise specifically for this use case. The result is a tight feedback loop — the model generates code, the agent executes it, the tools validate it, and the cycle repeats with minimal latency.
This integration shows up in measurable ways. Opus scores 80.9% on SWE-bench Verified, partly because the agent scaffolding is tuned for the model's reasoning patterns. Codex has seen 5x usage growth since the start of 2026 because the agent experience is smooth enough that developers stay in it for entire workflows rather than dropping back to manual editing.
The acquisition strategy we wrote about last week — Anthropic buying Bun, OpenAI buying Astral — makes more sense in this light. These companies are not just collecting developer tools as trophies. They are building vertically integrated stacks where every layer is optimised for the others. The model knows the runtime. The runtime knows the package manager. The package manager knows the linter. Everything talks to everything.
If you have ever used an Apple product, you know what this feels like when it works. And you know what it feels like when you try to leave.
What Model Agnosticism Buys You
OpenCode's architecture makes a different bet. Instead of optimising one model's experience, it optimises for model independence.
The practical implications are significant. A team using OpenCode can route different tasks to different models — cheaper models for planning and conversation, expensive models for complex code generation. One Reddit user described this as the key benefit: "you can configure it such that it uses like a cheaper model when you are just conversing with it and planning on what to do and boom, switch to an expensive model when actually executing."
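That cheap-model/expensive-model split is easy to express as a routing table keyed by task kind. The sketch below is a hypothetical illustration — the model names are placeholders, not real OpenCode configuration:

```python
# Hypothetical task-based routing: a cheap model for conversation and
# planning, an expensive model for actual code generation and review.
# Model names are placeholders, not real provider identifiers.
ROUTES = {
    "chat": "cheap-model-v1",
    "plan": "cheap-model-v1",
    "generate": "expensive-model-v1",
    "review": "expensive-model-v1",
}

def pick_model(task_kind: str) -> str:
    # Unknown task kinds fall back to the cheap model.
    return ROUTES.get(task_kind, "cheap-model-v1")

print(pick_model("plan"))      # cheap-model-v1
print(pick_model("generate"))  # expensive-model-v1
```

Because the routing table sits in the agent rather than in any one provider's product, repointing a task kind at a new model is a config edit, not a workflow change.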
This is not a theoretical advantage. Model pricing changes every few months. Capabilities shift with each release. A model that is best for TypeScript today might not be best for TypeScript next quarter. A model-agnostic agent lets you follow the performance frontier without changing your workflow, your configuration, your muscle memory, or your CI integration.
There are structural advantages too. OpenCode stores no code or context data, which matters in regulated industries where data residency is not optional. It integrates with LSP servers for Rust, Swift, Terraform, TypeScript, and Python, giving the model richer code intelligence without requiring the model provider to build that integration themselves. And because it is open source with 800 contributors and 10,000 commits, the pace of feature development is set by the community rather than a product team's roadmap.
The trade-off is real, though. A model-agnostic agent cannot optimise for any single model's quirks. It cannot tune its prompting strategy for Opus's reasoning patterns or GPT's code generation strengths. It gives you breadth at the cost of depth.
The Dependency You Are Actually Choosing
When engineering leaders evaluate AI coding tools, they tend to focus on capability: which tool writes the best code, which handles the largest files, which has the best test generation. These are reasonable questions, but they miss the structural one.
The real question is: where do you want your dependency?
If you choose Claude Code, your dependency is on Anthropic. Your workflows, your muscle memory, your team's prompt patterns, and your CI integrations are all built around a specific model and a specific agent. If Anthropic raises prices, changes the API, or gets outcompeted on model quality, switching costs are high. The vertical integration that makes the experience smooth also makes it sticky.
If you choose OpenCode or a similar model-agnostic tool, your dependency is on the open-source project and on the model-provider ecosystem in general. If OpenCode's maintenance declines, you have a codebase you can fork. If one model provider becomes too expensive, you switch to another. The trade-off is that no single experience will be as polished as the vertically integrated alternative.
This is the same architectural decision that surfaces in every infrastructure choice. Managed database or self-hosted? Platform-specific framework or portable one? The answer depends on your organisation's relationship with vendor risk, your team's tolerance for operational complexity, and how much you believe the current market leaders will still be market leaders in three years.
Where This Goes
The AI coding market is large enough and growing fast enough that both architectures will coexist for years. But the dynamics between them will shape how software gets built.
The vertically integrated stacks will keep getting more integrated. Expect Claude Code to ship features that only work well with Anthropic models — not through artificial restriction, but through genuine optimisation. Expect Codex to do the same with OpenAI's models and with the Astral toolchain. The Apple playbook is compelling because it works.
The open-source agents will keep getting more capable. OpenCode's Zen service — curated models specifically benchmarked for coding agents — is an interesting move. It tries to solve the quality consistency problem that model agnosticism creates, without giving up provider independence. Expect more projects in this space, and expect the ecosystem tools (LSP integrations, MCP adapters, evaluation frameworks) to improve rapidly as 800+ contributors compound their effort.
The wild card is what happens to pricing. Claude Code and Codex cost roughly $200 per month. OpenCode is free if you bring your own API keys, and API costs for heavy coding usage can be lower or higher depending on the model and the workload. If the closed tools raise prices or the open-source alternatives close the experience gap, the market could shift fast.
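The break-even between a flat subscription and pay-per-token API usage is simple arithmetic. All figures in the sketch below are illustrative assumptions, not actual prices:

```python
# Hypothetical figures -- assumptions for illustration, not real pricing.
FLAT_MONTHLY = 200.00       # assumed flat subscription, USD per month
PRICE_PER_M_TOKENS = 10.00  # assumed blended API price, USD per million tokens

def api_cost(tokens_per_day: int, workdays: int = 22) -> float:
    """Monthly API spend for a given daily token volume."""
    return tokens_per_day * workdays * PRICE_PER_M_TOKENS / 1_000_000

light = api_cost(200_000)    # ~0.2M tokens per workday
heavy = api_cost(2_000_000)  # ~2M tokens per workday

print(f"light user: ${light:.2f}/month")  # $44.00 -- well under the flat fee
print(f"heavy user: ${heavy:.2f}/month")  # $440.00 -- more than double it
```

Under these assumed numbers, a light user saves meaningfully by bringing their own keys while a heavy user is better off on the flat plan — which is why the answer genuinely depends on the model and the workload.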
For now, the honest answer is that both architectures work. The choice is not about which one is better today. It is about which set of dependencies you prefer to carry into a future that neither architecture can fully predict.