OpenAI is reportedly approaching an IPO at around $25B in annualised revenue. Anthropic is tracking near $19B. Whatever you think of those numbers — and we have our doubts about how they're being counted — the direction of travel is clear: the two labs most enterprises are standardising on are about to become answerable to public markets, index funds, and quarterly earnings calls.
That shift changes the risk profile of every multi-year commitment signed today. Not because the models get worse, but because the incentives shaping the roadmap, the pricing, and the deprecation calendar are about to look very different.
The procurement assumption that quietly expired
Most enterprise AI contracts we've reviewed in the last eighteen months were written under an implicit assumption: the vendor is growth-stage, loss-making, desperate for logos, and willing to absorb inference costs to win share. That assumption produced generous terms. Committed spend discounts of 40–60%. Price locks on specific model SKUs. Grandfathering clauses for deprecated models. Access to frontier models within days of release. Co-engineering support from applied teams.
Those terms reflect a specific moment in the capital cycle. Private AI labs funded by strategic investors with unusual risk tolerance could afford to treat enterprise contracts as distribution investments rather than profit centres. Microsoft's commitments to OpenAI, Amazon's to Anthropic, and Google's internal subsidies to its own models all pushed effective prices below what a standalone P&L could sustain.
Public markets don't price that way. Once a lab is trading on public exchanges, the cost of every enterprise discount becomes legible to analysts who will ask, on every earnings call, why gross margins aren't expanding. The answer cannot indefinitely be "we're investing in land-and-expand." At some point it becomes "we're renegotiating."
What actually changes on the roadmap
The second-order effects are more interesting than the pricing question. Consider what a public AI lab optimises for that a private one doesn't:
Predictable revenue recognition. Usage-based billing is lumpy and forecast-hostile. Expect harder pushes toward committed capacity contracts, minimum spends, and seat-based pricing for enterprise tiers. These are easier to model for Wall Street but worse for customers whose usage is genuinely variable.
Gross margin discipline by workload. Not every inference path is profitable. Long-context requests, high-reasoning-budget calls, and batch workloads have very different unit economics. A private lab can average them out; a public one will price them separately. The flat per-token pricing that made capacity planning simple will fragment into tiered SKUs with materially different costs.
Faster model deprecation. Serving old checkpoints costs real GPU hours. When a private lab does it, it's a retention investment. When a public one does it, it's a line item that shows up in cost of revenue. We'd expect deprecation windows to shorten from 12–18 months toward 6–9, with more aggressive migration incentives.
Safety and alignment work becomes a quarterly negotiation. Capability investment and safety investment compete for the same engineering headcount. Private labs have argued — credibly — that the two are complementary. Public labs will face pressure to quantify the revenue contribution of alignment research, and the honest answer is uncomfortable.
Consolidation of applied teams. The engineers who today sit inside enterprise accounts helping tune prompts and wire up evaluations are expensive, and they don't scale linearly with ARR. Expect those functions to narrow to the top quintile of accounts and to be replaced, for everyone else, by self-service tooling and partner channels.
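The gross-margin point above is worth making concrete. A back-of-envelope sketch of per-workload cost attribution, assuming hypothetical tier names and purely illustrative per-million-token prices (not any vendor's actual price list):

```python
# Illustrative per-million-token prices for hypothetical SKU tiers; the
# numbers are assumptions, not any vendor's actual pricing.
PRICES = {
    "standard":     {"in": 3.00, "out": 15.00},
    "long_context": {"in": 6.00, "out": 22.50},
    "batch":        {"in": 1.50, "out": 7.50},
}

def monthly_cost(usage: dict[str, tuple[float, float]]) -> float:
    """usage maps tier -> (input tokens, output tokens), both in millions."""
    return sum(
        tin * PRICES[tier]["in"] + tout * PRICES[tier]["out"]
        for tier, (tin, tout) in usage.items()
    )

# The same 100M-in / 20M-out month, billed flat vs. split across tiers:
flat = monthly_cost({"standard": (100, 20)})
tiered = monthly_cost({"standard": (30, 8),
                       "long_context": (30, 7),
                       "batch": (40, 5)})
```

The moment a long-context-heavy month bills differently from a batch-heavy one, capacity planning stops being a single multiplication and becomes a forecasting exercise per tier.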
Three procurement moves worth making now
We've been advising clients on this, and the specific recommendations have converged on three moves.
First, shorten commitment windows and raise the bar on what justifies a multi-year deal. A three-year commitment to a specific model family made sense when the alternative was losing access. It makes much less sense when the roadmap is about to be shaped by investor relations. Twelve to eighteen months is the window where you can actually forecast what the vendor will offer; beyond that, you're paying for optionality the vendor may not deliver.
Second, insist on abstraction at the application layer. Every production system we've built in the last two years routes model calls through a thin internal gateway that handles retries, cost attribution, and — critically — provider substitution. The gateway costs a few weeks of engineering. The substitution right it gives you is worth multiples of that in any renegotiation. If your current architecture hard-codes openai.chat.completions.create across hundreds of call sites, you don't have a vendor relationship, you have a dependency. Fix that before the next renewal.
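A minimal sketch of such a gateway, with stub adapters standing in for real SDK calls; the names and the retry-then-fallback policy here are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass
from typing import Callable

# Stub adapters standing in for real provider SDK calls. Each wraps one
# provider behind the same plain-string signature.
def _openai_complete(prompt: str) -> str:
    return f"[openai] {prompt}"

def _anthropic_complete(prompt: str) -> str:
    return f"[anthropic] {prompt}"

@dataclass
class Gateway:
    providers: dict[str, Callable[[str], str]]
    primary: str

    def complete(self, prompt: str, retries: int = 2) -> str:
        """Try the primary provider with retries, then fall back to the rest.

        This is the substitution right: switching vendors is a one-line
        change to `primary`, not a hunt through hundreds of call sites.
        """
        order = [self.primary] + [n for n in self.providers if n != self.primary]
        last_err: Exception | None = None
        for name in order:
            for _ in range(retries):
                try:
                    return self.providers[name](prompt)
                except Exception as err:  # real code: catch provider-specific errors
                    last_err = err
        raise RuntimeError("all providers failed") from last_err

gateway = Gateway(
    providers={"openai": _openai_complete, "anthropic": _anthropic_complete},
    primary="openai",
)
```

Call sites depend only on `gateway.complete`; retries, cost attribution, and logging all hang off the same choke point, which is what makes the substitution threat credible at renewal.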
Third, evaluate open-weight fallbacks for your highest-volume, lowest-complexity workloads. Not as a primary path — the frontier labs are still meaningfully ahead on reasoning, tool use, and long-context reliability — but as a credible threat. Llama, Qwen, and DeepSeek models deployed on your own inference stack (or on a neutral provider like Together or Fireworks) put a ceiling on what the frontier labs can charge for commodity work. The classification, extraction, and summarisation tasks that account for maybe 70% of enterprise token volume don't need GPT-5-class reasoning. Moving them to open weights, even if only 20% actually migrate, changes the tone of every pricing conversation.
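The routing policy and its arithmetic are simple to sketch. The 70% commodity share and 20% migration rate are the figures above; the task taxonomy and everything else are assumptions for illustration:

```python
# Hypothetical policy: commodity task types are eligible for an open-weight
# fallback; everything else stays on the frontier provider.
OPEN_WEIGHT_ELIGIBLE = {"classification", "extraction", "summarisation"}

def route(task_type: str, migrated: set[str]) -> str:
    """Return which stack serves a request, given which tasks have migrated."""
    if task_type in OPEN_WEIGHT_ELIGIBLE and task_type in migrated:
        return "open-weights"
    return "frontier"

def frontier_share(commodity_share: float = 0.70,
                   migration_rate: float = 0.20) -> float:
    """Share of token volume still on frontier models after partial migration."""
    return 1.0 - commodity_share * migration_rate
```

Even at these modest defaults, roughly 14% of token volume walking out the door is enough to change the tone of a pricing conversation.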
The exit-path question nobody wants to ask
The harder question is what happens if one of these labs — post-IPO, under margin pressure, perhaps with a disappointing quarter — decides to materially change its enterprise terms. Not deprecation, not a price increase at the margin, but a structural reset: committed capacity becomes mandatory, SLAs get rewritten, specific capabilities move behind enterprise-only tiers.
The uncomfortable truth is that most enterprises have no credible exit. The workflows built on top of a specific model's behaviour — its specific verbosity, its specific tool-use conventions, its specific failure modes — have been tuned over thousands of prompt iterations. Migrating isn't a config change; it's a re-evaluation of every eval suite, every prompt template, every downstream parser.
This is the part that public-market pressure will expose most sharply. The labs know how sticky their customers are. Their bankers know. Their future analysts will know. Enterprise procurement teams who haven't built actual migration playbooks — not decks, actual runbooks with measured quality deltas — are going to find themselves with very little negotiating leverage at renewal.
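What a measured quality delta looks like in runbook form, sketched minimally. The harness shape is ours, and the providers and scorer are stubs you would replace with real calls and your own eval metrics:

```python
# Run the same eval suite through the incumbent and a candidate provider,
# score both, and keep the per-case delta. This table, not a slide deck,
# is what gives procurement a number to bring to a renewal.
def measure_quality_deltas(eval_suite, incumbent, candidate, score):
    report = []
    for case in eval_suite:
        a = score(incumbent(case["prompt"]), case["expected"])
        b = score(candidate(case["prompt"]), case["expected"])
        report.append({"id": case["id"], "incumbent": a,
                       "candidate": b, "delta": b - a})
    return report

def mean_delta(report):
    """Average quality change if the whole suite moved to the candidate."""
    return sum(r["delta"] for r in report) / len(report)
```

The point is the per-case deltas, not just the mean: a migration that is neutral on average but catastrophic on one workflow is the kind of thing only a runbook, re-run quarterly, will surface.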
Where we land
We don't think this is a reason to slow down AI adoption. The capability curve is still steep enough that the cost of not building is higher than the cost of building on shifting foundations. But the terms under which enterprises have been buying AI for the last two years were anomalous, and they were going to normalise regardless of whether anyone IPO'd.
The IPO just sets a date.
Our working view is that enterprises should treat 2025 and 2026 as the last years of favourable frontier-model economics, use that window to build the abstraction and evaluation infrastructure that makes provider substitution real, and stop signing commitments that assume the vendor's current incentive structure will persist. The labs that go public will be good businesses. They will also be businesses. It's worth procuring from them accordingly.