$4.6M → $500M+ in five years. That's more than a 100× increase.
Training a frontier model now takes tens of thousands of GPUs running for months, dedicated power substations, and hundreds of engineers at peak-market salaries. By 2027 the cost is projected to cross $1 billion. Only five organisations on Earth can write that cheque: OpenAI, DeepMind, Anthropic, Meta, xAI.
99.99% of companies cannot afford to train a frontier AI model.
NVIDIA alone captures over $100 billion of global AI spend — more than half the entire market. New data centres are being planned at 1–5 gigawatt scale: enough to power a mid-sized city, for one building.
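A quick sanity check on the "mid-sized city" comparison. The ~1.4 kW per-capita figure below is my own assumption (roughly the US all-sector average electricity demand), not a number from this piece:

```python
# Rough check: how many people's electricity use does a 1 GW data centre match?
# Assumes ~1.4 kW average per-capita demand across all sectors (my assumption).

DATACENTER_W = 1e9      # 1 gigawatt, the low end of the 1-5 GW range
PER_CAPITA_W = 1.4e3    # ~1.4 kW per person

people_equivalent = DATACENTER_W / PER_CAPITA_W
print(f"~{people_equivalent:,.0f} people")  # on the order of 700,000: a mid-sized city
```

Even at the low end of the range, one building draws as much power as roughly 700,000 people.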
The split that matters
The cost of running a GPT-4-class query dropped roughly 10× between 2023 and 2025. Training and inference are moving in opposite directions: training concentrates into a handful of megacorps while inference commoditises into something anyone can access. Every startup, enterprise, and government using AI depends entirely on those five labs. If one pivots or restricts access, entire ecosystems feel it.
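To see how fast the two curves diverge, here is a rough annualisation of the figures above. The dollar amounts and the 10× drop come from this piece; the per-year rates are my own back-of-envelope arithmetic:

```python
# Implied annual rates behind the training/inference split.

TRAIN_COST_START = 4.6e6   # USD, start of the five-year window (from the text)
TRAIN_COST_END = 5.0e8     # USD, five years later (from the text)
TRAIN_YEARS = 5

INFER_DROP = 10            # GPT-4-class query cost fell ~10x (from the text)
INFER_YEARS = 2            # between 2023 and 2025

# Annual growth factor x such that x ** TRAIN_YEARS equals the cost ratio
train_growth = (TRAIN_COST_END / TRAIN_COST_START) ** (1 / TRAIN_YEARS)

# Annual cost multiplier y such that y ** INFER_YEARS equals 1/10
infer_factor = (1 / INFER_DROP) ** (1 / INFER_YEARS)

print(f"Training cost grows ~{train_growth:.2f}x per year")
print(f"Inference cost multiplies by ~{infer_factor:.2f} per year")
```

Training cost compounds at roughly 2.5× per year while inference cost falls to about a third of itself per year, which is why both halves of the split can be true at once.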
The entire AI economy runs on models made by five companies. That's not a market — it's a dependency.
Open-source models lag closed frontier ones by roughly 12 months — and that gap isn't closing. Power availability, not compute, may become the binding constraint first: AI labs are already signing deals directly with nuclear plants, bypassing the grid entirely.
Building AI is getting more expensive. Using AI is getting cheaper. Both are true simultaneously.