

AI training costs grew 100x in five years. Who can still afford to build?

From GPT-3's $4.6M bill to billion-dollar frontier models — the exponential cost curve reshaping AI.

28 March 2026 · 3 min

  • $191B · projected global AI infrastructure spend in 2026, up 67% year-on-year
  • Compute used for frontier AI training keeps growing year on year
  • 4%+ · share of US electricity now consumed by data centres
  • $500M+ · estimated cost to train a single frontier model in 2026
  • 10× · annual drop in inference costs for equivalent AI capability

2020: $4.6M → 2026: $500M+

The cost curve

In 2020, training a frontier AI model cost $4.6 million.

By 2025, that number crossed $500 million.

By 2027, it could exceed $1 billion.

That's not linear. That's exponential — and it's reshaping who gets to build AI.

$4.6M → $500M+ in five years. That's a 100× increase.
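The compound rate implied by those two figures can be checked directly. A minimal sketch, using only the article's data points ($4.6M in 2020, $500M+ five years later):

```python
# Implied compound annual growth of frontier training costs,
# from the article's two data points: $4.6M (2020) and $500M+ (2025).
cost_start = 4.6e6
cost_end = 500e6
years = 5
multiple = cost_end / cost_start            # ~109x overall
annual_growth = multiple ** (1 / years)     # ~2.55x per year
print(f"{multiple:.0f}x over {years} years, about {annual_growth:.2f}x per year")
```

So the "100×" headline figure corresponds to costs multiplying by roughly 2.5× every single year.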

What $500M+ actually buys

Training a single frontier model now requires:

  • Tens of thousands of GPUs running for months
  • Dedicated power substations — some labs are building their own
  • Cooling infrastructure at industrial-plant scale
  • Hundreds of engineers at top-of-market salaries
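The GPU line item alone explains most of the bill. A back-of-envelope sketch; the GPU count, hourly rate, and duration below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope training bill. All inputs are illustrative
# assumptions, not figures from the article.
gpus = 30_000                 # "tens of thousands of GPUs"
rate_per_gpu_hour = 2.50      # assumed all-in $/GPU-hour
hours = 4 * 30 * 24           # "running for months" (~4 months)
compute_cost = gpus * rate_per_gpu_hour * hours
print(f"compute alone: ${compute_cost / 1e6:.0f}M")
```

Under these assumptions compute alone lands around $216M, before power infrastructure, salaries, and failed runs push the total towards the article's $500M+ figure.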

Only five organisations on Earth can write that cheque: OpenAI, DeepMind, Anthropic, Meta, xAI.

Five. That's it.

99.99% of companies on Earth cannot afford to train a frontier AI model.

$191 billion

That's the projected global AI infrastructure spend in 2026 — up 67% from 2025.

NVIDIA alone captures over $100 billion of that.

One company, NVIDIA, captures more than half of all global AI infrastructure spend.

New data centres are being planned at 1–5 gigawatt scale. That's enough to power a mid-sized city. For one building.

The power bill

Data centres now consume over 4% of US electricity — up from 1.5% in 2018.

At current growth, that reaches 6–8% by 2030.
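The article's two data points (1.5% in 2018, 4%+ now) imply a growth rate that lands inside that 2030 range. A quick check, assuming "now" means 2026, the article's publication year:

```python
# Implied growth in data-centre share of US electricity.
# Data points from the article: 1.5% (2018) and 4%+ (taken here as 2026).
share_2018, share_2026 = 1.5, 4.0
annual_growth = (share_2026 / share_2018) ** (1 / 8)   # ~1.13x per year
share_2030 = share_2026 * annual_growth ** 4
print(f"~{annual_growth:.2f}x/yr, projecting ~{share_2030:.1f}% by 2030")
```

Compounding the historical rate forward gives roughly 6.5% by 2030, consistent with the 6–8% projection.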

AI labs are now signing power deals directly with nuclear plants — bypassing the grid entirely.

The twist: inference is getting cheap

While training costs spiral up, using AI gets cheaper every year.

The cost of running a GPT-4-class query dropped 10× between 2023 and 2025, driven by techniques like quantisation, distillation, and speculative decoding.

This creates a split market:

  • Training → concentrated, expensive, megacorps only
  • Inference → commoditised, cheap, available to everyone

Building AI is getting more expensive. Using AI is getting cheaper. Both are true at the same time.
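The divergence can be made concrete with a toy projection. The two rates come from the article (training up ~2.5×/yr, as implied by the $4.6M → $500M+ curve; inference down 10×/yr, per the stat list); the inference starting price is a hypothetical placeholder:

```python
# Toy projection of the split market. Rates from the article; the
# inference starting price ($10 per 1M tokens) is a hypothetical placeholder.
train_cost = 500e6     # $ per frontier training run (2026)
infer_cost = 10.0      # hypothetical $ per 1M tokens (2026)
for year in range(2026, 2031):
    print(f"{year}: train ${train_cost / 1e9:.2f}B, inference ${infer_cost:.4f}/1M tokens")
    train_cost *= 2.5  # training costs compound upward
    infer_cost /= 10   # inference costs fall ~10x per year
```

Within five compounding steps, the two curves are separated by many orders of magnitude: that is the split market in one loop.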

The concentration risk

Five companies train the models. Everyone else depends on them.

Every startup, every enterprise, every government using AI relies on this narrow group. If one pivots, stalls, or restricts access — entire ecosystems feel it.

The entire AI economy runs on models made by five companies. That's not a market — it's a dependency.

Three numbers to watch

  • Training FLOP costs — dropping ~2× per year, but scaling outpaces efficiency
  • Open-source gap — open models lag closed ones by ~12 months
  • Energy bottleneck — power availability may become the binding constraint before compute does
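The tension in the first bullet, efficiency improving while scale grows faster, reduces to one line of arithmetic. The scale-up rate here is a hypothetical assumption chosen to match the implied ~2.5×/yr cost curve; the ~2× efficiency gain is the article's figure:

```python
# Why costs explode despite cheaper FLOPs: scale outpaces efficiency.
scale_growth = 5.0       # hypothetical yearly growth in training FLOPs
efficiency_gain = 2.0    # $/FLOP halves yearly (article: ~2x per year)
net_cost_growth = scale_growth / efficiency_gain
print(f"net training-cost growth: {net_cost_growth:.1f}x per year")
```

Efficiency would need to match the scale-up rate exactly just to hold costs flat; any gap compounds into the curve above.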

The $191 billion question isn't just who can afford to build. It's whether AI economics concentrate power faster than they distribute capability.
