Written by Arbitrage • 2025-10-15
The 2023-2026 AI build-out is being propelled by a small set of extraordinarily profitable upstream vendors, an expanding constellation of downstream firms that are not yet profitable, and increasingly circular financing structures. When you run the numbers (capex, depreciation horizons, utilization, power, and realistic end-market demand), the revenue and margins required for today's investments to pencil out look far beyond what the paying economy currently supports. That mismatch doesn't make AI "fake." But it does make the prevailing financing model fragile.
Executive summary:
- Cash profits in the AI build-out concentrate in one upstream node (Nvidia); downstream monetization remains small and cash-burning.
- Circular financing - pre-bookings, vendor credit, mutual investments, debt and equity raises - converts future intentions into present-day capex.
- On realistic 3-5-year asset lives, break-even requires roughly $320B-$480B of annual revenue against ~$15B-$20B actually coming in today.
- Power, parts, and time meter how fast capacity can be monetized, while short depreciation clocks keep ticking.
- The linkages are macro-sized: a funding slowdown risks a feedback loop through equities, consumption, and the broader economy.
1. Follow the profits: one node dominates
Strip the story back to where cash profits are actually being earned. Nvidia is the fulcrum. In a recent MacroVoices interview, Matt Barrie noted that Nvidia is running at ~$160B in revenue and ~$100B in EBIT, an outlier profitability anchor for the entire AI value chain. He also underlined extreme customer concentration (the top two customers at about 40% of revenue, the next four at about 40%) and a single point of foundry dependence on TSMC.
Why does that matter? Because the rest of the "AI compute" universe doesn't look anything like that. Barrie's back-of-envelope framing of the AI compute market as "sub-$40B" today hints at how small downstream monetization remains relative to the capital flowing in. If you accept the giant "bookings" figures being touted, a single hyperscaler or cloud challenger would need to expand the paid compute market several-fold (Barrie's arithmetic implies roughly 5x) in just a few years.
That juxtaposition (one vendor with extraordinary profitability vs. a patchwork of aspirants that are still revenue-small and cash-burning) is the first tell that financing rather than earnings is propelling the machine.
2. How the circular money loop works
Let's walk the loop, using the recent "infinite money glitch" narrative as a guide: the chip vendor invests in (or extends credit to) its biggest customers; those customers sign giant multi-year compute commitments with clouds and neo-clouds; the clouds raise debt and equity against those bookings; the proceeds buy more chips from the vendor; and each step's headline numbers are cited to justify the next raise.
This is not (necessarily) fraud. It is a financing architecture that turns future intentions into present-day capital - and it's not new. As hedge-fund manager Harris "Kuppy" Kupperman notes, we've seen similar patterns in late-1990s/2000 fiber-optic telecom: contracts, vendor financing, and bookings round-tripped to meet targets until the funding window narrowed, with painful consequences for names like Lucent and Nortel.
Barrie extends the analogy to the rise of "neo clouds" - CoreWeave, Lambda, Nebius - which he characterizes as the "WeWorks of GPUs." The model: collect pre-bookings ... raise debt/equity ... buy more gear ... repeat. It works while credit is easy; it is unkind when it tightens.
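As a toy illustration of why this compounds while credit is easy and stalls when it tightens, here is a deliberately crude simulation; the advance rate, bookings multiplier, and window timing are all made-up parameters, not estimates of any real firm:

```python
# A deliberately crude toy model of the pre-book -> raise -> buy -> re-book loop.
# Parameters are illustrative assumptions, not estimates of any real company.
def run_loop(rounds=8, bookings_b=1.0, advance_rate=0.8,
             bookings_per_gear=1.2, window_closes_at=6):
    gear_b, debt_b = 0.0, 0.0
    for r in range(1, rounds + 1):
        credit_open = r < window_closes_at
        raise_b = bookings_b * advance_rate if credit_open else 0.0
        debt_b += raise_b                        # debt raised against the order book
        gear_b += raise_b                        # proceeds become more gear (capex)
        bookings_b = gear_b * bookings_per_gear  # more capacity -> bigger order book
        print(f"round {r}: gear ${gear_b:.1f}B, debt ${debt_b:.1f}B, "
              f"bookings ${bookings_b:.1f}B, credit {'open' if credit_open else 'shut'}")

run_loop()
# While the window is open, gear, debt, and bookings compound together; once it
# shuts, the order book freezes but the debt and the depreciation remain.
```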
3. What the paying economy would need to deliver
Even if you accept the financing loop as a bridge to a bright future, the destination has to exist: paying customers at scale and with margins. Kupperman revisited his own model after speaking with data-center operators, lenders, and engineers. Their consensus critique was simple and devastating: the math doesn't work on a 10-year asset life. Buildings, racks, cooling, and GPUs age out fast; practical asset lives cluster at 3-5 years. That one change doubles to triples the annual depreciation charge. On that basis, he argues his prior break-even revenue estimate (~$160B) for the 2025 capex wave was "woefully inadequate." A more realistic break-even range is $320B-$480B of annual revenue - just to cover the capital being deployed this year.
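The rescaling is mechanical enough to check yourself. Assuming depreciation dominates the cost stack, so that break-even revenue scales with the inverse of asset life, a couple of lines of Python reproduce the range:

```python
# If depreciation dominates, break-even revenue scales inversely with asset
# life. Kupperman's prior ~$160B/yr figure assumed a 10-year life.
prior_breakeven_b, prior_life_yrs = 160, 10
for life_yrs in (5.0, 4.0, 3.33):
    revised_b = prior_breakeven_b * prior_life_yrs / life_yrs
    print(f"{life_yrs:.2f}-year life -> ~${revised_b:.0f}B/yr")
# 5.00 -> ~$320B/yr; 4.00 -> ~$400B/yr; 3.33 -> ~$480B/yr
```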
Meanwhile, run-rate revenue today is nowhere near that. Kupperman's tally: more than $30B/month (~$400B in 2025) of capex going out vs. ~$15B-$20B/year of revenue coming in. Add the 2026 builds and you would need ~$1T in cumulative revenue across 2025-2026 to hit break-even, with "many trillions more" to earn an attractive return.
Barrie's "sub-$40B compute market today" framing dovetails here. If you believe the bookings narratives, one counterparty (Oracle) tied to one customer (OpenAI) would have to grow the paid compute market by about 5 times in five years, leapfrog AWS by 2029, and do it with real cash - despite the customer's limited current revenues and the vendor's own cash constraints.
Implication: even generously assuming strong end-market adoption, the revenue ramp and margin structure required are extreme. The economics likely only close at very high utilization, minimal price compression, and favorable power costs - three assumptions that rarely coexist for long in computing.
4. A simple model you can sanity-check
To make the break-even math concrete, consider a stylized single-campus model (figures are illustrative):
- $10B of upfront capex (buildings, racks, cooling, networking, GPUs)
- a blended 4-year asset life, in line with the 3-5-year range above (straight-line depreciation of ~$2.5B/year)
- ~$1.5B/year of power and operating costs
- a ~10% cost of capital on the deployed $10B (~$1.0B/year)
Under those (arguably generous) assumptions, required annual revenue is roughly $2.5B + $1.5B + $1.0B ≈ $5B per campus, per year, just to break even.
Scale that across $400B of 2025 capex industry-wide (about 40 such "$10B campuses"), and you're back near the $200B+ of annual revenue just to tread water, with plausible ranges that align with the $320B-$480B Kupperman posits once you add the realities of idle time, price compression, and overheads. (The calculator below is illustrative; the order of magnitude is the point.)
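A minimal Python sketch of that calculator; every input (capex, asset life, opex, cost of capital, campus count, utilization haircut) is an assumption to override, not data:

```python
# Illustrative break-even calculator for the stylized "$10B campus" above.
# All inputs are assumptions for order-of-magnitude sanity-checking.

def breakeven_revenue_b(capex_b=10.0, asset_life_yrs=4.0,
                        opex_power_b=1.5, cost_of_capital=0.10):
    """Annual revenue (in $B) one campus needs just to tread water."""
    depreciation = capex_b / asset_life_yrs      # straight-line over a short life
    capital_cost = capex_b * cost_of_capital     # carry on the deployed capital
    return depreciation + opex_power_b + capital_cost

per_campus = breakeven_revenue_b()               # ~$5.0B/yr per $10B campus
industry = per_campus * 40                       # ~40 campuses in $400B of 2025 capex
print(f"per campus: ~${per_campus:.1f}B/yr; industry-wide: ~${industry:.0f}B/yr")

# Layering in idle time and price compression (say, a 60% effective
# utilization/pricing haircut) pushes the requirement toward the cited range:
print(f"with a 60% haircut: ~${industry / 0.60:.0f}B/yr")   # ~$333B/yr
```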
5. The physical-world brake: power, parts, and time
Money can be printed; transformers, turbines, and electrons can't. Even if the financing "works," power and grid constraints will meter how quickly capacity can be monetized, while assets depreciate rapidly.
Even if a CFO is comfortable with an aggressive revenue forecast, a slip of 12-18 months in energizing a campus can destroy the NPV on a 3-5-year asset-life assumption. That's before you consider the annual GPU refresh cadence that compels further spend just to remain competitive.
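A hedged sketch of that NPV sensitivity, using made-up figures consistent with the campus model above (a $10B campus earning ~$5B/year over a 4-year life, 10% discount rate); the point is the sign flip, not the decimals:

```python
# Toy NPV of a campus whose revenue only starts once the site is energized,
# while the asset's short life keeps running. All figures are illustrative.
def campus_npv_b(capex_b=10.0, annual_rev_b=5.0, life_yrs=4.0,
                 delay_yrs=0.0, rate=0.10):
    earning_years = max(0.0, life_yrs - delay_yrs)   # delay eats earning life
    value = -capex_b
    for t in range(1, int(earning_years) + 1):
        value += annual_rev_b / (1 + rate) ** (t + delay_yrs)
    return value

for delay in (0.0, 1.0, 1.5):
    print(f"energization delay {delay:.1f}y -> NPV ~${campus_npv_b(delay_yrs=delay):+.1f}B")
# ~+$5.8B with no delay, ~+$1.3B at 12 months, ~-$2.5B at 18 months
```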
6. Macro linkages: why this matters beyond tech
Kupperman's macro thought experiment is worth sitting with. Start with ~$400B of 2025 capex tied to AI and adjacent infrastructure. Layer in associated R&D, grid upgrades, and ancillary buildout, and you plausibly get ~1.5% of U.S. GDP directly from AI-linked spending, maybe ~2% with multipliers. Now add a wealth-effect boost from equity gains in the AI complex. On the way up, that pads consumption; on the way down, it reverses.
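The GDP arithmetic checks out on rough inputs; note that the GDP figure and the multiplier below are my assumptions for illustration, not numbers from Kupperman:

```python
# Back-of-envelope check on the GDP-share claim. Both the GDP figure and the
# multiplier are assumptions.
us_gdp_t = 29.0          # assumed 2025 U.S. nominal GDP, $T
ai_capex_t = 0.40        # ~$400B of direct AI-linked capex
adjacent_t = 0.05        # assumed R&D + grid + ancillary add-on
direct = (ai_capex_t + adjacent_t) / us_gdp_t
print(f"direct share: ~{direct:.1%}; with multipliers: ~{direct * 1.3:.1%}")
# direct share: ~1.6%; with multipliers: ~2.0%
```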
His conclusion is unambiguous: if funding slows, buildout slows, beneficiary equities fall, consumption softens, and you risk a feedback loop that spills into the broader economy. Unlike railroads (long-lived assets), AI data centers risk becoming technologically obsolete quickly, making the eventual write-downs larger and swifter.
7. "We've seen this movie" historical rhyme, not copy-paste
Skeptics aren't claiming a carbon-copy of 2000. Rather, they recognize rhymes: late-stage booms often feature big round numbers, mutual investments, vendor financing, and bookings that later fail to translate into cash. Kupperman's warning is measured: when the economics don't work, scale doesn't fix them; it amplifies the eventual adjustment. Timing is the hard part.
8. What would have to go right (the bull case as a checklist)
From a skeptical lens, it is helpful to articulate what success requires - not to dunk on it, but to make the hurdle transparent:
- paying demand scales toward the $320B-$480B/year break-even range quickly, and at healthy margins
- utilization stays very high while price compression stays minimal - assumptions that rarely coexist for long
- power arrives on schedule and at favorable cost, with no 12-18-month energization slips
- credit and equity windows stay open long enough for bookings to convert into billings and cash
- the annual GPU refresh cadence doesn't force ruinous incremental spend just to stay competitive
Is that impossible? No. Is it heroic? Yes.
9. Plausible bust paths (and why they rhyme with past cycles)
The likeliest failure modes follow from the mechanics above. Credit tightens and the pre-book/raise/buy loop stalls, as it did for the fiber vendors of 2000. Energization delays push revenue past short depreciation clocks, turning projects NPV-negative. Utilization or pricing disappoints and break-even recedes. Or beneficiary equities fall, the wealth effect reverses, and the macro feedback loop does the rest. Each path echoes a prior cycle in which financing outran cash demand.
10. A skeptic's monitoring list
If you're writing for investors or operators, close with an empirical checklist:
- the mix of billings and cash collections versus bookings and vendor credit
- customer concentration at the profit node (today, two customers are ~40% of Nvidia's revenue)
- neo-cloud debt issuance, spreads, and refinancing windows
- reported utilization and the direction of compute pricing
- energization timelines versus announced capacity
- the asset lives and depreciation schedules disclosed in filings
11. What skepticism is - and isn't - saying
None of this argues that AI is a mirage. The use-cases are real; the technology is improving; the need for compute is rising. Skepticism is about finance and pacing: are we building capacity faster than paying demand can absorb, and are we using financing to bridge a gap that is wider than most models admit? Today, the profit center is effectively one vendor upstream; the downstream services remain small relative to the sums discussed. Financing round-trips future intention into today's capex; that's a feature of late-cycle booms.
Meanwhile, depreciation clocks are short, power is scarce, and macro linkages are significant. If the funding impulse slows, the gear keeps aging while revenues lag. That combination has an unhappy history in railroads, power plant booms, and especially fiber/telecom. The mechanics rhyme; the timeline is unknowable.
Bottom line: for the current trajectory to work without a painful reset, paying demand must explode in both scale and quality (margin), quickly. Until we see the mix shift from bookings and vendor credit to billings and cash, prudence argues for treating the "AI ouroboros" as a financing phenomenon first and a monetization story second. If the math doesn't start working in cash, the eventual adjustment will be as much a macro event as a tech one.
Source notes: Matt Barrie's figures and framing are from a recent MacroVoices interview; Harris "Kuppy" Kupperman's break-even math and macro thought experiment are from his published commentary following conversations with data-center operators, lenders, and engineers.