Enterprises running AI at proof-of-concept scale are pricing their future wrong. When workloads move to production, inference costs stop being a research line item and become a variable operating expense with no natural ceiling. The organizations with no path to infrastructure ownership will negotiate from weakness. XTX's vertical integration strategy is the playbook enterprises should be reading now, before the bill arrives.
Every vendor conversation I have eventually lands on the same question, and most of the time I don't get a real answer. The question is simple: what is your path to controlling inference costs at scale? Not today's costs. The costs when this moves from pilot to production across your organization. When inference isn't a line item in a research budget but a variable operating expense tied directly to every workflow, every agent, every automated decision your business makes. The silence that follows that question is more informative than most product demos.
The silence matters because most enterprises are still pre-bill. They're running proofs of concept where inference costs are either subsidized by vendors eager to show adoption, too small at current volumes to trigger procurement scrutiny, or simply not tracked as a distinct cost category. That window is closing. The organizations treating inference as a later problem are making the same category error that enterprise buyers made about cloud costs a decade ago, and with less time to correct it.
The Answer Came From Outside the Enterprise Conversation
XTX Markets is not an enterprise software buyer. It's a London-based algorithmic trading firm that executes roughly $250 billion in daily trading volume across equities, bonds, currencies, derivatives, and crypto. Alex Gerko, its founder, holds a mathematics doctorate and built the firm on a single thesis: machine learning can forecast price moves better than speed alone. That thesis requires compute that is enormous, sustained, and continuously growing, not as a capital expenditure with a defined endpoint but as the engine the entire business runs on.
XTX's solution to that problem, detailed this week in a Wall Street Journal profile of Gerko, is instructive precisely because it comes from outside the enterprise software conversation entirely. The firm is building a €1 billion data center complex spanning 478 acres in Kajaani, Finland — five facilities in total, the first scheduled for completion this year. It already operates a research cluster of more than 25,000 Nvidia GPUs with 650 petabytes of usable storage. XTX CTO Joshua Leahy described the strategic logic in terms that any CIO should be able to translate directly: the firm's compute needs had outgrown available leasing options, and the only way to deploy increased computing power on its own terms, cost-effectively, was to build the infrastructure itself.
"Our need for compute has outgrown available leasing options. We are building ahead of our needs to establish a backbone for future growth of the business." — Joshua Leahy, CTO, XTX Markets
XTX is not building data centers because it can afford to. It's building them because remaining at the mercy of third-party infrastructure pricing, at the volumes it operates, is a risk the business decided it couldn't carry.
The Inference Bill Arrives in Phases
Most enterprises will not hit XTX's compute volumes. That's beside the point. The structure of the problem scales even when the absolute numbers don't.
Right now, most organizations are running AI at volumes where inference costs don't trigger procurement scrutiny. Vendor pricing at this stage is often subsidized to encourage adoption. The meter is running; nobody is watching it. That changes the moment AI moves from pilot into production workflows at meaningful scale. Inference stops being a research cost and starts appearing in departmental operating budgets. It scales with usage, not with headcount or seat licenses. The pricing structures that looked reasonable at pilot volumes look different when every customer interaction, every document processed, every automated decision runs through an inference call billed by the token.
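The pilot-to-production gap is easy to underestimate because per-token prices look small in isolation. A back-of-envelope model makes the scaling visible; every price and volume below is an invented placeholder, not any vendor's actual rate:

```python
# Illustrative model of how per-token inference billing scales from
# pilot to production. All prices and volumes are assumptions.

def monthly_inference_cost(requests_per_day: int,
                           tokens_per_request: int,
                           price_per_million_tokens: float) -> float:
    """Monthly cost of a workload billed per token, assuming a 30-day month."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * price_per_million_tokens

# A pilot: one team, a few hundred calls a day.
pilot = monthly_inference_cost(500, 2_000, 10.0)

# The same workflow in production: every customer interaction routed through it.
production = monthly_inference_cost(200_000, 2_000, 10.0)

print(f"pilot:      ${pilot:,.0f}/month")       # $300/month
print(f"production: ${production:,.0f}/month")  # $120,000/month
```

Nothing about the workload changed except volume; a 400x increase in requests turns a rounding error into a line item that warrants its own negotiation.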
By the time an organization reaches the volumes where that cost is material, the switching costs are also material. Retooling at scale is expensive. The vendors who own the compute infrastructure understand this sequencing better than most enterprise buyers do.
XTX already lives on the other side of that transition. Its infrastructure decision was the answer to a cost control problem it saw coming before the rest of the market had named it.
The Spectrum Most Buyers Aren't Using
The answer isn't that every enterprise should build data centers in Finland. Most can't and shouldn't. But the strategic question XTX answered is one that every organization deploying AI at scale needs to answer in some form: how close to the foundation layer can we realistically get, and what does it cost us to stay entirely at the application layer?
The options exist on a spectrum that most enterprise procurement teams haven't mapped. At one end sits pure consumption pricing: pay as you go, no commitment, full exposure to infrastructure pricing decisions made by someone else. At the other end sits vertical ownership, which is XTX's answer. Between those poles there are reserved capacity agreements with committed spend discounts, on-premise inference hardware for high-volume workloads where the math supports it, sovereign cloud arrangements that trade flexibility for cost predictability, and hybrid architectures that push routine inference workloads toward owned or committed capacity while retaining cloud burst for spikes.
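The underlying procurement question at each point on that spectrum is a break-even calculation: at what monthly volume does committed or owned capacity beat pure consumption pricing? A minimal sketch, with invented rates and a hypothetical 30% committed-spend discount:

```python
# Break-even sketch for the procurement spectrum. Every rate, discount,
# and commitment level here is a placeholder assumption for illustration.

def consumption_cost(tokens_m: float, price_per_m: float) -> float:
    """Pure pay-as-you-go: cost tracks usage exactly."""
    return tokens_m * price_per_m

def committed_cost(tokens_m: float, price_per_m: float,
                   discount: float, committed_tokens_m: float) -> float:
    """Committed-spend deal: discounted rate, but you pay the
    committed floor whether or not you use it."""
    billable = max(tokens_m, committed_tokens_m)
    return billable * price_per_m * (1 - discount)

for volume in (100, 1_000, 10_000):  # million tokens per month
    on_demand = consumption_cost(volume, 10.0)
    committed = committed_cost(volume, 10.0, discount=0.30,
                               committed_tokens_m=1_000)
    better = "committed" if committed < on_demand else "on-demand"
    print(f"{volume:>6}M tokens: on-demand ${on_demand:,.0f}, "
          f"committed ${committed:,.0f} -> {better}")
```

Below the commitment floor, flexibility wins; above it, the discount compounds with volume. The same structure, with different constants, governs the on-premise and sovereign-cloud decisions further along the spectrum.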
None of these options require a billion-dollar data center investment. All of them require having the conversation before the bill arrives rather than after.
Most enterprises are not having it.
What the Trading Firm Understood First
Algorithmic trading has been running production AI at scale longer than most enterprise software buyers have been running pilots. The firms that survived that transition understood something that enterprise buyers are still learning: the performance of an AI-dependent business model is inseparable from the cost structure of the inference layer that runs it. Gerko built XTX on machine learning as the core business driver, not as a feature layered over an existing business. The infrastructure decision followed directly from that commitment.
Enterprise AI adoption is following a similar arc, just on a longer cycle. Most organizations are still treating AI as a capability layered over existing operations. That means the cost control question gets deferred until the cost is already embedded and the leverage has already shifted. XTX didn't defer it. The 478-acre site in Finland is what asking the question early looks like at scale.
Map your AI workloads by inference volume and growth rate, then ask your infrastructure team what percentage of that cost is locked into consumption pricing with no committed discount, no reserved capacity, and no on-premise alternative. If they can't answer, your AI budget is being built on an assumption that someone else is managing. Find out who that someone else is, and what happens to your margin when they decide to change the price.
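The audit described above reduces to a simple aggregation: tag each AI workload with its monthly inference spend and pricing arrangement, then measure how much of the budget floats on uncommitted consumption pricing. The workloads and figures below are invented for illustration:

```python
# Minimal sketch of the inference-exposure audit. Workload names,
# spend figures, and pricing categories are all hypothetical.

workloads = [
    {"name": "support-assistant", "monthly_spend": 42_000, "pricing": "consumption"},
    {"name": "doc-extraction",    "monthly_spend": 18_500, "pricing": "reserved"},
    {"name": "sales-agent",       "monthly_spend": 9_000,  "pricing": "consumption"},
    {"name": "fraud-screening",   "monthly_spend": 27_000, "pricing": "on_prem"},
]

total = sum(w["monthly_spend"] for w in workloads)
exposed = sum(w["monthly_spend"] for w in workloads
              if w["pricing"] == "consumption")

print(f"total inference spend: ${total:,}/month")
print(f"exposed to repricing:  ${exposed:,}/month ({exposed / total:.0%})")
```

If that exposure percentage can't be produced in an afternoon, the question in the vendor conversation hasn't been asked internally either.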
- Osipovich, Alexander. "The Billionaire Math Geek Who Made a Money-Printing Machine Out of AI." The Wall Street Journal, 25 Apr. 2026, wsj.com.
- "Billionaire Alex Gerko's XTX to Build €1 Billion Data Hub in Machine-Learning Bet." Bloomberg, 22 Jan. 2025, bloomberg.com.
- "XTX Markets Commits €1 Billion to Finnish Data Centre Complex." A-Team Insight, Feb. 2026, a-teaminsight.com.
- "XTX Markets to Build Data Center Campus in Kajaani, Finland." Data Center Dynamics, Mar. 2026, datacenterdynamics.com.
