The Infrastructure Layer for AI: Where We Believe Durable Value Accrues

Every major technology cycle creates a period in which capital concentrates in the most visible layer of the technology stack. Today that focus is on the compute required for foundation models: the large language models produced by OpenAI, Anthropic, Google, and others.
Our view has been that the structural opportunity sits one layer below, in the infrastructure required to operationalise AI at scale.
The Shift That Matters: From Training to Inference
The first phase of AI was defined by training.
The next phase - the one that matters economically - is defined by inference.
Every production use of AI is an inference event:
- every query,
- every workflow automation,
- every enterprise deployment.
As AI moves from experimentation to embedded infrastructure, inference workloads scale orders of magnitude faster than training workloads.
This shift has two implications:
- Unit economics become critical - cost per query, latency, and energy efficiency determine adoption.
- Incumbent architectures are not necessarily optimal - hardware and systems built for training do not map cleanly to inference at scale.
This is the gap where new infrastructure layers are emerging.
Where We See the Value Accruing
We think about the AI stack in three investable layers:
- Compute Infrastructure
The transition to inference-dominated workloads creates demand for architectures optimised for:
- low latency,
- deterministic performance,
- cost efficiency at scale.
Our investment in Groq was a clear expression of this: purpose-built inference hardware delivering materially better performance on enterprise workloads.
The key point is not any one company; it is that compute is fragmenting by workload, and inference is becoming its own category.
- Data Infrastructure
As model capabilities converge, data becomes the primary source of differentiation.
The most valuable companies will not be those with marginally better models, but those with:
- proprietary datasets,
- vertically integrated data pipelines,
- embedded positions in critical workflows.
This is already visible in domains such as healthcare, financial data, and geospatial intelligence, where data ownership creates defensibility that models alone cannot replicate.
- Enterprise Deployment Layer
The deployment layer - software, tooling, and workflow integration - is large but bifurcating:
- commoditised tooling (low barriers, high competition)
- deeply embedded workflow systems (high switching costs, durable value)
We focus on the latter - companies that:
- sit inside decision-making processes,
- capture economic value from outcomes,
- and become part of enterprise infrastructure, not just tooling.
The Sovereign Dimension
A critical and underappreciated dynamic is the rise of sovereign AI.
Governments globally are actively seeking:
- control over compute,
- independence from hyperscalers,
- domestically aligned AI infrastructure.
This is not cyclical demand; it is policy-driven and budget-backed.
Companies that can serve sovereign requirements across performance, compliance, and trust are positioned in a structurally advantaged segment of the market.
What This Means for Capital Allocation
The key insight is simple:
AI is not a single market; it is a stack.
And the distribution of returns across that stack is highly uneven.
We believe:
- the model layer will capture attention,
- the application layer will capture adoption,
- but the infrastructure and data layers will capture the most durable value.
Our approach is therefore concentrated, not thematic in the broad sense:
- high-conviction positions,
- in structurally advantaged parts of the stack,
- with real barriers to entry and limited access.
About the Author
Gavin Ezekowitz is Co-Founder and CIO of BFA Global Investors, bringing with him more than 30 years of experience in senior capital markets and investment banking roles.
General information only. Not financial advice. Wholesale clients only.
