The AI revolution is here. Its effects have not yet been fully felt across our society and economy, but that will change in the next few years.
However, for AI to reach its full promise, we need to build more data center infrastructure than humanity has ever built before!
That is because AI is a hungry beast with an insatiable appetite for computer chips and electricity.
Thus, the demand for AI infrastructure is projected to continue growing at an accelerated pace for decades as businesses and governments begin to embed AI applications in their daily operations.
This is a significant opportunity not only for hyperscalers such as Google, Microsoft, and Amazon, but also for smaller specialized providers of dedicated AI infrastructure.
This year, we saw that play out with Nebius, whose stock has risen 362% YTD on the back of a major $19.4B, five-year deal with Microsoft!
I wrote a report on Nebius in June, but admittedly, I was not very bullish: I underestimated the demand for AI infrastructure and therefore modeled weaker unit economics than the company is likely to achieve. Since then, I have studied the industry more closely to understand what drives AI demand and how it will be met.
And today, I present to you: WhiteFiber!
An AI infrastructure business with data centers in Canada, a smaller cousin of Nebius with a unique model for building AI data centers quickly and cheaply.
It trades at a $1.3B market cap despite a pipeline of AI data centers that could generate billions of dollars in revenue within just a few years.
McKinsey projects that AI infrastructure demand will push the AI cloud industry toward $363B in revenues by 2030, and WhiteFiber is uniquely positioned to capture a share of that growth!
Let’s look at this exciting AI business!
1. Business Model
2. Financials
3. Unit Economics
4. Valuation
5. Conclusion
1. Business Model
WhiteFiber is a cloud infrastructure services company that specializes in AI workloads.
General-purpose cloud data centers operated by Google, Amazon, and Microsoft were designed years ago and thus are ill-suited for AI workloads without costly upgrades.
This is because the computing requirements of AI differ greatly from those of general-purpose computing:
Fewer CPUs
More GPUs
Larger memory and storage
Faster networking speeds
More electricity
Better cooling
At its core, an AI model is a giant mathematical function that performs trillions of parallel computations to produce an answer. GPUs, originally built for video-game graphics processing, are perfect for this kind of parallel math; that is why demand for them exploded, helping Nvidia become a $4T company.
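To put "trillions of parallel computations" in perspective, here is a back-of-the-envelope sketch of the arithmetic in a single matrix multiplication, the core operation of AI models. The matrix size below is purely illustrative, not the dimensions of any specific model:

```python
# Back-of-the-envelope: why AI needs massively parallel hardware.
# Multiplying two n x n matrices takes roughly 2 * n**3 floating-point
# operations (one multiply and one add per term of each output cell).
def matmul_flops(n: int) -> int:
    return 2 * n**3

# An illustrative large-matrix size (not a specific model's dimensions):
n = 16_384
print(f"{matmul_flops(n):.3e} FLOPs for one {n}x{n} matmul")  # → 8.796e+12
```

Nearly nine trillion operations for one multiplication of two such matrices, and a model runs many of these per token it generates. All of those operations are independent of each other, which is exactly the workload a GPU's thousands of cores are built for.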
But having thousands of the best Nvidia and AMD GPUs is not enough. AI training and inference (the running of trained models) require vast datasets, often petabytes in size, and therefore massive amounts of memory and storage. And because this data is accessed constantly, it is retained far longer than typical cloud computing data.
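A quick sketch shows why memory alone is a bottleneck. Model weights at half precision take 2 bytes per parameter, so even before counting activations or caches, a large model outgrows any single GPU (the 70B size below is a common open-weight scale, used here only as an illustration):

```python
# Rough memory footprint of model weights alone
# (excludes activations, optimizer state, and KV cache).
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    # 2 bytes per parameter assumes FP16/BF16 half precision.
    return params_billions * 1e9 * bytes_per_param / 1e9

# An illustrative 70B-parameter model in half precision:
print(weight_memory_gb(70))  # 140.0 GB of weights
```

140 GB of weights is more than any single GPU holds today, which is why AI servers gang many GPUs together and why memory capacity is a defining spec of AI data centers.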
We are still not done: AI also requires that this data move between GPUs, CPUs, and memory much faster than before. As I said, AI performs trillions of parallel calculations per second; if they had to wait on ordinary cloud networking speeds, they would take ages. This necessitates faster, better networking equipment.
All this additional equipment requires more electricity and generates lots of heat, demanding better cooling.
All these factors together mean that purpose-built AI data centers are orders of magnitude better for AI workloads!
This is a significant opportunity for the AI data center providers, such as WhiteFiber.