GPT-4 can solve just 13% of the problems on a qualifying exam for the International Math Olympiad. OpenAI’s new o1 model nails 83% of them. o1’s superpower is that it “thinks” through a problem before answering. In short, the more it “thinks,” the smarter it gets. And not just a little smarter, A LOT smarter.
To boost performance, AI companies must use ever-greater amounts of computing power. Over the past decade, the “compute” needed to train the latest AI models has surged 1 billion %. That’s roughly a ten-million-fold increase!
“Scaling laws” are the cheat code for making AI smarter: feed a model more data and more compute, and its performance improves in a predictable way. They’re why GPT-4 runs circles around GPT-3.
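For readers who want the intuition: researchers model these laws as a power law, where a model’s error falls smoothly (but with diminishing returns) as training compute grows. Here’s a minimal sketch in Python; the constants are illustrative assumptions, not fitted values from any published paper.

```python
# Minimal sketch of a scaling law: model loss falls as a power law in
# training compute. All constants are illustrative assumptions.

def training_loss(compute_flops: float,
                  irreducible: float = 1.7,  # assumed floor no amount of compute beats
                  scale: float = 15.0,       # assumed scaling constant
                  exponent: float = 0.05) -> float:
    """Hypothetical loss as a power-law function of training compute (FLOPs)."""
    return irreducible + scale * compute_flops ** -exponent

# Each 100x jump in compute shaves loss by a predictable (shrinking) amount.
for flops in (1e21, 1e23, 1e25):
    print(f"{flops:.0e} FLOPs -> loss {training_loss(flops):.2f}")
```

Run it and the loss falls steadily from about 3.0 to 2.5 as compute grows ten-thousand-fold. The gains shrink, but they never stop.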
With AI, bigger is better.
We knew “scaling laws” apply to training AI models: all the work that goes on behind the scenes before an AI is released into the wild. What we discovered with o1 is that these laws also apply to “inference,” the work a model does every time it answers a question. (The toy sketch below shows the idea.)
Before: The more data you feed AI, the better the answers (training).
Now: The harder AI thinks, the better the answers (inference).
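To make that “Now” line concrete, here’s a toy Python sketch of one simple form of inference-time scaling: sample many independent answers and take a majority vote. OpenAI hasn’t published exactly how o1 “thinks,” so treat this as an illustration of the principle, not its actual method; the 40% per-attempt accuracy is an assumption.

```python
import random
from collections import Counter

# Toy model of inference-time scaling: each attempt is right 40% of the
# time; otherwise it guesses a random number from 0 to 100. Majority
# voting over more attempts (more "thinking") yields better answers.

def solve_once(correct_answer: int = 42, accuracy: float = 0.40) -> int:
    if random.random() < accuracy:
        return correct_answer
    return random.randint(0, 100)  # a (probably wrong) guess

def solve_with_voting(samples: int) -> int:
    """Majority vote over `samples` independent attempts."""
    votes = Counter(solve_once() for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)  # reproducible demo
for n in (1, 5, 25, 125):
    trials = 1000
    wins = sum(solve_with_voting(n) == 42 for _ in range(trials))
    print(f"{n:>3} attempts per question -> {wins / trials:.0%} accuracy")
```

More attempts means more inference compute, and accuracy climbs from roughly 40% toward near-perfect. That’s the new scaling law in miniature.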
When AI companies discovered scaling laws, they hoovered up every crumb of data they could find and used it to train their models.
We know what happened next. Spending on AI infrastructure went exponential.
The original ChatGPT cost roughly $10 million to train. GPT-4 cost roughly $100 million. At that pace of 10X per generation, the next generation of models will likely cost a billion dollars or more to train.
Meanwhile, Nvidia’s (NVDA) AI revenues have surged 3,600%.
So far, roughly 90% of AI spending has gone toward training the models. OpenAI’s new model marks a turning point. The most important investment conclusion from o1 is that spending on inference is set to surge.
Until a month ago, the largest cluster of graphics processing units (GPUs) in the world numbered 32,000. That’s 32,000 Nvidia chips all linked together to form one big AI brain.
Meta Platforms (META) announced it’s putting the final touches on its own 100,000-strong army of Nvidia chips to train its next AI model. Price tag: $3 billion.
And cloud giant Oracle (ORCL) revealed it’s crafting a data center packed with 130,000 of Nvidia’s new Blackwell GPUs!
As AI becomes widespread, the amount of “compute” we need will continue to surge.
The models will be trained in giant data centers packed to the rafters with powerful GPUs. And we’ll all have personalized AI assistants running on our iPhones that know us better than we know ourselves.
Investors calling AI a bubble are making a mistake. Tech adoption is moving faster than ever. OpenAI recently announced 200 million people now use ChatGPT at least once a week. Nobody had heard of ChatGPT two years ago!
We’re still in the early stages of the AI opportunity.