Investors are selling Nvidia shares Thursday, sending the stock down more than 3% after the artificial intelligence giant's better-than-expected quarterly results the prior evening. What, exactly, is causing the selling? It could be the guidance for its current fiscal 2026 first quarter: while strong, it wasn't as strong as investors have gotten used to in recent years. It could be that the company expects some margin pressure in the short run as its next-generation Blackwell chip platform launches.

It's also an angsty moment on Wall Street overall as investors grapple with President Donald Trump's tariffs and wider tariff threats and question the health of the U.S. economy. The trade policy uncertainty could be hurting Nvidia, too, because it does sell AI chips to China, albeit lower-powered ones due to U.S. restrictions on semiconductor exports. Indeed, shares of Nvidia were initially higher Thursday before Trump clarified that more tariffs are coming next month.

In any case, we liked the quarter and continue to believe Nvidia is an "own it, don't trade it" stock. "It was a monumental quarter," Jim Cramer said during Thursday's Morning Meeting. Here's a closer look at three bullish aspects of Nvidia's quarter and conference call beyond the headlines, all of which make us even more confident in Nvidia's future.

Blackwell pain for long-term gain

The Blackwell chip platform encountered various manufacturing and installation challenges in its rollout, primarily with a full server rack version known as the GB200 NVL72. That product is much more than a single chip that someone could hold in their hands: each GB200 rack has 1.5 million components, according to Nvidia CEO Jensen Huang. Despite all the hand-wringing on Wall Street about Blackwell's challenges, it may not all be for naught.
On Wednesday night's earnings call, Huang acknowledged the "hiccup," saying it "probably cost us a couple of months." Still, Nvidia booked a greater-than-expected $11 billion in Blackwell revenue in its November-to-January fiscal 2025 fourth quarter, and supply is increasing to meet the strong customer demand. "The team did an amazing job recovering," he said.

The lessons learned in that recovery could help Nvidia improve execution on its annual roadmap for new data-center AI chips, an important competitive advantage over rivals if the company can live up to such an aggressive timeline. The Blackwell lineup succeeded the Hopper family, which was first released in 2022. Nvidia is preparing to launch the Blackwell Ultra, sometimes called the GB300, later this year.

"Between Blackwell and Blackwell Ultra, the system architecture is exactly the same," Huang said. "It's a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to an NVLink 72-based system. So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change. This was quite a challenging transition. But … Blackwell Ultra will slot right in."

It doesn't stop there: Nvidia is already getting its partners across the supply chain up to speed on Blackwell's successor, which is going to be called Vera Rubin. In a note to clients Thursday, analysts at Morgan Stanley said they believe Nvidia will improve its execution for Rubin. "To the extent that the Blackwell ramp might have been slightly too ambitious, they will course correct with [Blackwell Ultra] and Rubin. Which we think will have more of a focus on manufacturability — but even if GB200 was not as manufacturable initially, it is ramping now which will pressure the competition," the analysts wrote.

In other words, the short-term Blackwell pain could help Nvidia maintain its technology leadership over the long term.
'Vast majority' in inference

Nvidia's status as the dominant maker of chips used to train AI models has been well-established ever since the launch of ChatGPT in late 2022 sparked the generative AI boom. Microsoft-backed OpenAI used Nvidia chips to create the model behind ChatGPT.

The other part of AI computing is called inference. Training is basically the process of feeding an AI model a lot of data and getting it ready for prime time. Inference is prime time: it is the model being put into action and used by people on a day-to-day basis.

We continue to get new information showing that Nvidia's foothold in inference is quite substantial. That helps cut against a long-standing Nvidia bear case, which went something like this: as inference becomes a bigger piece of the AI computing pie, Nvidia will cede a lot of ground to competitors such as Advanced Micro Devices, as well as large technology companies that make their own chips, such as Club name Amazon. AMD and the threat of custom chips have not been vanquished, but there's little doubt that Nvidia is a serious force in inference.

Last year, Nvidia estimated that about 40% of its data center revenue was driven by inference. On Wednesday night's call, Huang said the "vast majority of our compute today is actually inference." That was an important revelation. "Blackwell has great demand for inference," he added. "Many of the early GB200 deployments are earmarked for inference, a first for a new architecture."

Historically, inference was viewed as less compute-intensive than training, which is why the competitive risks to Nvidia seemed more significant there. Recent innovations in AI, specifically a burgeoning class of "reasoning models," including one made by Chinese AI startup DeepSeek, are challenging that notion.
Reasoning models require dramatically more compute in the inference stage than the models of old, according to Huang, because they take time to "reason" before spitting out an answer to a user's prompt. That's why Huang believes DeepSeek's emergence late last month, despite spooking investors and causing Nvidia shares to plunge, is a positive development for AI adoption and demand for Nvidia chips. Shares of Nvidia, while still below their pre-DeepSeek levels, have clawed back a lot of those losses.

"While training demand will still be an important vector for growth … the inference market should be a substantially larger opportunity for Nvidia," analysts at William Blair said in a note to clients Thursday.

Are custom chips a threat?

The threat of custom chips is not new, but Huang's comments on the topic Wednesday night were instructive in understanding why Nvidia's chips are still so coveted by customers who are also pursuing their own silicon. That includes Amazon and a trio of other Club names: Microsoft, Meta Platforms and Alphabet. Broadcom, which co-designs chips for Meta and Alphabet, is also a Club holding. Shares of Broadcom, which reports its earnings next Thursday, have yet to recover all of their DeepSeek-driven losses.

Within the semiconductor industry, Nvidia's chips are known as "merchant silicon," while custom chips fall into the category of application-specific integrated circuits, or ASICs. "We build very different things than ASICs; in some ways, completely different; in some areas, we intersect," Huang said on the call.

Huang argued that Nvidia's chip architecture is capable of handling a variety of AI model types (some are better for text-based data, others are optimal for images or video) and that this general-purpose approach is an advantage over a more narrowly designed custom chip. Huang also said the fast-moving nature of AI software plays into Nvidia's hands. "Building an ASIC is no different than what we do… to build a new architecture.
And the ecosystem that sits on top of our architecture is 10 times more complex today than it was two years ago. And that's fairly obvious because the amount of software that the world is building on top of our architecture is growing exponentially and AI is advancing very quickly. … Just because the chip is designed doesn't mean it gets deployed."

"Our technology is not only more advanced, more performant, it has much, much better software capability. And very importantly, our ability to deploy is lightning fast," Huang said, in an apparent reference to the aforementioned annual product cadence. "These things are not for the faint of heart, as everybody knows now. And so, there's a lot of different reasons why we do well, why we win."

(Jim Cramer's Charitable Trust is long NVDA, AVGO, MSFT, GOOGL, AMZN, META. See here for a full list of the stocks.)
Jensen Huang, co-founder and chief executive officer of Nvidia Corp., holds up the company's AI accelerator chips for data centers as he speaks during the Nvidia AI Summit Japan in Tokyo, Japan, on Wednesday, Nov. 13, 2024. (Photo: Akio Kon | Bloomberg | Getty Images)