AI chip king Nvidia is doing everything it can to defend its crown. That much was clear from CEO Jensen Huang’s GTC keynote address Tuesday. Whether it’s enough to restore the shine to the company’s stock in the coming days remains to be seen.

During his roughly two-hour presentation at Nvidia’s premier showcase event, Huang announced several new hardware and software products — some predictable, some surprising — that reinforced Nvidia’s leadership position in the fast-moving artificial intelligence race. It is undeniable that Nvidia is not satisfied with what it has accomplished on its way to becoming one of the world’s most valuable companies.

“Jensen delivered a terrific roadmap for his next-generation technology,” Jim Cramer said Tuesday night. Still, it wasn’t enough to give Nvidia’s stock a boost, with shares down 3.4% as part of an ugly day for tech stocks overall. After a much-needed 8% rally last week, Nvidia shares are down 5% over the past two days.

The up-and-down performance reflects the volatility that has defined the Club stock so far this year, amid concerns about tougher restrictions on chip sales to China, tariffs, competition from custom chips, and the sustainability of heavy AI spending. Nothing that Huang could have realistically announced or said Tuesday at what’s been described as “Woodstock for AI” would have been enough to make those overhangs vanish overnight.

As the AI trade has matured over the past few years, the market has understandably become more difficult to please. That became more apparent after the emergence in late January of DeepSeek, the Chinese startup that claimed to have made a more efficient AI model. There was much consternation about whether big tech companies, Nvidia’s clients, could run AI with less computing power — and thus slow their massive spending plans. That didn’t happen. One after another, these so-called hyperscalers confirmed their 2025 capital expenditure outlooks. As far as we’re concerned, Nvidia’s standing as the dominant player in AI has not been diminished.

“In this market, nobody wants to wait for what’s around the corner,” Jim acknowledged. But at the Club, we’re willing to do just that — even as Nvidia shares remain about 19% below their pre-DeepSeek level.

Here is a look at what Huang unveiled at GTC on Tuesday – starting with the must-have announcements that investors needed to see and then touching on some of the unexpected new offerings.

[Chart: Nvidia’s stock performance over the past 12 months (NVDA, 1 year).]

Major boxes checked

Huang provided investors with long-awaited details on the updated version of Blackwell, its recently launched AI chip platform. It is admittedly hard to believe we’re discussing improvements to Blackwell so early in its life cycle — production started to ramp last fall, and the platform only began contributing major revenue in the three months that ended in January. But that’s how it goes now that Nvidia has adopted an annual product launch cadence. “Once a year, like clock ticks,” Huang said. It previously released new data center chips every two years.

The new Blackwell Ultra is set to be released in the second half of 2025, and it includes more memory than the version on the market now. Memory is a very important part of AI computing, given all the data involved.
The full data center rack version of Blackwell Ultra — a collection of 72 AI chips, known as graphics processing units (GPUs), and 36 traditional central processing units (CPUs) — delivers 1.5 times more artificial intelligence performance than the current rack-scale Blackwell product, according to the company. After some stumbles in the initial Blackwell rollout, which used a different type of architecture than the predecessor Hopper generation of chips, Huang made a point of emphasizing that the company will be able to “gracefully glide into” Blackwell Ultra.

Huang also shared information on Vera Rubin, the architecture set to succeed Blackwell in the second half of 2026. Among the details: It will include a next-generation Vera CPU as a follow-up to Nvidia’s Grace CPU, released a few years ago, and Huang said the Vera CPU has twice the performance of Grace. The Rubin GPU also will include HBM4, the latest type of high-bandwidth memory; the Blackwell GPU uses what is called HBM3. The company said the full server rack version of Vera Rubin will deliver 3.3 times the compute performance of Blackwell Ultra.

“Basically, everything is brand new except for the chassis,” Huang said, in a comment designed to assuage concerns that the Vera Rubin rollout would experience hurdles similar to Blackwell’s. “This way, we can take a whole lot of risk in one direction and not risk a whole bunch of other things related to the infrastructure.” Huang also said Nvidia plans to release Vera Rubin Ultra in the second half of 2027.

Nvidia also confirmed Wall Street chatter that the company would announce a new type of networking product that helps stitch together large clusters of GPUs in data centers and improves communication between the chips. As GPU clusters get bigger to meet AI computing demands, the need for better networking technology grows. The new Nvidia product is built using silicon photonics rather than traditional copper. “We use copper as much as we can” in scaling up the number of GPUs located near each other, Huang said. “On scale out, where the data centers are now the size of the stadium, we’re going to need something much more long-distance running, and this is where silicon photonics comes in.”

Unexpected news

Perhaps the most notable unexpected announcement Tuesday came on the software side with Nvidia Dynamo. We have talked for years about Nvidia’s software suite being an underappreciated aspect of its long-term competitiveness, and Dynamo underscores that. Dynamo is open-source software designed for the “inference” portion of AI computing — a model is first “trained” on massive amounts of data, and its day-to-day use is known as inference. The announcement also reflects Nvidia’s growing presence in inference computing, complementing its long-established dominance in training.

Dynamo is designed to provide efficiency gains for an emerging class of AI models known as “reasoning models,” which take time to think through a user’s prompt before generating an answer and require more computing power than the models that powered ChatGPT when it was first released in late 2022 by Microsoft-backed OpenAI. ChatGPT subsequently went viral and ushered in the current AI boom. Just over two years later, DeepSeek made a splash with its top-end R1 model, which is a reasoning model.

Another surprise announcement: Huang disclosed the name of the successor to Vera Rubin — yes, this is two generations down the road.
Nvidia is naming that GPU architecture Feynman, and it’s set to debut in 2028. The name references the physicist Richard Feynman, continuing Nvidia’s tradition of naming its chips after famous scientists. Vera Rubin, for her part, was instrumental in the discovery of dark matter.

The significance of the Feynman GPUs, at this point, is primarily what they say about Nvidia’s pace of innovation as it faces questions about the sustainability of AI spending and the threat posed by customers such as Amazon that are developing their own AI chips. During the presentation, Huang said the company likes to provide a clear public roadmap so everybody knows what’s coming, given all the complexities involved in planning for new products. “This isn’t like buying a laptop,” he said.

On a related note, though, Nvidia announced DGX Station, a new high-performance desktop computer that uses a Grace CPU and a Blackwell GPU. The target audience is AI developers, data scientists, and researchers, according to the company.

Finally, Nvidia disclosed a slew of partnerships with other companies. In general, a wave of press releases touting collaborations is commonplace at GTC, but the specifics each year are always interesting to see and further reinforce Nvidia’s technology leadership. A big one this year is Nvidia’s tie-up with General Motors on advanced driving systems and more. Nvidia also touted work with subsidiaries of fellow Club name Alphabet on robotics and drug discovery. And CrowdStrike, another portfolio stock, on Tuesday highlighted its own AI advancements accomplished using Nvidia software.

(Jim Cramer’s Charitable Trust is long GOOGL, GEHC, CRWD, MSFT, AMZN, and NVDA. See here for a full list of the stocks.)
Nvidia CEO Jensen Huang delivers the keynote for the Nvidia GPU Technology Conference (GTC) at the SAP Center in San Jose, California, U.S., March 18, 2025.
Brittany Hosea-Small | Reuters