
According to Nvidia, 20,000 GenAI startups are currently building on its platform

On Nvidia's first-quarter fiscal 2025 earnings call on Wednesday, CEO Jensen Huang highlighted the explosive growth of generative AI (GenAI) startups leveraging Nvidia's accelerated computing platform.

“There is a long line of generative AI startups, about 15,000 to 20,000 startups in everything from multimedia to digital characters to design to application productivity to digital biology,” Huang said. “The AV industry is moving to Nvidia so that they can train end-to-end models to expand the scope of self-driving cars – the list is just extraordinary.”

Huang emphasized that demand for Nvidia's GPUs is “incredible” as companies compete to bring AI applications to market using Nvidia's CUDA software and Tensor Core architecture. Consumer internet companies, enterprises, cloud providers, automotive companies and healthcare organizations are all investing heavily in “AI factories” built on hundreds of Nvidia GPUs.

The Nvidia CEO said the shift to generative AI is driving a “fundamental shift in the full-stack computing platform” as data processing moves from information retrieval to generating intelligent output.

“(The computer) now generates contextual, intelligent answers,” Huang explained. “This will change computer systems all around the world. Even the PC's computer system will be revolutionized.”

To meet increasing demand, Nvidia ramped shipments of its Hopper-architecture H100 GPUs in the first quarter and announced its next-generation “Blackwell” platform, which offers up to 4x faster AI training and 30x faster inference than Hopper. More than 100 Blackwell-based systems from major computer manufacturers will launch this year to enable widespread adoption.

Huang said Nvidia's end-to-end AI platform capabilities give the company a huge competitive advantage over more narrowly focused solutions as AI workloads evolve rapidly. He expects demand for Nvidia's Hopper, Blackwell and future architectures to outstrip supply well into next year as the GenAI revolution takes hold.

The demand for AI chips can hardly be met

Despite Nvidia's record first-quarter revenue of $26 billion, the company said customer demand far outstripped its ability to deliver GPUs for AI workloads.

“We’re in a race every single day,” Huang said of Nvidia’s efforts to meet orders. “Customers are putting a lot of pressure on us to deliver and set up the systems as quickly as possible.”

Huang noted that demand for Nvidia's current flagship H100 GPU will outstrip supply for some time, even as the company ramps up production of the new Blackwell architecture.

“Demand for H100 has continued to increase this quarter… We expect demand to exceed supply for some time as we now transition to H200 and then to Blackwell,” he said.

Nvidia's CEO attributed the urgency to the competitive advantage gained by the companies that are first to bring groundbreaking AI models and applications to market.

“The next company to reach the next major plateau gets to announce a breakthrough AI, and the second company after that gets to announce something that’s 0.3% better,” Huang explained. “Time to train is very important. The difference of three months in training time is everything.”

As a result, Huang says, cloud providers, enterprises and AI startups are under enormous pressure to secure as much GPU capacity as possible to beat the competition and reach milestones. He predicted that the supply shortage for Nvidia's AI platforms will last well into next year.

“Demand for Blackwell is well ahead of supply, and we expect demand to continue to exceed supply well into next year,” Huang said.

Nvidia GPUs deliver compelling returns for cloud AI hosts

Huang also explained how cloud providers and other companies can achieve high financial returns by hosting AI models on Nvidia's accelerated computing platforms.

“For every dollar spent on Nvidia’s AI infrastructure, cloud providers have the opportunity to generate $5 in revenue from hosting GPU instances over four years,” Huang explained.

Huang gave the example of a language model with 70 billion parameters running on Nvidia's latest H200 GPUs. He claimed that a single server could generate 24,000 tokens per second and support 2,400 concurrent users.

“This means that for every $1 spent on Nvidia H200 servers at current token prices, an API provider serving tokens can generate $7 in revenue over four years,” Huang said.
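For readers who want to see how such a figure could be derived, here is a rough, back-of-envelope sketch of the token-serving arithmetic. Only the 24,000 tokens-per-second throughput comes from Huang's example; the token price, server cost and utilization are placeholder assumptions chosen for illustration, not figures from the earnings call.

```python
# Back-of-envelope token-serving economics behind a "revenue per $1 of
# hardware" figure. Only the 24,000 tokens/sec throughput is quoted from
# the call; the price, server cost and utilization are ASSUMPTIONS.

SECONDS_PER_YEAR = 365 * 24 * 3600


def revenue_per_dollar(tokens_per_second: float,
                       usd_per_million_tokens: float,
                       server_cost_usd: float,
                       years: float = 4.0,
                       utilization: float = 0.7) -> float:
    """Revenue generated per dollar of server cost over the period."""
    tokens_served = tokens_per_second * utilization * years * SECONDS_PER_YEAR
    revenue = tokens_served / 1e6 * usd_per_million_tokens
    return revenue / server_cost_usd


if __name__ == "__main__":
    throughput = 24_000        # tokens/sec for a 70B-parameter model (quoted)
    token_price = 1.00         # USD per million tokens (assumed)
    server_cost = 300_000.00   # USD for an 8-GPU H200 server (assumed)

    ratio = revenue_per_dollar(throughput, token_price, server_cost)
    print(f"Revenue per $1 of server cost over 4 years: ${ratio:.2f}")
```

With these assumed inputs the calculation lands near $7 per dollar of hardware, but the result scales linearly with the token price and utilization, so it should be read as illustrative rather than as Nvidia's own model.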

Huang added that ongoing software improvements from Nvidia continue to boost the inference performance of its GPU platforms. Last quarter, optimizations tripled H100 inference performance, enabling customers to achieve a corresponding three-fold cost reduction.

Huang claimed that this strong return on investment is fueling soaring demand for Nvidia silicon from cloud giants such as Amazon, Google, Meta, Microsoft and Oracle, which are vying to offer AI capabilities and recruit developers.

He argued that these economics, combined with Nvidia's unmatched software tools and ecosystem support, make Nvidia the platform of choice for GenAI deployments.

Nvidia is aggressively pushing into Ethernet networking technology for AI

While Nvidia is best known for its GPUs, the company is also a major player in data center networking with its InfiniBand technology.

In the first quarter, Nvidia reported strong year-over-year growth in networking, driven by InfiniBand.

However, Huang emphasized that Ethernet is a huge new opportunity for Nvidia to bring AI computing to a broader market. In the first quarter, the company began shipping its Spectrum-X platform, which is optimized for AI workloads over Ethernet.

“Spectrum-X opens a whole new market for Nvidia networking and enables Ethernet-only data centers to accommodate large-scale AI,” said Huang. “We expect Spectrum-X to grow into a billion-dollar product line within a year.”

Huang said Nvidia is “all in on Ethernet” and will deliver a comprehensive roadmap of Spectrum switches to complement its InfiniBand and NVLink interconnects. This three-pronged networking strategy will allow Nvidia to address everything from single-node AI systems to massive clusters.

Nvidia also began testing its 51.2-terabit-per-second Spectrum-4 Ethernet switch during the quarter. According to Huang, leading server manufacturers like Dell are relying on Spectrum-X to bring Nvidia's accelerated AI networking to market.

“If you invest in our architecture today, without doing anything it will move to more and more clouds and more and more data centers, and everything will just run,” Huang assured.

Record first quarter results driven by data center and gaming

Nvidia reported record first-quarter revenue of $26 billion, up 18% quarter-over-quarter and 262% year-over-year, comfortably beating its $24 billion forecast.
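As a quick sanity check on those growth rates, the snippet below simply works backwards from the reported $26 billion to the revenue levels the percentages imply for the prior quarter and the year-ago quarter; it is plain arithmetic on the published figures, not additional data from Nvidia.

```python
# Implied prior-period revenue from the reported Q1 figure and growth rates.
q1_revenue_bn = 26.0      # reported Q1 revenue, billions of USD
qoq_growth = 0.18         # +18% quarter over quarter
yoy_growth = 2.62         # +262% year over year

prior_quarter_bn = q1_revenue_bn / (1 + qoq_growth)     # ~22.0
year_ago_quarter_bn = q1_revenue_bn / (1 + yoy_growth)  # ~7.2

print(f"Implied prior-quarter revenue:    ${prior_quarter_bn:.1f}B")
print(f"Implied year-ago quarter revenue: ${year_ago_quarter_bn:.1f}B")
```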

The data center business was the main driver of growth, with revenue rising to $22.6 billion, up 23% quarter-over-quarter and an astonishing 427% year-over-year. CFO Colette Kress highlighted the incredible growth in the data center segment:

“Compute revenue grew more than five-fold year over year and networking revenue grew more than three-fold. Strong sequential growth in data center was driven by all customer types, led by enterprise and consumer internet companies. Large cloud providers continue to drive strong growth as they deploy and build out Nvidia AI infrastructure at scale.”

Gaming revenue was $2.65 billion, down 8% quarter-over-quarter but up 18% year-over-year, in line with Nvidia's expectation of a seasonal decline. Kress noted, “Market adoption of the GeForce RTX SUPER GPUs is strong, and end demand and channel inventory remain healthy across the lineup.”

Professional visualization revenue was $427 million, down 8% quarter-over-quarter but up 45% year-over-year. Automotive revenue reached $329 million, up 17% sequentially and 11% year-over-year.

For the second quarter, Nvidia expects revenue of roughly $28 billion, plus or minus 2%, with sequential growth expected across all market platforms.


Nvidia shares rose 5.9% to $1,005.75 in after-hours trading after the company announced a 10-for-1 stock split.
