
CoreWeave's $1.1 billion raise shows that the market for alternative clouds is booming

The appetite for alternative clouds has never been greater.

Case in point: CoreWeave, the GPU infrastructure provider that began life as a cryptocurrency mining operation, raised $1.1 billion in new funding this week from investors including Coatue, Fidelity and Altimeter Capital. The round reportedly values the startup at $19 billion post-money, and brings its total raised to $5 billion in debt and equity – a remarkable figure for a company that's less than ten years old.

It's not only CoreWeave.

Lambda Labs, which also offers a range of cloud-hosted GPU instances, secured a “special purpose financing vehicle” of up to $500 million in early April, months after closing a $320 million Series C round. The non-profit Voltage Park, backed by crypto billionaire Jed McCaleb, announced last October that it's investing $500 million in GPU-backed data centers. And Together AI, a cloud GPU host that also conducts generative AI research, raised $106 million in a March round led by Salesforce.

So why all the excitement around the alternative cloud space – and the cash flowing into it? In three words: generative artificial intelligence.

As generative AI continues to boom, so does demand for the hardware to run and train generative AI models at scale. Architecturally, GPUs are the logical choice for training, fine-tuning and running models because they contain thousands of cores that can work in parallel to execute the linear algebra operations that make up generative models.
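To make that concrete, here's an illustrative sketch (the layer sizes are made up): the core workload inside a generative model's layers is dense linear algebra, and a single "linear layer" boils down to one matrix multiply. NumPy on a CPU expresses the same math a GPU would parallelize across its cores.

```python
# Illustrative only: one linear layer of a generative model is a single
# matrix multiply. The batch and dimension sizes below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 4096, 4096          # illustrative sizes
x = rng.standard_normal((batch, d_in))       # activations
W = rng.standard_normal((d_in, d_out))       # layer weights

y = x @ W  # one layer's worth of multiply-accumulates
print(y.shape)  # → (32, 4096)
```

Each of the `batch × d_out` outputs is an independent dot product, which is exactly the kind of embarrassingly parallel work a GPU's cores chew through simultaneously.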

But buying and installing GPUs is expensive, so most developers and organizations turn to the cloud instead.

Cloud computing incumbents—Amazon Web Services (AWS), Google Cloud, and Microsoft Azure—offer no shortage of GPU and dedicated hardware instances optimized for generative AI workloads. But for at least some models and projects, alternative clouds can end up being cheaper – and offer better availability.

At CoreWeave, renting an Nvidia A100 40GB — a popular choice for model training and inference — costs $2.39 per hour, which works out to $1,200 per month. On Azure, the same GPU costs $3.40 per hour, or $2,482 per month; on Google Cloud, it's $3.67 per hour, or $2,682 per month.

Since generative AI workloads typically run on GPU clusters, cost deltas grow quickly.
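A back-of-the-envelope sketch shows how those per-GPU-hour gaps compound. The hourly rates are the ones quoted above; the 64-GPU cluster size and round-the-clock usage are illustrative assumptions, not figures from any provider.

```python
# Rough sketch of how per-GPU-hour price deltas compound across a cluster.
HOURS = 24 * 30  # one month of continuous training

rates = {  # USD per A100 40GB GPU-hour, as quoted above
    "CoreWeave": 2.39,
    "Azure": 3.40,
    "Google Cloud": 3.67,
}

def cluster_cost(provider: str, gpus: int, hours: int = HOURS) -> float:
    """Total rental cost for `gpus` GPUs running for `hours` hours."""
    return rates[provider] * gpus * hours

# A hypothetical 64-GPU training cluster over one month:
gap = cluster_cost("Google Cloud", 64) - cluster_cost("CoreWeave", 64)
print(f"Monthly delta on a 64-GPU cluster: ${gap:,.0f}")
# → Monthly delta on a 64-GPU cluster: $58,982
```

A difference of roughly a dollar per GPU-hour turns into tens of thousands of dollars a month at cluster scale, which is why the deltas matter to anyone training at scale.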

“Companies like CoreWeave participate in a market we call specialized ‘GPU as a service’ cloud providers,” Sid Nag, VP of cloud services and technologies at Gartner, told TechCrunch. “Given the high demand for GPUs, they offer an alternative to the hyperscalers, where they’ve taken Nvidia GPUs and provided another route to market and access to those GPUs.”

Nag points out that even some big tech firms have begun leaning on alternative cloud providers as they run up against compute capacity challenges.

Last June, CNBC reported that Microsoft had signed a multi-billion-dollar deal with CoreWeave to ensure that OpenAI, the maker of ChatGPT and a close Microsoft partner, had enough computing power to train its generative AI models. Nvidia, the supplier of the bulk of CoreWeave's chips, sees this as a desirable trend, perhaps for leverage reasons; it's said to have given some alternative cloud providers priority access to its GPUs.

Lee Sustar, principal analyst at Forrester, attributes the success of cloud providers like CoreWeave partly to the fact that they don't carry the infrastructure “baggage” that incumbents struggle with.

“Given hyperscalers’ dominance of the overall public cloud market, which demands vast investments in infrastructure and a range of services that make minimal or no revenue, challengers like CoreWeave have an opportunity to succeed with a focus on premium AI services without the burden of hyperscaler-level investments,” he said.

But is this growth sustainable?

Sustar has his doubts. He believes that alternative cloud providers’ expansion depends on their ability to (1) continue to bring GPUs online in high volume and (2) offer them at competitively low prices.

Competing on price could become challenging in the long run as incumbents like Google, Microsoft and AWS ramp up investments in custom hardware to run and train models. Google offers its TPUs; Microsoft recently unveiled two custom chips, Azure Maia and Azure Cobalt; and AWS has Trainium, Inferentia and Graviton.

“Hyperscalers will leverage their custom silicon to mitigate their dependencies on Nvidia, while Nvidia will look to CoreWeave and other GPU-centric AI clouds,” Sustar said.

Additionally, while many generative AI workloads run best on GPUs, not all workloads require them – especially if they aren't time-sensitive. CPUs can run the necessary calculations, but typically more slowly than GPUs and custom hardware.

Among alternative cloud providers’ existential concerns is the risk of the generative AI bubble bursting, which would leave providers sitting on mountains of GPUs and nowhere near enough customers demanding them. In the short term, though, the future looks bright, say Sustar and Nag, both of whom expect a steady stream of upstart clouds.

“GPU-focused cloud startups will give incumbents plenty of competition, especially among customers who are already multi-cloud and can handle the complexity of management, security, risk and compliance across multiple clouds,” Sustar said. “Those kinds of cloud customers are comfortable trying out a new AI cloud if it has credible leadership, solid financial backing and zero-wait GPUs.”
