
Nvidia and chums inject $160M into Applied Digital to keep GPU sales rolling

Datacenters are the lifeline for its $30B ML-fueled boom


AI has made GPUs one of the hottest commodities on the planet, driving more than $30 billion in revenues for Nvidia in Q2 alone. But without datacenters, the chip powerhouse and its customers have nowhere to put all that tech.

With capacity in short supply, it's no wonder that VC and chipmakers alike are pumping billions of dollars into datacenters to keep the AI hype train from stalling.

The latest example is a $160 million investment by Nvidia and partners in Dallas, Texas-based bit-barn operator Applied Digital, which offers a variety of datacenter and cloud services built around Nvidia's GPUs. As one financial journal noted on Thursday, the operator will use the cash injection to accelerate development of a datacenter complex in North Dakota and to support additional debt financing to pay for the costly accelerators.

With bleeding-edge GPUs commanding as much as a car these days ($30,000 to $40,000 apiece in the case of Nvidia's upcoming Blackwell chips), many datacenter operators have taken to using the silicon itself as collateral to secure massive loans.
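For a sense of scale, here's a minimal back-of-the-envelope sketch; the 100,000-GPU fleet size is our hypothetical, while the per-chip prices are the Blackwell figures above:

```python
# Why GPUs work as loan collateral: the paper value of a hypothetical fleet.
# Assumptions (ours, for illustration): the fleet size is invented; the
# per-chip prices are the $30,000-$40,000 Blackwell figures cited above.

FLEET_SIZE = 100_000                # hypothetical GPU count
PRICE_RANGE_USD = (30_000, 40_000)  # per Blackwell chip

low, high = (FLEET_SIZE * price for price in PRICE_RANGE_USD)
print(f"Collateral value: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
# Collateral value: $3.0B to $4.0B
```

At those prices, a hypothetical 100,000-GPU fleet is worth $3 billion to $4 billion on paper, the same order of magnitude as the debt deals that follow.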

Applied Digital isn't even the biggest recent example. In July, AI datacenter outfit CyrusOne scored another $7.9 billion in loans to pack its facilities with the latest accelerators. That's on top of the $1.8 billion in capital the firm bagged this spring.

CyrusOne isn't an isolated instance either. CoreWeave, arguably the biggest name in the rent-a-GPU racket, talked its backers into a $1.1 billion Series C funding round back in May. Only a few weeks later, it convinced them to shell out another $7.5 billion in debt financing.

While multi-billion-dollar loans may grab headlines, most deals don't rise to quite that level. AI cloud upstart Foundry, for instance, managed to pick up $80 million in Series A and seed funding ahead of its launch in August.

Even some chipmakers have been vying for their share of the funding while it lasts. Groq, whose inference cloud runs on its custom language processing units (LPUs) rather than off-the-shelf GPUs, scored $640 million last month to expand its offering.

Meanwhile, Lambda, one of the original GPU-cloud operators, started the year with a $320 million funding round. Bolstered by another $500 million in loans secured this spring, it now plans to add tens of thousands of Nvidia GPUs to its compute clusters.

Unsurprisingly, there are a number of bit-barn operators looking to replicate this strategy. TensorWave is working to scale out compute clusters based on AMD's MI300X accelerators, while Voltage Park is following Lambda and others' lead and sticking with Nvidia GPUs.

Those are just the ones that spring to mind, but the takeaway is that it's a good time to be in the datacenter business, especially if the business plan includes renting out GPUs.

Alongside the usual cast of investment firms, like BlackRock, Magnetar Capital, and Coatue, Nvidia has gotten behind several of these endeavors, having previously thrown its weight behind CoreWeave.

Nvidia's motivation in financing these projects is obvious: it can only sell as many GPUs as there is capacity to deploy them. Once deployed, each of those accelerators also has the potential to generate $1 an hour in subscription revenues, provided Nvidia can convince customers its AI Enterprise software suite is worthwhile.

A buck an hour might not sound like much, but, as we've previously discussed, it adds up pretty quickly when you're talking about clusters with 20,000 or more GPUs.
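For the curious, here's that arithmetic as a minimal sketch; the full-utilization assumption is ours, while the $1/GPU/hour rate is the figure cited above:

```python
# Back-of-the-envelope: annual software subscription revenue from a single
# cluster. Assumption (ours): every GPU is licensed and billed around the
# clock at the $1/GPU/hour rate cited above.

GPUS_IN_CLUSTER = 20_000   # "clusters with 20,000 or more GPUs"
RATE_PER_GPU_HOUR = 1.00   # USD
HOURS_PER_YEAR = 24 * 365  # 8,760

annual_revenue = GPUS_IN_CLUSTER * RATE_PER_GPU_HOUR * HOURS_PER_YEAR
print(f"~${annual_revenue / 1e6:.0f}M/year")  # ~$175M/year
```

At full utilization, that's around $175 million a year in software subscriptions from a single deployment, before a single GPU-hour of compute has been rented out.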

It's not a bad deal for the datacenter operators or their financiers, either, so long as their revenues are enough to cover the loan payments.

That shouldn't be too much of a problem, according to our sibling site The Next Platform, which found that an investment of $1.5 billion to build, deploy, and network a cluster of roughly 16,000 H100s today would generate some $5.27 billion in revenue within four years.
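A quick sanity check on those figures, as a minimal sketch: the constant-full-utilization assumption is our simplification, not The Next Platform's model, while the dollar amounts and GPU count come from its estimate.

```python
# What per-GPU rental rate is implied by The Next Platform's projection?
# Assumption (ours): the 16,000 H100s are rented out at full utilization
# for the entire four-year period.

CAPEX_USD = 1.5e9     # cost to build, deploy, and network the cluster
GPUS = 16_000         # H100 count
YEARS = 4
REVENUE_USD = 5.27e9  # projected revenue over the period

gpu_hours = GPUS * YEARS * 24 * 365     # ~560.6 million GPU-hours
implied_rate = REVENUE_USD / gpu_hours  # ~$9.40 per GPU-hour
gross_return = REVENUE_USD / CAPEX_USD  # ~3.5x the initial outlay

print(f"Implied rate: ${implied_rate:.2f}/GPU/hour")
print(f"Gross return: {gross_return:.1f}x capex")
```

Under those assumptions, the projection implies a rental rate of roughly $9.40 per GPU-hour and a gross return of about 3.5 times the initial outlay. ®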
