
Nvidia and co inject $160M into Applied Digital's GPU cloud

AI has made GPUs one of the hottest commodities on the planet, driving more than $30 billion in revenues for Nvidia in Q2 alone. But, without datacenters, the chip powerhouse and its customers have nowhere to put all that tech. 

With capacity in short supply, it's no wonder that VCs and chipmakers alike are pumping billions of dollars into datacenters to keep the AI hype train from stalling.

The latest example is a $160 million investment by Nvidia and partners in Dallas, Texas-based bit-barn operator Applied Digital, which offers a variety of datacenter and cloud services built around Nvidia's GPUs. As one financial journal noted on Thursday, the DC operator will use the cash injection to accelerate development of a datacenter complex in North Dakota and to support additional debt financing schemes to pay for the costly accelerators.

With bleeding-edge GPUs commanding as much as a car these days ($30,000 to $40,000 apiece in the case of Nvidia's upcoming Blackwell chips), many datacenter operators have taken to using them as collateral to secure massive loans.
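To get a sense of why lenders are willing to treat the silicon itself as security, here's a rough, hypothetical sketch of the collateral math. The cluster size, per-GPU price, and loan-to-value ratio below are illustrative assumptions on our part, not figures from any particular operator or lender.

    # Hypothetical collateral math for a GPU-backed loan.
    # Every figure here is an illustrative assumption, not a disclosed number.
    gpus = 16_000
    price_per_gpu = 35_000        # midpoint of the $30k-$40k range cited above
    loan_to_value = 0.7           # assumed haircut a cautious lender might apply

    collateral_value = gpus * price_per_gpu
    borrowing_capacity = collateral_value * loan_to_value
    print(f"Fleet worth ${collateral_value / 1e9:.2f}B could back about ${borrowing_capacity / 1e9:.2f}B in debt")
    # -> Fleet worth $0.56B could back about $0.39B in debt

Stack a few of those fleets together and the multi-billion-dollar debt packages described below start to look less outlandish.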

Applied Digital isn't even the biggest recent example. In July, AI datacenter outfit CyrusOne scored another $7.9 billion in loans to pack its facilities with the latest accelerators. That's on top of the $1.8 billion in capital the firm bagged this spring.

CyrusOne isn't an isolated instance either. CoreWeave, arguably the biggest name in the rent-a-GPU racket, talked its backers into a $1.1 billion series-C funding round back in May. Only a few weeks later, it convinced them to shell out another $7.5 billion in debt financing.

While multi-billion-dollar loans may grab headlines, most don't rise quite to that level. AI cloud upstart Foundry, for instance, managed to pick up $80 million in series-A and seed funding ahead of its launch in August.

Even some chipmakers have been vying for their share of the funding while it lasts. Groq, whose inference cloud runs on its custom language processing units (LPUs) rather than off-the-shelf GPUs, scored $640 million last month to expand its offering.

Meanwhile, Lambda, one of the original GPU-cloud operators, started the year with a $320 million funding round. Along with another $500 million in loans secured this spring, it now plans to add tens of thousands of Nvidia GPUs to its compute clusters.

Unsurprisingly, there are a number of bit-barn operators looking to replicate this strategy. TensorWave is working to scale out compute clusters based on AMD's MI300X accelerators, while Voltage Park is following Lambda and others' lead and sticking with Nvidia GPUs.

Those are just the ones that spring to mind, but the takeaway here is that it's a good time to be in the datacenter business, especially if those plans include renting out GPUs.


Alongside the usual cast of VC firms like BlackRock, Magnetar Capital, and Coatue, Nvidia has also backed some of these endeavors, having previously thrown its weight behind CoreWeave.

Nvidia's motivation in financing these projects is obvious. It can only sell as many GPUs as there is capacity to house them. Once deployed, each of those accelerators also has the potential to generate $1 an hour in subscription revenues, provided Nvidia can convince customers its AI Enterprise suite is worthwhile.

A buck an hour might not sound like much, but, as we've previously discussed, it adds up pretty quickly when you're talking about clusters with 20,000 or more GPUs.
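For the curious, the back-of-the-envelope math looks something like this. The cluster size comes from the paragraph above; the assumption that every GPU stays licensed around the clock is ours, for illustration.

    # Rough annual take from Nvidia's $1/GPU/hour AI Enterprise subscription.
    # Assumes every accelerator in the cluster is licensed around the clock.
    gpus = 20_000
    rate_per_gpu_hour = 1.00      # the $1/hour figure mentioned above
    hours_per_year = 24 * 365

    annual_subscription_revenue = gpus * rate_per_gpu_hour * hours_per_year
    print(f"${annual_subscription_revenue:,.0f} a year")
    # -> $175,200,000 a year

In other words, a buck an hour across a 20,000-GPU cluster works out to roughly $175 million a year for Nvidia, on top of whatever the operator charges to rent out the hardware itself.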

It's not a bad deal for the datacenter operators or their financiers, either, so long as their revenues are enough to cover their loan payments.

That shouldn't be too much of a problem, according to our sibling site The Next Platform, which found that an investment of $1.5 billion to build, deploy, and network a cluster of roughly 16,000 H100s today would generate about $5.27 billion in revenues within four years.
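As a sanity check on those figures, it's worth asking what average rental rate they imply. The arithmetic below assumes the cluster is fully utilized for the entire four years, which is our simplification rather than part of The Next Platform's estimate.

    # What The Next Platform's projection implies per GPU-hour,
    # assuming full utilization over four years (our simplification).
    capex = 1.5e9                  # build, deploy, and network the cluster
    projected_revenue = 5.27e9     # over four years
    gpus = 16_000
    hours = 4 * 365 * 24           # GPU-hours per accelerator over four years

    implied_rate = projected_revenue / (gpus * hours)
    payback_multiple = projected_revenue / capex
    print(f"~${implied_rate:.2f} per GPU-hour, about {payback_multiple:.1f}x the upfront spend")
    # -> ~$9.40 per GPU-hour, about 3.5x the upfront spend

If real-world utilization comes in lower, the hourly rate has to climb accordingly to hit the same number, which is where those loan payments could start to pinch. ®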

Source: theregister.com
