Racks packing Nvidia's newest and shiniest AI supercomputer Blackwell Ultra cards have just been deployed by CoreWeave

The GB300 NVL72 racks were assembled and delivered by Dell.

Jul 8, 2025 - 11:30

Some of the newest and most powerful AI supercomputers in the world have just been brought in to serve as part of CoreWeave's cloud AI service. According to Tom's Hardware, racks packed with Nvidia Blackwell Ultra GPUs built on the GB300 NVL72 platform have been deployed at the company in conjunction with Switch, its data center host.

The rollout is a partnership between CoreWeave and Dell, with the latter delivering the integrated racks just last week. Each rack is a densely packed configuration of 72 Blackwell Ultra GPUs, prebuilt and tested by Dell before delivery so it requires minimal setup on site. Given CoreWeave has already deployed the cluster, it seems everything went to plan.

Still, I'd have been worried, as there are a lot of parts to these racks. Not only are you looking at the Blackwell Ultra cards – alongside 36 Arm-based 72-core Grace CPUs and 36 BlueField DPUs per rack – but there's also the extensive liquid cooling needed to keep up with that power draw. That's up to about 1,400 W of heat per GPU to dissipate, without having to worry about any leaks on top of that.
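A quick back-of-the-envelope from the figures above shows why the cooling is no afterthought (a rough sketch using only the numbers quoted here; real racks draw additional power for the Grace CPUs, DPUs, switches and pumps, which this ignores):

```python
# Rough estimate of GPU heat load per GB300 NVL72 rack,
# using only the figures quoted in the article:
# 72 Blackwell Ultra GPUs at up to ~1,400 W each.
GPUS_PER_RACK = 72
WATTS_PER_GPU = 1_400  # upper-end figure per GPU

gpu_heat_watts = GPUS_PER_RACK * WATTS_PER_GPU
print(f"GPU heat per rack: {gpu_heat_watts / 1000:.1f} kW")  # ~100.8 kW
```

Roughly 100 kW of heat per rack from the GPUs alone is why these configurations ship with liquid cooling rather than air.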

"It reflects the trust our customers and partners continue to place in our expertise. By seamlessly engineering the compute, the network and the storage under one roof and fine-tuning with integration and deployment services, we help our customers move at unprecedented speed and scale." reads a statement from Dell

"It is the continued innovation and speed that only we can execute that is allowing us to empower incredibly cool customers and accelerate work with partners like CoreWeave, Nvidia and many others."

CoreWeave is looking to bolster its cloud AI services with this setup, which could deliver around 50% higher performance than the previous architecture. Per rack, these new configs can put out 1.1 ExaFLOPS of dense FP4 inference and 0.36 ExaFLOPS of FP8 training performance.
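Dividing those per-rack figures by the 72 GPUs gives a sense of per-chip throughput (a sketch assuming the rack numbers split evenly across all GPUs, which ignores any interconnect or aggregation effects):

```python
# Per-GPU throughput implied by the quoted per-rack figures.
# Assumption: the rack totals divide evenly across all 72 GPUs.
GPUS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 1.1    # dense FP4 inference, per rack
RACK_FP8_EXAFLOPS = 0.36   # FP8 training, per rack

# 1 ExaFLOPS = 1,000 PetaFLOPS
fp4_per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
fp8_per_gpu_pflops = RACK_FP8_EXAFLOPS * 1000 / GPUS_PER_RACK
print(f"FP4 per GPU: {fp4_per_gpu_pflops:.1f} PFLOPS")  # ~15.3
print(f"FP8 per GPU: {fp8_per_gpu_pflops:.1f} PFLOPS")  # ~5.0
```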

The aim is to train large language models faster and more effectively, as well as to improve AI models' reasoning abilities and how they interface. The new Blackwell Ultra-powered racks should also roughly double scale-out connection speeds, reaching up to 14.4 GB/s.

More racks mean more power, so CoreWeave can always scale up for even greater flops. But given these racks are arriving just seven months after the GB200 racks were deployed, the question here doesn't feel like it's about power; it's about longevity. Upgrading from Blackwell to the Blackwell Ultra line this early must surely put some pain on CoreWeave's pockets, and how long until a Blackwell Ultra Super Extreme replaces these?