As enterprises embrace new uses for artificial intelligence, data access bottlenecks have been a limiting factor in the throughput and scalability of compute-intensive workloads. Through a major collaboration with Nvidia Corp., Dell Technologies Inc. has addressed this issue with the certification of its PowerScale portfolio for Nvidia’s DGX SuperPOD environments.
The joint initiative, which became official in March, has made Dell PowerScale the world’s first Ethernet-based storage solution to be certified on the Nvidia DGX SuperPOD platform. This certification means that users can leverage PowerScale and ubiquitous Ethernet technology for AI training, checkpointing and inferencing.
The SuperPOD initiative between the two industry powerhouses highlights how the demands of AI are leading to innovative new solutions that span chip, server and storage technologies in the modern IT portfolio.
“Organizations are rushing to experiment with AI, but there are many challenges to achieving a return on investment. Data sovereignty issues, legal and compliance concerns and data quality are all top of mind,” said theCUBE Research co-founder and Chief Analyst Dave Vellante. “Our research shows that companies are turning to industry leaders like Dell and Nvidia to help provide AI expertise and services to lower risk and get to ROI sooner.”
This feature is part of SiliconANGLE Media’s exploration of Dell’s market impact in enterprise AI. Be sure to watch theCUBE’s analyst-led presentation of “Making AI Real With Data,” a joint event with Dell and Nvidia, on October 15, along with theCUBE’s discussion of SuperPOD with Dell executives. (* Disclosure below.)
Leveraging Ethernet to accelerate generative AI workloads
Why is Ethernet an important element of the collaboration between Dell and Nvidia?
Ethernet is emerging as the preferred backbone for AI fabrics, enabling high-performance, interoperable architectures. In November, Nvidia launched its super-fast Spectrum-X Ethernet platform to accelerate generative AI workloads, and Dell was among the first hardware companies to integrate it into its server lineup. Spectrum-X is purpose-built for AI workloads and can deliver 1.6 times higher networking performance for AI than traditional Ethernet.
“We believe as AI workflows increase and become the predominant workflow throughout the data centers that customers are going to need this high bandwidth,” said Darren Miller, director of vertical industry solutions at Dell, in a conversation with theCUBE prior to the October 15 event. “They’re going to need these new high-performance Ethernet infrastructures.”
Technologies such as Nvidia Magnum IO and NFS over RDMA are natively integrated into Nvidia ConnectX NICs and Nvidia Spectrum switches, accelerating network access to storage. These advanced features further minimize data transfer times to and from PowerScale storage, ensuring faster storage throughput for AI training, checkpointing and inferencing tasks.
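Why storage throughput matters for checkpointing comes down to simple arithmetic: the longer a training job spends flushing a checkpoint, the longer its GPUs sit idle. A minimal Python sketch makes the relationship concrete (all figures are illustrative assumptions, not Dell or Nvidia specifications):

```python
def checkpoint_write_seconds(checkpoint_gb, throughput_gbps, efficiency=0.9):
    """Estimate wall-clock time to flush a training checkpoint to shared storage.

    checkpoint_gb   -- checkpoint size in gigabytes (illustrative)
    throughput_gbps -- nominal network/storage bandwidth in gigabits per second
    efficiency      -- fraction of nominal bandwidth actually achieved
    """
    effective_gbps = throughput_gbps * efficiency
    return checkpoint_gb * 8 / effective_gbps  # convert GB to gigabits

# Illustrative: a 1,000 GB checkpoint over a 100 Gb/s path at 90% efficiency
print(round(checkpoint_write_seconds(1000, 100), 1))  # ~88.9 seconds of stall
```

Halving transfer time by raising effective throughput directly halves the stall, which is why reducing protocol overhead on the storage path (as RDMA-based access aims to do) matters at training scale.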
SuperPOD solution powers AI applications
Why is this SuperPOD certification an important milestone for Dell?
Dell’s SuperPOD certification and its partnership with Nvidia underscore the importance of a strategy that brings GPUs, networking and storage together. This approach is being driven by the demands of AI and the need for a holistic solution that continues to evolve.
“GPUs are getting larger and more demanding, and the network has to keep up,” explained Varun Chhabra, senior vice president of product marketing for Dell’s infrastructure solutions group, in a recent interview with SiliconANGLE.
With PowerScale as the world’s first Ethernet storage solution certified on Nvidia DGX SuperPOD, Dell customers can realize a number of benefits. These include storage that exceeds benchmark performance requirements for DGX SuperPOD and the ability to power AI applications with a fully validated and tested reference architecture from Dell and Nvidia.
In addition, customers will be able to design, deploy and manage AI workloads for improved performance and take advantage of PowerScale’s scale-out architecture within Nvidia’s DGX SuperPOD. The solution also allows customers to run AI workloads on SuperPOD while leveraging PowerScale’s comprehensive suite of security features.
“The DGX SuperPOD scales incrementally by group sets of 32 DGX servers,” Miller said. “The scaling plays perfectly into PowerScale’s core fundamental: scalability.”
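The incremental scaling Miller describes can be sketched in a few lines of Python. The figure of 32 DGX systems per scalable unit comes from the article; the eight GPUs per DGX system is an assumption for illustration (matching, for example, a DGX H100 node):

```python
import math

DGX_PER_SCALABLE_UNIT = 32  # per the SuperPOD scaling described above
GPUS_PER_DGX = 8            # assumption, e.g. a DGX H100 system

def scalable_units_needed(target_gpus):
    """Round a GPU count up to whole SuperPOD scalable units."""
    gpus_per_unit = DGX_PER_SCALABLE_UNIT * GPUS_PER_DGX  # 256 GPUs per unit
    return math.ceil(target_gpus / gpus_per_unit)

# Illustrative: a 600-GPU target rounds up to 3 units (768 GPUs provisioned)
print(scalable_units_needed(600))  # 3
```

Because capacity grows in fixed-size steps, a scale-out storage layer that can be expanded in matching increments avoids stranding either compute or storage capacity.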
As a reference architecture from Nvidia, DGX SuperPOD is specifically designed for generative AI infrastructure. Dell’s turnkey solution with SuperPOD will enable deployment of generative AI workloads while taking advantage of Ethernet’s robust and widely used networking technology for high-speed communication.
Dell’s integration of compute, storage and networking for AIOps is part of its overall strategy to navigate the wave of innovation being driven by AI. With GPUs powering AI deployment, Dell’s partnership with Nvidia demonstrates how the world’s major tech players are building new architectures to support enterprise models that are continuing to evolve.
Be sure to tune in for theCUBE and SiliconANGLE Media’s analyst-led presentation of “Making AI Real with Data,” a joint production with Dell and Nvidia on October 15, along with theCUBE’s discussion of SuperPOD with Dell executives.
(* Disclosure: TheCUBE is a paid media partner for the “Making AI Real with Data” event. Neither Dell Technologies, the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)