Cisco Systems Inc. is expanding its data center hardware portfolio with two product lineups optimized to run artificial intelligence models.
The systems debuted today at a partner event the company is hosting in Los Angeles.
The first new product line, the UCS C885A M8 series, comprises servers that can each accommodate up to eight graphics processing units. Cisco offers three GPU options: the H100 and H200, which are both supplied by Nvidia Corp., and Advanced Micro Devices Inc.’s rival MI300X chip.
Every graphics card in a UCS C885A M8 machine has its own network interface controller, or NIC, a specialized chip that acts as an intermediary between a server and the network to which it's attached. Cisco offers a choice between two Nvidia NICs: the ConnectX-7 and the BlueField-3, a so-called SuperNIC with additional components that speed up tasks such as encrypting data traffic.
Cisco also ships its new servers with Nvidia BlueField-3 DPUs, or data processing units, which despite the shared branding are distinct from the BlueField-3 SuperNIC. The DPUs speed up some of the tasks involved in managing the network and storage infrastructure attached to a server.
A pair of AMD central processing units handles the computations that aren't offloaded to the server's more specialized chips. Customers can choose between the chipmaker's latest fifth-generation CPUs and its 2022 server processor lineup.
Cisco debuted the server series alongside four so-called AI PODs. Those are large data center appliances that combine up to 16 Nvidia graphics cards with networking equipment and other supporting components. Customers can optionally add more hardware, notably storage equipment from NetApp Inc. or Pure Storage Inc.
On the software side, the AI PODs come with a license for Nvidia AI Enterprise. This is a collection of prepackaged AI models and tools that companies can use to train their own neural networks. It also includes more specialized components, such as the Nvidia Morpheus framework for building AI-powered cybersecurity software.
The suite is complemented by two other software products: HPC-X and Red Hat OpenShift. The former is an Nvidia-developed toolkit that helps customers optimize the networks powering their AI clusters. OpenShift, in turn, is a platform that eases the task of building and deploying containerized applications.
“Enterprise customers are under pressure to deploy AI workloads, especially as we move toward agentic workflows and AI begins solving problems on its own,” said Cisco Chief Product Officer Jeetu Patel. “Cisco innovations like AI PODs and the GPU server strengthen the security, compliance, and processing power of those workloads.”
Cisco will make the AI PODs available for order next month. The UCS C885A M8 server series, in turn, is orderable now and will start shipping to customers by the end of the year.