
HPE ships NVIDIA-powered AI system with liquid cooling
Hewlett Packard Enterprise (HPE) has announced the shipment of its first solution based on the NVIDIA Blackwell family, the NVIDIA GB200 NVL72. The rack-scale system is designed to help service providers and large enterprises deploy large, complex AI clusters, and features direct liquid cooling to improve both efficiency and performance.
The NVIDIA GB200 NVL72 integrates 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, interconnected via high-speed NVIDIA NVLink. The system supports up to 13.5 TB of total HBM3e memory with 576 TB/s of aggregate bandwidth.
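A quick sanity check of the published figures: dividing the rack-level totals by the 72 GPUs gives the approximate per-GPU memory and bandwidth. The sketch below uses only the numbers quoted in the article; the per-GPU breakdown is our own arithmetic, not an HPE or NVIDIA specification.

```python
# Back-of-envelope breakdown of the GB200 NVL72 figures quoted above:
# 72 GPUs, 13.5 TB total HBM3e, 576 TB/s aggregate bandwidth.
NUM_GPUS = 72
TOTAL_HBM3E_TB = 13.5
TOTAL_BW_TB_S = 576

hbm_per_gpu_gb = TOTAL_HBM3E_TB * 1000 / NUM_GPUS  # TB -> GB
bw_per_gpu_tb_s = TOTAL_BW_TB_S / NUM_GPUS

print(f"HBM3e per GPU: {hbm_per_gpu_gb:.1f} GB")        # 187.5 GB
print(f"Bandwidth per GPU: {bw_per_gpu_tb_s:.1f} TB/s")  # 8.0 TB/s
```

The per-GPU results (roughly 187.5 GB and 8 TB/s) are consistent with the rack totals being a straight 72-way aggregate.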
Joseph Yang, General Manager of HPC and AI, APAC and India, stated: "As the demand for faster and more efficient AI workload processing surges across Asia, the need for advanced liquid cooling technology has never been greater to support the region's rapidly growing power and computing requirements. The new NVIDIA Grace Blackwell system is designed to help scale AI workloads, maximise performance, and unlock AI's full transformative potential—while addressing critical infrastructure challenges and energy efficiency needs. With over five decades of expertise in liquid cooling, HPE is uniquely positioned to support the evolving computing demands of AI and power every GenAI use case the region seeks to explore."
Trish Damkroger, Senior Vice President and General Manager of HPC & AI Infrastructure Solutions, HPE, commented: "AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment. As builders of the world's top three fastest systems with direct liquid cooling, HPE offers customers lower cost per token training and best-in-class performance with industry-leading services expertise."
The NVIDIA GB200 NVL72 system is built on a shared-memory, low-latency architecture with the latest GPU technology, designed for AI models exceeding a trillion parameters. It integrates NVIDIA CPUs, GPUs, compute and switch trays, networking, and software to boost performance on highly parallelisable workloads.
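To see why a large shared-memory domain matters for trillion-parameter models, consider the weight footprint alone. The estimate below is our own rough sketch (the 2-bytes-per-parameter FP16/BF16 assumption is ours, not from the article), compared against the 13.5 TB HBM3e total quoted for the system.

```python
# Rough sketch (assumptions ours): weight memory for a trillion-parameter
# model versus the 13.5 TB HBM3e quoted for the GB200 NVL72.
PARAMS = 1.0e12          # one trillion parameters
BYTES_PER_PARAM = 2      # assuming FP16/BF16 weights

weights_tb = PARAMS * BYTES_PER_PARAM / 1e12
system_hbm_tb = 13.5     # total HBM3e quoted in the article

print(f"Weights alone: {weights_tb:.1f} TB")              # 2.0 TB
print(f"Fits in the {system_hbm_tb} TB NVLink domain: "
      f"{weights_tb < system_hbm_tb}")                    # True
```

At 2 TB, the weights of a trillion-parameter model fit comfortably within the single NVLink-connected memory domain, though activations, optimizer state, and KV caches add substantially to real-world requirements.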
Bob Pette, Vice President of Enterprise Platforms at NVIDIA, said: "Engineers, scientists and researchers need cutting-edge liquid cooling technology to keep up with increasing power and compute requirements. Building on continued collaboration between HPE and NVIDIA, HPE's first shipment of NVIDIA GB200 NVL72 will help service providers and large enterprises efficiently build, deploy and scale large AI clusters."
With five decades of experience in liquid cooling, HPE is positioned to deliver fast deployment and infrastructure support for liquid-cooled environments, a record reflected in its recognition for energy-efficient supercomputers: the company built eight of the top 15 supercomputers on the Green500 list and seven of the top 10 fastest supercomputers worldwide.
Beyond the NVLink-connected Blackwell GPUs and Grace CPUs, HPE highlights its direct liquid cooling technology as the system's pivotal feature.
HPE supports tailored AI clusters globally with extensive serviceability options, including expert on-site support, custom HPC &amp; AI support services, performance and benchmarking engagements, and sustainability services aimed at reducing environmental impact.
The shipment of the NVIDIA GB200 NVL72 expands HPE's portfolio of high-performance computing systems, which is designed to meet diverse needs in areas such as GenAI and scientific research.