DataCenterNews Asia Pacific - Specialist news for cloud & data center decision-makers

MSI unveils new modular server & AI platforms for data centres

Thu, 20th Nov 2025

MSI has launched a range of next-generation server and artificial intelligence platforms designed to address evolving demands in data centres. The new products are built on modular architecture with an emphasis on scalability, performance, and energy efficiency for large-scale compute, analytics, and cloud environments.

Collaboration partners

MSI's new solutions have been developed in conjunction with established partners across the processor and AI hardware industry, including AMD, Intel, and NVIDIA.

"Through close collaboration with industry leaders AMD, Intel, and NVIDIA, MSI continues to drive innovation across the data center ecosystem. Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale," said Danny Hsu, General Manager of Enterprise Platform Solutions, MSI.

DC-MHS architecture

The company's servers are based on the DC-MHS (Datacenter Modular Hardware System) architecture. This design standardises compute nodes, hardware modules, and management systems, aiming to reduce operational complexity. MSI's use of the DC-MHS platform enables the reconfiguration of core compute, open compute, and AI servers while maintaining thermal efficiency. These servers include support for EVAC CPU heatsinks, helping maintain temperature control under heavy workloads often found in large language model training and analytics.

Rack-level density

The new ORv3 21-inch 44OU rack integrates sixteen CD281-S4051-X2 2OU DC-MHS servers in a space-optimised enclosure. Power delivery, cooling, and networking are centralised within the rack, with features such as 48V power shelves and accessible front-facing I/O. According to MSI, these improvements are intended to increase density and simplify maintenance in hyperscale deployments.
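As a back-of-the-envelope check on those figures, the rack maths works out as follows; note that how the leftover space is allocated (power shelves, switches, cooling) is an assumption, not something MSI specifies:

```python
# Rack occupancy arithmetic from the figures in the announcement.
servers = 16           # CD281-S4051-X2 servers per rack
server_height_ou = 2   # each server occupies 2OU
rack_height_ou = 44    # ORv3 21-inch enclosure height

occupied_ou = servers * server_height_ou     # 32 OU of compute
remaining_ou = rack_height_ou - occupied_ou  # 12 OU left over
# The remaining 12 OU plausibly houses the 48V power shelves and
# networking mentioned above (assumption - not stated by MSI).
print(occupied_ou, remaining_ou)
```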

Core Compute servers

MSI's Core Compute range includes various configurations powered by AMD EPYC 9005 Series and Intel Xeon 6 processors. The servers offer options such as a 2U chassis with either two or four nodes. AMD-based configurations provide up to 12 DDR5 DIMM slots per node and flexible storage across high-speed NVMe bays, while Intel-based equivalents scale memory up to 16 DDR5 DIMM slots per node and offer the same storage architecture. These designs are tailored for high-density environments and sustained thermal performance, supporting processors up to 500W TDP.

Enterprise servers

The enterprise servers are also built on DC-MHS principles and are designed for cloud, virtualisation, and storage workloads. AMD single-socket systems come in 1U and 2U variants and can accommodate up to 24 DDR5 DIMM slots and multiple NVMe bay options. The Intel dual-socket models extend memory support to 32 DDR5 DIMM slots. These systems are suited to use cases requiring both memory and throughput scalability.

AI platforms

MSI's AI portfolio includes rackmount servers and workstation solutions based on NVIDIA's MGX and DGX reference architectures. The new offerings are compatible with the latest NVIDIA Hopper GPUs, as well as NVIDIA Blackwell platforms, and can support up to 600W GPUs. Applications range from large-scale AI training and deep learning to edge inferencing and graphical workloads.

Among the available AI-optimised servers, the dual-processor models provide up to eight PCIe x16 GPU slots, support for up to 32 DDR5 DIMMs, and several high-speed Ethernet ports. A 2U edge computing server is also available for smaller-scale inference and deployment needs. The desktop AI Station employs the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, delivering up to 784GB of unified memory at the developer's desktop for model training and inference.

Data centre flexibility

MSI states that the modular and scalable nature of these new platforms is intended to help data centre operators rapidly adapt to emerging application needs in artificial intelligence and cloud workloads. The company points to improved thermal designs and simplified maintenance as central pillars of the new range.
