NVIDIA launches next-gen AI 'data center in a box'
NVIDIA has unveiled details of its new ‘data center in a box’: an artificial intelligence (AI) system designed to support organisations working from research facilities, offices, laboratories, and home offices.
The NVIDIA DGX Station A100 is, according to NVIDIA, a petascale workgroup server fitted with four NVIDIA A100 Tensor Core graphics processing units (GPUs), which are interconnected with NVIDIA NVLink. It can provide up to 320 gigabytes of GPU memory.
Furthermore, the workgroup server supports NVIDIA’s multi-instance GPU (MIG) technology, allowing it to be partitioned into as many as 28 separate GPU instances to run parallel jobs for multiple users without affecting system performance.
NVIDIA states that while the DGX Station A100 does not require data center-grade power or cooling, it can still be managed remotely in the same way as NVIDIA’s data center systems.
The server-class system enables system administrators to perform management tasks over a remote connection.
“DGX Station A100 brings AI out of the data center with a server-class system that can plug in anywhere,” explains NVIDIA DGX Systems vice president and general manager Charlie Boyle.
“Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment.”
NVIDIA also states that the workgroup server is more than four times faster than the previous-generation DGX Station.
Organisations including NTT Docomo, Lockheed Martin, and BMW Group Production are using NVIDIA DGX Station for their operations.
DGX Station A100 is available with four 80GB or 40GB NVIDIA A100 Tensor Core GPUs, providing options for data science and AI research teams to select a system according to their unique workloads and budgets.
NVIDIA DGX Station A100 and NVIDIA DGX A100 640GB systems will be available this quarter through NVIDIA Partner Network resellers worldwide.
NVIDIA will also provide an upgrade option for NVIDIA DGX A100 320GB customers.
For more advanced data center workloads, DGX A100 systems will be available with the new NVIDIA A100 80GB GPUs, doubling GPU memory capacity to 640GB per system. The United Kingdom’s Cambridge-1 supercomputer and the University of Florida’s HiPerGator AI supercomputer will be among the first to receive the 640GB systems.
NVIDIA DGX A100 640GB systems can also be integrated into the NVIDIA DGX SuperPOD Solution for Enterprise, which supports building, training, and deploying AI models on turnkey AI supercomputers, available in units of 20 DGX A100 systems.