DataCenterNews Asia Pacific - Specialist news for cloud & data center decision-makers
NVIDIA smashes MLPerf benchmarks for AI
Fri, 8th Nov 2019

NVIDIA has achieved its fastest results yet for AI inference workloads in data centers and at the edge.

MLPerf Inference 0.5 is the industry's first independent suite of AI benchmarks for inference. The benchmarks cover a range of form factors and inferencing scenarios for AI operations such as image classification, object detection, and translation.

NVIDIA Turing GPUs for data centers and NVIDIA Xavier system-on-a-chip for edge computing topped all five MLPerf benchmark tests, the company reports.

NVIDIA was the only AI platform company to submit results across all five MLPerf benchmarks.

Turing GPUs reportedly provided the highest performance per processor amongst commercially available entries, while Xavier performed highest amongst commercially available edge and mobile SoCs under both edge-focused scenarios (single-stream and multi-stream).
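The two edge-focused scenarios measure different things: single-stream issues one query at a time and reports a tail-latency percentile (MLPerf Inference 0.5 uses the 90th), while multi-stream reports how many concurrent streams a system can sustain within a fixed latency bound. A minimal sketch of how those two metrics could be scored, using a hypothetical `run_inference` stand-in for a real accelerator (not NVIDIA's actual harness):

```python
import random

def run_inference(num_streams: int) -> float:
    """Hypothetical stand-in for a real model: returns the latency (ms)
    of one query when `num_streams` streams run concurrently.
    Latency simply grows with load here; a real system would measure it."""
    return 5.0 * num_streams + random.uniform(0.0, 1.0)

def single_stream_latency(queries: int = 1000, percentile: float = 0.90) -> float:
    """Single-stream scenario: one query at a time, report a
    tail-latency percentile (the 90th in MLPerf Inference 0.5)."""
    latencies = sorted(run_inference(1) for _ in range(queries))
    return latencies[int(percentile * len(latencies)) - 1]

def multi_stream_capacity(latency_bound_ms: float = 50.0) -> int:
    """Multi-stream scenario: find the largest number of concurrent
    streams whose observed query latency stays within the bound."""
    streams = 1
    while max(run_inference(streams) for _ in range(100)) <= latency_bound_ms:
        streams += 1
    return streams - 1
```

With the toy 5 ms-per-stream cost above and a 50 ms bound, the capacity search settles at nine streams; on real hardware the same search is driven by measured latencies rather than a formula.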

All of NVIDIA's MLPerf results were achieved using NVIDIA TensorRT 6, high-performance deep learning inference software that optimizes AI applications and deploys them in production, from the data center to the edge. The new TensorRT optimizations are also available as open source in the GitHub repository.

NVIDIA's general manager and vice president of accelerated computing, Ian Buck, says AI is now at a tipping point as it moves from research to large-scale deployment for real applications.

“AI inference is a tremendous computational challenge. Combining the industry's most advanced programmable accelerator, the CUDA-X suite of AI algorithms and our deep expertise in AI computing, NVIDIA can help data centers deploy their large and growing body of complex AI models.”

NVIDIA says that GPUs accelerate large-scale inference workloads in the world's largest cloud infrastructures, including Alibaba Cloud, AWS, Google Cloud Platform, Microsoft Azure and Tencent. AI is now moving to the edge at the point of action and data creation.

NVIDIA also announced Jetson Xavier NX, a small but powerful AI supercomputer for robotic and embedded computing devices at the edge. It joins other solutions in the Jetson family, including the Jetson Nano, Jetson AGX Xavier series, and the Jetson TX2 series.

The Jetson Xavier NX is designed for embedded edge computing devices that demand increased performance but are constrained by size, weight, power budget or cost. These include small commercial robots, drones, intelligent high-resolution sensors for factory logistics and production lines, optical inspection, network video recorders, portable medical devices and other industrial IoT systems.

The Jetson Xavier NX module will be available in March from NVIDIA's distribution channels for companies looking to create high-volume production edge systems.