GTC18 - NVIDIA ready to go all out on inferencing

Wed, 28th Mar 2018

NVIDIA has announced a series of new technologies and partnerships that expand its potential inference market while lowering the cost of delivering deep learning-powered services.

An inference engine is the component of an AI system that works out new information from fresh data, based on a set of rules and what it already knows.

"GPU acceleration for production deep learning inference enables even the largest neural networks to be run in real-time and at the lowest cost," says NVIDIA accelerated computing vice president and general manager lan Buck.

"With rapidly expanding support for more intelligent applications and frameworks, we can now improve the quality of deep learning and help reduce the cost for 30 million hyperscale servers."

TensorRT 4, the latest iteration of the inference optimiser, offers highly accurate INT8 and FP16 network execution and can be used to optimise, validate and deploy trained neural networks in hyperscale data centers.
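
As a rough illustration of that workflow, the sketch below hands a trained ONNX model to TensorRT and requests FP16 kernels. It uses a later TensorRT Python API than the version 4 described here, and the model file name is purely hypothetical.

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    def build_fp16_engine(onnx_path):
        """Parse a trained ONNX model and build a serialized TensorRT engine with FP16 enabled."""
        builder = trt.Builder(TRT_LOGGER)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, TRT_LOGGER)
        with open(onnx_path, "rb") as f:
            parser.parse(f.read())                 # import the trained network
        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)      # allow reduced-precision kernels
        return builder.build_serialized_network(network, config)

    engine_bytes = build_fp16_engine("model.onnx")  # hypothetical model file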

NVIDIA Tesla GPU-accelerated servers can replace several racks of CPU servers for deep learning inference applications and services, freeing up rack space and reducing energy and cooling requirements.

The company says that the new software delivers up to 190x faster deep learning inference compared with CPUs for common applications such as computer vision, neural machine translation, automatic speech recognition, speech synthesis and recommendation systems.

Google and NVIDIA engineers have also integrated TensorRT into TensorFlow 1.7, making it easier to run deep learning inference applications on GPUs.

"The TensorFlow team is collaborating very closely with NVIDIA to bring the best performance possible on NVIDIA GPUs to the deep learning community," says Google engineering director Rajat Monga.

"TensorFlow's integration with TensorRT now delivers up to 8x higher inference throughput (compared to regular GPU execution within a low latency target) on NVIDIA deep learning platforms with Volta Tensor Core technology, enabling the highest performance for GPU inference within TensorFlow."

NVIDIA engineers have worked with Amazon, Facebook and Microsoft to ensure developers using ONNX-compatible frameworks such as Caffe2, Chainer, CNTK, MXNet and PyTorch can now deploy to NVIDIA deep learning platforms.
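
For PyTorch and the other ONNX-capable frameworks, the usual route is to export the trained model to the ONNX format so a TensorRT-based runtime can pick it up. A minimal sketch, with the model choice and file name chosen only for illustration:

    import torch
    import torchvision

    # Export a trained model to the ONNX interchange format.
    model = torchvision.models.resnet50(pretrained=True).eval()
    dummy_input = torch.randn(1, 3, 224, 224)   # example input shape for ResNet-50
    torch.onnx.export(model, dummy_input, "resnet50.onnx",
                      input_names=["input"], output_names=["output"])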

NVIDIA partnered with Microsoft to build GPU-accelerated tools to help developers incorporate more intelligent features in Windows applications.

GPU acceleration for Kubernetes was also announced, which will facilitate enterprise inference deployment on multi-cloud GPU clusters.

NVIDIA is contributing GPU enhancements to the open source community to support the Kubernetes ecosystem.
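
On Kubernetes, GPUs surface as a schedulable resource (nvidia.com/gpu) once NVIDIA's device plugin is installed on the cluster. The sketch below uses the Kubernetes Python client to request one GPU for a hypothetical inference container; the image tag and pod details are illustrative.

    from kubernetes import client, config

    # Assumes a cluster where NVIDIA's device plugin exposes GPUs
    # as the schedulable resource "nvidia.com/gpu".
    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="trt-inference"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="inference-server",
                image="nvcr.io/nvidia/tensorrt:latest",   # illustrative image tag
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}))]))   # request one GPU

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)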

In addition, MathWorks announced TensorRT integration with MATLAB.

Engineers and scientists can now automatically generate high-performance inference engines from MATLAB for Jetson, NVIDIA Drive and Tesla platforms.

TensorRT can also be deployed on NVIDIA Drive autonomous vehicles and NVIDIA Jetson embedded platforms.

Deep neural networks built in any framework can be trained on NVIDIA DGX systems in the data center and then deployed to all types of devices for real-time inference at the edge.
