
NVIDIA's Tesla P100 delivers massive leap in data center performance

Fri, 12th Aug 2016

NVIDIA has recently introduced the Tesla P100 GPU accelerator for PCIe servers.

NVIDIA says the accelerator is aimed at delivering a massive leap in performance and value at a time when demand for supercomputing cycles is higher than ever.

According to the company, high performance computing (HPC) technologies are increasingly required to power computationally intensive deep learning applications, while researchers are applying AI techniques to drive advances in their fields.

The company claims the Tesla P100 GPU accelerator for PCIe is positioned to meet these computational demands with unmatched performance and efficiency.

The product is also optimized to power the most computationally intensive AI and HPC data center applications.

According to NVIDIA, key features of the Tesla P100 for PCIe include:

  • Unmatched application performance for mixed HPC workloads - Delivering 4.7 teraflops of double-precision and 9.3 teraflops of single-precision peak performance, a single Pascal-based Tesla P100 node provides the equivalent performance of more than 32 commodity CPU-only servers (a rough check of those peak figures follows this list).
  • CoWoS with HBM2 for unprecedented efficiency - Using Chip-on-Wafer-on-Substrate packaging with HBM2 memory, the Tesla P100 unifies processor and data into a single package to deliver unprecedented compute efficiency.
  • Page Migration Engine for simplified parallel programming - Frees developers to spend more time tuning for higher performance and less on managing data movement, and allows applications to scale beyond the GPU's physical memory size with support for virtual memory paging (a minimal code sketch follows this list).
  • Unmatched application support - With 410 GPU-accelerated applications, including nine of the top 10 HPC applications, the Tesla platform is the world's leading HPC computing platform.
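As a rough back-of-the-envelope check on the peak figures quoted above (assuming the PCIe card's 3584 FP32 CUDA cores and a boost clock of roughly 1.3 GHz, specifications not stated in this article), the numbers line up as follows:

    peak FP32 ≈ 3584 cores × 1.30 GHz × 2 FLOPs per fused multiply-add ≈ 9.3 teraflops
    peak FP64 ≈ half the FP32 rate ≈ 4.7 teraflops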
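To illustrate the Page Migration Engine point above, here is a minimal CUDA sketch (not NVIDIA's code; the kernel name and sizes are illustrative) of a Unified Memory allocation that is larger than the GPU's physical memory. On Pascal-class hardware the runtime pages data between host and device on demand, so the allocation does not have to fit in the card's HBM2:

    // Minimal sketch: Unified Memory oversubscription on a Pascal GPU such as the P100.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Multiplies every element of a managed array by a factor.
    __global__ void scale(float *data, size_t n, float factor)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main()
    {
        // 24 GiB of floats: deliberately larger than the P100's 16 GB of HBM2,
        // so the Page Migration Engine must page data in and out on demand.
        size_t n = 6ULL * 1024 * 1024 * 1024;
        float *data = nullptr;
        cudaMallocManaged(&data, n * sizeof(float)); // one pointer, visible to CPU and GPU

        for (size_t i = 0; i < n; ++i)
            data[i] = 1.0f;                          // first touched on the CPU

        unsigned blocks = (unsigned)((n + 255) / 256);
        scale<<<blocks, 256>>>(data, n, 2.0f);       // pages migrate to the GPU as touched
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);           // pages migrate back on CPU access
        cudaFree(data);
        return 0;
    }

Without the Pascal paging support described above, an allocation this size would either fail outright or require the developer to stage data between host and device by hand.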

Ian Buck, vice president of accelerated computing at NVIDIA, says accelerated computing is the only path forward to keep up with the insatiable demand for HPC and AI supercomputing.

"Deploying CPU-only systems to meet this demand would require large numbers of commodity compute nodes, leading to substantially increased costs without proportional performance gains," says Buck.

"Dramatically scaling performance with fewer, more powerful Tesla P100-powered nodes puts more dollars into computing instead of vast infrastructure overhead."
