
NVIDIA unveils its first GPU featuring Ampere architecture

NVIDIA has today announced that the very first GPU based on the company’s Ampere architecture is generally available and shipping to customers worldwide.

The GPU, called NVIDIA A100, unifies AI training and inference, and boasts performance up to 20 times greater than its predecessors.

It comes as the world's demand for data sees an unprecedented surge, with people across the globe staying home and relying on tools powered by the cloud.

The NVIDIA A100 features multi-instance GPU capability, allowing it to be partitioned into as many as seven independent instances for inferencing tasks, while third-generation NVIDIA NVLink interconnect technology allows multiple A100 GPUs to operate as one giant GPU for ever-larger training tasks.

NVIDIA says almost all major cloud providers expect to incorporate the GPU into their offerings, including Azure, AWS, Google Cloud, Alibaba Cloud, Oracle, and more.

A universal workload accelerator, the A100 is also built for data analytics, scientific computing and cloud graphics.

“The powerful trends of cloud computing and AI are driving a tectonic shift in data centre designs so that what was once a sea of CPU-only servers is now GPU-accelerated computing,” says NVIDIA founder and CEO Jensen Huang. 

“NVIDIA A100 GPU is a 20x AI performance leap and an end-to-end machine learning accelerator — from data analytics to training to inference.

“[It] will simultaneously boost throughput and drive down the cost of data centres.”
 

NVIDIA says its newest GPU delivers five key breakthroughs:

  1. NVIDIA Ampere architecture — At the heart of A100 is the NVIDIA Ampere GPU architecture, which contains more than 54 billion transistors, making it the world’s largest 7-nanometer processor.
     
  2. Third-generation Tensor Cores with TF32 — The Tensor Cores are now more flexible, faster and easier to use. Their expanded capabilities include new TF32 for AI, which allows for up to 20x the AI performance of FP32 precision, without any code changes. Tensor Cores also now support FP64, delivering up to 2.5x more compute than the previous generation for HPC applications.
     
  3. Multi-instance GPU — MIG, a new technical feature, enables a single A100 GPU to be partitioned into as many as seven separate GPUs so it can deliver varying degrees of compute for jobs of different sizes, providing optimal utilisation and maximising return on investment.
     
  4. Third-generation NVIDIA NVLink — Doubles the high-speed connectivity between GPUs to provide efficient performance scaling in a server.
     
  5. Structural sparsity — This new efficiency technique harnesses the inherently sparse nature of AI math to double performance.
     
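The structural sparsity technique NVIDIA refers to is its 2:4 fine-grained pattern, in which at most two of every four consecutive weights are non-zero, letting the Tensor Cores skip the zeroed entries. As an illustrative sketch only (the A100 enforces this pattern in hardware; the function name here is hypothetical), pruning a dense weight matrix to 2:4 sparsity might look like this in NumPy:

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Zero out the two smallest-magnitude values in every group of
    four consecutive weights (the 2:4 structured-sparsity pattern)."""
    flat = weights.reshape(-1, 4)
    # Indices of the two smallest |w| in each group of four.
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    pruned = flat.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(weights.shape)

w = np.array([[0.9, -0.1, 0.05, -0.7],
              [0.2,  0.8, -0.3,  0.01]])
print(prune_2_4(w))
# Each row of four keeps only its two largest-magnitude weights.
```

Halving the non-zero count this way is what allows the hardware to double effective math throughput on the pruned matrix.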

Cloud providers are on board

Microsoft will be one of the first companies to take advantage of the A100, using it to enable better training and bolster Azure’s performance and scalability.

“Microsoft trained Turing Natural Language Generation, the largest language model in the world, at scale using the current generation of NVIDIA GPUs,” says Microsoft corporate vice president Mikhail Parakhin.

“Azure will enable training of dramatically bigger AI models using NVIDIA’s new generation of A100 GPUs to push the state of the art on language, speech, vision and multi-modality.”
