
AWS EC2 strengthens compute power for ML applications

Tue, 24th Sep 2019

Amazon Web Services now offers Amazon Elastic Compute Cloud (EC2) G4 Instances for organisations that require cloud compute for machine learning and graphics-intensive applications.

The G4 instances are aimed at computationally demanding tasks that benefit from additional GPU acceleration.

The G4 instances feature the latest-generation NVIDIA T4 GPUs, as well as custom 2nd Generation Intel Xeon Scalable (Cascade Lake) processors, up to 100 Gbps of networking throughput, and up to 1.8 TB of local NVMe storage.

According to Amazon Web Services (AWS), G4 instances can provide cost-effective machine learning for applications such as adding metadata to images, object detection, speech recognition, and more.

Additionally, G4 instances can support graphics-intensive applications such as photorealistic design, video transcoding, and cloud-based game streaming.

AWS explains, "Machine learning involves two processes that require compute – training and inference.

"Training entails using labelled data to create a model that is capable of making predictions, a compute-intensive task that requires powerful processors and high-speed networking. Inference is the process of using a trained machine learning model to make predictions, which typically requires processing a lot of small compute jobs simultaneously, a task that can be most cost-effectively handled by accelerating computing with energy-efficient NVIDIA GPUs.

AWS compute services vice president Matt Garman says the company focuses on delivering services that help customers take advantage of compute-intensive applications.

 "AWS offers the most comprehensive portfolio to build, train, and deploy machine learning models powered by Amazon EC2's broad selection of instance types optimized for different machine learning use cases. With new G4 instances, we're making it more affordable to put machine learning in the hands of every developer. And with support for the latest video decode protocols, customers running graphics applications on G4 instances get superior graphics performance over G3 instances at the same cost.

Customers with machine learning workloads can launch G4 instances using Amazon SageMaker or AWS Deep Learning AMIs, which include machine learning frameworks such as TensorFlow, TensorRT, MXNet, PyTorch, Caffe2, CNTK, and Chainer. G4 instances will also support Amazon Elastic Inference in the coming weeks. AWS says this will allow developers to dramatically reduce the cost of inference by up to 75% by provisioning just the right amount of GPU performance.
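For developers launching a G4 instance directly rather than through SageMaker, a minimal boto3 sketch along the following lines would start a single instance. The AMI ID, key pair, and region shown are placeholders, not real values; in practice the image would be a Deep Learning AMI for your region.

# Hypothetical sketch: launch a single g4dn instance with boto3.
# The AMI ID, key pair name, and region are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # e.g. a Deep Learning AMI for your region
    InstanceType="g4dn.xlarge",        # smallest G4 size; larger sizes add vCPUs and GPUs
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)

print("Launched G4 instance:", response["Instances"][0]["InstanceId"])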

Customers with graphics and streaming applications can launch G4 instances using Windows, Linux, or AWS Marketplace AMIs from NVIDIA with NVIDIA Quadro Virtual Workstation software preinstalled.

A bare metal version will be available in the coming months. G4 instances are available in the Asia Pacific (Seoul and Tokyo), US East (N. Virginia, Ohio), US West (Oregon, N. California), and Europe (Frankfurt, Ireland, London) Regions, with availability in additional regions planned in the coming months.

G4 instances can be purchased as On-Demand Instances, Reserved Instances, or Spot Instances.
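As an illustration of the Spot option, the same boto3 run_instances call shown earlier can be pointed at Spot capacity via the InstanceMarketOptions parameter; the AMI ID and key pair remain placeholders.

# Hypothetical sketch: request the same instance type as Spot capacity.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder AMI ID
    InstanceType="g4dn.xlarge",
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},  # request Spot instead of On-Demand
)

print(response["Instances"][0]["InstanceId"])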
