
AWS touts ‘most cost effective’ instance for machine learning inference and training

Mon 23 Sep 2019

Amazon announces general availability of competitively priced T4 GPU instances

While the cloud has enabled companies of all stripes to harness the power of machine learning, AI workloads like machine learning training and inference are computationally demanding and expensive.

According to some estimates, machine learning inference, the process of using a trained model to make predictions, can represent up to 90 percent of the overall operational cost of running ML workloads. Cloud-based GPU accelerators have emerged as the answer to this problem, but they typically come at significant expense. Enter AWS.

AWS, which has a track record of undercutting rivals’ cloud pricing, today announced the general availability of Amazon EC2 G4, an instance family that the cloud giant claims provides the industry’s most cost-effective compute for machine learning inference and other graphics-intensive applications.

The new G4 instances feature the latest-generation Nvidia T4 GPUs, custom 2nd Generation Intel Xeon Scalable (Cascade Lake) processors, up to 100 Gbps of networking throughput, and up to 1.8 TB of local NVMe storage. That adds up to 65 TFLOPS of mixed-precision performance; mixed precision is an increasingly popular approach to ML training in which lower-precision arithmetic is used to speed up training while preserving accuracy.
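
For readers unfamiliar with the technique, the snippet below is a minimal sketch of mixed-precision training using PyTorch’s automatic mixed precision (AMP) utilities; the model, data and hyperparameters are illustrative placeholders and are not tied to AWS’s own tooling.

import torch
import torch.nn as nn

# Illustrative model and dummy data -- placeholders, not a real workload.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inputs = torch.randn(64, 512, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# GradScaler rescales the loss so float16 gradients do not underflow.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    optimizer.zero_grad()
    # autocast runs eligible ops in float16 while keeping the rest in float32.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()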

While the press release announcing the instances speaks at length about their affordability compared to other cloud offerings, we had to dig deeper to discover the actual costs involved.

Pricing for the g4dn.xlarge, a four virtual core instance with one GPU and 16GB of memory, starts at $0.526 per hour. By comparison, Google’s single-GPU V100 instance, also with 16GB of memory, costs $0.74 per hour under the company’s pre-emptible GPU usage model.

The AWS G4 eight virtual core instance with 32GB of RAM will set you back $0.752 per hour, significantly less than the £2.2807 per hour Microsoft asks for its six-core NC6s v3 instance on Azure (although that instance comes with 112GB of memory).
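
Taken at face value, those hourly rates extrapolate to the rough monthly figures produced by the quick Python sketch below, which assumes roughly 730 hours of continuous use per month; actual bills vary by region, usage model and currency, and note that the Azure figure is quoted in GBP rather than USD.

# Back-of-the-envelope extrapolation of the hourly rates quoted above.
HOURS_PER_MONTH = 730  # assumed continuous use; illustrative only

hourly_rates = {
    "AWS g4dn.xlarge (4 vCPU, 1 GPU, 16GB)": ("USD", 0.526),
    "Google single-GPU V100 (pre-emptible)": ("USD", 0.74),
    "AWS G4 (8 vCPU, 1 GPU, 32GB)": ("USD", 0.752),
    "Azure NC6s v3 (6 core, 112GB)": ("GBP", 2.2807),
}

for instance, (currency, rate) in hourly_rates.items():
    monthly = rate * HOURS_PER_MONTH
    print(f"{instance}: {rate} {currency}/hour -> ~{monthly:,.2f} {currency}/month")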

In addition to supporting ML tasks like object detection and speech recognition, AWS said the new instances provide a cost-effective means of building and running graphics-intensive applications, such as photo-realistic design and game streaming in the cloud. The company said the instances offer up to a 1.8x increase in graphics performance and up to 2x the video transcoding capability of the previous generation.

“With new G4 instances, we’re making it more affordable to put machine learning in the hands of every developer,” said Matt Garman, VP of Compute Services at AWS. “And with support for the latest video decode protocols, customers running graphics applications on G4 instances get superior graphics performance over G3 instances at the same cost.”


Tags: GPUs, machine learning