Google makes bare metal data centre GPUs available for machine learning
Wed 22 Feb 2017
In a boost for the hybrid space, Google is making GPUs in its data centres directly available for researchers and companies engaged in machine learning and other computationally intensive tasks.
The move follows similar offerings from Amazon Web Services and Microsoft over the last year, and allows users to attach up to 8 GPUs, spread across 4 K80 boards, to the Google Compute Engine virtual machine of their choice.
Nvidia-based GPU VMs are available in the asia-east1, us-east1, and europe-west1 regions, with Cloud Console support arriving next week.
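A GPU-backed instance of this kind would be requested through the `gcloud` command line. The sketch below is illustrative only; the instance name, machine type, and zone are assumptions, and the exact flag set may differ from what Google ships at launch.

```shell
# Create a VM with 8 K80 GPUs attached (hypothetical name and zone).
# GPU instances cannot live-migrate, so maintenance must terminate them.
gcloud beta compute instances create gpu-worker-1 \
    --machine-type n1-standard-8 \
    --zone us-east1-d \
    --accelerator type=nvidia-tesla-k80,count=8 \
    --maintenance-policy TERMINATE \
    --restart-on-failure
```

The `count` value can be lowered to 1, 2, or 4 for smaller workloads, since GPUs are billed individually rather than per board.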
The move towards renting spare capacity is of particular interest to the scientific and development-laboratory communities, since assembling, and especially cooling, a local GPU cluster is an expensive and inflexible commitment for what may be a short-lived or varied set of projects.
Last September AWS launched cloud GPU offerings with its P2 VM instances, also built on Tesla K80 GPUs, and in December Microsoft made the same hardware available on Azure.
GPU usage in one of Google's data centres is priced at $0.70 USD per GPU per hour in the U.S., and $0.77 USD per GPU per hour in Europe and Asia. That granular, metered pricing is a challenge to Microsoft, whose GPU capacity can reportedly only be hired by the month, at around $700, and to Amazon, whose comparable instances start at $0.90 USD per hour.
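The arithmetic behind that pricing challenge is easy to check. Using the rates quoted above (and assuming they are all per single GPU), a short calculation shows that even a GPU running flat-out all month on Google's metered rate undercuts a $700 flat monthly fee:

```python
# Comparing metered per-hour GPU pricing with a flat monthly rate,
# using the figures quoted in the article (assumed to be per GPU).
GOOGLE_US_PER_HOUR = 0.70   # USD per GPU-hour, U.S. regions
AWS_PER_HOUR = 0.90         # USD per GPU-hour, starting rate
FLAT_PER_MONTH = 700.00     # USD, flat monthly hire

HOURS_PER_MONTH = 730       # average hours in a month

google_monthly = GOOGLE_US_PER_HOUR * HOURS_PER_MONTH
aws_monthly = AWS_PER_HOUR * HOURS_PER_MONTH
breakeven_hours = FLAT_PER_MONTH / GOOGLE_US_PER_HOUR

print(f"Google, GPU busy all month: ${google_monthly:.2f}")   # $511.00
print(f"AWS, GPU busy all month:    ${aws_monthly:.2f}")      # $657.00
print(f"Hours before metered cost reaches $700: {breakeven_hours:.0f}")  # 1000
```

The break-even point, 1,000 hours, exceeds the length of a month, so under these assumed rates the metered option is cheaper at any utilisation level.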
Fields of research and business likely to avail themselves of dedicated GPU time include seismic analysis, genomics, molecular modelling, fluid dynamics and computational chemistry.
Though some have noted that IBM SoftLayer was first to market with the idea, launching its own GPU services in the summer of 2015, recent research suggests its offering is the least effective of the group.
The Tesla K80 boards underpinning this trend each pair two GPUs, for a combined 4,992 CUDA cores, 24 GB of GDDR5 memory and up to 480 GB/s of aggregate bandwidth; each virtual GPU in Google's product corresponds to one half of a board, with 2,496 cores and 12 GB of memory.