New IBM Linux servers custom-made for AI, deep learning and data centre efficiency
Thu 8 Sep 2016
IBM has launched a new range of Linux-based servers which have been specifically engineered for high performance in tasks related to artificial intelligence, deep learning and advanced analytics – with a central mission to increase data centre efficiency. Early tests with Tencent showed a threefold performance increase, even when running on two-thirds as many servers.
The perhaps inelegantly named IBM Power Systems S822LC for High Performance Computing represents a collaboration with Nvidia, with the latter’s NVLink high-speed interconnect facilitating a far greater CPU/GPU throughput than is currently possible over a PCIe bus.
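To put the interconnect claim in perspective, a rough back-of-envelope sketch is possible. The nominal figures below are assumptions drawn from the published peak bandwidths of the era's hardware (roughly 16 GB/s for a PCIe 3.0 x16 link, and around 80 GB/s for the aggregated NVLink lanes to a Tesla P100), not from the article itself; real-world transfer rates would be lower than either peak.

```python
# Idealised host-to-GPU transfer-time comparison. Bandwidth figures are
# nominal theoretical peaks (assumed, not measured):
PCIE3_X16_GBPS = 16.0   # PCIe 3.0 x16, ~16 GB/s peak per direction
NVLINK_GBPS = 80.0      # four NVLink 1.0 links at ~20 GB/s each

def transfer_seconds(gigabytes: float, bandwidth_gbps: float) -> float:
    """Idealised time to move `gigabytes` at a peak of `bandwidth_gbps` GB/s."""
    return gigabytes / bandwidth_gbps

# e.g. staging an 8 GB slice of training data onto the GPU:
batch_gb = 8.0
print(f"PCIe 3.0 x16: {transfer_seconds(batch_gb, PCIE3_X16_GBPS):.3f} s")
print(f"NVLink:       {transfer_seconds(batch_gb, NVLINK_GBPS):.3f} s")
```

On these nominal numbers the same transfer completes roughly five times faster over NVLink, which is the kind of gap that matters when deep-learning workloads shuttle large tensors between CPU and GPU memory continuously.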
The servers have been developed in collaboration with the OpenPOWER Foundation (OPF), which was set up in 2013 to accelerate innovation in custom data centre systems, with a particular remit for workload acceleration via GPUs, FPGAs or advanced I/O, as well as platform optimisation for software appliances and the exploitation of advanced hardware technology.
IBM conducted preliminary tests of the new OpenPOWER servers in China with Tencent and was able to run workloads three times faster than on its previous x86-based infrastructure, even whilst using a third fewer servers.
“The user insights and the business value you can deliver with advanced analytics, machine learning and artificial intelligence is increasingly gated by performance,” comments Doug Balog, General Manager of POWER at IBM. “Accelerated computing that can really drive big data workloads will become foundational in the cognitive era.”
The S822LC features a cluster of new interconnect innovations grouped under the name PowerAccel. The newly designed IBM POWER8 processor features an open architecture and uses Nvidia’s NVLink to achieve a high-speed, energy-efficient bidirectional interconnect to an array of on-board Tesla P100 Pascal GPUs.
Ian Buck, VP of Accelerated Computing at NVIDIA, comments:
“NVIDIA NVLink provides tight integration between the POWER CPU and NVIDIA Pascal GPUs and improved GPU-to-GPU link bandwidth to accelerate time to insight for many of today’s most critical applications like advanced analytics, deep learning and AI.”
Early customers for the new servers include Lawrence Livermore National Laboratory (LLNL) and the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL), as well as an unnamed ‘large multinational retail corporation’ (unlikely to be anyone besides Amazon, given that company’s considerable investment in deep learning and AI).
IBM claims the new range offers 80% more performance per dollar than analogous x86 products, and that outfitting with it could nevertheless cost 30% less in certain configurations.