
Magnetic circuits could radically reduce AI energy consumption

Tue 14 Apr 2020

Ditching silicon for magnetic wires could slash the energy needed to train AI by 20-30x

The combination of AI and large data sets has profoundly improved our ability to model the world around us, predict its next move and recognize its images and patterns.

Underpinning all of this data-driven innovation, though, are servers and accelerators that can devour astronomical amounts of energy, depending on the task.

Last year, research indicated that training a single AI algorithm can emit up to 284 tonnes of carbon dioxide – five times the lifetime emissions of an average car.

This not only raises concerns about the environment, but also affordability. As AI becomes the core of business evolution, and capitalism itself, how can we ensure as many organisations as possible have access to its power?

The effort to improve the energy efficiency of data processing is nothing new; it has been a continuous goal of computer science since computers were conceived. The advent of AI and the proliferation of device data from the IoT, though, have sharpened this imperative. To help drive forward innovation in the area, researchers have homed in on a new technique that might make the next generation of smart computers radically more energy-efficient.

Their method involves swapping out a fundamental building block of computing, a material virtually synonymous with computers themselves – silicon – and replacing it with magnetic circuits. The researchers, from the University of Texas at Austin, say magnetic wires, placed a suitable distance from each other, can lead to a 20-30x reduction in the amount of energy needed to run neural network training algorithms.

Jean Anne Incorvia, assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering, who led the study published this week in the IOP journal Nanotechnology, said the findings will pave the way for new systems that “reduce training effort and energy costs”.

Her team’s method applies a new approach in computer science called neuromorphic computing, which essentially involves designing computer chips that think like brains, by wiring them up with artificial neurons. Chip giant Intel released its first neuromorphic chip, Loihi, in 2017.
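The basic unit of such a chip is an artificial spiking neuron. The study itself uses magnetic devices, but the behaviour can be sketched in software; the function below is a minimal, hypothetical illustration of a leaky integrate-and-fire neuron (the parameter names and values are illustrative assumptions, not from the paper):

```python
def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron:
    the membrane potential v decays (leak), accumulates input,
    and emits a spike once it crosses the threshold."""
    v = leak * v + input_current
    if v >= threshold:
        return 0.0, True   # reset the potential, spike emitted
    return v, False

# Feed a constant input; the neuron integrates it until it fires
spikes = []
v = 0.0
for _ in range(5):
    v, fired = lif_step(v, 0.3)
    spikes.append(fired)
# spikes → [False, False, False, True, False]
```

Silicon chips simulate this behaviour with many transistors per neuron; the appeal of magnetic circuits is that the device physics itself can play the role of the membrane potential.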

By building a “new-era” system with integral nerve cells, Incorvia said her team were able to reproduce the phenomenon of “lateral inhibition”, where the best-performing neurons prevent slower neurons from firing. When applied to computers, this means faster artificial neurons outcompete slower ones, expediting the time it takes to complete a task and radically reducing the energy required to process data.
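In its simplest software form, lateral inhibition reduces to a winner-take-all rule: the strongest neuron fires and suppresses the rest, so only one unit spends energy on the answer. The snippet below is a toy sketch of that idea, not the team's magnetic implementation:

```python
import numpy as np

def winner_take_all(activations):
    """Lateral inhibition as winner-take-all: only the neuron
    with the largest activation fires; it suppresses the others."""
    out = np.zeros_like(activations)
    out[np.argmax(activations)] = 1.0
    return out

# Three competing neurons; the middle one wins and the rest stay silent
spikes = winner_take_all(np.array([0.2, 0.9, 0.5]))
# spikes → [0., 1., 0.]
```

In the magnetic version, the suppression comes for free from the stray fields between suitably spaced wires, which is where the claimed energy saving originates.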

Incorvia said their method reduces energy use by a factor of 20 to 30 compared with a standard backpropagation algorithm performing the same learning tasks.

As with most experimental work of this kind, we shouldn’t get carried away. The research focused on interactions between two magnetic neurons, with preliminary results on interactions among multiple neurons. The next step is to apply the findings to larger sets of neurons and to verify them experimentally.



energy efficiency neural networks neuromorphic computing science