Researchers at Eindhoven University of Technology (TU/e) in the Netherlands have developed a technique that trains AI chips directly on the chip itself, significantly reducing power consumption. By removing the need to train a model externally and then transfer it to the chip, the approach paves the way for more energy-efficient AI chips in the future.
The chip uses a neuromorphic architecture, but one tailored to mainstream AI frameworks rather than the spiking neural networks traditionally associated with neuromorphic hardware. Training neuromorphic networks has historically been a cumbersome and energy-intensive process, typically requiring initial training on a computer followed by transfer of the trained model to the chip.
Yoeri van de Burgt, an associate professor in the Department of Mechanical Engineering at TU/e, explains the role of memristors in neuromorphic chips: these circuit devices mimic how neurons in the brain store and communicate information. It is this property that makes on-chip training possible, eliminating the need for external training and model transfer.
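To make the analogy concrete, the sketch below is a generic illustration (not the TU/e design) of how a crossbar of memristor-like conductances performs the weighted sums of a neural layer in analog: input voltages applied to the rows produce column currents equal to a matrix-vector product, by Ohm's and Kirchhoff's laws. All values and dimensions are made up for illustration.

```python
import numpy as np

# Illustrative only: a crossbar stores weights as conductances G (siemens).
# Applying input voltages V to the rows yields column currents I = G^T @ V,
# i.e. an analog matrix-vector multiply, the core operation of a neural layer.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))  # conductance matrix

V = np.array([0.1, 0.0, 0.2, 0.05])  # input voltages per row (volts)

I = G.T @ V  # column currents (amperes): the "synaptic" weighted sum
print(I)
```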
One of the key challenges faced by the researchers was integrating essential components for on-chip training onto a single neuromorphic chip. This included incorporating electrochemical random-access memory (EC-RAM) components that replicate the electrical charge storage and firing mechanisms observed in biological neurons.
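As a rough software analogy (the class, parameter names, and linear step model here are assumptions, not the actual device physics), an EC-RAM cell can be pictured as a conductance that is nudged up or down in small increments by gate pulses, much as a synaptic weight is strengthened or weakened:

```python
import numpy as np

class ECRAMCell:
    """Toy model of a single EC-RAM weight cell (illustrative only).

    The real device stores state electrochemically; here it is modeled
    as a conductance that moves in small, near-linear steps per gate
    pulse, clipped to a fixed physical range.
    """

    def __init__(self, g_min=1e-6, g_max=1e-4, step=1e-6):
        self.g_min, self.g_max, self.step = g_min, g_max, step
        self.g = (g_min + g_max) / 2  # start mid-range

    def pulse(self, n):
        """Apply n gate pulses; positive pulses raise the conductance
        (potentiation), negative pulses lower it (depression)."""
        self.g = float(np.clip(self.g + n * self.step, self.g_min, self.g_max))
        return self.g

cell = ECRAMCell()
cell.pulse(+10)  # potentiate
cell.pulse(-4)   # depress
print(cell.g)
```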
The researchers fabricated a two-layer neural network from EC-RAM components made of organic materials and tested the hardware with an adapted version of the backpropagation training algorithm. The hardware implementation updates each layer in sequence using in situ stochastic gradient descent, bypassing the need for extensive storage.
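The following is a minimal software sketch of that idea, assuming a conventional two-layer sigmoid network, a toy XOR task, and per-sample updates; it illustrates updating each layer's weights in place as soon as its gradient is available, not the authors' hardware implementation.

```python
import numpy as np

# Software analogue of layer-by-layer in situ SGD: each layer's weights
# are updated in place per sample, so no gradient buffer is accumulated
# across the dataset. Network size, task, and learning rate are assumptions.

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(2, 4))  # layer 1 weights
W2 = rng.normal(scale=0.5, size=(4, 1))  # layer 2 weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(5000):
    for x, y in zip(X, Y):
        # Forward pass
        h = sigmoid(x @ W1)    # hidden activations
        out = sigmoid(h @ W2)  # network output

        # Backward pass: propagate the error, then update each layer
        # in place, output layer first, before moving to the next sample
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)  # uses pre-update W2
        W2 -= lr * np.outer(h, d_out)       # update output layer now
        W1 -= lr * np.outer(x, d_h)         # then the hidden layer

print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
```

Because updates are applied immediately and in place, nothing beyond the current sample's activations and error signals needs to be held, which is the storage saving the in situ scheme is after.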