
Focusing on Optical Neural Networks

August 08, 2024


As the field of digital artificial intelligence continues to expand, the energy requirements for training and deploying AI systems are also on the rise, along with the associated carbon emissions. Recent studies indicate that if the current pace of AI server production is maintained, the annual energy consumption of these systems could surpass that of a small country by 2027. Deep neural networks, which draw inspiration from the intricate architecture of the human brain, are particularly energy-intensive due to the vast number of connections between layers of neuron-like processors.

To address the escalating energy demands of AI systems, researchers are focusing on the development of optical computing systems, a concept that has been explored experimentally since the 1980s. These systems leverage photons for data processing, offering the potential for faster and more efficient computations compared to traditional electronic methods. However, one significant challenge has so far prevented optical systems from outperforming current electronic technologies.

Christophe Moser, who leads the Laboratory of Applied Photonics Devices at EPFL’s School of Engineering, explains the intricacies of data classification in neural networks. Each node, or 'neuron', in a network computes a weighted sum of its input data and then applies a nonlinear transformation to decide what it passes on. Digital neural networks handle these nonlinear computations easily using transistors, but optical systems have so far required powerful lasers for this crucial step.
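The weighted-sum-plus-nonlinearity step described above can be sketched in a few lines. This is a generic textbook illustration, not the paper's optical implementation; the weights, bias, and sigmoid activation below are all arbitrary choices for demonstration.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs: a purely linear operation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Nonlinear activation (here a sigmoid). Without this step,
    # stacked layers would collapse into a single linear map and
    # could not classify data that is not linearly separable.
    return 1.0 / (1.0 + math.exp(-z))

# The nonlinearity is what makes the output non-proportional to
# the input: doubling every input does not double the output.
a = neuron([1.0, 2.0], [0.5, -0.3], 0.1)
b = neuron([2.0, 4.0], [0.5, -0.3], 0.1)
```

In electronics this nonlinearity costs almost nothing per transistor; reproducing it optically is the step that has traditionally demanded high laser power.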

Moser collaborated with students Mustafa Yildirim, Niyazi Ulas Dinc, and Ilker Oguz, as well as Demetri Psaltis, the head of the Optics Laboratory, to devise an energy-efficient approach for performing nonlinear computations optically. Their method encodes data, such as image pixels, in the spatial modulation of a low-power laser beam. By reflecting the beam so that it passes through the encoded data multiple times, the system multiplies the pixel values together, achieving the required nonlinearity without a powerful laser and paving the way for more efficient optical neural networks.
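In very rough numerical terms, the idea of sending the beam through the encoded data several times can be sketched as follows. This is only a conceptual model of the multi-pass trick, assuming each pass multiplies the signal by the encoded pixel value again; the function name and values are illustrative, not taken from the paper.

```python
def multi_pass_encoding(pixels, passes):
    # Each pass through the spatial modulator multiplies the beam
    # by the encoded pixel values once more, so after N passes each
    # pixel value p is effectively raised to the power N: a nonlinear
    # function of the data obtained with low-power, linear optics.
    return [p ** passes for p in pixels]

# A single pass leaves the data unchanged (linear); two or more
# passes produce a nonlinear transformation of the pixel values.
once = multi_pass_encoding([0.2, 0.5, 0.9], 1)
twice = multi_pass_encoding([0.2, 0.5, 0.9], 2)
```

The key point is that the nonlinearity comes from the geometry of repeated passes, not from driving an optical medium with an intense laser.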

Psaltis highlights the promising results of their research, stating, "Our image classification experiments across various datasets have demonstrated the scalability of our method, which is up to 1,000 times more power-efficient than current deep digital networks. This breakthrough positions optical neural networks as a compelling platform for the future of AI." The team's findings, supported by a Sinergia grant from the Swiss National Science Foundation, have been published in the prestigious journal Nature Photonics, marking a significant advancement in the realm of energy-efficient AI technologies.
