NTT and the University of Tokyo Develop World’s First Optical Computing AI Using an Algorithm Inspired by the Human Brain

NTT Corporation (President and CEO: Akira Shimada, “NTT”) and the University of Tokyo (Bunkyo-ku, Tokyo, President: Teruo Fujii) have devised a new learning algorithm, inspired by the information processing of the brain, that is suitable for multi-layered artificial neural networks (deep neural networks, DNNs) using analog operations. This breakthrough will lead to a reduction in the power consumption and computation time of AI. The results of this development were published in the British scientific journal Nature Communications on December 26th.

Researchers achieved the world’s first demonstration of efficiently executed optical DNN learning by applying the algorithm to a DNN that uses optical analog computation, which is expected to enable high-speed, low-power machine learning devices. In addition, they achieved the highest performance reported to date for a multi-layered artificial neural network that uses analog operations.

Figure 1: (Top) Comparison of this result with other methods. (Bottom) Overview of optical deep learning achieved with this method. (Graphic: Business Wire)

In the past, the computationally intensive learning step was performed with digital calculations, but this result proves that the learning step, too, can be made more efficient with analog calculations. In the demonstrated DNN, a recurrent neural network called deep reservoir computing is realized by treating an optical pulse as a neuron and a nonlinear optical ring as a neural network with recursive connections. By re-inputting the output signal into the same optical circuit, the network is artificially deepened.
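The deepening trick described above can be sketched in software. The following is a minimal NumPy sketch, not the authors' implementation: a fixed random recurrent matrix stands in for the nonlinear optical ring, a `tanh` stands in for the optical nonlinearity, and the feedback projection `W_fb` that re-injects the output into the same reservoir is an assumed illustrative component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a small reservoir standing in for the optical ring.
n_in, n_res, n_layers = 3, 50, 3

W_in = rng.normal(scale=0.5, size=(n_res, n_in))   # input coupling (fixed)
W_res = rng.normal(size=(n_res, n_res))            # recurrent weights (fixed)
# Scale the spectral radius below 1 so the reservoir dynamics stay stable.
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def reservoir_pass(u_seq):
    """One pass through the reservoir; the optical ring's recursive
    connections are modeled by the fixed recurrent matrix W_res."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ u + W_res @ x)  # nonlinear node update
        states.append(x)
    return np.array(states)

# "Deep" reservoir: project the output back to the input dimension and
# feed it into the same reservoir again, artificially deepening the network.
W_fb = rng.normal(scale=0.1, size=(n_in, n_res))   # feedback projection (assumed)
u_seq = rng.normal(size=(20, n_in))
states = reservoir_pass(u_seq)
for _ in range(n_layers - 1):
    u_seq = states @ W_fb.T                        # re-input the output signal
    states = reservoir_pass(u_seq)

print(states.shape)  # (20, 50)
```

Only a linear readout on `states` would be trained in a conventional reservoir; the hardware's fixed dynamics do the rest, which is what makes the scheme attractive for analog optics.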

DNN technology enables advanced artificial intelligence (AI) applications such as machine translation, autonomous driving and robotics. Currently, the power and computation time these applications require is increasing at a rate that exceeds the growth in the performance of digital computers. DNN technology that uses analog signal calculations (analog operations) is expected to be a way to realize high-efficiency, high-speed calculations, similar to the neural networks of the brain. The collaboration between NTT and the University of Tokyo has developed a new algorithm suitable for an analog-operation DNN that does not require precise knowledge of the learning parameters inside the DNN.

The proposed method learns by updating the learning parameters based on a nonlinear random transformation of the error signal, i.e. the difference between the output of the network's final layer and the desired output signal. Because only this final-layer error is needed, the calculation is easier to implement with analog hardware such as optical circuits. The method can be used not only for physical implementations, but also in cutting-edge models for applications such as machine translation and other AI models based on DNNs. This research is expected to contribute to solving emerging problems associated with AI computing, including power consumption and increased calculation time.
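The update rule described above resembles direct feedback alignment: the final-layer error is passed through a fixed random matrix and a nonlinearity to update the hidden layer, so no backpropagated gradients (and hence no detailed knowledge of the internal parameters) are needed. The sketch below is a hedged interpretation with illustrative sizes, not the paper's algorithm; the feedback matrix `B` and the `tanh` transformation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-layer network.
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
B = rng.normal(size=(n_hid, n_out))  # fixed random feedback matrix, never trained

def forward(x):
    h = np.tanh(W1 @ x)
    y = W2 @ h
    return h, y

lr = 0.05
x = rng.normal(size=n_in)
target = np.array([1.0, -1.0])

_, y0 = forward(x)                   # prediction before training
for _ in range(200):
    h, y = forward(x)
    e = y - target                   # error available only at the final layer
    # A nonlinear random transformation of the error replaces the
    # transported gradient of backpropagation.
    delta_h = np.tanh(B @ e)
    W1 -= lr * np.outer(delta_h, x)
    W2 -= lr * np.outer(e, h)

_, y = forward(x)
print(np.abs(y - target).max())      # error shrinks relative to the start
```

Because the hidden-layer update depends only on the error and a fixed random projection, it maps naturally onto analog hardware, where the forward transformation itself may be random and imperfectly characterized.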

In addition to examining the applicability of the proposed method to specific problems, NTT will also promote the integration and miniaturization of optical hardware, aiming to establish a high-speed, low-power optical computing platform for future optical networks.
