Self-Learning Analog System Tackles Problems Beyond The Reach Of Previous Systems

Scientists from the Université Libre de Bruxelles in Brussels, Belgium, have developed a neuro-inspired analog system that can train itself to become better at whatever tasks it performs. The system handles difficult computing tasks well, offering the major benefits of self-learning hardware: potential for high energy efficiency and ultrafast speeds. It is based on the artificial intelligence approach known as reservoir computing.
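The core idea of reservoir computing is that a fixed, randomly connected nonlinear system (the "reservoir") transforms the input, and only a simple linear readout is trained. The following is a minimal toy sketch of that idea in software, using an echo state network on a hypothetical delayed-copy task; it is illustrative only and is not the authors' photonic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: these weights are never trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # recurrent weights (fixed)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape [T, n_in]."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)        # fixed nonlinear dynamics
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input signal delayed by two time steps.
T = 500
u = rng.uniform(-1, 1, (T, 1))
y_target = np.roll(u[:, 0], 2)

# Only the linear readout is trained (here by least squares),
# discarding the first 10 steps as a washout period.
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X[10:], y_target[10:], rcond=None)
y_pred = X[10:] @ W_out
mse = np.mean((y_pred - y_target[10:]) ** 2)
print("readout MSE:", mse)
```

Training only the readout is what makes the paradigm attractive for analog hardware: the reservoir itself can be any sufficiently rich physical system, such as light pulses in a fiber loop.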

Piotr Antonik said, “Over the past decade, artificial intelligence has made remarkable progress, largely based on the use of error backpropagation. There is also growing interest, both in academia and in industry, in analog, brain-inspired computing as a possible route to circumvent the end of Moore’s law.”

“Our work shows that, under certain conditions, the backpropagation algorithm can be implemented using the same hardware used for the analog computing. This could enhance the performance of these hardware systems,” he added.

The backpropagation algorithm is at the heart of recent advances in artificial intelligence. Its basic idea is that the system performs thousands of iterative calculations, each of which reduces the error a little, bringing the computed value closer and closer to the optimal one. In the end, the repeated computations teach the system an improved way of computing the solution to a problem.
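The "repeated small corrections" idea behind backpropagation can be sketched with gradient descent on a single weight. The example below is a deliberately minimal illustration on a made-up one-parameter fitting problem, not the photonic implementation described in the article.

```python
import numpy as np

# Illustrative only: fit a single weight w so that w * x approximates y = 3 * x.
# Each iteration nudges w in the direction that reduces the squared error
# slightly, which is the same iterative error-reduction idea as backpropagation.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 50)
y = 3.0 * x

w = 0.0      # initial guess
lr = 0.1     # learning rate: size of each corrective step

for step in range(200):
    err = w * x - y               # current prediction error
    grad = 2 * np.mean(err * x)   # gradient of the mean squared error w.r.t. w
    w -= lr * grad                # small step downhill, reducing the error

print("learned weight:", w)       # approaches the true value 3.0
```

In a deep network the same principle applies, except the gradient is propagated backward through many layers of weights; the article's point is that this backward pass can itself be carried out on the analog hardware.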

Researchers performed the first proof-of-concept experiment in 2015, combining the algorithm with the reservoir computing paradigm. Now they have demonstrated that the algorithm can handle three much more complicated tasks: 1. a speech recognition task (TIMIT), 2. an academic task often used to benchmark reservoir computers (NARMA10), and 3. a complex nonlinear task (VARDEL5). Success on this third task suggests that the new approach to self-training has the potential to expand the computing territory of neuromorphic systems.

The key to this demonstration of the analog system is implementing both the reservoir computer and the backpropagation algorithm on the same photonic setup, in which information is coded as the intensity of light pulses propagating in an optical fiber.

The new approach is robust against various experimental imperfections, but its speed is limited by some of the data processing and data transfer. The experiment was implemented on a rather slow system in which the neurons were processed one after the other.

Antonik said, “We are trying to broaden as much as possible the range of problems to which experimental reservoir computing can be applied. For example, we are writing up a manuscript in which we show that it can be used to generate periodic patterns and emulate chaotic systems.”

“We are currently testing photonic systems in which the internal variables are processed simultaneously. We call this a parallel architecture. It can provide several orders of magnitude of speedup. Further in the future, we may revisit physical error backpropagation, but in these faster, parallel systems,” he continued.
