
TDK Corporation, together with Hokkaido University, has developed a prototype reservoir AI chip that uses an analogue electronic circuit mimicking the cerebellum. The prototype was exhibited at CEATEC 2025 in Japan.
Reservoir computing is a computational model capable of processing relatively simple time-series (time-varying) data tasks with low power consumption and high-speed operation; it stands in contrast to the deep learning model. With the development of AI and the use of big data in recent years, the challenges of processing huge amounts of data and rising power consumption have become apparent, and the rapid spread of generative AI has made AI processing increasingly dependent on the cloud.
Traditional deep learning models consist of an input layer, hidden layers, and an output layer. The input layer receives the information first, and the hidden layers perform a very large number of calculations; the final output layer presents the learning results. The more hidden layers there are, the more complex the computations that can be performed. However, this leads to massive data processing, resulting in increased power consumption and latency.
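To make the cost concrete, here is a minimal sketch (not TDK's model) of a small feedforward network in NumPy. The layer widths are illustrative assumptions; the point is that every weight in every hidden layer must be adjusted during training, so the parameter count and compute grow with depth and width.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative layer widths: input -> two hidden layers -> output.
sizes = [4, 64, 64, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """One forward pass: ReLU hidden layers, linear output layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)
    return x @ weights[-1] + biases[-1]

# Every one of these parameters is trainable in a deep network.
n_params = sum(W.size + b.size for W, b in zip(weights, biases))
print(n_params)  # 4*64+64 + 64*64+64 + 64*2+2 = 4610
```

Even this toy network has thousands of trainable parameters; production deep learning models have millions to billions, which is where the power and latency cost comes from.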
Reservoir computing, on the other hand, consists of an input layer, a reservoir layer, and an output layer. The reservoir layer does not necessarily require calculations and can instead use natural phenomena that propagate over time. For example, the input layer could excite waves on a water surface; the reservoir layer then passes on the result of those surface waves propagating and interfering with one another, and the output layer reads the state of the reservoir to deduce the characteristics of how the waves moved. Because the reservoir layer simply transmits the outcome of a natural phenomenon without computation, the number of parameters to be adjusted during training is significantly reduced, and tasks can be processed at low power and high speed.
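A common software analogue of this idea is the echo state network, sketched below under assumed sizes (this is a generic illustration, not TDK's chip). The reservoir is a fixed random recurrent network that is never trained; it plays the role of the water surface. Only the linear output weights `W_out` are fitted, which is why training is so cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: these weights are never trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy time-series task: predict the next sample of a sine wave.
t = np.linspace(0, 20, 500)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Training fits ONLY the linear readout - a least-squares solve,
# not backpropagation through the reservoir.
W_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ W_out
print(np.mean((pred - y) ** 2))  # small one-step prediction error
```

The contrast with the deep learning case is that the only trained parameters here are the 100 readout weights, fitted in a single linear solve; an analogue physical reservoir removes even the cost of simulating the reservoir dynamics.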
Therefore, analogue reservoir AI is expected to be utilised in tasks that require information processing tailored to individual situations at the edge, such as robots and human interfaces that demand high-speed processing without the need for large-scale computation.
Traditionally, it has been considered difficult to put reservoir computing devices into practical use. This is because reservoir computing is not a universal AI like deep neural networks, but an AI that specialises in time-series data processing. In addition, the low-power benefits were hard to obtain when reservoir computing was implemented on digital hardware, and there were no reservoir computing devices that exploited physical phenomena with power consumption and high-speed operation in mind.
TDK will further advance research on reservoir computing in collaboration with Hokkaido University and will contribute to the development of the “AI ecosystem market” by collaborating with its Sensor Systems Business Company and TDK SensEI, which develops sensor solutions business in the edge area.
| Tel: | +27 11 923 9600 |
| Email: | [email protected] |
| www: | www.altronarrow.com |
© Technews Publishing (Pty) Ltd | All Rights Reserved