Robotic systems rely on spatio-temporal information to solve control tasks. With advances in deep neural networks, deep reinforcement learning has substantially improved performance on such control tasks. However, as deep neural networks grow in complexity, they consume more energy and incur greater latency, which hampers their deployment in robotic systems that require real-time data processing. To address this issue, spiking neural networks, which emulate the biological brain by transmitting spatio-temporal information through spikes, have been developed alongside neuromorphic hardware that supports their operation. This paper reviews brain-inspired learning rules and examines the application of spiking neural networks to control tasks. We begin by examining the features and implementations of biologically plausible spike-timing-dependent plasticity (STDP). We then investigate how a global third factor, such as a reward signal, can be integrated with STDP, and how this combination has been used and refined in both theoretical and applied research. We also discuss methods that apply a third factor locally, adjusting each synaptic weight individually through backpropagation-based weight updates. Finally, we review studies that use these learning rules to solve control tasks with spiking neural networks.
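For illustration, a minimal sketch of the learning rules discussed above, in a standard textbook formulation rather than the exact notation of the reviewed works (the amplitudes \(A_\pm\), time constants \(\tau_\pm, \tau_e\), learning rate \(\eta\), and reward signal \(r(t)\) are generic symbols): pair-based STDP changes a synaptic weight \(w\) according to the relative spike timing \(\Delta t = t_{\text{post}} - t_{\text{pre}}\), while reward-modulated STDP (R-STDP) accumulates these timing-driven changes in an eligibility trace \(e(t)\) that is gated by the global third factor \(r(t)\):

\[
\Delta w(\Delta t) =
\begin{cases}
A_+ \, e^{-\Delta t / \tau_+}, & \Delta t > 0,\\
-A_- \, e^{\Delta t / \tau_-}, & \Delta t < 0,
\end{cases}
\qquad
\frac{de}{dt} = -\frac{e}{\tau_e} + \Delta w(\Delta t)\,\delta(t - t_{\text{spike}}),
\qquad
\frac{dw}{dt} = \eta \, r(t)\, e(t).
\]

Under this sketch, causal spike pairs (pre before post) potentiate and anti-causal pairs depress, but the weight only moves when the third factor \(r(t)\) arrives while the eligibility trace is still nonzero.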
Keywords: Control problem; Neuromorphic computing; R-STDP; Reinforcement learning; Spike-timing-dependent plasticity; Spiking neural networks.