CARLsim 6.1.0: a GPU-accelerated SNN simulator
Axonal plasticity describes the biological phenomenon in which the myelin sheath thickness and the amplification of a signal change with experience. Recent studies show this to be important for sequence learning and for the synchronization of temporal information. In spiking neural networks (SNNs), the time a spike travels from the presynaptic neuron along the axon until it reaches a postsynaptic neuron plays an essential role in how SNNs encode information. Learning through axonal plasticity adjusts these axonal delays and is therefore fundamentally different from other SNN learning mechanisms such as STP or STDP, which alter synaptic weights (i.e., synaptic plasticity).
In simulators for large-scale SNN models such as CARLsim, the time a spike travels from the presynaptic neuron along the axon until it reaches a postsynaptic neuron is modeled as a synaptic delay with discrete values from one to several milliseconds. To simulate neural activity in large-scale SNNs efficiently, delays are translated into indices of optimized structures that are built once before the simulation starts. As a consequence, and in contrast to synaptic weights, delays are not directly accessible as scalar data in the runtime memory. With CARLsim 6.1, the axonal delays can now be updated at runtime. We extended the kernel and the interface by the method updateDelays(..), which can update delays during simulation (RUN state). Fig. 2 shows the intrinsic state model with its transitions CONFIG → SETUP → RUN and the methods applicable in each state.
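As an illustration, the following minimal sketch walks through the three states and applies a delay update in the RUN state. Group names, neuron parameters, and connection settings are chosen for this example only, and the exact signature of updateDelays (pre-/postsynaptic group IDs plus a list of per-synapse delay changes) is an assumption based on the parameter description later in this section.

    #include <carlsim.h>
    #include <cstdint>
    #include <tuple>
    #include <vector>

    int main() {
        // CONFIG state: define the network model
        CARLsim sim("axonal_plasticity_demo", GPU_MODE, USER);
        int gPre  = sim.createGroup("pre",  4, EXCITATORY_NEURON);
        int gPost = sim.createGroup("post", 4, EXCITATORY_NEURON);
        sim.setNeuronParameters(gPre,  0.02f, 0.2f, -65.0f, 8.0f);  // RS Izhikevich neurons
        sim.setNeuronParameters(gPost, 0.02f, 0.2f, -65.0f, 8.0f);
        sim.connect(gPre, gPost, "full", RangeWeight(0.5f), 1.0f, RangeDelay(1, 4));
        sim.setConductances(false);

        // CONFIG -> SETUP: compile the model into optimized runtime structures
        sim.setupNetwork();

        // SETUP -> RUN: simulate for 1 s
        sim.runNetwork(1, 0);

        // RUN state: change a delay in backend memory without a re-setup.
        // Each entry is assumed to be (preNeurId, postNeurId, new delay in ms),
        // with neuron IDs relative to the pre-/postsynaptic groups.
        std::vector<std::tuple<int, int, uint8_t>> connDelays = {
            std::make_tuple(0, 2, (uint8_t)2)
        };
        sim.updateDelays(gPre, gPost, connDelays);

        // continue the simulation with the updated delay
        sim.runNetwork(1, 0);
        return 0;
    }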
The new method updateDelays(∆D) modifies the runtime structures (e.g., runtime[netId].postDelayInfo) directly in backend memory.
Conceptually, updateDelays can be thought of as a highly efficient incremental compile of the delta defined by sparse delay changes between pre- and postsynaptic neurons. In order to prepare a network for execution, the CARLsim method setupNetwork() compiles the network model into highly optimized runtime structures in backend memory, e.g., GPU device memory. In the resulting SETUP state, the explicit delays between the pre- and postsynaptic neurons are dissolved into optimized memory structures, for example runtime[netId].postDelayInfo.
updateDelays keeps the structural integrity of these runtime data structures intact. As outlined above, CARLsim uses a sparse representation of synaptic connections. The method setupNetwork(), called in the CONFIG state, translates the delays of the SNN model into the runtime data structures pre/postSynapticIds and postDelayInfo.
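For illustration, a strongly simplified, conceptual sketch of such a delay-indexed layout is given below. The names and field types are chosen for readability and do not reflect the exact CARLsim memory layout; the sketch only shows the indexing principle: per presynaptic neuron, the postsynaptic targets are grouped by delay, and a small per-(neuron, delay) record holds the start offset and count within that list.

    #include <vector>

    // Conceptual sketch only; the real runtime[netId].postDelayInfo and
    // postSynapticIds structures in CARLsim are more compact and GPU-oriented.
    struct DelayInfo {
        int start;    // offset into postSynapticIds for this (preNeuron, delay) bucket
        int length;   // number of postsynaptic targets reached with exactly this delay
    };

    struct RuntimeDelays {
        int maxDelay;                          // D, maximal axonal delay in ms
        std::vector<int> postSynapticIds;      // all targets, grouped by (preNeuron, delay)
        std::vector<DelayInfo> postDelayInfo;  // one record per (preNeuron, delay)
    };

    // Look up all postsynaptic targets reached from preNeur with the given axonal delay.
    inline const int* targetsFor(const RuntimeDelays& rt, int preNeur, int delay, int& count) {
        const DelayInfo& info = rt.postDelayInfo[preNeur * (rt.maxDelay + 1) + delay];
        count = info.length;
        return rt.postSynapticIds.data() + info.start;
    }

In such a layout, changing a delay means moving a synapse from one (preNeuron, delay) bucket to another and adjusting the affected start/length records; this is the kind of in-place modification that updateDelays performs on the backend structures.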
Fig. 3 outlines the principle of the validation of structural integrity: when the SNN is changed by updateDelays() in the RUN state, the resulting runtime structures are the same as if the delays had been changed in the CONFIG state first and then translated by setupNetwork().
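A test along the following lines makes this check concrete. It is a sketch only: buildNetwork is a hypothetical helper written for this example, and it assumes that the interface method getDelays(gIdPre, gIdPost, nPre, nPost) returns the per-connection delays of a group pair and that updateDelays takes (preNeurId, postNeurId, newDelay) triples.

    #include <carlsim.h>
    #include <cassert>
    #include <cstdint>
    #include <tuple>
    #include <vector>

    // Hypothetical helper: a minimal one-to-one model whose uniform delay is a parameter.
    static void buildNetwork(CARLsim& sim, int& gPre, int& gPost, int delayMs) {
        gPre  = sim.createGroup("pre",  4, EXCITATORY_NEURON);
        gPost = sim.createGroup("post", 4, EXCITATORY_NEURON);
        sim.setNeuronParameters(gPre,  0.02f, 0.2f, -65.0f, 8.0f);
        sim.setNeuronParameters(gPost, 0.02f, 0.2f, -65.0f, 8.0f);
        sim.connect(gPre, gPost, "one-to-one", RangeWeight(0.5f), 1.0f, RangeDelay(delayMs));
        sim.setConductances(false);
    }

    int main() {
        // Path A: the target delays (2 ms) are already set in the CONFIG state.
        CARLsim simA("reference", GPU_MODE, SILENT);
        int gPreA, gPostA;
        buildNetwork(simA, gPreA, gPostA, 2);
        simA.setupNetwork();

        // Path B: start from 4 ms delays and change them to 2 ms in the RUN state.
        CARLsim simB("runtime_update", GPU_MODE, SILENT);
        int gPreB, gPostB;
        buildNetwork(simB, gPreB, gPostB, 4);
        simB.setupNetwork();
        simB.runNetwork(0, 1);  // enter the RUN state
        std::vector<std::tuple<int, int, uint8_t>> deltaD;
        for (int i = 0; i < 4; i++)
            deltaD.push_back(std::make_tuple(i, i, (uint8_t)2));
        simB.updateDelays(gPreB, gPostB, deltaD);

        // Both paths must expose identical delays (and identical runtime structures).
        int nPreA, nPostA, nPreB, nPostB;
        uint8_t* dA = simA.getDelays(gPreA, gPostA, nPreA, nPostA);
        uint8_t* dB = simB.getDelays(gPreB, gPostB, nPreB, nPostB);
        assert(nPreA == nPreB && nPostA == nPostB);
        for (int i = 0; i < nPreA * nPostA; i++)
            assert(dA[i] == dB[i]);
        return 0;
    }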
For contributors, we provide some details of the implementation; please refer to our paper for more information. The sparse representation of synaptic connections enables the simulation of large-scale SNNs, as it reduces the required memory from O(NDM) to O(N(M + D)), where N is the number of neurons, M the number of synapses, and D the maximal axonal delay. Fig. 4a illustrates the sparse representation with a minimalistic SNN of four neurons and three synaptic connections. The algorithm of updateDelays modifies arbitrary delays (denoted as ∆D) in runtime memory and is presented in Fig. 4b. The other parameters of the method identify the pre-/postsynaptic neuron groups (gGrpIdPre, gGrpIdPost). Fig. 4c (4d) shows how a change from 4 ms to 2 ms (1 ms) of the synaptic connection between neurons 0 and 2 affects the data structures in runtime memory.
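Expressed in code, the change shown in Fig. 4c could look like the following sketch, using the parameter names from above and a CARLsim instance sim that is already in the RUN state; the triple-based form of ∆D is an assumption based on this parameter description.

    // Fig. 4c: the connection from presynaptic neuron 0 to postsynaptic neuron 2
    // changes from 4 ms to 2 ms. Neuron IDs are assumed to be relative to the
    // groups gGrpIdPre and gGrpIdPost.
    std::vector<std::tuple<int, int, uint8_t>> deltaD = {
        std::make_tuple(0, 2, (uint8_t)2)  // (preNeurId, postNeurId, new delay in ms)
    };
    sim.updateDelays(gGrpIdPre, gGrpIdPost, deltaD);

    // Fig. 4d: the same connection set to 1 ms instead.
    // deltaD = { std::make_tuple(0, 2, (uint8_t)1) };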
Niedermeier, L. and Krichmar, J.L. (2023). "Experience-Dependent Axonal Plasticity in Large-Scale Spiking Neural Network Simulations." Presented at the International Joint Conference on Neural Networks (IJCNN), Queensland, Australia, 2023; to appear in the Proceedings of the 2023 IEEE International Joint Conference on Neural Networks (IJCNN).