CARLsim  6.1.0
CARLsim: a GPU-accelerated SNN simulator
Chapter 15: Axonal Plasticity

15.1 Axonal Plasticity

Author
Lars Niedermeier

Axonal plasticity describes the biological phenomenon in which the thickness of the myelin sheath and the amplification of a signal change with experience. Recent studies show this to be important for sequence learning and the synchronization of temporal information. In spiking neural networks (SNNs), the time a spike travels from the presynaptic neuron along the axon until it reaches a postsynaptic neuron is an essential principle of how SNNs encode information. Axonal plasticity learns by adjusting axonal delays and is therefore fundamentally different from other SNN learning algorithms such as STP or STDP, which alter synaptic weights (i.e., synaptic plasticity).

axonplast_fig1.png
Fig. 1. Axonal Plasticity. Hypothetical circuit adapted from [Fields2015, Mount2017]. The speed of spike transmission depends on the thickness of the myelin sheath and the number of signal amplifiers, called nodes of Ranvier. (a) Non-synchronous spike arrival while the myelin sheath is thin. (b) Experience and neural activity cause axon alterations carried out by the glial cells oligodendrocyte precursors (OPCs), which differentiate into oligodendrocytes (OLs). (c) A thicker myelin sheath results in faster conduction velocity, which can synchronize activity and cause the postsynaptic neuron to fire more reliably.

15.2 Adjusting Axonal Delays

In simulators for large-scale SNN models such as CARLsim, the time a spike travels from the presynaptic neuron along the axon until it reaches a postsynaptic neuron is modeled as synaptic delays with discrete values from one to several milliseconds. To simulate neural activity in large-scale SNNs efficiently, delays are transformed into indices of optimized structures that are built once before the simulation starts. As a consequence, and in contrast to synaptic weights, delays are not directly accessible as scalar data in the runtime memory. With CARLsim 6.1, axonal delays can now be updated at runtime: we extended the kernel and interface by the method updateDelays(..), which updates delays during the simulation (RUN state). Fig. 2 shows the intrinsic state model with transitions CONFIG → SETUP → RUN and the methods applicable in each state.

axonplast_fig2.png
Fig. 2. The CARLsim state model extended with axonal plasticity functionality (in red font).

15.3 Usage of updateDelays

The new method updateDelays(∆D) modifies the runtime structures (e.g., runtime[netId].postDelayInfo) directly in backend memory.

// CONFIG state
int CA3_Pyramidal = sim->createGroup("CA3_Pyramidal", 16, EXCITATORY_NEURON); // 4x4
sim->setNeuronParameters(CA3_Pyramidal, 0.02f, 0.2f, -65.0f, 8.0f); // RS
// SETUP state
...
// RUN state
std::vector<std::tuple<int, int, uint8_t>> connDelays;
...
uint8_t d_new; // new delay in 1..20 ms, e.g., determined by a learning rule
connDelays.push_back(std::tuple<int, int, uint8_t>(pre, post, d_new));
...
sim->updateDelays(CA3_Pyramidal, CA3_Pyramidal, connDelays);
Since: v6.1
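
For orientation, a complete minimal lifecycle could look as follows. This is a sketch under assumptions: the input group, connection parameters, and Poisson stimulation are illustrative and not taken from this chapter; the (pre, post) entries of connDelays address the neurons of the given pre-/postsynaptic groups, as in the minimalistic example of Fig. 4.

#include <carlsim.h>
#include <cstdint>
#include <tuple>
#include <vector>

int main() {
    // CONFIG state: define the model.
    CARLsim sim("axonplast", GPU_MODE, USER);
    int gIn  = sim.createSpikeGeneratorGroup("input", 16, EXCITATORY_NEURON);
    int gCA3 = sim.createGroup("CA3_Pyramidal", 16, EXCITATORY_NEURON); // 4x4
    sim.setNeuronParameters(gCA3, 0.02f, 0.2f, -65.0f, 8.0f); // RS
    sim.connect(gIn, gCA3, "one-to-one", RangeWeight(0.5f), 1.0f, RangeDelay(1));
    sim.connect(gCA3, gCA3, "random", RangeWeight(0.05f), 0.1f, RangeDelay(1, 20));
    sim.setConductances(false);

    // SETUP state: compile the model into optimized runtime structures.
    sim.setupNetwork();
    PoissonRate rate(16);
    rate.setRates(10.0f); // 10 Hz background activity
    sim.setSpikeRate(gIn, &rate);

    // RUN state: simulate, then adjust a delay at runtime.
    sim.runNetwork(1, 0); // 1 s
    std::vector<std::tuple<int, int, uint8_t>> connDelays;
    connDelays.push_back(std::make_tuple(0, 2, (uint8_t)2)); // neuron 0 -> 2: 2 ms
    sim.updateDelays(gCA3, gCA3, connDelays);
    sim.runNetwork(1, 0); // continue with the updated delay
    return 0;
}

As Fig. 2 shows, updateDelays is applicable only in the RUN state.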

15.4 Conceptual Background

Conceptually, updateDelays can be thought of as a highly efficient incremental compile of the delta defined by sparse delay changes between pre- and postsynaptic neurons. To prepare a network for execution, the CARLsim method setupNetwork() compiles the network model into highly optimized runtime structures in backend memory, e.g., GPU device memory. In the resulting SETUP state, the explicit delays between the pre- and postsynaptic neurons are dissolved into optimized memory structures, for example runtime[netId].postDelayInfo.

updateDelays keeps the structural integrity intact. As outlined above, CARLsim uses a sparse representation of synaptic connections. The method setupNetwork(), called in the CONFIG state, translates the delays of the SNN model into the runtime data structures pre/postSynapticIds and postDelayInfo.

Fig. 3 outlines the principle of the validation of structural integrity: when the SNN is changed by updateDelays() in the RUN state, the resulting runtime structures are the same as if the delays had first been changed in the CONFIG state and then translated by setupNetwork(), as sketched below.

axonplast_fig3.png
Fig. 3. Validation of the structural integrity of updateDelays. The runtime data structures RT = (pre/postSynapticIds, postDelayInfo) are the result of setupNetwork(). S_1 is an SNN and ∆D defines the change of delays that transforms S_1 into the SNN S_2 in the CONFIG state. Applying updateDelays in the RUN state with the same delay change ∆D results in the same runtime data structures.
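
The property of Fig. 3 can be stated as a commutativity check. The following pseudocode sketch uses hypothetical helpers (buildNetwork, applyDeltaInConfig, dumpRuntimeStructures, type RT); they are not part of the CARLsim API and only name the two paths of the diagram.

// Path 1: apply ∆D in the CONFIG state, then compile with setupNetwork().
CARLsim* simA = buildNetwork(applyDeltaInConfig(S1, dD)); // S_2 = S_1 + ∆D
simA->setupNetwork();
RT rtA = dumpRuntimeStructures(simA); // (pre/postSynapticIds, postDelayInfo)

// Path 2: compile S_1 first, then apply the same ∆D in the RUN state.
CARLsim* simB = buildNetwork(S1);
simB->setupNetwork();
simB->updateDelays(gGrpIdPre, gGrpIdPost, dD);
RT rtB = dumpRuntimeStructures(simB);

// Structural integrity: both paths must yield identical runtime structures.
assert(rtA == rtB);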

15.5 Implementation Details

For contributors, we provide some details of the implementation; please refer to our paper for more information. The sparse representation of synaptic connections enables the simulation of large-scale SNNs, as it reduces the required memory from O(NDM) to O(N(M + D)), where N is the number of neurons, M the number of synapses per neuron, and D the maximal axonal delay. For instance, with N = 10^6, M = 100, and D = 20 ms, this reduces the order of stored entries from 2 × 10^9 to 1.2 × 10^8. Fig. 4a illustrates the sparse representation using a minimalistic SNN with four neurons and three synaptic connections. The algorithm of updateDelays modifies arbitrary delays (denoted as ∆D) in runtime memory and is presented in Fig. 4b. The other parameters of the method identify the pre-/postsynaptic neuron groups (gGrpIdPre, gGrpIdPost). Fig. 4c (4d) shows how a change from 4 ms to 2 ms (1 ms) of the synaptic connection between neurons 0 and 2 affects the data structures in runtime memory.

axonplast_fig6.png
Fig. 4. Sparse representation of synaptic connections demonstrated on a minimalistic SNN. The algorithm of updateDelays is applied for two essential cases, and the resulting modifications in the runtime memory structures are marked in red. (a) The sparse representation of synaptic connections dissolves delays into indices of data structures in runtime memory. The first neuron, with id 0, of the minimalistic SNN has three connections to its postsynaptic neurons 1, 2, and 3. In the sparse representation, synapses are ordered ascending by their axonal delays. For instance, the synapse for delay D_0,2 = 4 ms lies in the middle between those for D_0,1 and D_0,3. It is encoded in the fourth entry of the axonal delays table. Delay Start references the position of the synapse, and Delay Count encodes how many postsynaptic connections the neuron has at that delay. (b) Algorithm of the method updateDelays. (c) ∆D_0,2 = 2 ms increments the Delay Count at 2 ms. The former entry Delay Start is set to 0, as it no longer references the second synapse. (d) ∆D_0,2 = 1 ms induces a re-sort of the synaptic connections by swapping the first two synapses. Delay Start becomes 0 and references the first synapse.
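
To make the encoding of Fig. 4a concrete, the following self-contained sketch mimics the delay table with simplified names; the actual runtime structures in CARLsim differ, and the delay D_0,3 = 6 ms is an assumption for illustration.

#include <cassert>
#include <vector>

// Simplified stand-in for the runtime delay table of Fig. 4a:
// 'start' plays the role of Delay Start, 'count' of Delay Count.
struct DelayInfo {
    int start; // index of the first synapse with this delay
    int count; // number of synapses of the neuron with this delay
};

int main() {
    // Postsynaptic ids of neuron 0, sorted ascending by delay:
    // D_0,1 = 2 ms, D_0,2 = 4 ms, D_0,3 = 6 ms (assumed).
    std::vector<int> postSynapticIds = {1, 2, 3};
    std::vector<DelayInfo> delayTable(7, {0, 0}); // index = delay in ms
    delayTable[2] = {0, 1}; // 2 ms -> synapse 0 (to neuron 1)
    delayTable[4] = {1, 1}; // 4 ms -> synapse 1 (to neuron 2)
    delayTable[6] = {2, 1}; // 6 ms -> synapse 2 (to neuron 3)

    // Spike delivery after d ms touches exactly the synapses with delay d.
    int d = 4;
    for (int i = 0; i < delayTable[d].count; i++)
        assert(postSynapticIds[delayTable[d].start + i] == 2);

    // Case (c) of Fig. 4: ∆D_0,2 = 2 ms. The synapse order (1, 2, 3) remains
    // sorted, so only the delay table entries change:
    delayTable[2].count = 2; // 2 ms now covers synapses 0 and 1
    delayTable[4] = {0, 0};  // 4 ms no longer references a synapse
    return 0;
}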

15.6 References

Niedermeier, L. and Krichmar, J.L. (2023). "Experience-Dependent Axonal Plasticity in Large-Scale Spiking Neural Network Simulations". Presented at the International Joint Conference on Neural Networks (IJCNN), Queensland, Australia, 2023; to appear in the Proceedings of the 2023 IEEE International Joint Conference on Neural Networks (IJCNN).