CARLsim 6.1.0: a GPU-accelerated SNN simulator
SpikeMonitors and ConnectionMonitors, while very useful, can easily slow simulations down. They should be used for brief periods of time rather than the entire duration of the simulation. Additionally, it makes sense to target the exact group or connection you care about rather than all neurons or synapses in the simulation.
As mentioned above, leaving SpikeMonitors or ConnectionMonitors running for a long period of time or over a large group of neurons will slow the simulation down dramatically. Although many state variables are updated every time step (ms), CARLsim performs more calculations as the number of spikes grows, so simulations with high firing rates will necessarily run slower as well. A common trick for speeding up simulations that have a training and a testing phase is to train the SNN and then write out the network state (including synaptic weights) with saveSimulation, preserving the result of training. The network state can later be reloaded with loadSimulation. In this way, users can load a pre-trained network at any time without incurring the cost of training.
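In code, this round trip might look as follows (a sketch; the file name is illustrative, and note that loadSimulation must be called in CONFIG_STATE before setupNetwork, with the file closed only afterwards):

```cpp
// After training: store the network state, including synaptic weights
// (second argument true = save synapse info).
sim.saveSimulation("net.dat", true);

// In a later run, with an identically configured network, restore it:
FILE* fid = fopen("net.dat", "rb");
sim.loadSimulation(fid);   // call while still in CONFIG_STATE
sim.setupNetwork();
fclose(fid);               // close the file only after setupNetwork()
```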
CARLsim currently supports forward-Euler and fourth-order Runge Kutta for the numerical integration of ODEs.
The integration method and integration time step can be specified via CARLsim::setIntegrationMethod. By default, the simulation uses forward-Euler with a basic integration step of 0.5ms.
The specified integration method will apply to all neurons in the network. Future CARLsim versions might allow specifying the integration method on a per-group basis.
In contrast to the integration time step, the simulation time step is always 1ms, meaning that spike times cannot be retrieved with sub-millisecond precision. Future CARLsim versions might allow for sub-millisecond spike times.
By default, CARLsim uses the forward (or standard) Euler method with an integration step of 0.5ms for numerical stability. This can be set explicitly with the following function call:
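For instance (a sketch assuming a CARLsim object sim in CONFIG_STATE; consult the API reference of your CARLsim version for the exact signature of CARLsim::setIntegrationMethod):

```cpp
// Forward-Euler with 2 integration steps per 1ms of simulation time,
// i.e., an integration time step of 0.5ms.
int numStepsPerMs = 2;
sim.setIntegrationMethod(FORWARD_EULER, numStepsPerMs);
```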
where numStepsPerMs is the number of integration steps to perform per 1ms.
We suggest the number of time steps be at least 2 when working with the 4-parameter Izhikevich model (see 3.1.1 Izhikevich Neurons (4-Parameter Model)). We do not recommend using forward-Euler when working with the 9-parameter Izhikevich or compartmental models (see 3.1.2 Izhikevich Neurons (9-Parameter Model) and 3.1.3 Multi-Compartment Neurons).
CARLsim also supports the use of fourth-order Runge-Kutta (also referred to as "RK4", the "classical Runge-Kutta method", or simply "the Runge-Kutta method").
This can be specified with the following function call:
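A sketch, analogous to the forward-Euler case (sim is a CARLsim object in CONFIG_STATE):

```cpp
// Fourth-order Runge-Kutta with 10 integration steps per 1ms,
// i.e., an integration time step of 0.1ms.
int numStepsPerMs = 10;
sim.setIntegrationMethod(RUNGE_KUTTA4, numStepsPerMs);
```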
where numStepsPerMs is the number of integration steps to perform per 1ms.
We suggest the number of time steps be at least 10 when working with compartmental neurons (see 3.1.3 Multi-Compartment Neurons).
CARLsim is now thread-safe, so a distinct CARLsim simulation can be run on every GPU device and/or every CPU core of the machine. We refer to simulations using multiple GPUs as multi-GPU simulations, those using multiple CPUs as multi-CPU simulations, and those using both as hybrid simulations. The user can control placement across multiple CPUs/GPUs by specifying the preferred partition when creating each group. Currently, up to 8 GPUs and 24 CPU cores can be used concurrently in a single simulation. The available processors are indexed from 0. By default, CARLsim places all neuron groups on the CPU 0 partition. The following examples show how to specify the preferred processor for each neuron group.
For example, to create a group of Izhikevich neurons on a GPU partition using CARLsim::createGroup, simply specify a name (e.g., "exc1"), the number of neurons (e.g., 100), a type (e.g., EXCITATORY_NEURON), the preferred partition number (0-7 for GPU; must be a valid index for the available GPUs), and the computing backend (CPU_CORES/GPU_CORES):
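A sketch (group size and neuron parameters are illustrative; sim is a CARLsim object in CONFIG_STATE):

```cpp
// 100 excitatory Izhikevich neurons on GPU partition 0
int gExc = sim.createGroup("exc1", 100, EXCITATORY_NEURON, 0, GPU_CORES);
// regular-spiking Izhikevich parameters (a, b, c, d)
sim.setNeuronParameters(gExc, 0.02f, 0.2f, -65.0f, 8.0f);
```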
To create a group of spike generators on GPU 0, the user also specifies a name, size, type, the preferred partition number, and the computing backend:
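For example (group name and size are illustrative):

```cpp
// 100 spike generators on GPU partition 0
int gIn = sim.createSpikeGeneratorGroup("input", 100, EXCITATORY_NEURON,
                                        0, GPU_CORES);
```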
Similarly, the following method call creates a LIF neuron group named "inh1" and places it on the CPU 3 partition:
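A sketch (the group size is illustrative):

```cpp
// LIF neuron group "inh1" on CPU partition 3
int gInh = sim.createGroupLIF("inh1", 25, INHIBITORY_NEURON, 3, CPU_CORES);
```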
An example CARLsim simulation using heterogeneous processors (CPU and GPU) and heterogeneous neurons (Izhikevich and LIF) is shown in the lif_izhi_random_spnet project under the projects/ directory. The example implements the classic Izhikevich 80-20 network using LIF neurons and fast-spiking Izhikevich neurons.
CARLsim provides a range of handy functions to change weight values on the fly; that is, without having to recompile the network. The utility SimpleWeightTuner implements a simple weight search algorithm inspired by the bisection method. The function CARLsim::setWeight allows a user to change the weight of a single synapse. Alternatively, CARLsim::biasWeights can be used to add a constant bias to every weight of a certain connection ID, and CARLsim::scaleWeights multiplies all the weights with a scaling factor.
These functions are especially useful for tuning feedforward weights in large-scale networks that would otherwise take a lot of time to repeatedly build. For tuning in more complex situations please refer to ch10_ecj.
These functions are only valid in RUN_STATE (see ::carlsimState_t) and do not alter the topography of the network. They apply only to weight values of already allocated synapses.
The SimpleWeightTuner utility is a class that allows tuning of weight values of a specific connection (i.e., a collection of synapses), so that a specific neuron group fires at a predefined target firing rate—without having to recompile the CARLsim network.
A complete example is explained in tut4_simple_weight_tuner.
Consider a CARLsim network with an input group (gIn) connected to an output group (gOut). Suppose the goal is to find weight values that lead to some desired output activity (say, 27.4 Hz) in response to some Poissonian input. A conventional approach to solving this problem would be to repeatedly build and run the network with different weight values until some values are found that let gOut approach the desired target firing rate. This process can be tedious, especially when dealing with large-scale networks that take a long time to build.
Instead, one can use a SimpleWeightTuner:
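A sketch of the setup (error margin, iteration limit, and starting weight are illustrative; sim is a pointer to the CARLsim network, c0 the connection ID returned by CARLsim::connect, and gOut the output group):

```cpp
// stop when the rate is within 0.01 Hz of target, or after 100 iterations
SimpleWeightTuner swt(sim, 0.01, 100);
swt.setConnectionToTune(c0, 0.0);    // tune connection c0, starting at weight 0
swt.setTargetFiringRate(gOut, 27.4); // target mean rate of 27.4 Hz in gOut
```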
The SimpleWeightTuner constructor accepts a pointer to the network created above, sim, and some termination conditions: the algorithm will terminate either when the absolute error between observed and target firing rate is smaller than some error margin, or upon reaching the maximum number of iterations. Calling SimpleWeightTuner::setConnectionToTune informs the class which connection to tune and with which weight to start. The algorithm will repeatedly change the weights in a way that resembles the bisection method, until the mean firing rate of group gOut reaches 27.4 +- 0.01 Hz (specified via SimpleWeightTuner::setTargetFiringRate). Note that the connection involved here (c0) and the neuron group (gOut) can be completely independent from each other.
All that is left to do is to execute the algorithm until finished:
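A sketch of the tuning loop, continuing the setup above:

```cpp
while (!swt.done()) {
    swt.iterate(); // by default, runs the network for 1s per iteration
}
```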
This will run sim repeatedly for one second at a time (for different time periods, pass an optional argument) until one of the termination criteria is reached.
The easiest way to change the weight of a synapse is CARLsim::setWeight:
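For example (neuron IDs and weight value are illustrative; c0 is a connection ID returned by CARLsim::connect):

```cpp
// set the weight of the synapse from pre-neuron 3 to post-neuron 7
// in connection c0 to 0.25f (the weight range is not updated by default)
sim.setWeight(c0, 3, 7, 0.25f);
```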
This function will set the weight of a particular synapse of connection ID connId, namely the synapse connecting neuron neurIdPre to neuron neurIdPost, to the value weight. Here, the connection ID is the return argument of the corresponding CARLsim::connect call. Also, neuron IDs should be zero-indexed, meaning that the first neuron in each group has ID 0.
If the specified weight lies outside the boundaries [minWt,maxWt] of RangeWeight, then two different behaviors can be achieved, depending on a fifth optional argument updateWeightRange. If updateWeightRange is set to true, then the corresponding weight boundaries [minWt,maxWt] will be updated should the specified weight lie outside those boundaries. If updateWeightRange is set to false, then the corresponding weight will be clipped so that it stays within the existing weight boundaries [minWt,maxWt].
Alternatively, it is possible to change the weights of all the synapses that belong to a certain connection ID using CARLsim::biasWeights:
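For example (bias value is illustrative; c0 is a connection ID returned by CARLsim::connect):

```cpp
// add 0.1 to every weight of connection c0,
// growing the [minWt,maxWt] boundaries if needed
sim.biasWeights(c0, 0.1f, true);
```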
This function will add a constant bias to the weight of every synapse of connection ID connId. Here, the connection ID is the return argument of the corresponding CARLsim::connect call.
If the new weight (old weight plus bias) lies outside the boundaries [minWt,maxWt] of RangeWeight, then two different behaviors can be achieved, depending on a third optional argument updateWeightRange. If updateWeightRange is set to true, then the corresponding weight boundaries [minWt,maxWt] will be updated should the new weight lie outside those boundaries. If updateWeightRange is set to false, then the corresponding weight will be clipped so that it stays within the existing weight boundaries [minWt,maxWt].
Alternatively, it is possible to change the weights of all the synapses that belong to a certain connection ID using CARLsim::scaleWeights:
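For example (scaling factor is illustrative; c0 is a connection ID returned by CARLsim::connect):

```cpp
// halve every weight of connection c0,
// clipping to the existing [minWt,maxWt] boundaries
sim.scaleWeights(c0, 0.5f, false);
```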
This function will multiply the weight of every synapse of connection ID connId by a scaling factor scale. Here, the connection ID is the return argument of the corresponding CARLsim::connect call.
If the new weight (old weight times scaling factor) lies outside the boundaries [minWt,maxWt] of RangeWeight, then two different behaviors can be achieved, depending on a third optional argument updateWeightRange. If updateWeightRange is set to true, then the corresponding weight boundaries [minWt,maxWt] will be updated should the new weight lie outside those boundaries. If updateWeightRange is set to false, then the corresponding weight will be clipped so that it stays within the existing weight boundaries [minWt,maxWt].