#include <cuda_version_control.h>
#include <curand.h>
enum  conType_t { CONN_RANDOM, CONN_ONE_TO_ONE, CONN_FULL, CONN_FULL_NO_DIRECT, CONN_GAUSSIAN, CONN_USER_DEFINED, CONN_UNKNOWN }
    Connection types, used internally (externally it's a string). More...

enum  MemType { CPU_MEM, GPU_MEM }
    Type of memory pointer. More...

enum  SNNState { CONFIG_SNN, COMPILED_SNN, PARTITIONED_SNN, EXECUTABLE_SNN }
    The state of the spiking neural network, used within the kernel. More...
◆ compConnectConfig
This structure contains the configurations of compartmental connections that are created during the configuration state. The configurations are later processed by compileNetwork() and translated to metadata that are ready to be linked.
- See also
- CARLsimState neural dynamics configuration
◆ compConnectionInfo
◆ ConnectConfig
This structure contains the configurations of connections that are created during the configuration state. The configurations are later processed by compileNetwork() and translated to metadata that are ready to be linked.
- See also
- CARLsimState
◆ ConnectConfigMD
◆ ConnectConfigRT
This structure contains the configurations of connections that are created by optimizeAndPartitionNetwork(), which are ready to be executed by the computing backend.
- See also
- CARLsimState
- SNNState
◆ ConnectionInfo
◆ DelayInfo
◆ GlobalNetworkConfig
◆ GroupConfig
This structure contains the configuration of groups that are created during the configuration state. The configurations are later processed by compileNetwork() and translated to metadata that are ready to be linked.
- See also
- CARLsimState
◆ GroupConfigMD
◆ GroupConfigRT
This structure contains the configurations of groups that are created by optimizeAndPartitionNetwork(), which are ready to be executed by the computing backend.
- See also
- CARLsimState
- SNNState
◆ HomeostasisConfig
◆ NetworkConfigRT
This structure contains the network configuration that is required for GPU simulation. The data in this structure are copied to device memory when running a GPU simulation.
- See also
- SNN
◆ NeuralDynamicsConfig
◆ NeuromodulatorConfig
◆ RoutingTableEntry
This structure contains the spike routing information, including the source net id, source global group id, destination net id, and destination global group id.
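A minimal sketch of the routing fields described above; the actual field names and types in snn_datastructures.h may differ, so treat this as an illustrative assumption rather than the real definition.

```cpp
#include <cassert>

// Hypothetical sketch of a routing-table entry: which network/group a spike
// comes from and which network/group it is delivered to.
struct RoutingTableEntry {
    int srcNetId;   // source net id (the partition the spike originates in)
    int srcGrpId;   // source global group id
    int destNetId;  // destination net id
    int destGrpId;  // destination global group id
};
```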
◆ RuntimeData
◆ STDPConfig
◆ STPConfig
◆ SynInfo
◆ ThreadStruct
This structure contains the SNN object (because the multithreaded routing routine is a static method and does not recognize the `this` object), the netID used by the CPU runtime methods, the local group ID, startIdx, endIdx, and GtoLOffset.
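The fields listed above can be sketched as a plain argument struct handed to the static thread routine; the field names and types here are assumptions based on the description, not the actual declaration.

```cpp
#include <cassert>

// Hypothetical sketch: everything a static worker routine needs, since a
// static method has no `this` pointer and must receive the SNN object
// explicitly through its argument struct.
struct ThreadStruct {
    void* snn;       // the SNN object, passed explicitly (no `this` available)
    int   netId;     // net id used by the CPU runtime methods
    int   lGrpId;    // local group ID
    int   startIdx;  // first index handled by this thread
    int   endIdx;    // last index handled by this thread (assumption)
    int   GtoLOffset; // global-to-local ID offset
};
```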
◆ conType_t
Enumerator:
    CONN_RANDOM
    CONN_ONE_TO_ONE
    CONN_FULL
    CONN_FULL_NO_DIRECT
    CONN_GAUSSIAN
    CONN_USER_DEFINED
    CONN_UNKNOWN
Definition at line 75 of file snn_datastructures.h.
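Since the summary notes that connection types are strings externally but an enum internally, a lookup like the following is one plausible way the translation could work. The exact string spellings and the function name `parseConnType` are assumptions for illustration, not CARLsim's actual parser.

```cpp
#include <map>
#include <string>

enum conType_t { CONN_RANDOM, CONN_ONE_TO_ONE, CONN_FULL,
                 CONN_FULL_NO_DIRECT, CONN_GAUSSIAN,
                 CONN_USER_DEFINED, CONN_UNKNOWN };

// Map a user-facing connection-type string to the internal enum value.
// Unrecognized strings fall back to CONN_UNKNOWN.
conType_t parseConnType(const std::string& type) {
    static const std::map<std::string, conType_t> lut = {
        {"random",         CONN_RANDOM},
        {"one-to-one",     CONN_ONE_TO_ONE},
        {"full",           CONN_FULL},
        {"full-no-direct", CONN_FULL_NO_DIRECT},
        {"gaussian",       CONN_GAUSSIAN}
    };
    auto it = lut.find(type);
    return it != lut.end() ? it->second : CONN_UNKNOWN;
}
```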
◆ MemType
CARLsim supports execution either on standard x86 central processing units (CPUs) or on off-the-shelf NVIDIA GPUs. The runtime data for the CPU/GPU computing backend must be allocated in the appropriate memory space.
CPU_MEM: runtime data is allocated in CPU (main) memory.
GPU_MEM: runtime data is allocated in GPU memory.
Enumerator:
    CPU_MEM: runtime data is allocated in CPU (main) memory
    GPU_MEM: runtime data is allocated in GPU memory
Definition at line 69 of file snn_datastructures.h.
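The two memory spaces can be illustrated with a small allocator sketch that branches on MemType. This is a hypothetical helper (`allocRuntimeBuffer` is not a CARLsim function); the GPU branch assumes the CUDA toolkit's `cudaMalloc` and is compiled out when building without nvcc.

```cpp
#include <cstddef>
#include <cstdlib>

enum MemType { CPU_MEM, GPU_MEM };

// Hypothetical sketch: allocate a runtime buffer in the memory space that
// matches the chosen computing backend.
void* allocRuntimeBuffer(std::size_t bytes, MemType mem) {
    if (mem == CPU_MEM) {
        return std::malloc(bytes);  // CPU (main) memory
    }
#ifdef __CUDACC__
    void* devPtr = nullptr;
    cudaMalloc(&devPtr, bytes);     // GPU (device) memory
    return devPtr;
#else
    return nullptr;                 // GPU path requires the CUDA toolkit
#endif
}
```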
◆ SNNState
Enumerator:
    CONFIG_SNN
    COMPILED_SNN
    PARTITIONED_SNN
    EXECUTABLE_SNN
Definition at line 78 of file snn_datastructures.h.
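The state values above follow the build pipeline described for the structures on this page: configuration, then compileNetwork(), then optimizeAndPartitionNetwork(). A sketch of that one-way progression, assuming each build step advances the state by exactly one stage (an illustrative guard, not CARLsim's actual state machine):

```cpp
enum SNNState { CONFIG_SNN, COMPILED_SNN, PARTITIONED_SNN, EXECUTABLE_SNN };

// Hypothetical helper: advance the network to the next lifecycle stage,
// mirroring configuration -> compileNetwork() -> optimizeAndPartitionNetwork().
SNNState advance(SNNState s) {
    switch (s) {
        case CONFIG_SNN:      return COMPILED_SNN;    // after compileNetwork()
        case COMPILED_SNN:    return PARTITIONED_SNN; // after optimizeAndPartitionNetwork()
        case PARTITIONED_SNN: return EXECUTABLE_SNN;  // runtime data ready to execute
        default:              return EXECUTABLE_SNN;  // terminal state
    }
}
```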