I am currently trying to implement an MNIST classifier using brian2genn on GPU. My problem is that TimedArray is not supported by brian2genn, and I can't come up with another solution for feeding a dataset into the network that does not use TimedArrays. Any suggestions?
Example piece of code that works on CPU but not on GPU:
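The snippet is not reproduced here; an illustrative sketch of the kind of code meant, with hypothetical names and rate values (a 2-D TimedArray of per-pixel rates driving a PoissonGroup), would be:

```python
from brian2 import *
import brian2genn

set_device('genn')  # works with the default runtime or 'cpp_standalone', fails with 'genn'

n_input = 28 * 28            # one Poisson input per MNIST pixel
present_time = 100*ms        # how long each sample is presented
# hypothetical (n_samples, n_input) array of rates, e.g. scaled pixel intensities
rates_per_sample = np.random.rand(10, n_input) * 63.75

stimulus = TimedArray(rates_per_sample*Hz, dt=present_time)  # TimedArray is not supported by Brian2GeNN
inp = PoissonGroup(n_input, rates='stimulus(t, i)')
mon = SpikeMonitor(inp)

run(10 * present_time)
```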
Hi. This is indeed a major limitation in Brian2GeNN at the moment. I don't see a convenient solution right now (see #96 for a discussion of what we might do in the future), but if the total number of spikes that the PoissonGroup generates is not too big, then you could maybe do the following:
1. Create only the TimedArray, the PoissonGroup and a SpikeMonitor, and generate/record all the input spikes. For this, do not use Brian2GeNN, but the C++ standalone device or the default runtime mode.
2. In your actual simulation with Brian2GeNN, use a SpikeGeneratorGroup into which you plug the spikes recorded with the SpikeMonitor from the previous simulation (see the sketch below).
Before you do this, try to estimate how many spikes the PoissonGroup will generate. As a rough guideline, each recorded spike takes up 16 bytes of memory, so on a system with 16 GB of RAM you'd want to stay well below one billion spikes.
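A minimal sketch of these two steps, with hypothetical names, sizes and rate values, passing the recorded spikes between the two parts via .npy files:

```python
# Script 1: generate and record the input spikes (no Brian2GeNN)
from brian2 import *

n_input = 28 * 28            # one Poisson input per MNIST pixel (hypothetical)
present_time = 100*ms        # how long each sample is presented
# hypothetical (n_samples, n_input) array of Poisson rates derived from the pixels
rates_per_sample = np.random.rand(10, n_input) * 63.75

stimulus = TimedArray(rates_per_sample*Hz, dt=present_time)
inp = PoissonGroup(n_input, rates='stimulus(t, i)')
mon = SpikeMonitor(inp)
run(10 * present_time)

np.save('input_indices.npy', np.array(mon.i))
np.save('input_times_ms.npy', np.array(mon.t/ms))
```

And then, in a separate script (keeping the two parts in separate scripts avoids mixing the runtime and GeNN devices in one process):

```python
# Script 2: the actual Brian2GeNN simulation, replaying the recorded spikes
from brian2 import *
import brian2genn
set_device('genn')

n_input = 28 * 28
present_time = 100*ms
indices = np.load('input_indices.npy')
times = np.load('input_times_ms.npy')*ms

inp = SpikeGeneratorGroup(n_input, indices, times)
# ... build the rest of the classifier network driven by `inp` here ...
run(10 * present_time)
```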
A minor point: the simulation of the spikes should be a bit faster if you use:
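The snippet referred to here is not included above; a sketch of the idea, reusing the hypothetical `stimulus` TimedArray and `present_time` from the earlier sketches, might look like this:

```python
# Poisson spiking implemented directly in a NeuronGroup; 'rates' is a plain
# per-neuron parameter that is only refreshed when the stimulus changes.
inp = NeuronGroup(n_input, 'rates : Hz', threshold='rand() < rates*dt')
inp.run_regularly('rates = stimulus(t, i)', dt=present_time)
mon = SpikeMonitor(inp)
```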
The NeuronGroup is equivalent to a PoissonGroup, but by using the run_regularly operation you only look up the rate every 100 ms (when it actually changes), instead of every time step.