Working with datasets on GPU #95

Open
YigitDemirag opened this issue Oct 10, 2019 · 1 comment
Comments

@YigitDemirag

I am currently trying to implement an MNIST classifier on the GPU using brian2genn. My problem is that TimedArray is not supported by brian2genn, and I can't come up with another way of feeding a dataset into the Network that does not use TimedArrays. Any suggestions?

An example piece of code that works on the CPU but not on the GPU:

    from brian2 import *
    import numpy as np

    # Create the input to the Network
    train_and_test = np.vstack([mnist_tr_in, mnist_te_in])
    # stimRate = 100*Hz, stimDuration = 100*ms
    stimulusMNIST = TimedArray(train_and_test*stimRate, dt=stimDuration)
    input = PoissonGroup(img_l*img_l*nmult,
                         rates='stimulusMNIST(t, i % (28*28))', name='pin')
    net = Network(input)   # avoid shadowing the Network class
@mstimberg
Member

Hi. This is indeed a major limitation in Brian2GeNN at the moment. I don't see a convenient solution right now (see #96 for a discussion of what we might do in the future), but if the total number of spikes that the PoissonGroup generates is not too big, then you could maybe do the following:

  • Create only the TimedArray, the PoissonGroup and a SpikeMonitor, and generate/record all the input spikes. For this step, do not use Brian2GeNN but rather the C++ standalone device or the default runtime mode.
  • In your actual simulation with Brian2GeNN, use a SpikeGeneratorGroup into which you plug the spikes recorded by the SpikeMonitor from the previous simulation (see the sketch below).
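
A minimal sketch of this two-step approach, assuming the variable names from the snippet above (stimulusMNIST, stimRate, stimDuration, img_l, nmult) plus a placeholder n_samples for the number of presented images; the two steps would typically live in separate scripts:

    # --- Step 1: record the input spikes without Brian2GeNN
    # --- (default runtime mode or C++ standalone)
    from brian2 import *
    import numpy as np

    inp = PoissonGroup(img_l*img_l*nmult, rates='stimulusMNIST(t, i % (28*28))')
    spike_mon = SpikeMonitor(inp)
    run(n_samples*stimDuration)
    # store the recorded spike indices and times (in ms) on disk
    np.savez('input_spikes.npz', i=spike_mon.i[:], t=spike_mon.t[:]/ms)

    # --- Step 2 (separate script): replay the recorded spikes with Brian2GeNN
    from brian2 import *
    import brian2genn
    import numpy as np
    set_device('genn')

    spikes = np.load('input_spikes.npz')
    inp = SpikeGeneratorGroup(img_l*img_l*nmult,
                              spikes['i'], spikes['t']*ms, name='pin')
    # ... connect `inp` to the rest of the model and run as usual ...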

Before you do this, try to estimate how many spikes the PoissonGroup will generate. As a rough guideline, each recorded spike takes up 16 bytes of memory, so on a system with 16 GB of RAM you'd want to stay well below one billion spikes.
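
For the MNIST numbers in the snippet above, a worst-case back-of-the-envelope estimate might look like the following (this assumes every pixel drives the full 100 Hz; the actual count will be lower because the rates scale with the pixel intensities):

    n_neurons = 28*28        # input neurons per copy (multiply by nmult if used)
    rate_hz = 100.0          # stimRate
    duration_s = 0.1         # stimDuration per image
    n_images = 70000         # MNIST train + test

    max_spikes = n_neurons * rate_hz * duration_s * n_images
    memory_gb = max_spikes * 16 / 1e9
    print(f"<= {max_spikes:.1e} spikes, ~{memory_gb:.1f} GB to record them")
    # -> <= 5.5e+08 spikes, ~8.8 GB in the worst case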

A minor point: the simulation that generates the spikes should be a bit faster if you use:

    input = NeuronGroup(img_l*img_l*nmult, 'rate : Hz',
                        threshold='rand() < rate*dt', name='pin')
    input.run_regularly('rate = stimulusMNIST(t, i % (28*28))', dt=stimDuration)

The NeuronGroup is equivalent to a PoissonGroup, but with the run_regularly operation the rate is only looked up every 100 ms (i.e. when it actually changes) instead of on every time step.
