
C99 math functions fail on integer type arguments #133

Open

denisalevi opened this issue Sep 7, 2021 · 18 comments

Comments

@denisalevi
Member

Running this script (with GeNN 4.5.1 and brian2GeNN 1.6)

from brian2 import *
import brian2genn

set_device('genn')

target = NeuronGroup(2, 'dv/dt = - exp(1)/ms : 1')

run(0*second)

fails with

...
GeNNworkspace/magicnetwork_model_CODE/neuronUpdateCUDAOptim.cc(79): error: calling a __host__ function("std::exp<int> ") from a __global__ function("updateNeuronsKernel") is not allowed

GeNNworkspace/magicnetwork_model_CODE/neuronUpdateCUDAOptim.cc(79): error: identifier "std::exp<int> " is undefined in device code

2 errors detected in the compilation of "GeNNworkspace/magicnetwork_model_CODE/neuronUpdateCUDAOptim.cc".
terminate called after throwing an instance of 'std::runtime_error'
  what():  optimizeBlockSize: NVCC failed
genn/bin/genn-buildmodel.sh: line 92: 1780082 Aborted                 (core dumped) "$GENERATOR" "$BASEDIR/../" "$OUT_PATH" "$FORCE_REBUILD"
...

CUDA math functions are overloaded only for floating point types in device code. That seems to be the source of the issue here. But I'm pretty sure this used to work in older brian2GeNN / GeNN versions, since the COBAHH example also uses exp(<integer>) and is currently failing (at least for me).
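
For reference, a minimal sketch of what I think is happening on the CUDA level, independent of Brian/GeNN (this is my assumption of what the generated kernel boils down to; compile as a plain .cu file with nvcc):

#include <cmath>
using namespace std;

__global__ void kernel(double *out)
{
    // Fine: resolves to the device overload of exp(double).
    out[0] = exp(4.0);
    // Uncommenting the next line reproduces the error above on the affected
    // setups: the best match for an int argument is std::exp<int>, which is
    // a host-only function.
    // out[1] = exp(4);
}

int main()
{
    double *d_out;
    cudaMalloc(&d_out, 2 * sizeof(double));
    kernel<<<1, 1>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}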

Can someone reproduce this?

@mstimberg
Member

Your example works for me, but I'm using a quite old CUDA version (9.1)...

@denisalevi
Member Author

Ah, can you try it again with exp(4) instead of exp(1)? I wanted to simplify the example and didn't rerun it. It passes for me with exp(1) as well and fails with exp(4). I guess the compiler optimizes the exp call away if the argument is 1?

@mstimberg
Copy link
Member

Ah, indeed, I can reproduce the issue with exp(4). It would be nice to handle this somehow, but I don't see it as a big problem since code will basically always use variables, quantities, etc. which are all floating point values. Where do you see COBAHH use integer values?

@denisalevi
Member Author

The exponential_euler method seems to generate exponentials of integers. Changing to method='euler' fixes the problem.

@denisalevi
Member Author

A few internal variables are also integers, such as i, j, N etc., for which this fails. And I assume that if it fails for exp, it fails for all math functions (I only tested cos, which fails as well)? Brian also supports user-defined integer variables, which run into the same problem.

I only came across this because the benchmarks you used for the brian2GeNN paper now fail because of it (which they obviously didn't in the past).

In brian2cuda, I'm dealing with this by defining my own math functions that cast integral types to floating point types. One can lose precision here when the conversion is e.g. from int32 to float32, so I have a bunch of warnings. But it's a bit of a mess and needs cleaning up / making sure it does what it should. Just as a reference, I explained what I did in brian2cuda here; the implementation currently looks like this.
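
Roughly, the idea looks like this (a simplified sketch of the approach, not the actual brian2cuda code; _my_exp is just an illustrative name):

#include <math.h>
#include <type_traits>

// Integral arguments: cast explicitly to double before calling the floating
// point exp (the warnings / preferences for the possible precision loss
// mentioned above are omitted here).
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
__host__ __device__ double _my_exp(T value)
{
    return exp((double)value);
}

// Floating point arguments just pass through to the existing overloads.
__host__ __device__ inline double _my_exp(double value) { return exp(value); }
__host__ __device__ inline float _my_exp(float value) { return expf(value); }

The generated code then calls _my_exp(...) instead of exp(...), so _my_exp(4) picks the integral overload and compiles fine in device code.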

@mstimberg
Member

Ok, apologies, I can now reproduce the problem. The reason why it worked for me before is that I use single precision by default, and for single precision GeNN replaces exp by expf which deals with integer arguments just fine. I have to admit I don't quite see why CUDA does not just do an implicit type conversion of integer arguments for the double precision function, but well... The reason why this worked before Brian 2.4 (e.g. with 2.3.0.2) is that due to a bug in our conversion of string equations to sympy, we translated all integers to floating point values... We changed this with brian-team/brian2#1202, but well, this now appears to break COBA as you observed (I think it is also behind the warnings you reported as brian-team/brian2#1339).

Probably having function implementations that cast the argument to double explicitly is the best solution, but as a workaround for benchmarking, I'd simply change all integer literals in the COBA example to floating point values...

@denisalevi
Member Author

Unfortunately, the problem is not in integer literals but in the exponential Euler integration, which both the COBAHH and the MBody benchmarks use. And as far as I can tell, only the GeNN state updates use exp(<int>), maybe slipping in during GeNN code generation? In C++ Standalone, I can only see exp(<float/double>) calls. For GeNN, it's happening in the updateNeuronsKernel.

I tried different integration methods and the COBAHH example seems to work only with exponential Euler (of the methods available in Brian). Any idea for a workaround? For the MBody example, it seems to work fine with non-exponential Euler integration (at least for the small network of 2500 neurons; I didn't try larger ones just now).

@tnowotny
Contributor

tnowotny commented Sep 8, 2021

I find this slightly worrying, but to fully understand: @denisalevi, do you have some generated code (from the GeNNworkspace) for a failing example with exponential Euler so we can see explicitly where the issue sneaks in?

@mstimberg
Member

I did not mean that the original equations contain direct uses of exp(integer), but that they use integer literals (e.g. 40*mV instead of 40.0*mV) which lead to problems later on. Sympy deals with the equations in mathematical terms and therefore can change the type of arguments that involve implicit casting. E.g. in

exp((40*mV - v)/(5*mV))

The argument is clearly a floating point value (mV and v are floating point), but mathematically it is equivalent to e.g.

exp(40*mV/(5*mV)) * exp(-v/(5*mV))

And this simplifies to

exp(8) * exp(-v/(5*mV))

Now if the code initially used 40.*mV and 5.*mV, sympy's simplifications would have turned things into exp(8.) and all would have worked. This is why replacing the integer literals in the COBA example makes it run.

C++ standalone does not show this, because it uses the "loop invariant optimisation" (pulling common terms out of the loop over neurons). This is switched off in GeNN. If you switch it off for C++ via the preference, you will see terms like exp(8) as well. I can't say whether this is a general thing, though, maybe it could also work the other way round, i.e. in other examples you might get integers only when you pull out the terms.

So as a workaround for both COBAHH and MBody, I'd replace all integer numbers in the examples (13*mV → 13.0*mV, etc.); this should make things work (I only tested it for COBAHH, but the equations are basically the same in the MBody example).

@neworderofjamie
Collaborator

neworderofjamie commented Sep 8, 2021

First of all, since GeNN 4.4.0, we actively strip out expf calls and rely on the C++ overloads of exp (as, confusingly, although OpenCL isn't C++ based, these are all it provides). Nonetheless, I think I see what is happening. CUDA provides the following:

float exp(float x);
double exp(double x);

But, unlike standard C++11, doesn't provide:

double exp(int x);

(where int is actually likely to be some sort of SFINAE template nightmare). This means that exp(4) is ambiguous in CUDA, as it could be an expression that returns double or float, so the compiler fails (http://cpp.sh/9prxjm demonstrates this situation in standard C++). I suspect CUDA doesn't include the integer overload as it would result in (slow) double-precision maths sneaking in in a non-obvious manner. From GeNN's POV, the best solution would be if Brian only used floating point literals for these calls, but we could also generate scalar exp(int x) overloads for CUDA which would return model-precision floating point values. This could result in (very) subtly different behaviour between CPU and GPU, but that's probably OK.
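
Something roughly like this, purely as a hypothetical sketch of what the generated code could contain (assuming scalar is the model-precision typedef and the model uses single precision):

typedef float scalar;

// Hypothetical generated overload: cast the integer argument to the model
// precision and call the matching CUDA overload, so no double-precision
// maths sneaks in on the device.
__device__ inline scalar exp(int x)
{
    return expf((scalar)x);
}

With something like this in place, exp(4) in a kernel would resolve to the generated overload (an exact match for int) rather than the host-only std::exp<int>.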

@neworderofjamie
Collaborator

You could also take your loop invariant constants and stick them into derived parameters - that way they'd get evaluated in standard C++ host code.

@denisalevi
Member Author

Ahh, I see, yes I can replace the literals, happy with that workaround for now. Will test it right away. Thanks @mstimberg

Why does brian2GeNN turn off loop invariant optimizations? I've been seeing the log message but wasn't sure why.

@tnowotny I guess that answers your question, right? I can paste some generated code if there are still questions. But it's basically a 1000-character-long arithmetic expression that happens to have some exp(3) calls in it 🎉

@neworderofjamie Where do you see the subtle differences to CPU? I think the C++ standard also just casts integer types to double (sounds like that from the docs).

@neworderofjamie
Collaborator

neworderofjamie commented Sep 8, 2021

The difference is that scalar exp(int x) would return single precision values if the model precision was set to "float", whereas on the CPU the standard C++ overloads (as you say) will always return double. Admittedly, this would only cause a difference with "float" precision in pathological cases like:

const scalar V = exp(4.0+EPS)-exp(4);

@denisalevi
Member Author

I see. Yeah, I ended up with a preference that decides whether to always cast to double or rather to the model precision. Probably one of those preferences that will never be used, though 😅

@tnowotny
Contributor

tnowotny commented Sep 8, 2021

@denisalevi, yes, no need to paste code anymore for now. We got there in the end...

denisalevi added a commit to brian-team/brian2cuda that referenced this issue Sep 8, 2021
@mstimberg
Member

Sorry for hijacking this thread again, but just a warning for @denisalevi and his benchmarks: I realized that the generated code actually changes when you replace the integer literals with floating point literals. SymPy will evaluate constants differently depending on whether they are integers or floating point values, and this can mean that it replaces calls to exprel by variants of exp. So make sure that all of your benchmarks are consistent in that regard, because exprel is more complex to evaluate than exp.
I stumbled across this while fixing a completely unrelated bug; see this recent blog post.

@denisalevi
Member Author

Thanks @mstimberg for the warning! I am using the same brian2 network for all benchmarks, so if it uses exp instead of exprel, it should do so for all configurations (right?). And I am not yet using the exprel function anyway, but still the COBAHH definition with exp, so there might be no exprel in there at all.

@mstimberg
Member

Well, if you are still using the version with exp, then this does indeed not apply :). And in general, if it is the same network (including the number literals) and the same version of Brian, then of course they should all do the same thing.

mstimberg added a commit that referenced this issue May 2, 2022