C99 math functions fail on integer type arguments #133
Comments
Your example works for me, but I'm using quite an old CUDA version (9.1)...
Ah, can you try it again with …?
Ah, indeed, I can reproduce the issue with …
The …
And a few internal variables are integers, such as … I mean, I came across this only because the benchmarks that you used for the brian2GeNN paper now fail due to that (which they didn't in the past, obviously). In brian2cuda, I'm dealing with this by defining my own math functions that cast integral types to floating point types. One can lose precision here when the conversion is e.g. from int32 to float32, so I have a bunch of warnings. But it's a bit of a mess and needs cleaning up / making sure this is doing what it should. Just as a reference, here I explained what I did in brian2cuda; the implementation currently looks like this.
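For readers without access to that link, here is a minimal sketch of the general idea (this is my own illustration, not the actual brian2cuda code; the name `brian_exp` is made up): an integral overload explicitly forwards to the double-precision version, which is where a precision warning could be raised.

```cpp
#include <cmath>
#include <type_traits>

// Hypothetical wrapper in the spirit described above: integral arguments are
// cast explicitly to double before calling the floating-point exp().
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
double brian_exp(T x)
{
    // Casting e.g. a large 64-bit integer to double can lose precision;
    // a real implementation would emit a warning at code-generation time.
    return std::exp(static_cast<double>(x));
}

template <typename T,
          typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
T brian_exp(T x)
{
    return std::exp(x);
}
```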
Ok, apologies, I can now reproduce the problem. The reason why it worked for me before is that I use single precision by default, and for single precision GeNN replaces … Probably having function implementations that cast the argument to double explicitly is the best solution, but as a workaround for benchmarking, I'd simply change all integer literals in the COBA example to floating point values...
Unfortunately, the problem is not in integer literals but in the exponential Euler integration, which both the COBAHH and the MBody benchmarks use. And as far as I can tell, only the GeNN state updaters use … I tried different integration methods, and the COBAHH example seems to work only with exponential Euler (from the ones available in Brian). Any idea for a workaround? For the MBody example, it seems to work fine with non-exponential …
I find this slightly worrying, but to fully understand, @denisalevi, do you have some generated code (from the GeNNWorkspace) for a failing example with exponential Euler, so we can see explicitly where the issue sneaks in?
I did not mean that the original equations contain direct uses of … The argument is clearly a floating point value (…). And this simplifies to … Now, if the code initially used …, C++ standalone does not show this, because it uses the "loop invariant optimisation" (pulling common terms out of the loop over neurons). This is switched off in GeNN. If you switch it off for C++ via the preference, you will see terms like … So as a workaround for both COBAHH and MBody, I'd replace all integer numbers in the examples (…) with floating point values.
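As a made-up illustration of how an integer can sneak into an exp() call after SymPy's simplification (the numbers here are invented, not taken from COBAHH): with tau = 5*ms, a state-updater term like

    exp(-(10*ms)/tau)  ->  exp(-10/5)  ->  exp(-2)

ends up as an exponential of a plain integer literal in the generated code, even though the original equations only contained dimensioned floating-point quantities.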
First of all, since GeNN 4.4.0, we actively strip out … and provide:

```cpp
float exp(float x);
float exp(double x);
```

But, unlike standard C++11, GeNN doesn't provide:

```cpp
double exp(int x);
```

(where the integer argument would be converted to double).
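To illustrate why that breaks (this is my own minimal sketch of the overload-resolution problem, not GeNN-generated code): with only the two floating-point overloads visible, a call with an integer argument has no best match, because `int` converts equally well to `float` and to `double`.

```cpp
// Stand-ins for the declarations GeNN provides (per the comment above).
float exp(float x);
float exp(double x);

float demo()
{
    float a = exp(4.0);  // fine: matches exp(double) exactly
    float b = exp(4);    // compile error: ambiguous, int -> float and
                         // int -> double have the same conversion rank
    return a + b;
}
```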
You could also take your loop invariant constants and stick them into derived parameters - that way they'd get evaluated in standard C++ host code.
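For context, a rough sketch of what that could look like in a GeNN model definition (loosely modelled on how GeNN 4.x built-in neuron models are written; the class name, parameter names and code strings are invented, and macro details may differ between versions): the loop-invariant term exp(-dt/tau) is evaluated once on the host, in ordinary double-precision C++, so the integer-argument overload question never reaches device code.

```cpp
#include <cmath>
#include <vector>
#include "modelSpec.h"  // GeNN 4.x model-definition header (assumed setup)

// Hypothetical neuron model with one parameter ("tau") and one variable ("V").
class MyNeuron : public NeuronModels::Base
{
public:
    DECLARE_MODEL(MyNeuron, 1, 1);

    SET_PARAM_NAMES({"tau"});
    SET_VARS({{"V", "scalar"}});

    // Derived parameter: computed on the host from the parameters and dt,
    // so std::exp() is called with a plain double argument.
    SET_DERIVED_PARAMS({
        {"ExpTC", [](const std::vector<double> &pars, double dt)
                  { return std::exp(-dt / pars[0]); }}});

    // Device code only uses the precomputed value.
    SET_SIM_CODE("$(V) *= $(ExpTC);\n");
};
IMPLEMENT_MODEL(MyNeuron);
```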
Ahh, I see, yes, I can replace the literals; happy with that workaround for now. Will test it right away. Thanks @mstimberg. Why does brian2GeNN turn off loop invariant optimizations? I've been seeing the log message but wasn't sure why.

@tnowotny I guess that answers your question, right? I can paste some generated code if there are still questions, but it's basically a 1000-character-long arithmetic expression that happens to have some … in it.

@neworderofjamie Where do you see the subtle differences to the CPU? I think the C++ standard also just casts integer types to double (sounds like that from the docs).
The difference is that …

```cpp
const scalar V = exp(4.0+EPS)-exp(4);
```
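To make the point concrete, here is a small standalone sketch (my own illustration, not GeNN-generated code) of the kind of numerical discrepancy that shows up when exp() is evaluated in single precision on the device but in double precision on the CPU:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Single-precision evaluation (as in a GeNN float build) versus the
    // double-precision result the host-side C++ code would produce.
    const float  device_like = std::exp(4.0f);
    const double host_like   = std::exp(4.0);

    std::printf("float  exp(4) = %.10f\n", static_cast<double>(device_like));
    std::printf("double exp(4) = %.10f\n", host_like);
    std::printf("difference    = %.3e\n",
                host_like - static_cast<double>(device_like));
    return 0;
}
```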
I see. Yeah, I ended up with a preference to decide if you always cast to …
@denisalevi, yes, no need to paste code any more for now. We got there in the end...
Sorry for hijacking this thread again, but just a warning for @denisalevi and his benchmarks: I realized that the generated code actually changes when you replace the integer literals with floating point literals. SymPy will evaluate constants differently depending on whether they are integers or floating point values, and this can mean that it replaces calls to …
Thanks @mstimberg for the warning! I am using the same brian2 network for all benchmarks, so if it uses …
Well, if you are still using the version with …
Running this script (with GeNN 4.5.1 and brian2GeNN 1.6):

…

fails with:

…

CUDA math functions are overloaded only for floating point types in device code. That seems to be the source of the issue here. But I'm pretty sure that this used to work in older brian2GeNN / GeNN versions, as the COBAHH example uses exp(<integer>) as well and is currently failing (at least for me). Can someone reproduce this?