
AfSigmoidBias activation function seems wrong #4

Open
JoostHuizinga opened this issue Oct 27, 2016 · 2 comments
@JoostHuizinga
The activation function in AfSigmoidBias, defined in af.hpp, seems wrong. As you can see below, in the current implementation only the bias (`trait<P>::single_value(this->_params)`) is multiplied by lambda.

return 1.0 / (exp(-p + trait<P>::single_value(this->_params) * lambda) + 1);

However, I would assume that the whole sum (-p + bias) should be multiplied by lambda. In code:

return 1.0 / (exp((-p + trait<P>::single_value(this->_params)) * lambda) + 1);

Also, since it is called a bias, shouldn't it be added to p before the negation? In its current form, the parameter behaves more like a threshold than a bias.
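The difference between the two formulas can be sketched with standalone functions (hypothetical names for illustration; `b` stands in for `trait<P>::single_value(this->_params)`). The two versions only agree when lambda == 1; for any other lambda they compute different activations:

```cpp
#include <cmath>

// Current implementation in af.hpp: due to operator precedence,
// lambda scales only the bias term b, not the whole exponent.
double sigmoid_current(double p, double b, double lambda) {
    return 1.0 / (std::exp(-p + b * lambda) + 1);
}

// Proposed fix: lambda scales the entire (-p + b) sum,
// acting as the sigmoid's steepness parameter as usually intended.
double sigmoid_proposed(double p, double b, double lambda) {
    return 1.0 / (std::exp((-p + b) * lambda) + 1);
}
```

For example, with p = 1, b = 0.5 and lambda = 5, the current version computes 1/(exp(1.5) + 1) ≈ 0.18 while the proposed version computes 1/(exp(-2.5) + 1) ≈ 0.92, so the bug materially changes the network's behavior whenever lambda differs from 1.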

@jbmouret
Member

Good catch! At first sight, you seem to be right. (I am not sure we have used this function a lot. Did you use it regularly?)


@JoostHuizinga
Author

I don't think I have used this function in previous work; the retina experiment had the activation function defined within the experiment file itself, and the rest of my research used completely different types of activation functions.
