https://arxiv.org/abs/1706.02515 proposes a new activation function, SELU (scaled exponential linear unit), which can replace explicit batch normalization and could improve the overall performance and accuracy of our neural networks.
To do:

- [ ] Implement the SELU activation function (see the sketch below).
- [ ] Implement the special "alpha dropout" for SNNs, because regular dropout is broken with SELU (see p. 6 of the paper). I think we can do this transparently by automatically switching to the alpha dropout algorithm in `Perceptron` constructors if they detect `activation_func='selu'` (see the sketch below).
- [ ] Add the weight initialisation method suggested at the bottom of p. 3 of the paper (→ `elektronn2.neuromancer.variables.initweights()`; see the sketch below).
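For reference, a minimal NumPy sketch of the SELU nonlinearity, using the fixed-point constants α ≈ 1.6733 and λ ≈ 1.0507 given in the paper. The real implementation would of course use the corresponding Theano tensor ops rather than NumPy; this is only meant to pin down the formula.

```python
import numpy as np

# Fixed-point constants from the SNN paper (https://arxiv.org/abs/1706.02515)
SELU_ALPHA = 1.6732632423543772
SELU_LAMBDA = 1.0507009873554805

def selu(x):
    """SELU: lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise."""
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1))
```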
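And a hedged sketch of alpha dropout as described in the paper: dropped units are set to the SELU saturation value α' = −λα instead of 0, followed by an affine correction that keeps the activations at zero mean and unit variance. Function and argument names below are placeholders, not existing elektronn2 API, and the implementation would again be in Theano.

```python
import numpy as np

SELU_ALPHA = 1.6732632423543772
SELU_LAMBDA = 1.0507009873554805
ALPHA_PRIME = -SELU_LAMBDA * SELU_ALPHA  # SELU saturation value, ~ -1.7581

def alpha_dropout(x, rate=0.1, rng=np.random):
    """Alpha dropout (training mode) with keep probability q = 1 - rate."""
    q = 1.0 - rate
    mask = rng.binomial(1, q, size=x.shape)
    # Dropped activations are set to alpha' instead of 0.
    x_dropped = np.where(mask, x, ALPHA_PRIME)
    # Affine correction so mean stays at 0 and variance at 1 (paper, p. 6).
    a = (q + ALPHA_PRIME ** 2 * q * (1 - q)) ** -0.5
    b = -a * (1 - q) * ALPHA_PRIME
    return a * x_dropped + b
```

The transparent switch in the `Perceptron` constructor could then just select this function instead of regular dropout whenever `activation_func='selu'` is detected.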
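Finally, the paper suggests drawing weights from a Gaussian with zero mean and variance 1/n, where n is the fan-in. A sketch of what an additional mode in `elektronn2.neuromancer.variables.initweights()` might look like is below; the function name and the assumed `(n_in, n_out)` shape convention are illustrative only.

```python
import numpy as np

def selu_normal_init(shape, rng=np.random):
    """Zero-mean Gaussian init with variance 1/fan_in, as suggested on p. 3 of the paper."""
    fan_in = shape[0]  # assumes shape = (n_in, n_out) for a dense layer
    std = np.sqrt(1.0 / fan_in)
    return rng.normal(loc=0.0, scale=std, size=shape).astype(np.float32)
```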