(Leaving this as an open answer to a common question)

Why do GBReweighter/UGradientBoostingClassifier provide different weights after each training?

Both algorithms are based on stochastic tree boosting. Settings like subsample and max_features lead to randomized tree building (i.e. each tree uses only a random part of the training data), which is widely known to strengthen an ensemble by building more diverse trees.

hep_ml follows the sklearn convention of keeping random things random unless explicitly asked otherwise. Reproducible behavior is achieved by setting random_state.
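A minimal standard-library sketch of the effect described above: when each "tree" trains on a random subsample, two trainings generally disagree, while fixing a seed (the role random_state plays in the hep_ml classes) makes runs identical. The train_ensemble helper here is purely illustrative, not part of hep_ml's API.

```python
import random

def train_ensemble(data, n_trees=5, subsample=0.5, random_state=None):
    """Toy stand-in for stochastic boosting: each 'tree' is just the
    sorted random subsample it was trained on."""
    rng = random.Random(random_state)  # None -> OS entropy; int -> reproducible
    ensemble = []
    for _ in range(n_trees):
        # each tree sees only a random part of the training data (subsample)
        part = rng.sample(data, int(len(data) * subsample))
        ensemble.append(sorted(part))
    return ensemble

data = list(range(100))

# Without a seed, two trainings (almost surely) differ:
a, b = train_ensemble(data), train_ensemble(data)

# With the seed fixed, results are identical from run to run:
c, d = train_ensemble(data, random_state=42), train_ensemble(data, random_state=42)
print(a == b, c == d)
```

The same principle applies to the real estimators: pass an integer random_state when constructing them (or leave it unset to keep the randomized, ensemble-strengthening behavior).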
arogozhnikov changed the title from "Randomization of GBReweighter and UGradientBoostingClassifier" to "Random behavior of GBReweighter and UGradientBoostingClassifier" on Oct 30, 2018.