When creating a graph learner whose input task contains tf columns, the data_prototype saved in the learner's state after training contains the arg and value vectors as well as the evaluator and other metadata defined in tf.
This unnecessarily blows up the size of learner states in a way that was not intended.
I think this should be fixed in tf, i.e. 0-length tf vectors should drop discardable metadata.
As we have now decided not to merge the warning, we should do something about this.
E.g., in mlr3, when creating the data prototype during $train(), it should be possible to apply a function that leanifies each column, e.g. one stored in mlr_reflections$data_leanifier$tf. This function would then remove the srcref attribute from the functional columns, avoiding overly large object sizes when source references are kept. In the resample() case this is no problem because the prototype is not kept in the learner state.
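A minimal sketch of what such a leanifier could do to a single function, assuming the bulk of the overhead comes from srcref attributes attached to evaluator functions (the registry mlr_reflections$data_leanifier$tf proposed above does not exist yet; utils::removeSource() is the existing base R helper that strips source references):

```r
# Hypothetical column leanifier: drop source references from a function.
# utils::removeSource() removes the srcref attribute and embedded source,
# which is the main size overhead when functions are created with
# options(keep.source = TRUE).
leanify_fun <- function(f) utils::removeSource(f)

# Build a function that carries a srcref, as it would inside a tf column.
f <- eval(parse(text = "function(x) {\n  x + 1\n}", keep.source = TRUE))
stopifnot(!is.null(attr(f, "srcref")))   # srcref present before leanifying

g <- leanify_fun(f)
stopifnot(is.null(attr(g, "srcref")))    # srcref gone after leanifying
stopifnot(identical(g(1), 2))            # behaviour is unchanged
```

For tf columns, the same idea would be applied to the evaluator (and any other function-valued attributes) before the prototype is stored in the learner state.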