Caching #16
I think caching would be best implemented in
agreed
I think so too. And this should really be properly defined in a design doc, as this is pretty hard to get right. For this we need a bit more progress on mlr3pipelines.
Shall we move the issue?
Do you mean for debugging purposes? Seems potentially nice then, but I would really keep that separate, both as a use case and regarding technical solutions.
Like I said above, to move forward we need a formal definition / proposal of how caching would work in mlr3.
Some Thoughts:
Tackled in #382
After dealing with the idea of caching in mlr recently, I think this is an important topic for mlr3.
It would be a core feature and should be integrated right from the start.
While in mlr I only implemented caching of filter values for now, we should think about implementing it as a package option and making it available for all calls (resample, train, tuning, filtering, etc.).
Most calls (dataset, learner, hyperparameters) are unique, so caching won't have as much of an effect there as for filtering, where the call that generates the filter values is always the same and the subsetting happens afterwards (see the sketch below).
However, it can also have a positive effect on "normal" train/test calls.
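To make the filtering case concrete, here is a minimal sketch of how disk-backed caching of filter values could look, assuming the memoise package; compute_filter_values() is a hypothetical stand-in for the real filter computation, not existing mlr or mlr3 API.

```r
# Sketch only: cache filter scores on disk so identical calls are read back.
library(memoise)

# Hypothetical helper standing in for the real filter computation.
compute_filter_values = function(features, target, method) {
  vapply(features, function(x) abs(cor(x, target)), numeric(1))
}

# All arguments (data, target, filter method) form the cache key, so the
# expensive scoring runs once; the cheap top-k subsetting happens afterwards.
cache_dir = file.path(tempdir(), "mlr3_filter_cache")
cached_filter_values = memoise(
  compute_filter_values,
  cache = cache_filesystem(cache_dir)
)

vals = cached_filter_values(mtcars[, -1], mtcars$mpg, "correlation")  # computed
vals = cached_filter_values(mtcars[, -1], mtcars$mpg, "correlation")  # cache hit
```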
I've added functions delete_cache() and get_cache_dir() in my mlr PR to make cache handling more convenient. We could think about a dedicated cache class for such things (see the sketch below). Please share your opinions.
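To illustrate the class idea, here is a rough sketch of what such a cache class could look like, assuming R6 (which mlr3 already builds on); the class and both methods are hypothetical and only mirror the helpers from the mlr PR.

```r
# Hypothetical Cache class (R6); not existing mlr3 API.
library(R6)

Cache = R6Class("Cache",
  public = list(
    dir = NULL,
    initialize = function(dir = file.path(tempdir(), "mlr3_cache")) {
      self$dir = dir
      dir.create(self$dir, recursive = TRUE, showWarnings = FALSE)
    },
    # Counterpart of get_cache_dir(): report where cached objects live.
    get_cache_dir = function() {
      self$dir
    },
    # Counterpart of delete_cache(): wipe all cached objects.
    delete_cache = function() {
      unlink(self$dir, recursive = TRUE)
      invisible(self)
    }
  )
)

cache = Cache$new()
cache$get_cache_dir()
cache$delete_cache()
```

Bundling the directory and its cleanup into one object would avoid scattering cache paths across the individual functions.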