
Frameworks for HP optimization #55

Open · 21 tasks
azev77 opened this issue Jun 9, 2020 · 4 comments

Comments


azev77 commented Jun 9, 2020

Julia HP optimization packages:

Other HP optimization packages:

There are projects that benchmark different AutoML systems: https://openml.github.io/automlbenchmark/
From our conversation: JuliaAI/MLJ.jl#416 (comment)
I wanted to tell you guys about Optuna (repo & paper), a new framework for HP optimization.
A nice comparison with Hyperopt shows what can be done for HP visualization:
https://neptune.ai/blog/optuna-vs-hyperopt

Here are a few snips: [two screenshots of HP visualizations from the comparison above]

A 3 minute clip: https://www.youtube.com/watch?v=-UeC4MR3PHM

It would really be amazing for MLJ to incorporate this!


ghost commented Jun 9, 2020

Hyperopt.py (also Hyperopt.jl)

To clarify, Hyperopt.jl is not related to the Python hyperopt. It uses different optimisation techniques (random search, Latin hypercube sampling, and Bayesian optimization) and deserves its own position in the list.

However, TreeParzen.jl is a direct port of the Python hyperopt to Julia: it uses the same optimisation technique (tree-structured Parzen estimators) and has the same behaviour.
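
For reference, Hyperopt.jl's macro-based interface looks roughly like this (a minimal sketch, assuming the `@hyperopt` macro and `RandomSampler` from the package README; the objective `f` is a made-up stand-in for a model's validation loss):

```julia
using Hyperopt

# Toy objective standing in for a model's validation loss.
f(a, c) = (a - 3)^2 + log1p(c)

# 30 evaluations of random search over two hyperparameters.
ho = @hyperopt for i = 30,
        sampler = RandomSampler(),   # LHSampler() / GPSampler() are other options
        a = LinRange(1, 5, 100),
        c = exp10.(LinRange(-2, 2, 100))
    f(a, c)
end

ho.minimizer, ho.minimum   # best (a, c) found and its objective value
```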

@vollmersj (Collaborator) commented

Maybe it is worth considering bandit frameworks, e.g. Ax.

azev77 (Author) commented Jul 4, 2020

Thanks, @vollmersj; added. Please let me know if you have other suggestions.

@casasgomezuribarri commented

Hi, I'm really excited to see a Bayesian optimization method for hyperparameter tuning! I note that RandomSearch() and LatinHypercube() are already possible choices for the tuning = kwarg of TunedModel(), and I see them grouped together with Bayesian optimization in the original post of this issue. Is it already possible to use Bayesian optimization for hyperparameter tuning this way, and if not, is there any sense of whether (and how soon) it will be available? :-)

Thanks a lot for basically everything so far!
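
For anyone landing here, current usage of the existing strategies looks roughly like this (a minimal sketch, assuming MLJ's `TunedModel`/`range` API and the DecisionTree interface; the model choice, ranges, and dataset are arbitrary picks for illustration):

```julia
using MLJ

# Hypothetical setup: any MLJ model with numeric hyperparameters would do.
Tree = @load DecisionTreeRegressor pkg=DecisionTree verbosity=0
tree = Tree()

# Ranges over two of the tree's hyperparameters.
r_depth = range(tree, :max_depth, lower=2, upper=10)
r_split = range(tree, :min_samples_split, lower=2, upper=20)

# RandomSearch() and LatinHypercube() are the strategies mentioned above;
# swap one for the other via the `tuning` keyword.
tuned = TunedModel(model=tree,
                   tuning=RandomSearch(),
                   range=[r_depth, r_split],
                   resampling=CV(nfolds=5),
                   measure=rms,
                   n=25)

X, y = @load_boston            # small demo dataset shipped with MLJ
mach = machine(tuned, X, y)
fit!(mach, verbosity=0)
report(mach).best_model        # best hyperparameters found
```

A Bayesian-optimization strategy would presumably slot into the same `tuning =` keyword once available.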
