Let me start by saying that I am a fan of your work here. I have recently open-sourced my GNN-based meta-learning method for optimization. After an initial benchmark on the Schwefel function, I applied it to a real-world sparse index-tracking problem, where it appears to significantly outperform Fast CMA-ES, both in producing robust solutions on the blind test set and in time (total duration and iterations) and space complexity. I am including a link to my repository here in case you would consider adding the method, or the benchmark problem, to your repository. Note: the GNN, which learns to generate a population of solutions at each iteration, is trained using gradients obtained from the loss function, as opposed to black-box estimates.
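To illustrate the distinction drawn above, here is a minimal, hypothetical sketch of training a solution generator with gradients that flow through the loss itself (the repository presumably uses a GNN; a simple parametric generator and the sphere function stand in here, so all names and constants are illustrative, not from the actual code):

```python
import random

random.seed(0)
dim, pop, lr, sigma = 10, 32, 0.1, 0.1
# learnable "generator" parameters: here just a population mean vector
mu = [random.gauss(0, 1) for _ in range(dim)]

for step in range(200):
    # generate a population of candidate solutions around mu
    population = [[m + sigma * random.gauss(0, 1) for m in mu] for _ in range(pop)]
    # mean sphere loss over the population is differentiable in mu:
    # d/d(mu_j) of mean_i sum_j x_ij^2 is 2 * mean_i x_ij (analytic gradient,
    # not a black-box fitness estimate as in CMA-ES-style methods)
    grad = [2.0 * sum(x[j] for x in population) / pop for j in range(dim)]
    mu = [m - lr * g for m, g in zip(mu, grad)]

final = [[m + sigma * random.gauss(0, 1) for m in mu] for _ in range(pop)]
best = min(sum(v * v for v in x) for x in final)  # near the optimum at 0
```

In an autodiff framework the analytic gradient line would be replaced by backpropagation through the network that generates the population.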
Sincerely, K
Hi, thanks for introducing your repo!
I believe these algos/examples would be interesting to a wide range of practitioners, and we would love to have your PRs.
As for meta-learning, @LlionJ had this example on locomotion tasks; you may want to compare against his results.
I apologize for the late response; I must have missed the notification. Thanks for referring me to @LlionJ's work, I will check it in detail.
Meanwhile, I have made a few updates and modified the code in the previously shared repo to perform robust optimization (by introducing behavioral diversity) in a noisy reward environment (noisy due to Monte Carlo simulation and action noise/corruption).
P.S. This essentially demonstrates the optimizer's effectiveness at quality-diversity optimization in a noisy/stochastic environment.
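For readers unfamiliar with the setup, the combination described above can be sketched as follows: a Monte Carlo estimate of a noisy objective, plus a behavioral-diversity bonus that rewards a spread-out population. Everything here (the toy objective, the trade-off weight `lam`, the distance-based diversity measure) is a hypothetical stand-in, not the repository's actual code:

```python
import math
import random

random.seed(1)

def noisy_objective(x, trials=8):
    """Monte Carlo estimate of a noisy loss: sphere loss under action noise."""
    total = 0.0
    for _ in range(trials):
        corrupted = [v + random.gauss(0, 0.05) for v in x]  # action noise/corruption
        total += sum(v * v for v in corrupted)
    return total / trials

def diversity_bonus(population):
    """Mean pairwise distance; rewarding it keeps behaviors diverse."""
    n = len(population)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += math.dist(population[i], population[j])
    return total / (n * (n - 1) / 2)

population = [[random.gauss(0, 1) for _ in range(5)] for _ in range(8)]
lam = 0.1  # hypothetical diversity/quality trade-off weight
score = (sum(noisy_objective(x) for x in population) / len(population)
         - lam * diversity_bonus(population))  # lower is better
```

Minimizing `score` trades off average (noise-averaged) quality against behavioral diversity, which is the usual shape of a quality-diversity objective.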
I also created another repo with examples of solving 100k-parameter problems (as well as 500k-parameter MovieLens 1M matrix factorization): https://github.com/kayuksel/genmeta-vs-nevergrad P.S. This also includes a comparison to Nevergrad on the 30-dimensional Schwefel function.
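For reference, the Schwefel function mentioned above is a standard deceptive multimodal benchmark; a minimal implementation (independent of either repository) is:

```python
import math

def schwefel(x):
    """Schwefel function: 418.9829*d - sum(x_i * sin(sqrt(|x_i|))).

    Global minimum is approximately 0, attained near x_i = 420.9687;
    it is deceptive because the optimum sits far from the origin.
    """
    d = len(x)
    return 418.9829 * d - sum(v * math.sin(math.sqrt(abs(v))) for v in x)

# 30-dimensional instance, as in the Nevergrad comparison
value_at_origin = schwefel([0.0] * 30)       # 418.9829 * 30
value_at_optimum = schwefel([420.9687] * 30)  # close to 0
```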
Please let me know if you would have any questions. Have a great week.