
A GNN-based Meta-Learning Method for Sparse Portfolio Optimization #58

Open
kayuksel opened this issue Jan 5, 2023 · 3 comments

kayuksel commented Jan 5, 2023

Hello,

Let me start by saying that I am a fan of your work here. I have recently open-sourced my GNN-based meta-learning method for optimization. I have applied it to a real-world sparse index-tracking problem (after initial benchmarking on the Schwefel function), and it seems to significantly outperform Fast CMA-ES, both in producing robust solutions on the blind test set and in time (total duration and iteration count) and space complexity. I include the link to my repository here, in case you would consider adding the method or the benchmarking problem to your repository. Note: the GNN, which learns how to generate a population of solutions at each iteration, is trained using gradients obtained directly from the loss function, as opposed to black-box estimates.
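To illustrate the core idea, here is a minimal PyTorch sketch (not the actual repo code; the generator is a stand-in MLP rather than the GNN, and all names are illustrative) of training a solution generator with gradients that flow through a differentiable loss:

```python
import torch
import torch.nn as nn

def schwefel(x):
    # Schwefel function on [-500, 500]^d; global minimum ~0 at x_i = 420.9687
    d = x.shape[-1]
    return 418.9829 * d - (x * torch.sin(x.abs().sqrt())).sum(dim=-1)

class SolutionGenerator(nn.Module):
    # Stand-in MLP for the GNN described above: latent code -> candidate solution
    def __init__(self, latent_dim=32, solution_dim=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, solution_dim), nn.Tanh())

    def forward(self, z):
        return self.net(z) * 500.0  # scale into the Schwefel domain

gen = SolutionGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(1000):
    z = torch.randn(64, 32)      # latent codes -> population of 64 solutions
    pop = gen(z)                 # generated candidate solutions
    loss = schwefel(pop).mean()  # differentiable, so no black-box gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()
```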

Sincerely, K

lerrytang (Contributor) commented

Hi, thanks for introducing your repo!
I believe these are interesting algos/examples for a wide range of practitioners, and we would love to have your PRs.
As for meta-learning, @LlionJ has an example on locomotion tasks; you may want to compare your results with his.

lerrytang self-assigned this Jan 10, 2023

kayuksel commented Jan 29, 2023

Dear @lerrytang,

I apologize for the late response; I must have missed the notification. Thanks for referring to @LlionJ's work, I will check it in detail.

Meanwhile, I have made a few updates and modified the code in the previously shared repo to perform robust optimization (by introducing behavioral diversity) in a noisy reward environment (due to Monte Carlo optimization and action noise/corruption).

P.S. This basically demonstrates the optimizer's effectiveness at quality-diversity optimization in a noisy/stochastic environment.
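As a toy illustration of what I mean by behavioral diversity under reward noise (a simplified reading, not the repo's exact formulation): a pairwise-distance bonus is added to a Monte-Carlo-averaged loss so the generated population stays spread out:

```python
import torch

def diversity_bonus(pop: torch.Tensor) -> torch.Tensor:
    # Mean pairwise L2 distance within the population of solutions.
    n = pop.shape[0]
    return torch.cdist(pop, pop).sum() / (n * (n - 1))

def robust_loss(pop, noisy_loss_fn, weight=0.1, mc_samples=8):
    # Monte Carlo averaging smooths the stochastic loss; the diversity
    # term keeps the population spread out to avoid premature collapse.
    mc = torch.stack([noisy_loss_fn(pop) for _ in range(mc_samples)]).mean(0)
    return mc.mean() - weight * diversity_bonus(pop)
```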

I also created another repo with examples of solving 100k-parameter problems (as well as a 500k-parameter MovieLens 1M matrix factorization):
https://github.com/kayuksel/genmeta-vs-nevergrad
P.S. This also includes a comparison to Nevergrad on the 30-dimensional Schwefel function.
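For reference, a minimal Nevergrad baseline on the 30-dimensional Schwefel function could look like the sketch below (not the repo's actual benchmark script; budget and optimizer choice are illustrative):

```python
import numpy as np
import nevergrad as ng

def schwefel(x: np.ndarray) -> float:
    # 30-dim Schwefel; global minimum ~0 at x_i = 420.9687
    return 418.9829 * x.size - float(np.sum(x * np.sin(np.sqrt(np.abs(x)))))

# Box-constrained parametrization on the standard Schwefel domain.
param = ng.p.Array(shape=(30,)).set_bounds(-500, 500)
opt = ng.optimizers.NGOpt(parametrization=param, budget=10_000)
rec = opt.minimize(schwefel)
print(rec.value, schwefel(rec.value))
```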

Please let me know if you would have any questions. Have a great week.

Sincerely, Kamer

kayuksel (Author) commented

@LlionJ's weight visualization looks similar to that of my meta-learning method (the heartbeats are exploitation-exploration episodes):
https://www.youtube.com/watch?v=g4kbGvCJfXU
