
Code for running GLOP #222

Open
ujjwaldasari10 opened this issue Sep 30, 2024 · 4 comments
Assignees
Labels
feature New Feature

Comments

@ujjwaldasari10

Can you please give example code for training and inference of GLOP for, let's say, TSP of size 10K?

@ujjwaldasari10 ujjwaldasari10 added the bug Something isn't working label Sep 30, 2024
@fedebotu
Member

Hi @ujjwaldasari10, we have not merged the code for GLOP yet, since it has not been refactored so far.

CC @Furffico @henry-yeh

@fedebotu fedebotu assigned Furffico and unassigned cbhua and fedebotu Sep 30, 2024
@DasariShreeUjjwal

@fedebotu, @cbhua I am adding a couple more questions here. 1) Is it possible to save a trained EAS model and apply it to a new test instance other than the one it was trained on? 2) In the original DeepACO paper, the models are run for a number of 'evaluations' rather than epochs. Can you please let me know the equivalent implementation in RL4CO? It is not trivial to me from going through the codebase. Thanks!

@Furffico
Member

Furffico commented Oct 14, 2024

Hi @DasariShreeUjjwal 👋🏻, thank you for your interest in our work!
With regard to question 2:

In the original DeepACO paper, the models are run for a number of 'evaluations' rather than epochs. Can you please let me know the equivalent implementation in RL4CO? It is not trivial to me from going through the codebase.

The "# of evaluations" in the DeepACO paper refers to the number of evaluating the performance (route length) of solutions during an ACO run on a single problem instance. However, the "epochs" in rl4co is the number of total iterations for the whole training loop, which is more similar to the Total training instances in Table 7. This can be expressed as Total training instances = epochs * train_data_size.

For the former concept, we have evaluations = ACO_iterations * n_ants, and you can set these two parameters like this:

from rl4co.models import DeepACO

model = DeepACO(
    ...,  # other arguments
    policy_kwargs=dict(n_ants=20, n_iterations=10),
)
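To make the two budgets concrete, here is a small arithmetic sketch; the numbers are illustrative examples, not values from the DeepACO paper:

```python
# Training budget in rl4co: total instances seen over the whole run
# (hypothetical example values).
epochs = 5
train_data_size = 10_000
total_training_instances = epochs * train_data_size
print(total_training_instances)  # 50000

# Evaluation budget of an ACO run on a single test instance,
# using the two policy_kwargs shown above.
n_iterations = 10  # ACO iterations
n_ants = 20        # ants (solutions evaluated) per iteration
evaluations = n_iterations * n_ants
print(evaluations)  # 200
```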

@Furffico
Member

Can you please give example code for training and inference of GLOP for, let's say, TSP of size 10K?

BTW, I'm currently working on reimplementing GLOP in RL4CO, and it's almost complete. You can check the codebase here: https://github.com/Furffico/rl4co/tree/dev-glop/rl4co/models/zoo/glop

@Furffico Furffico added feature New Feature and removed bug Something isn't working labels Oct 14, 2024