Thank you for your interesting research. I have two questions regarding the implementation of the model in the paper.
The graph encoder has two modules: a GNN and a Transformer. Each module takes the computational graph as input. How do you obtain a single graph-level embedding from the many node embeddings produced by EACH module before concatenating them together? Are you averaging them?
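For reference, the pooling scheme I am guessing at looks like the sketch below (the shapes and the mean-pooling choice are my own assumptions, not taken from the paper):

```python
import numpy as np

# Toy shapes: N nodes, d-dim node embeddings from each module (hypothetical).
N, d = 5, 16
rng = np.random.default_rng(0)
gnn_node_emb = rng.normal(size=(N, d))          # per-node GNN output
transformer_node_emb = rng.normal(size=(N, d))  # per-node Transformer output

# Guess: mean-pool each module's node embeddings into one
# graph-level embedding, then concatenate the two.
gnn_graph = gnn_node_emb.mean(axis=0)               # shape (d,)
tfm_graph = transformer_node_emb.mean(axis=0)       # shape (d,)
graph_emb = np.concatenate([gnn_graph, tfm_graph])  # shape (2*d,) = (32,)
print(graph_emb.shape)  # (32,)
```

Is this roughly what you do, or do you use a different readout (sum, max, a virtual node, etc.)?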
There are two losses mentioned in the paper: the CL loss and the regression loss. Are the graph encoder and the prediction head trained jointly? If so, how do you combine the two losses? Or do you first train the graph encoder and then freeze it?
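To make the question concrete, the joint-training objective I am imagining is a weighted sum like the one below (`lambda_cl` is my own placeholder hyperparameter, not something stated in the paper):

```python
# Hypothetical joint objective: total = regression loss + lambda_cl * CL loss.
# lambda_cl is a made-up weighting coefficient used only for illustration.
def total_loss(regression_loss: float, cl_loss: float, lambda_cl: float = 0.1) -> float:
    return regression_loss + lambda_cl * cl_loss

print(total_loss(2.0, 5.0))  # 2.5
```

If you do train jointly, how is the weight between the two terms chosen? And if you instead pretrain with the CL loss alone, is the encoder frozen or fine-tuned during the regression stage?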
Thank you.