
Integrate Standard ACL Paper Matching Scorer #64

Open
neubig opened this issue May 14, 2021 · 4 comments · May be fixed by #72
neubig commented May 14, 2021

Hi,

We've been using a different way to calculate affinity scores for ACL conferences that is very fast to calculate and also seems to be relatively effective: https://github.com/acl-org/reviewer-paper-matching/

It'd be nice to integrate this into this package to make it compatible with OpenReview's recommendation system. I'd be happy to do this when I have time, but my time is pretty limited nowadays (and I'm not very familiar with the openreview-expertise package yet) so if someone else would have time to do so that would also be greatly appreciated.

For reference, the relevant part for calculating the affinity scores is here: https://github.com/acl-org/reviewer-paper-matching/blob/master/suggest_reviewers.py#L981

carlosmondra (Member) commented
Thanks for all your feedback @neubig! I think implementing more scoring algorithms in openreview-expertise is a great idea.

purujitgoyal (Contributor) commented Jul 27, 2021

Hi, Graham (@neubig), I was looking at the code block you mentioned above and had a couple of questions. The suggest_reviewers.py script expects a trained model to produce the embeddings for the submissions and the reviewer data.

Would you like this method of calculating affinity scores integrated with the existing models in the openreview-expertise repo, or added as a separate entity that uses the models from the reviewer-paper-matching repo?

neubig (Author) commented Jul 30, 2021

Hi @purujitgoyal ! Thanks for helping, and sorry about the late reply. I'm afraid I don't really understand the distinction between the two options you presented though.

To clarify: the reviewer-paper-matching repository linked above includes a method for calculating affinity scores based on discriminatively trained embeddings. This seems to work pretty well, and qualitatively the matches I've gotten with it seemed a bit better than those from the models implemented in the openreview-expertise repository. The code that calculates these affinity scores is here: https://github.com/acl-org/reviewer-paper-matching/blob/master/suggest_reviewers.py#L981

It would be nice if, when we run the openreview-expertise code, these affinity scores could be calculated and used in place of the other affinity-scoring options such as specter+mfr.

Does this clarify things?
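The approach described above, computing affinity between a submission and a reviewer from the embeddings of the reviewer's own papers, can be sketched roughly as follows. This is a minimal illustration, not the actual suggest_reviewers.py implementation: the embeddings here are random stand-ins for a trained model's output, and max-aggregation over a reviewer's papers is an assumption about the aggregation step.

```python
import numpy as np

# Hypothetical embeddings: in the ACL matcher these come from a trained
# text-similarity model; random vectors stand in for them here.
rng = np.random.default_rng(0)
submission_embs = rng.normal(size=(3, 8))                            # 3 submissions, dim 8
reviewer_paper_embs = [rng.normal(size=(k, 8)) for k in (4, 2, 5)]   # papers per reviewer

def normalize(x):
    """L2-normalize rows so that dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

subs = normalize(submission_embs)

# Affinity of each reviewer to each submission: the maximum cosine
# similarity over that reviewer's papers (aggregation choice assumed).
affinity = np.array([
    (normalize(papers) @ subs.T).max(axis=0)   # (n_papers, n_subs) -> (n_subs,)
    for papers in reviewer_paper_embs
])                                             # shape (n_reviewers, n_submissions)
```

The resulting matrix can then be fed to whatever assignment step the matching system uses; only the embedding model and the aggregation rule differ between scorers.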

purujitgoyal (Contributor) commented Jul 30, 2021

I see. So, if I understand correctly, the user will provide a pre-trained model to calculate the embeddings, and we don't have to train a model on OpenReview data, right? @neubig
