
Concerns about evaluation methods in the paper #30

Open
kplum opened this issue Mar 11, 2021 · 0 comments
kplum commented Mar 11, 2021

Thanks for your contribution, @ZPdesu, but I have some questions about your paper.

  1. In this comment (Clarification of network training #7 (comment)) you say that you compute the FID between images generated from the test set and real images from the training set. Could you explain why you chose this approach? In my opinion this is not what we really want to measure: the networks might overfit to the training set, so performance should always be measured against unseen images only (see the sketch after this list).
  2. Why are the pix2pixHD results on Cityscapes shown in the paper, which you use as the comparison baseline for SEAN, of such low quality? They look nothing like the pix2pixHD images from the original paper, nor like what I have reproduced with pix2pixHD at lower resolutions. Did you train pix2pixHD yourself? If so, could you specify the parameters you used?
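For context on point 1, here is a minimal sketch of the protocol I would have expected, assuming the commonly used pytorch-fid package and placeholder directory names (these are not paths from the SEAN repository): FID computed between images generated from the test-set label maps and the real, unseen test-set images.

```python
# Minimal sketch (not the authors' script): FID between images generated from the
# test-set label maps and the *real test-set* images, so no training images are
# involved. Assumes a recent pytorch-fid release; directory names are placeholders.
from pytorch_fid.fid_score import calculate_fid_given_paths

fid = calculate_fid_given_paths(
    ["results/cityscapes_test_generated",   # outputs synthesized from test label maps
     "datasets/cityscapes/test_images"],    # unseen real images from the test split
    batch_size=50,
    device="cuda",
    dims=2048,  # standard InceptionV3 pool3 features used for FID
)
print(f"FID (generated test vs. real test): {fid:.2f}")
```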