
Little question about cal_loss #52

Open
zivzone opened this issue Jul 22, 2022 · 0 comments
zivzone commented Jul 22, 2022

Hi,
Thank you for this amazing work. I have a small question about this function, at this line:

def cal_loss(self, im_dict, latent_in, latent_F=None, F_init=None):

In both "invert_images_in_FS" and "invert_images_in_w", it seems that "latent_F" and "F_init" are never passed to cal_loss, so the computation below is never reached:

    if latent_F is not None and F_init is not None:
        l_F = self.net.cal_l_F(latent_F, F_init)
        loss_dic['l_F'] = l_F
        loss += l_F
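
For context, here is a minimal sketch (my own assumption, not the actual code in this repo) of what I would have expected the FS optimization loop to look like for the l_F term to take effect. The names cal_loss, cal_l_F, latent_F and F_init come from the snippet above; num_steps, optimizer and the (loss, loss_dic) return values are assumptions on my side:

    # Assumed: keep a detached copy of the initial feature map as the reference.
    F_init = latent_F.detach().clone()

    for step in range(num_steps):          # num_steps / optimizer assumed to exist
        optimizer.zero_grad()
        # Forwarding latent_F and F_init explicitly is what makes the
        # "if latent_F is not None and F_init is not None" branch add l_F to the loss.
        loss, loss_dic = self.cal_loss(im_dict, latent_in,
                                       latent_F=latent_F, F_init=F_init)
        loss.backward()
        optimizer.step()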

I wonder: should the l_F loss still be calculated somewhere, or am I misunderstanding something?

BR,
Ziv
