Hi, I noticed that in AdaptiveFeatureGenerator the encoder is implemented with the default zero padding of Conv2d:
self.layer1 = norm_layer(nn.Conv2d(opt.spade_ic, ndf, kw, stride=1, padding=pw))
However, the warped images have strong artifacts at the boundary. When I replace the zero padding with reflect padding, the results are significantly improved.
Because of this, it seems there is a mismatch between the pretrained model and the testing code. Also, when I train with reflect padding, the training often collapses with an unknown error after several epochs. Did you notice the same issue?
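(For reference, a minimal sketch of the change being discussed, assuming standard PyTorch. The hyperparameter values below are illustrative stand-ins for opt.spade_ic, ndf, kw and pw, and the norm_layer wrapper from the repository is omitted. Reflect padding can be requested either directly through nn.Conv2d's padding_mode or with an explicit nn.ReflectionPad2d in front of an unpadded convolution.)

```python
import torch
import torch.nn as nn

# Illustrative values; in the repository these come from opt.spade_ic, ndf, kw, pw.
spade_ic, ndf, kw, pw = 3, 64, 3, 1

# Original style: zero padding inside the convolution.
layer1_zero = nn.Conv2d(spade_ic, ndf, kw, stride=1, padding=pw)

# Variant A: same convolution, but with reflection padding built in
# (padding_mode='reflect' requires a reasonably recent PyTorch).
layer1_reflect = nn.Conv2d(spade_ic, ndf, kw, stride=1, padding=pw,
                           padding_mode='reflect')

# Variant B: explicit reflection pad followed by an unpadded convolution.
layer1_explicit = nn.Sequential(
    nn.ReflectionPad2d(pw),
    nn.Conv2d(spade_ic, ndf, kw, stride=1, padding=0),
)

# All three variants keep the spatial resolution; only the border handling differs.
x = torch.randn(1, spade_ic, 64, 64)
assert layer1_zero(x).shape == layer1_reflect(x).shape == layer1_explicit(x).shape
```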
Hi, in AdaptiveFeatureGenerator I found that the feature backbone is extracted without reflect padding:
self.layer1 = norm_layer(nn.Conv2d(opt.spade_ic, ndf, kw, stride=1, padding=pw))
Because of this, the warped images have artifacts around the borders. I wonder if there is any way to solve this artifact problem. Thanks.
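(Not an official fix, but a sketch of how the reflect-padding change mentioned above could be tried on an already trained checkpoint without retraining. This assumes a recent PyTorch where nn.Conv2d applies padding_mode inside its forward pass; the helper name switch_to_reflect_padding and the net variable are placeholders, not part of the repository, and whether this actually removes the artifacts for a model trained with zero padding would need to be checked empirically.)

```python
import torch.nn as nn

def switch_to_reflect_padding(module: nn.Module) -> None:
    """Make every spatially padded Conv2d in `module` use reflection padding.

    Padding has no learnable parameters, so this can be applied after loading
    a checkpoint trained with zero padding; only the way border pixels are
    filled at inference time changes.
    """
    for m in module.modules():
        if isinstance(m, nn.Conv2d) and isinstance(m.padding, tuple) \
                and any(p > 0 for p in m.padding):
            # Recent PyTorch versions read `padding_mode` in Conv2d's forward
            # and pad via F.pad when it is not 'zeros', so flipping the
            # attribute is enough; no weights are touched.
            m.padding_mode = 'reflect'

# Placeholder usage on a loaded generator:
# switch_to_reflect_padding(net)
```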