Unofficial PyTorch Lightning implementation of the ICLR 2021 paper "Towards faster and stabilized GAN training for high-fidelity few-shot image synthesis".
With the latest model configuration and differentiable data augmentation enabled, the model is intended to compare favorably to StyleGAN2 in few-shot training performance. Implemented features:
- spectral normalization (on D and/or G)
- exponential moving average (EMA) of G's weights
- differentiable augmentation (applied to D's inputs)
- GLU instead of ReLU in G
- skip-layer excitation module
- self-supervised discriminator
- LPIPS-VGG perceptual loss for reconstruction
- Label smoothing in hinge loss
- Noise injection layer
- Swish activation in SLE blocks
- Auxiliary 128×128 image output
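The skip-layer excitation module with a Swish activation, listed above, can be sketched as follows. This is a minimal illustration of the idea from the paper (a low-resolution feature map gates a high-resolution one channel-wise); the channel sizes and layer widths here are assumptions, not this repository's actual configuration.

```python
import torch
import torch.nn as nn

class SkipLayerExcitation(nn.Module):
    """Sketch of a skip-layer excitation (SLE) block: low-resolution
    features produce a per-channel gate for high-resolution features,
    squeeze-and-excitation style."""

    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),        # squeeze spatial dims to 4x4
            nn.Conv2d(low_ch, high_ch, 4),  # 4x4 conv collapses to 1x1
            nn.SiLU(),                      # Swish activation
            nn.Conv2d(high_ch, high_ch, 1),
            nn.Sigmoid(),                   # per-channel gate in (0, 1)
        )

    def forward(self, x_low: torch.Tensor, x_high: torch.Tensor) -> torch.Tensor:
        # gate has shape (N, high_ch, 1, 1) and broadcasts over H x W
        return x_high * self.gate(x_low)

if __name__ == "__main__":
    low = torch.randn(2, 256, 8, 8)      # low-resolution features
    high = torch.randn(2, 64, 128, 128)  # high-resolution features
    out = SkipLayerExcitation(256, 64)(low, high)
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```

The gate rescales but never changes the shape of the high-resolution path, so the block can be dropped between any two generator stages.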
Roadmap:
- Add FID tracking (every 10k iterations)
- Add sampling with truncation
- Add interpolation tools
- Add style mixing pipeline
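The label-smoothed hinge loss listed among the features might look like the sketch below. The one-sided smoothing value of 0.1 is an assumption for illustration, not necessarily this repository's setting.

```python
import torch
import torch.nn.functional as F

def d_hinge_loss(real_logits: torch.Tensor,
                 fake_logits: torch.Tensor,
                 smoothing: float = 0.1) -> torch.Tensor:
    """Hinge loss for D with one-sided label smoothing: the real-sample
    margin is lowered from 1.0 to (1 - smoothing)."""
    real_loss = F.relu(1.0 - smoothing - real_logits).mean()
    fake_loss = F.relu(1.0 + fake_logits).mean()
    return real_loss + fake_loss

def g_hinge_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    """Generator hinge loss: push D's logits on fakes upward."""
    return -fake_logits.mean()
```

With well-separated logits (e.g. real at 2.0, fake at -2.0) the discriminator loss is zero, so gradients only flow from samples inside the margin.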
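The exponential-moving-average tracking of G listed above can be sketched as follows; the decay of 0.999 is a typical value, not necessarily the one used here.

```python
import copy
import torch

@torch.no_grad()
def ema_update(g_ema: torch.nn.Module, g: torch.nn.Module,
               decay: float = 0.999) -> None:
    """Blend the live generator's weights into an averaged copy:
    p_ema <- decay * p_ema + (1 - decay) * p. Buffers (e.g. BN running
    stats) are copied directly."""
    for p_ema, p in zip(g_ema.parameters(), g.parameters()):
        p_ema.lerp_(p, 1.0 - decay)
    for b_ema, b in zip(g_ema.buffers(), g.buffers()):
        b_ema.copy_(b)

# Usage: keep a frozen deep copy of G and update it after each optimizer step.
g = torch.nn.Linear(4, 4)
g_ema = copy.deepcopy(g).eval()
for p in g_ema.parameters():
    p.requires_grad_(False)
ema_update(g_ema, g)
```

Sampling and evaluation then use `g_ema` rather than the live generator, which smooths out late-training oscillations.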
The following 1024×1024 image grid was generated at the 65k-th iteration (roughly one day of training on a single P100) with the main configuration as-is.