This work uses diffusion models to implement generative fill techniques such as image unmasking, inpainting, and expansion (outpainting).
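The difference between these fill modes comes down to the binary mask handed to the sampler. Below is a minimal sketch of how inpainting and expansion masks can be laid out; it is illustrative only, and the mask convention used in this repo may differ:

```python
import numpy as np

# Convention for this sketch: 1 = pixels to keep, 0 = pixels to generate.
H, W = 256, 256

# Inpainting: generate a missing patch inside the image.
inpaint_mask = np.ones((H, W), dtype=np.float32)
inpaint_mask[96:160, 96:160] = 0.0  # square hole in the centre

# Expansion (outpainting): keep the whole image, generate a new border.
expand_mask = np.zeros((H + 64, W + 64), dtype=np.float32)
expand_mask[32:32 + H, 32:32 + W] = 1.0  # original image sits in the middle
```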
Image inpainting (Original image, Masked image, Reconstructed image)
DDPM generated (trained and sampled on the Landscape dataset).
LDM generated (trained on COCO, sampled on the Landscape dataset).
Sampling with COCO
More generated images can be found in the results folder.
Check out my DDPM implementation.
This repo implements generative fill using DDPM. To do the same with LDM, check out my LDM repo here. The ldm-genfill folder in this repo only contains the model config files needed for generative fill with LDM models, so use the implementation from my LDM repo together with the config files from this repo. A demo and instructions for all LDM conditioning modes are available in the LDM repo.
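As a rough illustration of that workflow, the snippet below loads a config from the ldm-genfill folder before handing it to the LDM repo's sampling entry point. The file name and keys are placeholders, not this repo's actual schema; check the files under ldm-genfill for the real layout:

```python
import yaml

# Placeholder path: substitute one of the actual files under ldm-genfill/.
with open("ldm-genfill/genfill_config.yaml") as f:
    config = yaml.safe_load(f)

# Inspect the model and sampling settings before wiring the config into
# the generative-fill script from the LDM repo.
print(config)
```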
Currently, the filled (unmasked) regions from DDPM look more contextually relevant than those from LDM, since DDPM works directly in pixel space. Adding class and text conditioning to generative fill slightly improves contextual coherence, and further training may improve it more. Improvements are planned for a later time.
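One likely reason the pixel-space fills stay coherent is the standard DDPM inpainting trick (used, for example, in RePaint): at every reverse step the known pixels are re-noised to the current timestep and pasted back, so the hole is denoised jointly with real image content. Below is a minimal sketch of that loop, assuming a trained noise-prediction model `model(x, t)` and a standard beta schedule; all names are placeholders rather than this repo's actual API:

```python
import torch

@torch.no_grad()
def ddpm_generative_fill(model, image, mask, betas):
    """Fill the masked-out region of `image` (values in [-1, 1]).
    mask: 1 where pixels are known, 0 where they must be generated.
    model(x, t) is assumed to predict the noise (epsilon) added at step t."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    T = betas.shape[0]

    x = torch.randn_like(image)  # the unknown region starts as pure noise
    for t in reversed(range(T)):
        tt = torch.full((image.shape[0],), t,
                        device=image.device, dtype=torch.long)

        # Standard DDPM reverse step from x_t to x_{t-1}.
        eps = model(x, tt)
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean

        # Re-noise the known pixels to level t-1 and paste them back, so the
        # generated region is always denoised alongside true image content.
        if t > 0:
            known = (alpha_bar[t - 1].sqrt() * image
                     + (1 - alpha_bar[t - 1]).sqrt() * torch.randn_like(image))
        else:
            known = image
        x = mask * known + (1 - mask) * x
    return x
```

RePaint additionally resamples each step several times to harmonize the two regions; the sketch above keeps only the core paste-back idea.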
- Reconstructing missing areas of images, a.k.a. image inpainting.
Recent Updates
- Implement generative fill with class and text guidance (a sketch of the guidance step follows this list).
- Generative fill in latent space for high resolution.
- Generative fill using DDPM.
- LDM GenFill sampling dataset.
- Check out my LDM repo for an LDM generative fill demo.
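For the class and text guidance mentioned above, a common recipe is classifier-free guidance: the model is trained with conditioning randomly dropped, and at sampling time the conditional and unconditional noise predictions are blended. A minimal sketch with a placeholder model interface (not this repo's actual API):

```python
import torch

def guided_eps(model, x, t, cond, guidance_scale=3.0):
    """Classifier-free guidance sketch. `model` is a placeholder interface
    assumed to accept an optional conditioning input (class id or text
    embedding) and to have been trained with conditioning randomly dropped."""
    eps_uncond = model(x, t, cond=None)  # unconditional prediction
    eps_cond = model(x, t, cond=cond)    # class- or text-conditioned prediction
    # Extrapolate from the unconditional prediction toward the conditional one.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

The guided epsilon can be dropped into the reverse step of the fill loop in place of the plain model call.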