Implementation of the 3DMM-based deep face reconstruction method, with training and test code:
Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set
This is an experiment I did several years ago; I am releasing the code in case anyone finds it useful.
See `assets/images` and `assets/results` for example inputs and rendered outputs.
Libraries required to run the code, with recommended versions:
```
torch==2.2.1
torchvision==0.17.1
pytorch3d==0.7.6
tensorboard==2.14.0
opencv-python==3.4.11.43
dlib==19.17.0
```
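A quick way to confirm the environment is set up is the small check below. It is only a convenience snippet, not part of the pipeline; it imports the required packages and prints their versions.

```python
# Minimal environment check: confirms the required packages import and reports versions.
import torch, torchvision, pytorch3d, cv2, dlib

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("pytorch3d:", pytorch3d.__version__)
print("opencv:", cv2.__version__)
print("dlib:", dlib.__version__)
print("CUDA available:", torch.cuda.is_available())
```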
Download the parameters needed to run the code:
- Please go to this repo, download the BFM and the Expression Basis, and run the conversion program to get `BFM09_model_info.mat`. Then place the `BFM09_model_info.mat` file in `params/bfm` and run `parse.py` inside the `params` directory. A small sanity-check sketch for the converted file is shown after this list.
- If you want to train the model yourself, please go to this repo and download `model_ir_se50.pth`. Place the downloaded file inside the `params` directory. This is only needed for training.
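As a sanity check after conversion, the `.mat` file can be opened and inspected. The sketch below assumes `scipy` is installed; the key names (`meanshape`, `idBase`, `exBase`) are my assumptions about the converted file and may need adjusting to whatever keys are actually printed. It also spells out the standard 3DMM linear model: a face shape is the mean shape plus linear combinations of the identity and expression bases.

```python
import numpy as np
from scipy.io import loadmat

# Load the converted BFM file and list the arrays it contains.
bfm = loadmat("params/bfm/BFM09_model_info.mat")
print([k for k in bfm if not k.startswith("__")])

# Standard 3DMM linear shape model. The key names below are assumptions;
# adjust them to the keys actually printed above.
mean_shape = bfm["meanshape"].reshape(-1, 3)   # (N_vertices, 3)
id_basis = bfm["idBase"]                       # (3 * N_vertices, n_id)
exp_basis = bfm["exBase"]                      # (3 * N_vertices, n_exp)

id_coef = np.zeros(id_basis.shape[1])          # identity coefficients
exp_coef = np.zeros(exp_basis.shape[1])        # expression coefficients

# shape = mean_shape + id_basis @ id_coef + exp_basis @ exp_coef
shape = mean_shape + (id_basis @ id_coef + exp_basis @ exp_coef).reshape(-1, 3)
print(shape.shape)
```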
- An example of inference: run `python reconstruct_and_render.py`, which reconstructs the faces in `assets/images` and saves the rendered results to `assets/results`. A minimal rendering sketch is shown below.
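For context on the rendering step, the sketch below shows what a minimal `pytorch3d` mesh render looks like. It is not the renderer used in `reconstruct_and_render.py`, just a self-contained example with a dummy triangle standing in for the reconstructed face mesh and placeholder camera/lighting settings.

```python
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, TexturesVertex,
)
from pytorch3d.structures import Meshes

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Dummy geometry: one triangle facing the camera. Replace with the reconstructed
# 3DMM vertices (V, 3) and triangle indices (F, 3).
verts = torch.tensor([[0.0, 0.0, 2.0], [0.0, 0.5, 2.0], [0.5, 0.0, 2.0]], device=device)
faces = torch.tensor([[0, 1, 2]], dtype=torch.int64, device=device)
colors = torch.ones_like(verts)[None]          # (1, V, 3) per-vertex white colour

mesh = Meshes(verts=[verts], faces=[faces],
              textures=TexturesVertex(verts_features=colors))

cameras = FoVPerspectiveCameras(device=device)
lights = PointLights(device=device, location=[[0.0, 0.0, 0.0]])
raster_settings = RasterizationSettings(image_size=256, blur_radius=0.0, faces_per_pixel=1)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)

image = renderer(mesh)                         # (1, 256, 256, 4) RGBA in [0, 1]
print(image.shape)
```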
- Training dataset: if you want to build your own dataset, follow the steps below (a landmark-detection sketch follows this list).
  - Collect images with human faces and place them in `data/images`. Preferably, each image contains only one face.
  - Inside the `data` directory, run `python main.py`. The program crops the faces and extracts 68 facial landmarks together with the corresponding face-region masks; the processed training data is saved in `data/data`. I use `dlib` to get the facial landmarks; you can replace it with a better detector. Face regions are detected with nasir6/face-segmentation (thanks to this work!).
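Since the landmark detector is meant to be replaceable, here is what standard `dlib` 68-point landmark extraction typically looks like. The predictor weights (`shape_predictor_68_face_landmarks.dat`) must be downloaded separately from dlib's model files, and `data/images/example.jpg` is just a placeholder path; the actual preprocessing in `main.py` may differ in detail.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The 68-point predictor weights are downloaded separately (dlib model files).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("data/images/example.jpg")    # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 1):                 # upsample once to catch small faces
    shape = predictor(gray, face)
    landmarks = np.array([[p.x, p.y] for p in shape.parts()])   # (68, 2) pixel coords
    print(landmarks.shape)
```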
- Train the model: run `python train.py`. Hyperparameters are defined inside this file.
  - TODO: Distributed Data Parallel is not used yet (a minimal DDP sketch is shown below).
  - TODO: the 3DMM parameters could also be trained to get a better model. I haven't experimented with this.
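On the first TODO: wiring PyTorch DistributedDataParallel into a training loop usually follows the pattern below. The `nn.Linear` model and random tensor dataset are stand-ins for whatever `train.py` actually builds; this is a sketch, not the repository's training code.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Stand-ins for the real model/dataset constructed in train.py.
model = nn.Linear(10, 10).cuda(local_rank)
dataset = TensorDataset(torch.randn(64, 10))

model = DDP(model, device_ids=[local_rank])
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(2):
    sampler.set_epoch(epoch)                   # reshuffle shards across ranks each epoch
    for (x,) in loader:
        x = x.cuda(local_rank, non_blocking=True)
        loss = model(x).pow(2).mean()          # dummy loss for illustration
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

dist.destroy_process_group()
```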
Thanks to the following works: