A state-of-the-art, simple and fast network for Deep Video Denoising which uses no motion compensation.
Oral presentation at CVPR 2020 (see the CVPR publication page).
Previous deep video denoising algorithm: DVDnet
This source code provides a PyTorch implementation of the FastDVDnet video denoising algorithm, as in Tassano, Matias and Delon, Julie and Veit, Thomas. "FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation", arXiv preprint arXiv:1907.01361 (2019).
You can download several denoised sequences with our algorithm and other methods here and here
The 2017 DAVIS dataset was used for training. You can find a list with the names of the 480p sequences employed here. The dataloader needs the sequences in mp4 format. You can find the converted .mp4 files under the training folder here.
Note: when converting the sequences one has to pay particular attention to the 'crf' and 'keyint' ffmpeg parameters to avoid strong compression. For the code to convert the image sequences see this gist
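A minimal conversion sketch, assuming JPEG frames named 00000.jpg, 00001.jpg, ... (the framerate, crf and keyframe-interval values below are illustrative, not the exact settings used for training; see the gist above for the actual script):
ffmpeg -framerate 24 -i %05d.jpg -c:v libx264 -crf 17 -g 4 -pix_fmt yuv420p sequence.mp4
Here -crf controls the compression level (lower means less compression) and -g sets the keyframe interval (keyint).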
Two testsets are used in the paper: Set8 and the 2017 DAVIS testset.
Set8 is composed of 8 sequences: 4 sequences from the Derf 480p testset ("tractor", "touchdown", "park_joy", "sunflower") plus 4 other 540p sequences. You can find these under the test_sequences folder here.
FastDVDnet is orders of magnitude faster than other state-of-the-art methods
Left: input with noise of sigma 40, denoised with FastDVDnet (apologies for the dithering due to GIF compression)
Right: PSNRs on the DAVIS testset, Gaussian noise and clipped Gaussian noise
You can use this Colab notebook to replicate the results
The code runs on Python 3.6 or later. You can create a conda environment with all the dependencies by running
conda env create -f requirements.yml -n <env_name>
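Then activate the environment before running any of the scripts below:
conda activate <env_name>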
NOTE: the code was updated to support a newer version of the DALI library. For the original version of the algorithm, which supported pytorch=1.0.0 and nvidia-dali==0.10.0, see this release.
If you want to denoise an image sequence using the pretrained model you can execute
python test_fastdvdnet.py \
--test_path <path_to_input_sequence> \
--noise_sigma 30 \
--save_path results
NOTES
- The image sequence should be stored under <path_to_input_sequence>
- The model has been trained for noise levels (sigma) in [5, 55]
- run with --no_gpu to run on CPU instead of GPU
- run with --save_noisy to save noisy frames
- use --max_num_fr_per_seq to set the maximum number of frames to load per sequence (an example combining several of these options follows this list)
- to denoise clipped AWGN run with --model_file model_clipped_noise.pth
- run with --help to see details on all input parameters
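For example, a plausible invocation to denoise a sequence corrupted with clipped noise of sigma 25 on CPU, loading at most 25 frames (the input path is a placeholder and the option values are illustrative):
python test_fastdvdnet.py \
--test_path <path_to_input_sequence> \
--noise_sigma 25 \
--model_file model_clipped_noise.pth \
--max_num_fr_per_seq 25 \
--no_gpu \
--save_path results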
DISCLAIMER: The weights shared in this repo were trained with a previous DALI version, v0.10.0, and pytorch v1.0.0. The training code was later updated to work with a more recent version of DALI. However, it has been reported that the performance obtained with this newer DALI version is not as good as the original one, see m-tassano#51 for more details.
If you want to train your own models you can execute
python train_fastdvdnet.py \
--trainset_dir <path_to_input_mp4s> \
--valset_dir <path_to_val_sequences> \
--log_dir logs
NOTES
- As the dataloader is based on the DALI library, the training sequences must be provided as mp4 files, all placed under <path_to_input_mp4s> (see the layout sketch after this list)
- The validation sequences must be stored as image sequences in individual folders under <path_to_val_sequences>
- run with --help to see details on all input parameters
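A sketch of the expected data layout (the folder and file names are purely illustrative; check the dataloader code for the exact naming requirements):
<path_to_input_mp4s>/
    bear.mp4
    boat.mp4
    ...
<path_to_val_sequences>/
    tractor/
        00000.png
        00001.png
        ...
    sunflower/
        00000.png
        ...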
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.
- Author : Matias Tassano
mtassano at fb dot com
- Copyright : (C) 2019 Matias Tassano
- Licence : GPL v3+, see GPLv3.txt
The sequences are Copyright GoPro 2018