- Develop a discriminator utilizing multi-task learning (MTL), which leverages three simultaneous tasks (restoration, image-level decision, and pixel-level decision) to provide the generator with contextual, global, and local feedback on the real normal-dose and synthesized images.
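  To make the three-task setup concrete, below is a minimal PyTorch sketch of a discriminator with one shared encoder feeding three heads. All layer sizes and module names are our own illustrative assumptions, not the paper's exact architecture.

  ```python
  # Minimal sketch: a shared encoder with three heads for restoration,
  # image-level decision, and pixel-level decision. Sizes are illustrative.
  import torch
  import torch.nn as nn

  class MultiTaskDiscriminator(nn.Module):
      def __init__(self, in_ch=1, feat=64):
          super().__init__()
          self.encoder = nn.Sequential(
              nn.Conv2d(in_ch, feat, 3, padding=1), nn.LeakyReLU(0.2),
              nn.Conv2d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
          )
          # Restoration head: reconstructs a clean image (contextual feedback).
          self.restore_head = nn.Conv2d(feat, in_ch, 3, padding=1)
          # Image-level head: one real/fake score per image (global feedback).
          self.image_head = nn.Sequential(
              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, 1)
          )
          # Pixel-level head: a real/fake map per pixel (local feedback).
          self.pixel_head = nn.Conv2d(feat, 1, 3, padding=1)

      def forward(self, x):
          h = self.encoder(x)
          return self.restore_head(h), self.image_head(h), self.pixel_head(h)

  # e.g.: rec, img_logit, pix_logits = MultiTaskDiscriminator()(torch.randn(2, 1, 64, 64))
  ```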
- Propose two regularizations to improve the representation capability of the discriminator: restoration consistency (RC), which enforces agreement between the discriminator's decisions on the input data and on the corresponding restorations produced by our MTL discriminator, and non-difference suppression (NDS), which excludes regions that confuse the discriminator's decisions.
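  As a rough illustration only, the sketch below shows one plausible reading of RC and NDS, reusing the hypothetical `MultiTaskDiscriminator` from the previous sketch as `disc`; the paper's exact loss terms, detach placement, and threshold `tau` may differ.

  ```python
  # Hedged sketch of the two regularizers; not the paper's exact formulation.
  import torch
  import torch.nn.functional as F

  def restoration_consistency(disc, x):
      # RC: the discriminator's decisions on its own restoration of x
      # should agree with its decisions on x itself.
      rec, img_x, pix_x = disc(x)
      _, img_r, pix_r = disc(rec)
      return F.mse_loss(img_r, img_x.detach()) + F.mse_loss(pix_r, pix_x.detach())

  def nds_mask(ldct, ndct, tau=1e-3):
      # NDS: keep only pixels where low- and normal-dose images actually differ;
      # "no-difference" regions give the pixel-level head ambiguous targets.
      return ((ldct - ndct).abs() > tau).float()
  ```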
- Design a novel generator consisting of residual fast Fourier transform with convolution (Res-FFT-Conv) blocks that fuse frequency-spatial dual-domain representations. The proposed generator captures rich information by simultaneously exploiting spatial (local), spectral (global), and residual connections. To the best of our knowledge, this is the first use of the Res-FFT-Conv block in a generator for LDCT denoising, which demonstrates the versatility of the block.
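  For intuition, here is a minimal PyTorch sketch of a residual FFT-with-convolution block: a spatial convolution branch, a spectral branch that applies 1x1 convolutions to the real/imaginary parts of the 2-D FFT, and a residual connection. Channel counts, activations, and the fusion rule are assumptions, not the exact block from the paper.

  ```python
  # Sketch of a Res-FFT-Conv-style block: spatial branch + spectral branch
  # (convolution in the frequency domain via rfft2/irfft2) + residual path.
  import torch
  import torch.nn as nn

  class ResFFTConvBlock(nn.Module):
      def __init__(self, ch=64):
          super().__init__()
          # Spatial (local) branch.
          self.spatial = nn.Sequential(
              nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
              nn.Conv2d(ch, ch, 3, padding=1),
          )
          # Spectral (global) branch: 1x1 convs over stacked real/imag channels.
          self.spectral = nn.Sequential(
              nn.Conv2d(2 * ch, 2 * ch, 1), nn.ReLU(inplace=True),
              nn.Conv2d(2 * ch, 2 * ch, 1),
          )

      def forward(self, x):
          _, _, h, w = x.shape
          freq = torch.fft.rfft2(x, norm="ortho")       # complex (B, C, H, W//2+1)
          f = torch.cat([freq.real, freq.imag], dim=1)  # to real-valued channels
          f = self.spectral(f)
          real, imag = f.chunk(2, dim=1)
          spec = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
          return x + self.spatial(x) + spec             # residual fusion
  ```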
- Evaluate our network with extensive experiments, including an ablation study and visual scoring, on two distinct datasets of brain and abdominal CT images. Across six pixel- and feature-space metrics, the results show superior quantitative and qualitative performance compared with state-of-the-art denoising techniques.
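  As an example of the pixel-space side of such an evaluation, the snippet below computes RMSE and PSNR; the paper's full six-metric suite, which also includes feature-space measures computed with pretrained networks, is not reproduced here.

  ```python
  # Two common pixel-space metrics; feature-space metrics are omitted.
  import torch

  def rmse(pred, target):
      return torch.sqrt(torch.mean((pred - target) ** 2))

  def psnr(pred, target, data_range=1.0):
      # data_range is the maximum possible intensity after normalization.
      mse = torch.mean((pred - target) ** 2)
      return 10.0 * torch.log10(data_range ** 2 / mse)
  ```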
This repository provides the official implementation of MTD-GAN, introduced in the following paper:
Generative Adversarial Network with Robust Discriminator Through Multi-Task Learning for Low-Dose CT Denoising
Authors: Sunggu Kyung, Jongjun Won, Seongyong Pak, Sunwoo Kim, Sangyoon Lee, Kanggil Park, Gil-Sun Hong, and Namkug Kim
MI2RL LAB
IEEE Transactions on Medical Imaging (TMI)
DOI: 10.1109/TMI.2024.3449647
- Linux
- CUDA 11.6
- Python 3.8.5
- PyTorch 1.13.1
```bash
$ git clone https://github.com/babbu3682/MTD-GAN.git
$ cd MTD-GAN/
$ pip install -r requirements.txt
```
Download the dataset from the Low Dose CT Grand Challenge.
- The processed dataset directory structure is as follows (a loader sketch follows the tree):

```
datasets/MAYO
├── train
│   ├── full_3mm
│   │   ├── L067
│   │   ├── L096
│   │   ├── L109
│   │   ├── L143
│   │   └── ...
│   └── quarter_3mm
│       ├── L067
│       ├── L096
│       ├── L109
│       ├── L143
│       └── ...
├── valid
│   ├── full_3mm
│   │   ├── L333
│   │   └── ...
│   └── quarter_3mm
│       ├── L333
│       └── ...
└── test
    ├── full_3mm
    │   ├── L506
    │   └── ...
    └── quarter_3mm
        ├── L506
        └── ...
```
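A hypothetical pairing loader for this layout is sketched below: each file under `quarter_3mm` (low-dose) is paired with its counterpart under `full_3mm` (normal-dose). The `load_fn` reader is left to the user, and in practice the repo's own dataset classes (selected via `--dataset 'mayo'`) should be used.

```python
# Illustrative only: pairs low-dose and normal-dose files by mirrored paths.
from pathlib import Path
from torch.utils.data import Dataset

class MayoPairs(Dataset):
    def __init__(self, root="datasets/MAYO/train", load_fn=None):
        # Collect all low-dose files; full-dose paths mirror them.
        self.low_paths = sorted(p for p in Path(root, "quarter_3mm").rglob("*") if p.is_file())
        self.load_fn = load_fn  # e.g., a DICOM reader

    def __len__(self):
        return len(self.low_paths)

    def __getitem__(self, i):
        low = self.low_paths[i]
        full = Path(str(low).replace("quarter_3mm", "full_3mm"))
        return self.load_fn(low), self.load_fn(full)
```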
• train:

```bash
CUDA_VISIBLE_DEVICES=2 python -W ignore train.py \
    --dataset 'mayo' \
    --dataset-type-train 'window_patch' \
    --dataset-type-valid 'window' \
    --batch-size 20 \
    --train-num-workers 16 \
    --valid-num-workers 16 \
    --model 'MTD_GAN_Method' \
    --loss 'L1 Loss' \
    --method 'pcgrad' \
    --optimizer 'adamw' \
    --scheduler 'poly_lr' \
    --epochs 500 \
    --warmup-epochs 10 \
    --lr 1e-4 \
    --min-lr 1e-6 \
    --multi-gpu-mode 'Single' \
    --device 'cuda' \
    --print-freq 10 \
    --save-checkpoint-every 1 \
    --checkpoint-dir '/workspace/sunggu/4.Dose_img2img/MTD_GAN/checkpoints/abdomen/MTD_GAN' \
    --save-dir '/workspace/sunggu/4.Dose_img2img/MTD_GAN/predictions/train/abdomen/MTD_GAN' \
    --memo 'abdomen, 500 epoch, node 14'
```
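The `--method 'pcgrad'` flag selects an MTL gradient-combination strategy. For reference, the core PCGrad rule (Yu et al., 2020) is sketched below: when two task gradients conflict (negative dot product), one is projected onto the normal plane of the other. This is a sketch of the rule only, not this repository's implementation.

```python
# PCGrad core rule: project conflicting per-task gradients, then sum.
import random
import torch

def pcgrad_combine(grads):
    # grads: list of flattened per-task gradient tensors of the same shape.
    combined = []
    for i, g_i in enumerate(grads):
        g = g_i.clone()
        others = [g_j for j, g_j in enumerate(grads) if j != i]
        random.shuffle(others)  # PCGrad projects against others in random order
        for g_j in others:
            dot = torch.dot(g, g_j)
            if dot < 0:  # conflict: remove the component of g along g_j
                g = g - (dot / (g_j.norm() ** 2 + 1e-12)) * g_j
        combined.append(g)
    return torch.stack(combined).sum(dim=0)  # summed projected gradients
```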
• test:

```bash
CUDA_VISIBLE_DEVICES=2 python -W ignore test.py \
    --dataset 'mayo_test' \
    --dataset-type-test 'window' \
    --test-batch-size 1 \
    --test-num-workers 16 \
    --model 'MTD_GAN' \
    --loss 'L1 Loss' \
    --multi-gpu-mode 'Single' \
    --device 'cuda' \
    --print-freq 10 \
    --checkpoint-dir '/workspace/sunggu/4.Dose_img2img/MTD_GAN/checkpoints/abdomen/MTD_GAN' \
    --save-dir '/workspace/sunggu/4.Dose_img2img/MTD_GAN/predictions/test/abdomen/MTD_GAN' \
    --resume "/workspace/sunggu/4.Dose_img2img/MTD_GAN/checkpoints/abdomen/MTD_GAN/epoch_77777_checkpoint.pth" \
    --memo 'abdomen, node 14' \
    --epoch 77777
```
- CSV files for evaluating statistical significance with p-values using a paired t-test. 🖇️Link
- High-quality images included in our manuscript. 🖇️Link
- Evaluation criteria for the blind reader study. 🖇️Link
For personal-information security reasons concerning medical data in Korea, our data cannot be disclosed. The weights of our work and of the previous works are too large to upload; please contact us by email using the appropriate form.
If you use this code for your research, please cite our paper:
⏳ The citation entry is scheduled to be uploaded soon.
We acknowledge the open-source libraries, including Diffusers and MONAI Generative Models, which enabled valuable comparisons in this study, and we thank the authors of the pioneering works we compared against (e.g., RED-CNN, EDCNN, CTformer, Restormer, WGAN_VGG, MAP-NN, DUGAN). The MTL weight-adjustment code was adapted from nash-mtl.
This project is distributed under the MIT License.