SARU-Net: A Self Attention ResUnet to generate synthetic CT images for MRI-only BNCT treatment planning
- SARU++
- 3D SARU++
- Backend: SARU-flask
- Frontend: SARU-VUE
- TOPAS utils: topas4bnct
- Linux or macOS
- Python 3
- CPU or NVIDIA GPU (with CUDA and cuDNN)
We advise creating a new conda environment containing all necessary packages. The repository includes an environment file. Create and activate the new environment with

```shell
conda env create -f requirements.yml
conda activate attngan
```
The data should be organized in a directory structure similar to the following:
```
root
└── datasets
    └── MRICT
        ├── train
        │   ├── patient_001_001.png
        │   ├── ...
        │   ├── patient_002_001.png
        │   ├── ...
        │   └── patient_100_025.png
        ├── test
        │   ├── patient_101_001.png
        │   ├── ...
        │   ├── patient_102_002.png
        │   ├── ...
        │   └── patient_110_025.png
        └── val
            ├── patient_111_001.png
            ├── ...
            ├── patient_112_002.png
            ├── ...
            └── ...
```
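A small helper can sort slices named `patient_<id>_<slice>.png` into these split folders. This is a hypothetical sketch (the function name, arguments, and id ranges are our own, not part of the repository):

```python
import shutil
from pathlib import Path

def split_dataset(src, dst, train_ids, test_ids, val_ids):
    """Copy patient_<id>_<slice>.png files from `src` into
    <dst>/train, <dst>/test and <dst>/val by patient id.

    Illustrative helper only; adjust the id ranges to your own cohort.
    """
    splits = {"train": set(train_ids), "test": set(test_ids), "val": set(val_ids)}
    for png in Path(src).glob("patient_*.png"):
        pid = int(png.stem.split("_")[1])  # patient number from the filename
        for name, ids in splits.items():
            if pid in ids:
                out = Path(dst) / name
                out.mkdir(parents=True, exist_ok=True)
                shutil.copy(png, out / png.name)
```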
Our pre-trained model was trained on 130+ patient cases, totaling about 4,500 image pairs, using data augmentation methods such as random flipping, random scaling, and random cropping.
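The augmentations listed above can be sketched as follows. This is an illustrative NumPy sketch, not the project's actual pipeline; the function name and scale range are our own assumptions. The key point is that both images of a pair receive identical transforms so they stay aligned:

```python
import numpy as np

def augment_pair(mri, ct, rng=np.random.default_rng()):
    """Apply the same random flip / scale / crop to an MRI-CT slice pair.

    `mri` and `ct` are 2-D arrays of equal shape. Illustrative sketch only.
    """
    # Random horizontal flip (applied to both images so they stay aligned)
    if rng.random() < 0.5:
        mri, ct = np.fliplr(mri), np.fliplr(ct)

    # Random scaling via nearest-neighbour index resampling (0.9x - 1.1x assumed)
    scale = rng.uniform(0.9, 1.1)
    h, w = mri.shape
    rows = np.clip((np.arange(int(h * scale)) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(int(w * scale)) / scale).astype(int), 0, w - 1)
    mri, ct = mri[np.ix_(rows, cols)], ct[np.ix_(rows, cols)]

    # Random crop back to at most the original size
    ph, pw = min(h, mri.shape[0]), min(w, mri.shape[1])
    top = rng.integers(0, mri.shape[0] - ph + 1)
    left = rng.integers(0, mri.shape[1] - pw + 1)
    return mri[top:top + ph, left:left + pw], ct[top:top + ph, left:left + pw]
```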
We release a set of pretrained weights to allow reproducibility of our results. The weights can be downloaded from Google Drive (or Baidu Netdisk). Once downloaded, unpack the file in the root of the project and test the weights with the inference notebook.
All models were trained on 2× NVIDIA TITAN V (12 GB) GPUs.
The training routine of SARU is based mainly on the pix2pix codebase; see the official repository for details.
To launch a default training, run

```shell
python train.py --data_root path/to/data --gpu_ids 0,1,2 --netG attnunet --netD basic --model pix2pix --name attnunet-gan
```
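For reference, pix2pix-style training optimizes the generator with an adversarial term plus an L1 reconstruction term (λ = 100 in the original pix2pix paper). A minimal NumPy sketch of that generator objective (function and argument names are our own, not the repository's API):

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake_ct, real_ct, lam=100.0):
    """Generator objective used by pix2pix-style training (sketch).

    d_fake: discriminator probabilities on generated images, in (0, 1)
    lam:    weight of the L1 term (100 in the original pix2pix paper)
    """
    adv = -np.mean(np.log(d_fake + 1e-12))   # push D to score fakes as real
    l1 = np.mean(np.abs(fake_ct - real_ct))  # pixel-wise fidelity to the real CT
    return adv + lam * l1
```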
To help users better understand and use our code, we briefly describe the functionality and implementation of each package and module here.
If you use this code for your research, please cite our paper.
```bibtex
@article{zhao2022saru,
  title={SARU: A self attention ResUnet to generate synthetic CT images for MR-only BNCT treatment planning},
  author={Zhao, Sheng and Geng, Changran and Guo, Chang and Tian, Feng and Tang, Xiaobin},
  journal={Medical Physics},
  year={2022},
  publisher={Wiley Online Library}
}
```
Related projects: contrastive-unpaired-translation (CUT) | CycleGAN-Torch | pix2pix-Torch | pix2pixHD | BicycleGAN | vid2vid | SPADE/GauGAN | iGAN | GAN Dissection | GAN Paint
Our code is inspired by pytorch-CycleGAN-and-pix2pix and pytorch-CBAM.