This repository contains all code and experiments created for dark corner artifact removal for ISIC (and other) skin lesion images.
Please read the DISCLAIMER before using any of the methods or code from this repository.
If you use any part of the DCA masking/removal process from this research project, please consider citing the following paper:
```bibtex
@InProceedings{Pewton_2022_CVPR,
  author    = {Pewton, Samuel William and Yap, Moi Hoon},
  title     = {Dark Corner on Skin Lesion Image Dataset: Does It Matter?},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {4831-4839}
}
```
The main dataset used in this research is the result of the duplicate removal process detailed in this repository.
If you use this dataset or any of the associated methods, please consider citing the following paper:
```bibtex
@article{cassidy2021isic,
  title   = {Analysis of the ISIC Image Datasets: Usage, Benchmarks and Recommendations},
  author  = {Bill Cassidy and Connah Kendrick and Andrzej Brodzicki and Joanna Jaworek-Korjakowska and Moi Hoon Yap},
  journal = {Medical Image Analysis},
  year    = {2021},
  issn    = {1361-8415},
  doi     = {https://doi.org/10.1016/j.media.2021.102305},
  url     = {https://www.sciencedirect.com/science/article/pii/S1361841521003509}
}
```
Directories marked `**` need to be created by the user.
```
Dark_Corner_Artifact_Removal
├─ Data
|  └─ Annotations
|     └─ DCA_Masks
|     |  └─ train
|     |  |  └─ mel
|     |  |  └─ oth
|     |  └─ val
|     |     └─ mel
|     |     └─ oth
|  └─ Dermofit**
|  └─ Metrics_Dermofit
|  |  └─ generated_metrics
|  |  └─ input**
|  |  |  └─ gt**
|  |  |  └─ large**
|  |  |  └─ medium**
|  |  |  └─ oth**
|  |  |  └─ small**
|  |  └─ output**
|  |     └─ large**
|  |     └─ medium**
|  |     └─ oth**
|  |     └─ small**
|  └─ train_balanced_224x224
|  |  └─ train
|  |  |  └─ mel
|  |  |  └─ oth
|  |  └─ val
|  |     └─ mel
|  |     └─ oth
|  └─ train_balanced_224x224_inpainted_ns
|  |  └─ train
|  |  |  └─ mel
|  |  |  └─ oth
|  |  └─ val
|  |     └─ mel
|  |     └─ oth
|  └─ train_balanced_224x224_inpainted_telea
|     └─ train
|     |  └─ mel
|     |  └─ oth
|     └─ val
|        └─ mel
|        └─ oth
├─ Models
|  └─ Baseline
|  |  └─ .. all model experiments ..
|  └─ Inpaint_NS
|  |  └─ .. all model experiments ..
|  └─ Inpaint_Telea
|     └─ .. all model experiments ..
├─ Modules
└─ Notebooks
   └─ 0 - Preliminary Experiments
   └─ 1 - Dataset
   └─ 2 - Dynamic Masking
   └─ 3 - Image Modifications
   └─ 4 - Results
```
- Generate the ISIC balanced dataset using https://github.com/mmu-dermatology-research/isic_duplicate_removal_strategy and save it inside the `Data` directory.
- Download `EDSR_x4.pb` from https://github.com/Saafke/EDSR_Tensorflow and save it inside the `Models` directory: `/Models/EDSR_x4.pb`
- Create a `Dermofit` directory inside the `Data` directory: `Data/Dermofit`
- Load the Dermofit image library (https://licensing.edinburgh-innovations.ed.ac.uk/product/dermofit-image-library) into the `Dermofit` directory. It will be split into many sub-folders (AK, ALLBCC, ALLDF, etc.); leave that structure as it is.
- Create the `Metrics_Dermofit` file structure as shown above.
- Modify the filepaths in `/Modules/prepare_dermofit.py` and run it (only required if using Dermofit).
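For convenience, the user-created (`**`) parts of the `Metrics_Dermofit` tree can also be built with a short script. A minimal sketch using only the standard library, assuming it is run from the repository root:

```python
from pathlib import Path

# Recreate the "**" directories of the Metrics_Dermofit tree shown above.
root = Path("Data") / "Metrics_Dermofit"

for sub in ["gt", "large", "medium", "oth", "small"]:
    (root / "input" / sub).mkdir(parents=True, exist_ok=True)
for sub in ["large", "medium", "oth", "small"]:
    (root / "output" / sub).mkdir(parents=True, exist_ok=True)
(root / "generated_metrics").mkdir(parents=True, exist_ok=True)
```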
This project requires the following installations:
- Python 3.9.7
- Anaconda 4.11.0
- pandas 1.3.5
- numpy 1.21.5
- scikit-learn 1.0.2
- scikit-image 0.16.2
- Jupyter Notebook
- matplotlib 3.5.0
- OpenCV 4.5.5
- Pillow 8.4.0
- TensorFlow 2.9.0-dev20220203
- TensorFlow-GPU 2.9.0-dev20220203
- CUDA 11.2.1
- CuDNN 8.1
- Keras
Load `/Notebooks/2 - Dynamic Masking/Mask All DCA Images.ipynb` in Jupyter Notebook and run all cells.
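As a rough illustration of what the masking step produces: a DCA mask flags dark pixels connected to the image border. The sketch below is a simplified stand-in for the repository's dynamic-masking algorithm, not the algorithm itself:

```python
from collections import deque
import numpy as np

def dca_mask(gray, threshold=40):
    """Rough DCA mask: dark pixels connected to the image border.

    gray: 2-D uint8 array. Returns a uint8 mask (255 = artifact).
    Simplified illustration only, not the repository's exact method.
    """
    h, w = gray.shape
    dark = gray < threshold
    mask = np.zeros((h, w), dtype=np.uint8)
    seen = np.zeros((h, w), dtype=bool)
    # Seed a flood fill with every dark pixel that touches the border.
    q = deque((r, c) for r in range(h) for c in range(w)
              if dark[r, c] and (r in (0, h - 1) or c in (0, w - 1)))
    for r, c in q:
        seen[r, c] = True
    while q:
        r, c = q.popleft()
        mask[r, c] = 255
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and dark[nr, nc] and not seen[nr, nc]:
                seen[nr, nc] = True
                q.append((nr, nc))
    return mask
```

Dark blobs in the lesion interior are deliberately left unmasked, since only border-connected dark regions are corner artifacts.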
Load `/Notebooks/3 - Image Modifications/Inpaint Dataset.ipynb` in Jupyter Notebook and run cells as required. Running the cells individually is recommended, as the removal process is time-consuming.
Run `/Modules/generate_dermofit_metrics.py`.
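The script compares generated masks against the `gt` ground-truth masks. Typical mask-overlap measures (illustrative; the script's exact metric set may differ) look like:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union between two binary masks."""
    pred, gt = pred > 0, gt > 0
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient (pixel-level F1) between two binary masks."""
    pred, gt = pred > 0, gt > 0
    total = pred.sum() + gt.sum()
    return 2 * np.logical_and(pred, gt).sum() / total if total else 1.0
```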
Modify and run `/Modules/AbolatationStudy.py` as required.
Modify and run `/Modules/model_performance.py` as required.
Load `/Notebooks/4 - Results/GradCAM Method Comparison.ipynb` in Jupyter Notebook and run all cells.
Load `/Notebooks/4 - Results/GradCAM-Inscribed DCAs.ipynb` in Jupyter Notebook and run all cells.
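The Grad-CAM notebooks follow the standard Keras Grad-CAM recipe (see the Rosebrock reference at the end of this README). A minimal, self-contained sketch of that mechanic; the toy model, layer name, and random input are illustrative assumptions, not the repository's trained models:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in CNN on a 224x224 input (the experiments use VGG16,
# ResNet, etc.); only the Grad-CAM mechanics matter here.
inputs = tf.keras.Input((224, 224, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu", name="last_conv")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

def grad_cam(model, image, conv_name="last_conv", class_idx=0):
    """Grad-CAM heatmap for one image, normalised to [0, 1]."""
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # GAP over spatial dims
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(model, np.random.rand(224, 224, 3).astype("float32"))
```

In the notebooks, heatmaps like this are overlaid on the lesion images to check whether the models attend to the DCA regions.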
Full results for the deep learning experiments:
Baseline Model Results:

| Model | Best Epoch | Acc | TPR | TNR | F1 | AUC | Precision (Micro-Average) |
| --- | --- | --- | --- | --- | --- | --- | --- |
VGG16 | 33 | 0.78 | 0.84 | 0.73 | 0.80 | 0.87 | 0.76 |
VGG19 | 32 | 0.78 | 0.80 | 0.76 | 0.79 | 0.87 | 0.77 |
Xception | 20 | 0.81 | 0.86 | 0.76 | 0.82 | 0.88 | 0.78 |
ResNet50 | 18 | 0.79 | 0.85 | 0.74 | 0.80 | 0.87 | 0.77 |
ResNet101 | 6 | 0.78 | 0.85 | 0.70 | 0.79 | 0.85 | 0.74 |
ResNet152 | 19 | 0.79 | 0.84 | 0.74 | 0.80 | 0.87 | 0.76 |
ResNet50V2 | 14 | 0.77 | 0.82 | 0.73 | 0.78 | 0.85 | 0.75 |
ResNet101V2 | 41 | 0.79 | 0.79 | 0.78 | 0.79 | 0.87 | 0.78 |
ResNet152V2 | 25 | 0.78 | 0.78 | 0.77 | 0.78 | 0.85 | 0.77 |
InceptionV3 | 36 | 0.80 | 0.80 | 0.81 | 0.80 | 0.88 | 0.80 |
InceptionResNetV2 | 20 | 0.82 | 0.83 | 0.80 | 0.82 | 0.89 | 0.81 |
DenseNet121 | 5 | 0.76 | 0.84 | 0.67 | 0.78 | 0.82 | 0.72 |
DenseNet169 | 36 | 0.80 | 0.87 | 0.72 | 0.81 | 0.88 | 0.76 |
DenseNet201 | 17 | 0.79 | 0.87 | 0.70 | 0.80 | 0.86 | 0.74 |
EfficientNetB0 | 28 | 0.78 | 0.87 | 0.69 | 0.80 | 0.87 | 0.74 |
EfficientNetB1 | 19 | 0.77 | 0.86 | 0.68 | 0.79 | 0.85 | 0.73 |
EfficientNetB3 | 13 | 0.75 | 0.88 | 0.63 | 0.78 | 0.82 | 0.70 |
EfficientNetB4 | 46 | 0.78 | 0.85 | 0.71 | 0.79 | 0.86 | 0.74 |
Inpainting Results (Navier-Stokes based method):

| Model | Best Epoch | Acc | TPR | TNR | F1 | AUC | Precision (Micro-Average) |
| --- | --- | --- | --- | --- | --- | --- | --- |
VGG16 | 49 | 0.79 | 0.85 | 0.72 | 0.80 | 0.87 | 0.75 |
VGG19 | 34 | 0.78 | 0.84 | 0.72 | 0.79 | 0.86 | 0.75 |
Xception | 19 | 0.80 | 0.83 | 0.78 | 0.81 | 0.88 | 0.79 |
ResNet50 | 39 | 0.79 | 0.84 | 0.75 | 0.80 | 0.88 | 0.77 |
ResNet101 | 33 | 0.79 | 0.87 | 0.71 | 0.81 | 0.87 | 0.75 |
ResNet152 | 17 | 0.79 | 0.85 | 0.73 | 0.80 | 0.88 | 0.76 |
ResNet50V2 | 20 | 0.79 | 0.81 | 0.76 | 0.79 | 0.87 | 0.77 |
ResNet101V2 | 40 | 0.79 | 0.88 | 0.70 | 0.80 | 0.88 | 0.79 |
ResNet152V2 | 23 | 0.78 | 0.80 | 0.75 | 0.78 | 0.86 | 0.76 |
InceptionV3 | 22 | 0.79 | 0.80 | 0.77 | 0.79 | 0.87 | 0.78 |
InceptionResNetV2 | 19 | 0.80 | 0.79 | 0.81 | 0.80 | 0.88 | 0.81 |
DenseNet121 | 37 | 0.80 | 0.83 | 0.77 | 0.80 | 0.88 | 0.78 |
DenseNet169 | 12 | 0.77 | 0.78 | 0.75 | 0.77 | 0.85 | 0.76 |
DenseNet201 | 25 | 0.78 | 0.80 | 0.75 | 0.78 | 0.86 | 0.76 |
EfficientNetB0 | 20 | 0.77 | 0.88 | 0.66 | 0.79 | 0.86 | 0.72 |
EfficientNetB1 | 13 | 0.76 | 0.78 | 0.75 | 0.77 | 0.83 | 0.75 |
EfficientNetB3 | 28 | 0.77 | 0.82 | 0.73 | 0.78 | 0.86 | 0.75 |
EfficientNetB4 | 37 | 0.78 | 0.88 | 0.69 | 0.80 | 0.87 | 0.74 |
Inpainting Results (Telea based method):

| Model | Best Epoch | Acc | TPR | TNR | F1 | AUC | Precision (Micro-Average) |
| --- | --- | --- | --- | --- | --- | --- | --- |
VGG16 | 54 | 0.79 | 0.82 | 0.75 | 0.79 | 0.87 | 0.77 |
VGG19 | 10 | 0.71 | 0.78 | 0.64 | 0.73 | 0.78 | 0.68 |
Xception | 10 | 0.79 | 0.84 | 0.75 | 0.80 | 0.88 | 0.77 |
ResNet50 | 10 | 0.77 | 0.81 | 0.74 | 0.78 | 0.87 | 0.76 |
ResNet101 | 33 | 0.80 | 0.80 | 0.79 | 0.80 | 0.88 | 0.79 |
ResNet152 | 23 | 0.79 | 0.80 | 0.78 | 0.79 | 0.87 | 0.78 |
ResNet50V2 | 23 | 0.78 | 0.76 | 0.81 | 0.78 | 0.87 | 0.80 |
ResNet101V2 | 25 | 0.79 | 0.78 | 0.79 | 0.78 | 0.87 | 0.79 |
ResNet152V2 | 29 | 0.79 | 0.83 | 0.75 | 0.80 | 0.87 | 0.77 |
InceptionV3 | 18 | 0.79 | 0.81 | 0.76 | 0.79 | 0.86 | 0.77 |
InceptionResNetV2 | 11 | 0.79 | 0.88 | 0.69 | 0.81 | 0.88 | 0.74 |
DenseNet121 | 61 | 0.80 | 0.80 | 0.80 | 0.80 | 0.88 | 0.80 |
DenseNet169 | 18 | 0.78 | 0.75 | 0.80 | 0.77 | 0.87 | 0.79 |
DenseNet201 | 38 | 0.79 | 0.84 | 0.73 | 0.80 | 0.87 | 0.76 |
EfficientNetB0 | 18 | 0.78 | 0.85 | 0.72 | 0.80 | 0.87 | 0.75 |
EfficientNetB1 | 51 | 0.78 | 0.86 | 0.79 | 0.78 | 0.87 | 0.79 |
EfficientNetB3 | 49 | 0.79 | 0.79 | 0.78 | 0.79 | 0.87 | 0.78 |
EfficientNetB4 | 10 | 0.75 | 0.86 | 0.64 | 0.77 | 0.82 | 0.71 |
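For reference, the tabulated metrics follow the standard binary-classification definitions, with melanoma (`mel`) as the positive class. A quick sketch with hypothetical confusion-matrix counts (not taken from any experiment above):

```python
def binary_metrics(tp, fn, tn, fp):
    """Acc/TPR/TNR/F1/precision from binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn)                      # sensitivity / recall
    tnr = tn / (tn + fp)                      # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"Acc": acc, "TPR": tpr, "TNR": tnr, "F1": f1, "Precision": precision}

# Hypothetical counts for a balanced 200-image validation split
m = binary_metrics(tp=84, fn=16, tn=73, fp=27)
```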
```bibtex
@article{cassidy2021isic,
  title   = {Analysis of the ISIC Image Datasets: Usage, Benchmarks and Recommendations},
  author  = {Bill Cassidy and Connah Kendrick and Andrzej Brodzicki and Joanna Jaworek-Korjakowska and Moi Hoon Yap},
  journal = {Medical Image Analysis},
  year    = {2021},
  issn    = {1361-8415},
  doi     = {https://doi.org/10.1016/j.media.2021.102305},
  url     = {https://www.sciencedirect.com/science/article/pii/S1361841521003509}
}

@misc{rosebrock_2020,
  title   = {Grad-CAM: Visualize class activation maps with Keras, TensorFlow, and Deep Learning},
  url     = {https://pyimagesearch.com/2020/03/09/grad-cam-visualize-class-activation-maps-with-keras-tensorflow-and-deep-learning/},
  journal = {PyImageSearch},
  author  = {Rosebrock, Adrian},
  year    = {2020},
  month   = {3},
  note    = {[Accessed: 10-03-2022]}
}

@article{scikit-image,
  title    = {scikit-image: image processing in {P}ython},
  author   = {van der Walt, {S}t\'efan and {S}ch\"onberger, {J}ohannes {L}. and
              {Nunez-Iglesias}, {J}uan and {B}oulogne, {F}ran\c{c}ois and {W}arner,
              {J}oshua {D}. and {Y}ager, {N}eil and {G}ouillart, {E}mmanuelle and
              {Y}u, {T}ony and the scikit-image contributors},
  year     = {2014},
  month    = {6},
  keywords = {Image processing, Reproducible research, Education,
              Visualization, Open source, Python, Scientific programming},
  volume   = {2},
  pages    = {e453},
  journal  = {PeerJ},
  issn     = {2167-8359},
  url      = {https://doi.org/10.7717/peerj.453},
  doi      = {10.7717/peerj.453}
}

@article{scikit-learn,
  title   = {Scikit-learn: Machine Learning in {P}ython},
  author  = {Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
             and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
             and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
             Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal = {Journal of Machine Learning Research},
  volume  = {12},
  pages   = {2825--2830},
  year    = {2011}
}

@inproceedings{lim2017enhanced,
  title     = {Enhanced deep residual networks for single image super-resolution},
  author    = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Mu Lee, Kyoung},
  booktitle = {Proceedings of the IEEE conference on computer vision and pattern recognition workshops},
  pages     = {136--144},
  year      = {2017}
}
```