Code for our TPAMI 2022 paper "An Intermediate-level Attack Framework on The Basis of Linear Regression".
Requirements:
- Python 3.8.8
- PyTorch 1.7.1
- Torchvision 0.8.2
- Joblib 1.1.0
- Scikit-learn (sklearn)
Select images from the ImageNet validation set, and write a .csv file as follows:
```
class_index, class, image_name
0,n01440764,ILSVRC2012_val_00002138.JPEG
2,n01484850,ILSVRC2012_val_00004329.JPEG
...
```
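Such a file can be produced with Python's standard `csv` module; the triples below are just the two rows from the example above, and the output filename is a placeholder, not a path the repository prescribes:

```python
import csv

# Placeholder selection of (class_index, wnid, filename) triples from the
# ImageNet validation set; replace with your own sampling logic.
selected = [
    (0, "n01440764", "ILSVRC2012_val_00002138.JPEG"),
    (2, "n01484850", "ILSVRC2012_val_00004329.JPEG"),
]

with open("selected_images.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["class_index", "class", "image_name"])  # header row
    writer.writerows(selected)
```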
Perform different baseline attacks (IFGSM/PGD/LinBP) with restarts under the $\ell_\infty$ or $\ell_2$ constraint:
```
python3 attack_baseline.py -h
usage: attack_baseline.py [-h] [--epsilon EPSILON] [--niters NITERS] [--batch-size BATCH_SIZE]
                          [--save-dir SAVE_DIR] [--force] [--seed SEED] [--constraint {linf,l2}]
                          [--method {IFGSM,PGD,LinBP}] [--linbp-layer LINBP_LAYER] [--restart RESTART]
                          [--data-dir DATA_DIR] [--data-info-dir DATA_INFO_DIR]
                          [--source-model-dir SOURCE_MODEL_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --epsilon EPSILON
  --niters NITERS
  --batch-size BATCH_SIZE
  --save-dir SAVE_DIR
  --force
  --seed SEED
  --constraint {linf,l2}
  --method {IFGSM,PGD,LinBP}
  --linbp-layer LINBP_LAYER
  --restart RESTART
  --data-dir DATA_DIR
  --data-info-dir DATA_INFO_DIR
  --source-model-dir SOURCE_MODEL_DIR
```
where `--data-info-dir` specifies the .csv file of selected images.
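For reference, the core IFGSM update under the $\ell_\infty$ constraint can be sketched as below (NumPy, with a toy gradient function standing in for the model's loss gradient; this is an illustrative sketch, not the repository's actual code):

```python
import numpy as np

def ifgsm_linf(x, grad_fn, epsilon=8 / 255, niters=10):
    """Iterative FGSM: repeated sign-gradient ascent steps, projected back
    into the epsilon-ball around the clean input x and the valid pixel range."""
    alpha = epsilon / niters          # fixed per-step size
    x_adv = x.copy()
    for _ in range(niters):
        g = grad_fn(x_adv)            # gradient of the attack loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)
        # project into the l_inf ball around x, then clip to [0, 1]
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

PGD differs mainly in starting from a random point inside the epsilon-ball (cf. the `--restart` option for multiple restarts).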
Perform our ILA++-LR with different linear regression methods (RR/SVR/ElasticNet) under the $\ell_\infty$ or $\ell_2$ constraint:
```
python3 attack_ilapplr.py -h
usage: attack_ilapplr.py [-h] [--epsilon EPSILON] [--niters NITERS] [--batch-size BATCH_SIZE]
                         [--save-dir SAVE_DIR] [--force] [--seed SEED] [--constraint {linf,l2}]
                         [--history-dir HISTORY_DIR] [--ila-layer ILA_LAYER]
                         [--lr-method {RR,SVR,ElasticNet}] [--njobs NJOBS] [--random-start]
                         [--data-dir DATA_DIR] [--data-info-dir DATA_INFO_DIR]
                         [--source-model-dir SOURCE_MODEL_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --epsilon EPSILON
  --niters NITERS
  --batch-size BATCH_SIZE
  --save-dir SAVE_DIR
  --force
  --seed SEED
  --constraint {linf,l2}
  --history-dir HISTORY_DIR
  --ila-layer ILA_LAYER
  --lr-method {RR,SVR,ElasticNet}
  --njobs NJOBS
  --random-start
  --data-dir DATA_DIR
  --data-info-dir DATA_INFO_DIR
  --source-model-dir SOURCE_MODEL_DIR
```
where `--history-dir` is the directory of adversarial examples generated by the baseline attacks, and `--njobs` is the number of parallel processes used for the linear regression.
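To illustrate the linear-regression step: feature differences collected at the intermediate layer during the baseline attack form the rows of a matrix X, the recorded attack losses form y, and ridge regression (RR) yields a weight vector used as the guiding direction. The closed-form sketch below is only illustrative (the repository's implementation and its regularization settings may differ):

```python
import numpy as np

def ridge_direction(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y.
    X: (n_samples, n_features) flattened intermediate feature differences;
    y: (n_samples,) attack losses recorded along the baseline attack."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return w  # serves as the target direction at the intermediate layer
```

SVR and ElasticNet simply swap in a different regression estimator for the same fit.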
Evaluate the success rate of adversarial examples:
```
python3 test.py -h
usage: test.py [-h] [--dir DIR] [--njobs NJOBS] [--seed SEED] [--test-log TEST_LOG]
               [--victim-dir VICTIM_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --dir DIR
  --njobs NJOBS
  --seed SEED
  --test-log TEST_LOG
  --victim-dir VICTIM_DIR
```
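For an untargeted attack, the reported success rate is simply the fraction of adversarial examples that the victim model misclassifies; a minimal sketch (the prediction and label arrays below are placeholders, not actual model outputs):

```python
import numpy as np

def success_rate(victim_preds, true_labels):
    """Untargeted attack success rate: share of adversarial examples whose
    victim-model prediction differs from the ground-truth label."""
    victim_preds = np.asarray(victim_preds)
    true_labels = np.asarray(true_labels)
    return float(np.mean(victim_preds != true_labels))
```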
The following resources are very helpful for our work:
Please cite our work in your publications if it helps your research:
```bibtex
@article{guo2022intermediate,
  title={An Intermediate-level Attack Framework on The Basis of Linear Regression},
  author={Guo, Yiwen and Li, Qizhang and Zuo, Wangmeng and Chen, Hao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}

@inproceedings{li2020yet,
  title={Yet Another Intermediate-Level Attack},
  author={Li, Qizhang and Guo, Yiwen and Chen, Hao},
  booktitle={ECCV},
  year={2020}
}
```