Official repository of "Backdoor Attacks on Vision Transformers".

Requirements:
- Python >= 3.7.6
- PyTorch >= 1.4
- torchvision >= 0.5.0
- timm==0.3.2
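If you want to confirm your environment matches these pins, here is a minimal version-check sketch (it only prints the installed versions; nothing in it is specific to this repo):

```python
# Sanity-check the installed versions against the requirements above.
import torch
import torchvision
import timm

print("torch:", torch.__version__)              # expect >= 1.4
print("torchvision:", torchvision.__version__)  # expect >= 0.5.0
print("timm:", timm.__version__)                # pinned to 0.3.2
```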
We follow the same steps as "Hidden Trigger Backdoor Attacks" for dataset preparation and repeat the instructions here for convenience.
    python create_imagenet_filelist.py cfg/dataset.cfg
- Change the ImageNet data source in dataset.cfg (the sketch below dumps the config so you can inspect its keys).
- This script partitions the ImageNet train and val data into poison generation, finetune, and val splits to run the HTBA attack. Adjust the partitioning for your specific needs.
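The .cfg files look like standard INI-style configs, so one way to see which sections and keys dataset.cfg exposes before editing it is to dump it with Python's configparser. This is a hedged sketch: the actual section and key names are defined by the repo, not by this snippet.

```python
# Print every section/key/value in the dataset config before editing it.
# Assumes cfg/dataset.cfg is an INI-style file readable by configparser.
import configparser

cfg = configparser.ConfigParser()
cfg.read("cfg/dataset.cfg")
for section in cfg.sections():
    for key, value in cfg[section].items():
        print(f"[{section}] {key} = {value}")
```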
- Please create a separate configuration file for each experiment.
- One example is cfg/singlesource_singletarget_1000class_finetune_deit_base/experiment_0001_base.cfg. Create a copy and make the desired changes.
- The configuration file makes it easy to control all parameters (e.g., poison injection rate, epsilon, patch_size, trigger_ID).
- First, create a directory data/transformer/<EXPERIMENT_ID> and, inside it, a file named source_wnid_list.txt containing the WNIDs of the source categories for the experiment (see the sketch after this list).
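The directory setup in the last bullet can be scripted. A small sketch, where the experiment ID ("0001") and the WNID are placeholders you should replace with your own experiment's values:

```python
# Create data/transformer/<EXPERIMENT_ID>/source_wnid_list.txt for one experiment.
from pathlib import Path

experiment_id = "0001"        # placeholder: use your own experiment ID
source_wnids = ["n01558993"]  # placeholder: WNIDs of your source categories

exp_dir = Path("data/transformer") / experiment_id
exp_dir.mkdir(parents=True, exist_ok=True)
(exp_dir / "source_wnid_list.txt").write_text("\n".join(source_wnids) + "\n")
```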
Generate the poisoned images:

    python generate_poison_transformer.py cfg/singlesource_singletarget_1000class_finetune_deit_base/experiment_0001_base.cfg

Finetune the model on the poisoned data:

    python finetune_transformer.py cfg/singlesource_singletarget_1000class_finetune_deit_base/experiment_0001_base.cfg

Run the test-time defense:

    python test_time_defense.py cfg/singlesource_singletarget_1000class_finetune_deit_base/experiment_0001_base.cfg
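The three steps above can be chained for a given experiment config. A minimal convenience sketch that simply invokes the repo's scripts in order:

```python
# Run poison generation, finetuning, and the test-time defense in sequence.
import subprocess

cfg = "cfg/singlesource_singletarget_1000class_finetune_deit_base/experiment_0001_base.cfg"
for script in ("generate_poison_transformer.py",
               "finetune_transformer.py",
               "test_time_defense.py"):
    subprocess.run(["python", script, cfg], check=True)  # stop on first failure
```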
- We have provided the triggers used in our experiments in data/triggers.
- To reproduce our experiments, please use the correct poison injection rates. Numbers may vary slightly depending on the randomness of the ImageNet data split.
This project is released under the MIT license.
Please cite us using:
@article{subramanya2022backdoor,
  title={Backdoor Attacks on Vision Transformers},
  author={Subramanya, Akshayvarun and Saha, Aniruddha and Koohpayegani, Soroush Abbasi and Tejankar, Ajinkya and Pirsiavash, Hamed},
  journal={arXiv preprint arXiv:2206.08477},
  year={2022}
}