By Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal, from Princeton University
AISec'22: Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security
We develop a new physical-world backdoor threat model via rotation transformation and demonstrate its effectiveness on image classification and object detection tasks.
The code is tested with Python 3.8 and PyTorch 1.7.0, and it should be compatible with other package versions. To install the remaining dependencies, run `pip install -r requirement.txt`.
- See bd_traffic/ to run experiments on traffic sign dataset (GTSRB).
- See bd_face/ to run experiments on the face recognition dataset (YouTube Faces).
- See bd_yolo/ to run experiments on the object detection dataset (PASCAL).
- Since the main algorithm is easy to adapt to other datasets, we recommend writing the code within your own repo (see the sketch after this list for the general idea).
- We also provide ./other/rotate.py, which is compatible with backdoor-toolbox, for conducting additional experiments.
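As a rough illustration of how a rotation trigger can be injected during training, here is a minimal sketch using torchvision's rotation transform. This is not the repo's actual implementation; the function name `poison_with_rotation` and the parameters `trigger_angle` and `poison_rate` are illustrative only.

```python
import random
import torchvision.transforms.functional as TF

def poison_with_rotation(dataset, target_label, trigger_angle=45.0, poison_rate=0.01):
    """Illustrative sketch: inject a rotation backdoor into a training set.

    dataset       : iterable of (PIL image or tensor, int label) pairs
    target_label  : label the attacker wants rotated inputs to map to
    trigger_angle : rotation angle in degrees used as the trigger
    poison_rate   : fraction of samples to poison
    """
    poisoned = []
    for img, label in dataset:
        if random.random() < poison_rate:
            # Rotate the clean image by the trigger angle and relabel it,
            # so the model associates this rotation with the target label.
            img = TF.rotate(img, trigger_angle)
            label = target_label
        poisoned.append((img, label))
    return poisoned
```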
If anything is unclear, please open an issue or contact Tong Wu ([email protected]).
If you find this work helpful, please consider citing it:
@inproceedings{10.1145/3560830.3563730,
author = {Wu, Tong and Wang, Tianhao and Sehwag, Vikash and Mahloujifar, Saeed and Mittal, Prateek},
title = {Just Rotate It: Deploying Backdoor Attacks via Rotation Transformation},
year = {2022},
isbn = {9781450398800},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3560830.3563730},
doi = {10.1145/3560830.3563730},
booktitle = {Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security},
pages = {91–102},
numpages = {12},
keywords = {physically realizable attacks, spatial robustness, rotation backdoor attacks},
location = {Los Angeles, CA, USA},
series = {AISec'22}
}