We have uploaded the DOA models, which can be found at MNIST, Cifar10. These models can be used as baseline models for defending against patch attacks (physically realizable attacks).
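A minimal sketch of loading one such checkpoint as a baseline, assuming a standard torchvision architecture and a hypothetical file name (`doa_cifar10.pth`); substitute the network and checkpoint actually shipped with this repository.

```python
# Minimal sketch: load a released DOA checkpoint and run it on a dummy input.
# The architecture (resnet18) and the path "doa_cifar10.pth" are placeholders,
# not the repository's actual file names.
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)        # placeholder architecture
model.load_state_dict(torch.load("doa_cifar10.pth", map_location="cpu"))
model.eval()

x = torch.rand(1, 3, 32, 32)                               # dummy CIFAR-10-sized input
with torch.no_grad():
    print(model(x).argmax(dim=1))
```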
Defending Against Physically Realizable Attacks on Image Classification
In Proceedings of the 8th International Conference on Learning Representations (ICLR'20)
We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.
A large literature has emerged on defending deep neural networks against adversarial examples in feature space, e.g., under l_2 or l_infty norm bounds. However, there are no effective methods designed specifically to defend against physically realizable attacks, which are a major concern in real life.
(a) Left three images: an example of the eyeglass frame attack. Left: original face input image. Middle: modified input image (adversarial eyeglasses superimposed on the face). Right: an image of the individual predicted for the adversarial input in the middle image.
(b) Right three images: an example of the stop sign attack. Left: original stop sign input image. Middle: adversarial mask. Right: stop sign image with adversarial stickers, classified as a speed limit sign.
Physically realizable attacks share three characteristics:
- The attack can be implemented in the physical space (e.g., by modifying a stop sign);
- the attack has low suspiciousness: only a small part of the object is modified, and the modification resembles common "noise" that occurs in the real world;
- the attack causes misclassification by state-of-the-art deep neural networks.
We introduce a rectangle that the adversary can place anywhere in the image. The attacker can then additionally introduce l_infty noise inside the rectangle with epsilon = 255, i.e., the occluded pixels are effectively unconstrained.
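The core of the attack at a single fixed location can be sketched as follows, assuming image tensors scaled to [0, 1], so epsilon = 255 means the pixels inside the rectangle are constrained only by the valid pixel range. The function name, step size, and iteration count are illustrative, not the repository's API.

```python
import torch

def roa_at_location(model, x, y, top, left, h, w, steps=30, alpha=0.1):
    """PGD restricted to an h-by-w rectangle at (top, left); pixel values in [0, 1]."""
    x_adv = x.clone()
    x_adv[:, :, top:top + h, left:left + w] = 0.5      # start from a grey sticker
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Ascend the loss, but only for pixels inside the rectangle.
            x_adv[:, :, top:top + h, left:left + w] += (
                alpha * grad[:, :, top:top + h, left:left + w].sign()
            )
            # epsilon = 255: the only remaining constraint is the valid pixel range.
            x_adv.clamp_(0.0, 1.0)
    return x_adv.detach()
```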
- Exhaustive search: add a grey rectangular sticker to the image, consider every possible location, and choose the worst-case attack.
- Gradient-based search: compute the magnitude of the loss gradient w.r.t. each pixel, consider every possible location, and keep the C locations with the largest total magnitude inside the rectangle; then search exhaustively among these C locations (see the sketch after this list).
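A sketch of the gradient-guided search under the same assumptions, operating on a single image for clarity: candidate rectangles are ranked by the gradient magnitude they contain, the top C are scored exhaustively with a plain grey sticker, and PGD is then run inside the winner using the `roa_at_location` helper from the previous sketch. The stride and C values are illustrative.

```python
import torch

def gradient_based_roa(model, x, y, h, w, stride=5, C=10):
    """Pick the C rectangles with the largest gradient mass, keep the one whose
    grey sticker maximizes the loss, then run PGD inside that rectangle."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_req = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x_req), y), x_req)[0]
    grad_mag = grad.abs().sum(dim=1, keepdim=True)          # per-pixel gradient magnitude

    # Score every candidate location (on a coarse stride grid) by the gradient
    # magnitude inside its h x w window, and keep the top C.
    H, W = x.shape[2], x.shape[3]
    scores = [
        (grad_mag[:, :, top:top + h, left:left + w].sum().item(), top, left)
        for top in range(0, H - h + 1, stride)
        for left in range(0, W - w + 1, stride)
    ]
    candidates = sorted(scores, reverse=True)[:C]

    # Exhaustive search among the C candidates: place a grey sticker and
    # keep the location that maximizes the classification loss.
    best_loss, best_loc = -float("inf"), None
    for _, top, left in candidates:
        x_try = x.clone()
        x_try[:, :, top:top + h, left:left + w] = 0.5
        with torch.no_grad():
            l = loss_fn(model(x_try), y).item()
        if l > best_loss:
            best_loss, best_loc = l, (top, left)

    return roa_at_location(model, x, y, best_loc[0], best_loc[1], h, w)
```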
Examples of the ROA attack on face recognition, using a rectangle of size 100 × 50.
(a) Left three images. Left: the original image of A. J. Buckley. Middle: modified input image (ROA superimposed on the face). Right: an image of the predicted individual, Aaron Tveit, given the adversarial input in the middle image.
(b) Right three images. Left: the original image of Abigail Spencer. Middle: modified input image (ROA superimposed on the face). Right: an image of the predicted individual, Aaron Yoo, given the adversarial input in the middle image.
We apply adversarial training with ROA examples to fine-tune the clean model, which yields a significant improvement over conventional robust classifiers.
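A condensed sketch of this DOA fine-tuning loop, assuming the `gradient_based_roa` helper above and a generic PyTorch data loader; for brevity one rectangle location is reused for the whole batch, and the optimizer settings, epochs, and rectangle size are placeholders rather than the paper's hyperparameters.

```python
import torch

def doa_finetune(model, loader, epochs=5, lr=1e-3, h=100, w=50, device="cpu"):
    """Adversarially fine-tune a clean model against rectangular occlusion attacks."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # Craft ROA examples against the current model state.
            model.eval()
            x_adv = gradient_based_roa(model, x, y, h, w)
            model.train()
            # Standard supervised update on the occluded images.
            opt.zero_grad()
            loss = loss_fn(model(x_adv), y)
            loss.backward()
            opt.step()
    return model
```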
Effectiveness of DOA on face recognition against eyeglass frame attacks.
(a) Left image: performance of DOA (using the 100 × 50 rectangle) against the eyeglass frame attack, compared with the most robust variants of adversarial training and randomized smoothing.
(b) Right image: DOA performance for different rectangle dimensions and numbers of PGD iterations inside the rectangle.
- Clone this repository:
git clone https://github.com/tongwu2020/phattacks.git
- Install the dependencies:
conda create -n phattack
conda activate phattack
# Install the following packages:
conda install scipy pandas statsmodels matplotlib seaborn numpy
conda install -c conda-forge opencv
pip install foolbox==2.3.0
See the PyTorch website for the command to install the correct version of PyTorch for your system. Additional packages may be needed.
- Run a specific task: for face recognition,
cd glass
or for traffic sign classification,
cd sign
or for the ROA / DOA experiments on other datasets (e.g., MNIST and CIFAR-10),
cd ROA
View the Quick Demo in Google Colab
@inproceedings{Wu2020Defending,
  title     = {Defending Against Physically Realizable Attacks on Image Classification},
  author    = {Tong Wu and Liang Tong and Yevgeniy Vorobeychik},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://openreview.net/forum?id=H1xscnEKDr}
}
Contact [email protected] with any questions.