
foolproofNN

A project for investigating Adversarial Examples in CNNs. Implemented with PyTorch.

(Figure: an original image, its adversarial perturbation, and the resulting adversarial example.)

Check out the tutorial.ipynb notebook to get a glimpse of what adversarial attacks do. =)

About this repo

Datasets used: CIFAR-10 and MNIST (selected with --dataset {cifar10,mnist}).

Models implemented from scratch (use flag --model wideresnet): the available --model choices are cwcifar10, cwmnist, wideresnet, resnet, effnet and googlenet.

Attacks implemented from scratch: Carlini & Wagner (cw), PGD (pgd) and the Boundary attack (boundary), selected with --attack.

Defences implemented from scratch: TRADES adversarial training (see trades_train.py).

How to use this repo

Note: All attack methods and defences (and their constituent functions) can also be used on their own as standalone tools by importing the respective Python modules, as sketched below.
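
For instance, an attack could be applied directly to a trained model from a few lines of Python. The sketch below is only illustrative: the module path (attacks.pgd), the PGD class and its call signature are assumptions, not this repo's actual API, so check the attack sources for the real names.

# Minimal sketch of standalone use -- the module path, class name and
# call signature below are hypothetical placeholders, not this repo's API.
import torch
from attacks.pgd import PGD  # hypothetical import

model = torch.load('models/Test-CIFAR-10.pt', map_location='cpu')  # assumed checkpoint path
model.eval()

x = torch.rand(8, 3, 32, 32)        # a batch of CIFAR-10-sized inputs
y = torch.randint(0, 10, (8,))      # their labels

# Same hyperparameters as the PGD example in "Running an attack" below.
attack = PGD(model, norm=2, epsilon=0.01, alpha=0.004, iters=40)
x_adv = attack(x, y)                # returns the adversarial examples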

General arguments: --model_name specifies the name under which a model is saved or from which it is loaded.

--model sets the model architecture and, depending on the architecture, may need additional parameters (e.g. --depth/--width or --layers).

--dataset and --root are used either to choose a pre-existing PyTorch dataset or to specify the root directory of the image data.

Training a model - train.py

You can train a model of your choice and tune it with optional command-line arguments.

To start, simply run:

./train.py --model cwcifar10 --dataset cifar10 --model_name Test-CIFAR-10

and to see the usage, run ./train.py --h

usage: train.py [-h] [--dataset {cifar10,mnist}] [--root ROOT]
                [--filter {high,low,band}] [--threshold THRESHOLD]
                [--model_name MODEL_NAME]
                [--model {cwcifar10,cwmnist,wideresnet,resnet,effnet,googlenet}]
                [--layers {18,34,50,101}] [--depth DEPTH] [--width WIDTH]
                [--input_size INPUT_SIZE] [--output_size OUTPUT_SIZE]
                [--pre-trained] [--transfer_learn] [--augment]
                [--epochs EPOCHS] [--lr LR] [--lr-decay LR_DECAY]
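
Architecture-specific parameters can be passed alongside --model. For instance, a WideResNet run might look like the following (the depth, width and epoch values are purely illustrative, not recommended settings):

./train.py --model wideresnet --dataset cifar10 --depth 28 --width 10 --epochs 60 --model_name WRN-CIFAR-10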

Running an attack - attack.py

attack.py takes various optional arguments for both the target model and the attack method.

For example, to run a PGD attack use:

./attack.py --attack pgd --norm 2 --iters 40 --alpha 0.004 --epsilon 0.01 --samples 15 --model cwcifar10

and again see all parameters with ./attack.py --h

usage: attack.py [-h] [--dataset {cifar10,mnist}] [--root ROOT]
                 [--filter {high,low,band}] [--threshold THRESHOLD]
                 [--model_name MODEL_NAME]
                 [--model {cwcifar10,cwmnist,wideresnet,resnet,effnet,googlenet}]
                 [--layers {18,34,50,101}] [--depth DEPTH] [--width WIDTH]
                 [--input_size INPUT_SIZE] [--output_size OUTPUT_SIZE] [--cpu]
                 [--attack {cw,pgd,boundary}] [--samples SAMPLES]
                 [--batch BATCH] [--targeted] [--lr LR] [--epsilon EPSILON]
                 [--alpha ALPHA] [--iters ITERS] [--norm {2,inf}]
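
The other documented options combine in the same way. For example, a targeted C&W run could look like this (the hyperparameter values are illustrative only):

./attack.py --attack cw --targeted --lr 0.01 --iters 100 --samples 15 --batch 5 --model cwcifar10 --dataset cifar10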

Training with TRADES - trades_train.py

Similar to normal training, with some additional parameters for the TRADES algorithm. As an example, try running:

./trades_train.py --model cwcifar10 --dataset cifar10 --lambda 0.5 --norm 2 --iters 50 --alpha 0.009

usage: trades_train.py [-h] [--dataset {cifar10,mnist}] [--root ROOT]
                       [--filter {high,low,band}] [--threshold THRESHOLD]
                       [--model_name MODEL_NAME]
                       [--model {cwcifar10,cwmnist,wideresnet,resnet,effnet,googlenet}]
                       [--layers {18,34,50,101}] [--depth DEPTH]
                       [--width WIDTH] [--input_size INPUT_SIZE]
                       [--output_size OUTPUT_SIZE] [--pre-trained]
                       [--transfer_learn] [--augment] [--epochs EPOCHS]
                       [--lr LR] [--lr-decay LR_DECAY] [--lambda _LAMBDA]
                       [--norm {inf,2}] [--epsilon EPSILON] [--alpha ALPHA]
                       [--iters ITERS]
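
For reference, TRADES (Zhang et al., 2019) trains against an objective of roughly the following form, where the inner maximisation is solved with a PGD-style loop: --epsilon bounds the perturbation, --alpha and --iters control the inner loop, --norm picks the norm, and --lambda sets the trade-off between clean accuracy and robustness. In the paper the robust term is weighted by 1/λ, so check trades_train.py for how --lambda is applied here.

\min_\theta \; \mathbb{E}_{(x,y)} \Big[ \mathrm{CE}\big(f_\theta(x), y\big) \;+\; \tfrac{1}{\lambda} \max_{\lVert x' - x \rVert_p \le \epsilon} \mathrm{KL}\big(f_\theta(x) \,\Vert\, f_\theta(x')\big) \Big]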

Configuration files - config.ini

Here you can set the global configuration parameters used throughout the project: log/output file names, the folders where results are saved, and any other parameters you might need.

config.py lets you set these parameters more conveniently, without editing config.ini directly.

usage: config.py [-h] [-s SECTION] [--verbose {yes,no}] [-k KEY [KEY ...]]
                 [-v VAL [VAL ...]]

--verbose controls what type of output you want when running train.py or attack.py - either descriptive (verbose) or simple.
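
For example (the section, key and value names below are placeholders, not keys that necessarily exist in this repo's config.ini):

./config.py --verbose yes
./config.py -s general -k param_name -v some_value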

You can access these configuration parameters in a Python module as follows:

import configparser

# Read config.ini from the working directory.
config = configparser.ConfigParser()
config.read('config.ini')

# Fetch values from the [general] section, with or without type conversion.
bool_param_val = config.getboolean('general', 'bool_param_name')
param_val = config.get('general', 'param_name')
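
For reference, a config.ini that the snippet above can read would look something like this (the section and key names simply mirror the placeholders used above):

[general]
bool_param_name = yes
param_name = results/output.log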

Citation

NTUA official thesis archive: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18263

@masterthesis{mythesis,
    title        = {Vulnerabilities and robustness of Convolutional Neural Networks against
                    Adversarial Attacks in the spatial and spectral domain},
    author       = {Fotini Deligiannaki},
    year         = 2022,
    month        = {February},
    address      = {Athens, GR},
    note         = {Available at \url{http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18263}},
    school       = {National Technical University of Athens},
    type         = {Diploma MEng. Thesis}
}
