
Bit-Flip Attack (BFA)

This repository contains a PyTorch implementation of the paper "Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search", published at ICCV 2019.

If you find this project useful, please cite our work:

@inproceedings{he2019bfa,
 title={Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search},
 author={Rakin, Adnan Siraj and He, Zhezhi and Fan, Deliang},
 booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
 pages={1211-1220},
 year={2019}
}

Introduction

This repository includes the Bit-Flip Attack (BFA) algorithm, which searches for and identifies the most vulnerable bits within a quantized deep neural network.

Dependencies

For the full list of dependencies, please refer to environment.yml and environment_setup.md.

Usage

Please modify PYTHON=, TENSORBOARD=, and data_path= in the example bash script (BFA_imagenet.sh) before running it.

HOST=$(hostname)
echo "Current host is: $HOST"
  
# Automatically detect the host and configure the paths accordingly
case $HOST in
"alpha")
    # PYTHON="/home/elliot/anaconda3/envs/pytorch_041/bin/python" # python environment
    PYTHON="/home/elliot/anaconda3/envs/bindsnet/bin/python"
    TENSORBOARD='/home/elliot/anaconda3/envs/bindsnet/bin/tensorboard'
    data_path='/home/elliot/data/imagenet'
    ;;
esac

Then just run the following command in the terminal.

bash BFA_imagenet.sh
# CUDA_VISIBLE_DEVICES=2 bash BFA_imagenet.sh  # to pin the run to a specific GPU, e.g., GPU 2

An example log of BFA on ResNet-34:

  **Test** Prec@1 73.126 Prec@5 91.380 Error@1 26.874
k_top is set to 10
Attack sample size is 128
**********************************
Iteration: [001/020]   Attack Time 3.241 (3.241)  [2019-08-28 07:59:27]
loss before attack: 0.4138
loss after attack: 0.5209
bit flips: 1
hamming_dist: 1
  **Test** Prec@1 72.512 Prec@5 91.072 Error@1 27.488
iteration Time 65.493 (65.493)
**********************************
Iteration: [002/020]   Attack Time 2.667 (2.954)  [2019-08-28 08:00:35]
loss before attack: 0.5209
loss after attack: 0.7529
bit flips: 2
hamming_dist: 2
  **Test** Prec@1 70.492 Prec@5 89.866 Error@1 29.508
iteration Time 65.671 (65.582)
**********************************

The log shows that identifying one bit to flip throughout the entire model takes only ~3 seconds (i.e., the Attack Time) when using 128 sample images for BFA.

Model quantization

We directly adopt post-training quantization on the pretrained DNN models provided by the PyTorch model zoo.

Note: to save the model in INT8, additional data-type conversion is required.
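The post-training step above can be sketched as per-tensor symmetric quantization. This is a minimal plain-Python illustration under that assumption; the repository operates on PyTorch tensors, and its exact quantizer may differ in detail:

```python
def quantize_symmetric(weights, n_bits=8):
    """Map float weights to signed n-bit integers with one shared scale."""
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax  # largest |w| maps to qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [qi * scale for qi in q]
```

The integer codes produced here are what the bit-flip attack targets; the scale stays untouched, so a single flipped bit can swing a weight across a large fraction of the tensor's value range.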

Bit Flipping

Considering that the quantized weight $w$ is an integer ranging from $-2^{N-1}$ to $2^{N-1}-1$ when using $N$-bit quantization (for example, the value range is -128 to 127 with an 8-bit representation), in this work we use two's complement as its binary format $b_{N-1}b_{N-2}\cdots b_0$, where the back-and-forth conversion can be described as:

$$w = -2^{N-1} \cdot b_{N-1} + \sum_{i=0}^{N-2} 2^i \cdot b_i$$

Warning: the correctness of the code also depends on the dtype used for the quantized weights when converting back and forth between signed integer and two's complement (unsigned integer). By default, we use .short() (16-bit signed integers) to prevent overflow.
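The back-and-forth conversion can be sketched in plain Python (whose arbitrary-precision integers cannot overflow, so the sketch sidesteps the dtype concern above; on fixed-width PyTorch tensors the wider .short() dtype is what prevents overflow):

```python
def int_to_twos(w, n_bits=8):
    """Signed integer -> its n-bit two's-complement bit pattern (unsigned)."""
    return w & ((1 << n_bits) - 1)

def twos_to_int(b, n_bits=8):
    """n-bit two's-complement bit pattern (unsigned) -> signed integer."""
    sign = 1 << (n_bits - 1)
    return b - (1 << n_bits) if b >= sign else b

def flip_bit(w, i, n_bits=8):
    """Flip bit i of weight w in its two's-complement representation."""
    return twos_to_int(int_to_twos(w, n_bits) ^ (1 << i), n_bits)
```

Flipping the most significant (sign) bit causes the largest value change, e.g. flip_bit(127, 7) turns 127 into -1, which is why BFA tends to find such bits.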

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The software is for educational and academic research purposes only.
