Generating adversarial examples from a restored model causes example generation to run endlessly #1229
-
Hi, this is not exactly a feature request but a question. Maybe it is already implemented.
In the above code to generate adversarial examples using FGSM, I restored a custom model trained to classify SVHN data and created an ART classifier from it. But when I run `attack.generate`, the code runs endlessly. I reduced the `p_test` data to contain only 1 sample, but it still did not work. Again, I am sorry if this is not exactly a bug or feature request; I did not know where else to ask.
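For context, the core of what FGSM computes is very small. The sketch below shows the perturbation step in plain NumPy rather than through ART's API, so all names here are illustrative, not ART's actual internals:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1, clip_min=0.0, clip_max=1.0):
    """One Fast Gradient Sign Method step: move each input feature by eps
    in the direction that increases the loss, then clip to the valid range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, clip_min, clip_max)

# Toy example: a 2x2 "image" and a made-up loss gradient.
x = np.array([[0.2, 0.8], [0.5, 0.05]])
grad = np.array([[1.0, -2.0], [0.0, 3.0]])
print(fgsm_perturb(x, grad, eps=0.1))
```

The attack itself is cheap; the expensive (and, in this thread, hanging) part is the gradient computation inside the restored TensorFlow graph.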
Replies: 8 comments 13 replies
-
Hi @Abhishek2271 Thank you very much for using ART! Most of the attacks have a …
-
Hi @beat-buesser, thank you for the reply. When I checked, the CPU usage does spike up. I also checked with a simple `classifier.predict`: `predictions = classifier.predict(p_test)`, where `classifier` is an ART `TensorFlowClassifier`, and this also runs continuously.
I think it is blank. How do I view the stack trace? I meant that the output is blank while the code is running.
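One stdlib-only way to confirm whether a call like `predict` ever returns is to wrap it with a timeout. This is a generic sketch (the `classifier.predict` / `p_test` usage at the bottom is just the hypothetical call from this thread):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as CallTimedOut

def call_with_timeout(fn, timeout, *args, **kwargs):
    """Run fn(*args, **kwargs) in a worker thread; raise CallTimedOut if it
    does not return within `timeout` seconds.

    Note: the worker thread is NOT killed on timeout -- this only tells you
    whether the call finishes in time, which is enough to diagnose a hang."""
    ex = ThreadPoolExecutor(max_workers=1)
    try:
        return ex.submit(fn, *args, **kwargs).result(timeout=timeout)
    finally:
        ex.shutdown(wait=False)

# Hypothetical usage for this thread's hanging call:
# preds = call_with_timeout(classifier.predict, 60.0, p_test[:1])
```

If `CallTimedOut` is raised even for a single sample, the session is genuinely stuck rather than just slow.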
-
@Abhishek2271 Usually the terminal where you run the program will contain a stack trace of where the execution was at the moment you interrupted the running program; this can indicate which line your program is executing at that time, whether it has started, etc. I see you are using Windows. Could you please describe how you are running the Python script? Would you be able to share the complete script? It would be interesting to see the code before the attack.
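When Ctrl+C produces no traceback (e.g. in a notebook, or when the hang is inside native code), the stdlib `faulthandler` module can dump the Python stack of every thread on demand. A minimal sketch:

```python
import faulthandler
import tempfile

def dump_stacks() -> str:
    """Dump the current Python stack of every thread to a temp file and
    return it as a string. Useful for locating where a 'hung' script is."""
    with tempfile.TemporaryFile(mode="w+") as f:
        faulthandler.dump_traceback(file=f, all_threads=True)
        f.seek(0)
        return f.read()

print(dump_stacks())
```

For a script that never returns at all, `faulthandler.dump_traceback_later(timeout, repeat=True)` (also stdlib) prints the stacks periodically without any manual interruption.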
-
Hi @beat-buesser, Thank you for your interest and time on the issue. I am using a Jupyter notebook to run the Python scripts, although I also tried running via .py files. Doing this also did not show anything on the terminal. The script is as below:
The model that I restored has the following graph:
I used the "Logits" tensor here as an input to the classifier after restoring.
-
Attached is the model I used. The dataset I used is the SVHN test data set from http://ufldl.stanford.edu/housenumbers/
-
Hi @Abhishek2271 Thank you for the details, I will take a look to see if I can run it. Btw, we started using the new Discussions feature (the tab next to Pull Requests) and we move general questions over there until an issue arises.
-
@beat-buesser thank you very much for your help. I think I understand what you mean and it makes sense. I think I also saved the input placeholder in the model itself and retrieved it, but it still gives the same issue. Is the below the correct way to restore the input placeholder:
-
Hi @beat-buesser, Thank you again for your help with this issue. Your suggestions were what led me to solve it.
Hi @beat-buesser,
Thanks again for looking into this issue. To reiterate: I was having issues restoring a saved model and using it to craft adversarial examples.
I was able to make this work by creating a clean graph again and NOT using the metadata for restoring the graph. This was not an ART-related issue; it turns out that the quantization functions I was using in the model somehow resulted in the meta graph not being able to restore the model properly. I verified this by removing all quantization-related functions from the model and restoring it again. ART successfully crafted examples in this case.
So, I ended up restoring the model without using meta graph and running ART using the restor…