ART on different precision #2310
-
I am trying to use pre-trained weights of a CNN model at different precisions, such as FP16, FP64, and mixed precision, to test attacks and defenses in the PyTorch framework. For FP16 the error is: "RuntimeError: Input type (float) and bias type (c10::Half) should be the same". Does ART support models at different precisions, or just FP32?
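For context, a minimal PyTorch sketch of the reported mismatch (the layer and tensor names are illustrative, not from ART): the error appears when an FP32 input is fed to a module whose parameters were cast to FP16, and goes away once the input is cast to the same dtype.

```python
import torch
import torch.nn as nn

# Illustrative reproduction of the dtype mismatch (names are hypothetical)
model = nn.Conv2d(3, 8, kernel_size=3).half()   # FP16 weights and bias
x = torch.randn(1, 3, 32, 32)                   # FP32 input (PyTorch default)

# model(x) would raise:
# RuntimeError: Input type (float) and bias type (c10::Half) should be the same
y = model(x.half())                              # casting the input to FP16 avoids the error
```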
Replies: 1 comment
-
Hi @xplore-1 In general, ART should support multiple precisions, but it is not tested specifically. There is a variable, art.ART_NUMPY_DTYPE, which defines the precision for ART's internal NumPy operations. Its default is numpy.float32.
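A minimal sketch of how that variable could be inspected or overridden, assuming it is exposed via ART's config module (in recent ART versions it is defined in art.config); note that overriding it to FP16 is untested, and some ART modules may read the value only at import time:

```python
import numpy as np
from art import config

# Default dtype used by ART for its internal NumPy arrays
print(config.ART_NUMPY_DTYPE)        # numpy.float32 by default

# Untested assumption: override the dtype before constructing estimators/attacks,
# so that ART's internal arrays match a half-precision model
config.ART_NUMPY_DTYPE = np.float16
```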