When trying to run the Introspective Rationale Explainer Sample Notebook (after working around #150 by specifying CUDA=True), this error came up:

RuntimeError: one of the variables needed for gradient computation has been modified by an in-place operation: [torch.FloatTensor [50, 150]], which is output 0 of TBackward, is at version 9; expected version 8 instead.
This time the problem seems to come from training the classifier that feeds labels to the generator (specifically, this block of code). The notebook can run to the end after commenting out this block (lines 232 to 235).
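For context, here is a minimal PyTorch sketch of the pattern that typically produces this error; it is not the notebook's actual code, and the module names and sizes are made up. The classifier is updated in place by its optimizer between the shared forward pass and the generator's backward pass, so autograd later finds a stale saved weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the notebook's modules and sizes.
generator = nn.Linear(10, 50)   # produces features consumed by the classifier
classifier = nn.Linear(50, 2)   # the classifier that is also being trained

gen_opt = torch.optim.SGD(generator.parameters(), lr=0.1)
cls_opt = torch.optim.SGD(classifier.parameters(), lr=0.1)

z = torch.randn(8, 10)
labels = torch.randint(0, 2, (8,))

# One shared forward pass through both modules.
features = generator(z)
logits = classifier(features)

# Train the classifier first: optimizer.step() modifies classifier.weight
# in place, which bumps its autograd version counter.
cls_loss = F.cross_entropy(logits, labels)
cls_loss.backward(retain_graph=True)
cls_opt.step()

# The generator's backward pass still needs the classifier weights that were
# saved during the forward pass, so autograd now raises:
#   RuntimeError: one of the variables needed for gradient computation has
#   been modified by an inplace operation ...
gen_loss = F.cross_entropy(logits, labels)
gen_loss.backward()
```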
(Update: an alternative workaround is to set fixed_classifier=True and pretrain_cls=True in model_config. This should pre-train the classifier and freeze the classifier weights during generator training.)
(I used PyTorch 1.7 to run the notebook.)
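For reference, the workaround above amounts to something like the following sketch. It assumes model_config is the plain dict of options built in the sample notebook before constructing the explainer; the exact way the notebook assembles its configuration may differ.

```python
# Hedged sketch: assumes model_config is the dict of options the sample
# notebook passes to the Introspective Rationale Explainer.
model_config["pretrain_cls"] = True       # pre-train the classifier before generator training
model_config["fixed_classifier"] = True   # keep the classifier weights frozen afterwards
```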