
RuntimeError: grad can be implicitly created only for scalar outputs #21

Open
NamanChuriwala opened this issue Sep 5, 2018 · 3 comments


NamanChuriwala commented Sep 5, 2018

I get the following error while running pretrain.py. I'm running this code on 16 CPUs, no GPU. How do I solve this error?

Traceback (most recent call last):
  File "pretrain.py", line 70, in <module>
    loss.backward()
  File "/home/naman_churiwala_quantiphi_com/anaconda3/envs/ChemGAN_1/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/naman_churiwala_quantiphi_com/anaconda3/envs/ChemGAN_1/lib/python2.7/site-packages/torch/autograd/__init__.py", line 84, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/home/naman_churiwala_quantiphi_com/anaconda3/envs/ChemGAN_1/lib/python2.7/site-packages/torch/autograd/__init__.py", line 28, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
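
For context, this error is PyTorch's generic complaint whenever backward() is called on a tensor that does not hold exactly one element and no explicit gradient argument is passed. A minimal standalone reproduction, unrelated to this repo:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2            # y has 3 elements, so it is not a scalar

# y.backward()       # raises: grad can be implicitly created only for scalar outputs
y.sum().backward()   # fine: reduce the output to a single element first
```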

@XiuHuan-Yap

Hi! I got a similar error while running the code on 1 GPU. It occurs when the variable loss is 'empty', i.e. tensor([], device='cuda:0', grad_fn=<...>).

I got around the error by running loss.backward() only when loss.nelement() > 0.

I'm not sure why this error occurs and would be glad to hear any explanations.
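
A minimal sketch of that guard, with a toy empty loss standing in for the one pretrain.py computes:

```python
import torch

x = torch.randn(3, requires_grad=True)
loss = x[x > 10]     # empty selection: tensor([], grad_fn=<IndexBackward0>)

# Backpropagate only when the loss actually holds an element;
# calling backward() on an empty tensor raises the RuntimeError above.
if loss.nelement() > 0:
    loss.backward()
```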

@NamanChuriwala (Author)

I don't think even that works, since the problem is the stereo function in jtnn_vae.py returning an empty tensor. It can be solved by the simple change described below.

In jtnn_vae.py, replace:

`if len(labels) == 0: return create_var(torch.Tensor(0)), 1.0`

with:

`if len(labels) == 0: return create_var(torch.Tensor([0])), 1.0`
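
The distinction is that torch.Tensor(0) treats 0 as a size and builds an empty tensor, while torch.Tensor([0]) treats [0] as data and builds a one-element tensor; only the latter can flow into a loss that backward() accepts implicitly. A quick standalone illustration, not from the repo:

```python
import torch

empty = torch.Tensor(0)     # 0 is a size: tensor([]), nelement() == 0
scalar = torch.Tensor([0])  # [0] is data: tensor([0.]), nelement() == 1

other = torch.ones(1, requires_grad=True)

# Broadcasting against a 0-element tensor yields another empty tensor,
# so any loss term built from `empty` poisons the total loss.
print((other + empty).nelement())   # 0
print((other + scalar).nelement())  # 1

(other + scalar).backward()         # works: exactly one element
# (other + empty).backward()        # raises the RuntimeError above
```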


gebawe commented May 5, 2022

I think the problem has already been fixed, but this may be helpful for future readers.

I have faced a similar problem; it occurs when the GT labels contain none of the classes you want to predict in the current iteration. In my case, I was working on semantic segmentation with 8 classes [0,1,2,3,4,5,6,7], where class [0] was encoded as the ignore class. Thus, when the GT labels contained only class [0], the loss came out 'empty', i.e. tensor([], device='cuda:0', grad_fn=<...>), and I got "grad can be implicitly created only for scalar outputs".

A simple fix is to skip the iteration, without doing a forward or backward pass, when there is no class in the GT labels that the model is trying to predict.
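
A minimal sketch of that skip, with a toy 8-class segmentation head and toy batches standing in for a real training loop (all names here are illustrative, not from any particular repo):

```python
import torch
import torch.nn as nn

ignore_index = 0  # class 0 encoded as the ignore class, as described above
model = nn.Conv2d(3, 8, kernel_size=1)  # toy per-pixel 8-class classifier
criterion = nn.CrossEntropyLoss(ignore_index=ignore_index)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Two toy batches: one whose GT holds only the ignore class, one with real labels.
batches = [
    (torch.randn(1, 3, 4, 4), torch.zeros(1, 4, 4, dtype=torch.long)),
    (torch.randn(1, 3, 4, 4), torch.randint(1, 8, (1, 4, 4))),
]

for images, gt_labels in batches:
    # Skip the iteration entirely (no forward or backward pass) when the
    # GT contains no class the model is actually trained to predict.
    if not (gt_labels != ignore_index).any():
        continue
    optimizer.zero_grad()
    loss = criterion(model(images), gt_labels)
    loss.backward()
    optimizer.step()
```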
