
convert_to_caffe2_models.py: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! #169

rsamvelyan opened this issue Dec 8, 2021 · 2 comments

@rsamvelyan

I'm using a laptop with an NVIDIA CUDA-capable GPU.

C:\Users\rsamv\Documents\pytorch-ssd_DEC2020>python convert_to_caffe2_models.py mb1-ssd C:/Users/rsamv/Documents/pytorch-ssd_DEC2020/models/mb1-ssd-apple/mb1-ssd-Epoch-100-Loss-0.1923127407208085.pth C:/Users/rsamv/Documents/pytorch-ssd_DEC2020/models/mb1-ssd-apple/open-images-model-labels.txt
Traceback (most recent call last):
File "C:\Users\rsamv\Documents\pytorch-ssd_DEC2020\convert_to_caffe2_models.py", line 47, in
torch.onnx.export(net, dummy_input, model_path, verbose=False, output_names=['scores', 'boxes'])
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\onnx_init_.py", line 316, in export
return utils.export(model, args, f, export_params, verbose, training,
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\onnx\utils.py", line 107, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\onnx\utils.py", line 724, in _export
_model_to_graph(model, args, verbose, input_names,
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\onnx\utils.py", line 493, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\onnx\utils.py", line 437, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\onnx\utils.py", line 388, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\jit_trace.py", line 1166, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\jit_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\jit_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\rsamv\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\Users\rsamv\Documents\pytorch-ssd_DEC2020\vision\ssd\ssd.py", line 93, in forward
boxes = box_utils.convert_locations_to_boxes(
File "C:\Users\rsamv\Documents\pytorch-ssd_DEC2020\vision\utils\box_utils.py", line 104, in convert_locations_to_boxes
locations[..., :2] * center_variance * priors[..., 2:] + priors[..., :2],
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
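The failing line in box_utils.py multiplies `locations` by `priors`, and the two tensors sit on different devices (one on cuda:0, the other on the CPU). A minimal sketch that reproduces the same RuntimeError, assuming a CUDA-capable PyTorch build (the shapes are only illustrative):

```python
import torch

# Illustrative shapes; the point is only that the two operands live on
# different devices, just like locations and priors in box_utils.py line 104.
locations = torch.randn(1, 3000, 4)            # stays on the CPU
priors = torch.randn(3000, 4, device="cuda")   # lives on cuda:0

# Raises: RuntimeError: Expected all tensors to be on the same device,
# but found at least two devices, cuda:0 and cpu!
boxes = locations[..., :2] * 0.1 * priors[..., 2:] + priors[..., :2]
```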

@rsamvelyan
Author

I hardcoded the CPU as follows in vision/ssd/ssd.py (starting at line 29) and it worked, but it looks like it then only works on the CPU:

# register layers in source_layer_indexes by adding them to a module list
self.source_layer_add_ons = nn.ModuleList([t[1] for t in source_layer_indexes
                                           if isinstance(t, tuple) and not isinstance(t, GraphPath)])
if device:
    self.device = "cpu"  # device
else:
    self.device = "cpu"  # torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if is_test:
    self.config = config
    self.priors = config.priors.to(self.device)
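Hardcoding "cpu" makes the ONNX export go through, but it also pins inference to the CPU. A more device-agnostic sketch (untested against this repo) is to register the priors as a buffer instead, so they follow the weights whenever `net.to(...)`, `net.cpu()` or `net.cuda()` is called:

```python
# Inside SSD.__init__ in vision/ssd/ssd.py
if is_test:
    self.config = config
    # A buffer moves with the module across .to()/.cpu()/.cuda() calls;
    # persistent=False keeps it out of the state_dict so existing
    # checkpoints still load (requires PyTorch >= 1.6).
    self.register_buffer("priors", config.priors, persistent=False)
```

With that change the export script only needs to keep `net` and `dummy_input` on the same device.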

@yinggegit

I found the same error when running convert_to_caffe2_models.py,
and after changing (priors) to 'cpu' it ran without error.
Maybe there is another way to fix this error.

Modify line 104 of box_utils.py:

# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device = 'cpu'
priors = priors.to(device)

return torch.cat([
    locations[..., :2] * center_variance * priors[..., 2:] + priors[..., :2],
    torch.exp(locations[..., 2:] * size_variance) * priors[..., 2:]
], dim=locations.dim() - 1)
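A sketch that avoids hardcoding 'cpu' altogether is to move the priors onto whatever device the `locations` tensor is on, so the same code path serves CPU export and GPU inference. This assumes the function body from the repository's box_utils.py (including the broadcast check); only the `priors.to(...)` line is new:

```python
import torch

def convert_locations_to_boxes(locations, priors, center_variance, size_variance):
    # Keep the priors on the same device as the regressed locations.
    priors = priors.to(locations.device)
    if priors.dim() + 1 == locations.dim():
        priors = priors.unsqueeze(0)
    return torch.cat([
        locations[..., :2] * center_variance * priors[..., 2:] + priors[..., :2],
        torch.exp(locations[..., 2:] * size_variance) * priors[..., 2:]
    ], dim=locations.dim() - 1)
```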
