Hi, I was wondering if there is any support for CPU inference. The sample script from hubconf.py doesn't run even after removing all the code that moves tensors and models to CUDA, perhaps because some internal line still expects CUDA.
@elvistheyo Nope. As I said, it would take a lot of effort that might end up wasted anyway. Let me know if you choose to try it, though; I could try to assist you with it if possible.
I think it will be difficult and not beneficial to run inference on CPU. It would take roughly 1.5 to 4 minutes per inference for the ViT-L model. Additionally, xformers, an important acceleration library, does not support CPU either.
The torch.bfloat16 type is only supported on GPU here; on CPU, all tensors should use torch.float32.
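If you still want to attempt it, the dtype issue above can be worked around by casting the whole model to float32 before moving it to CPU. This is a minimal sketch, not the repo's supported path; the `to_cpu_float32` helper is hypothetical, and a small `torch.nn.Linear` stands in for the actual depth model:

```python
import torch


def to_cpu_float32(model: torch.nn.Module) -> torch.nn.Module:
    """Hypothetical helper: move a model to CPU and cast every parameter
    and buffer to float32, since bfloat16 weights intended for GPU
    inference may not be supported by all CPU kernels."""
    return model.to(device="cpu", dtype=torch.float32)


# Toy stand-in for the real model; inputs must match the model dtype too.
model = to_cpu_float32(torch.nn.Linear(4, 2).to(torch.bfloat16))
x = torch.randn(1, 4, dtype=torch.float32)
y = model(x)
print(y.dtype)  # torch.float32
```

Note that any hard-coded `.cuda()` calls or `device_type='cuda'` contexts inside the model code still need to be patched separately.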
One such instance is
torch.autocast(device_type='cuda', dtype=torch.bfloat16, enabled=False)
in mono/model/decode_heads/RAFTDepthNormalDPTDecoder5.py. I'm not sure how many more such instances there are, so I wanted to get this clarified. I'm sure it will be difficult to run on CPU, but still.
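Hard-coded autocast contexts like that one can be made device-agnostic by choosing the device type at runtime. This is a sketch of the idea under the assumption that mixed precision should simply stay off on CPU (mirroring the `enabled=False` in the line above); it is not a patch from the repo:

```python
import torch

# Pick the device at runtime instead of hard-coding 'cuda'.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Keep bfloat16 autocast only on CUDA; on CPU, autocast is disabled,
# so ops run in the tensors' native float32.
with torch.autocast(
    device_type=device.type,
    dtype=torch.bfloat16,
    enabled=(device.type == "cuda"),
):
    x = torch.ones(2, 2, device=device)
    y = x @ x
```

Replacing each `device_type='cuda'` occurrence with this pattern would at least stop the context managers themselves from requiring CUDA; any remaining explicit `.cuda()` calls would still need the same treatment.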