Hi there, thanks for the excellent work! I am using a large backbone with your EPCDepth network, which takes a long time to train, and I am wondering whether training can be accelerated with multiple GPUs. I tried torch.distributed but it failed for some reason. Have you tried multi-GPU training yourself? I would really appreciate any help you can provide.
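For reference, this is roughly the setup I attempted: a minimal single-node DistributedDataParallel sketch, where `build_model`, `build_dataset`, and `compute_loss` are placeholders standing in for the actual EPCDepth code rather than functions from this repository.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for each spawned process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # build_model / build_dataset / compute_loss are placeholders for the
    # corresponding EPCDepth code, not functions from this repository
    model = build_model().cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = build_dataset()
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=8, sampler=sampler, num_workers=4)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    num_epochs = 20  # placeholder
    for epoch in range(num_epochs):
        sampler.set_epoch(epoch)  # different shuffling each epoch
        for batch in loader:
            optimizer.zero_grad()
            loss = compute_loss(model, batch)
            loss.backward()  # DDP all-reduces gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

I launch it with `torchrun --nproc_per_node=4 train.py`, so one process is spawned per GPU and each process trains on its own shard of the data.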