Hello Xdever! I want to train the network on multiple GPUs, but with the current version, when I make several GPUs visible (e.g. GPU 0 and 1), training is not accelerated even though both GPUs are fully occupied.
How can multi-GPU training be achieved? Thanks.
The problem is that training with a batch size greater than 1 is not supported, so training cannot be parallelized efficiently. Supporting batches larger than 1 is not trivial because the number of bounding boxes and the image size vary per image. Boxes could be represented either by TensorArrays, or by a padded tensor plus a count vector. The end of the network would probably have to be replicated multiple times, with the output of the neural part split among the replicas along the batch dimension.
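The padded-tensor idea above can be sketched in a few lines. This is a minimal NumPy illustration, not code from this repository; the function names `pad_boxes` and `unpad_boxes` are hypothetical, and the same layout would carry over to TensorFlow tensors:

```python
import numpy as np

def pad_boxes(box_lists):
    """Pad per-image box arrays (each of shape [n_i, 4]) into one
    [batch, max_n, 4] tensor plus a [batch] count vector, so images
    with different numbers of boxes can share a single batch."""
    counts = np.array([len(b) for b in box_lists], dtype=np.int32)
    max_n = int(counts.max()) if len(box_lists) else 0
    padded = np.zeros((len(box_lists), max_n, 4), dtype=np.float32)
    for i, boxes in enumerate(box_lists):
        padded[i, :len(boxes)] = boxes
    return padded, counts

def unpad_boxes(padded, counts):
    """Recover the original per-image box lists from the padded
    tensor, e.g. inside each replicated per-image head, ignoring
    the zero-padding rows."""
    return [padded[i, :n] for i, n in enumerate(counts)]
```

The count vector is what lets each per-image head know how many of the padded rows are real boxes; the padding rows are never looked at.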