Multi-GPUs training #26

Open
leepp15 opened this issue Jan 8, 2018 · 1 comment
leepp15 commented Jan 8, 2018

Hello Xdever! I would like to train the network on multiple GPUs, but in this version, when I make several GPUs visible (e.g. GPU 0 and 1), training is not accelerated even though both GPUs are fully occupied.
How can I get multi-GPU training to work? Thanks.

RobertCsordas (Owner) commented

The problem is that training with a batch size greater than 1 is not supported, so the training cannot be parallelized efficiently. Supporting batch sizes greater than 1 is not trivial because the number of bounding boxes per image and the image size both vary. The boxes could be represented by TensorArrays, or by a padded tensor together with a count vector. The end of the network would probably have to be replicated several times, with the output of the neural part split among the replicas along the batch dimension.
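
For illustration, here is a minimal NumPy sketch of the "padded tensor plus count vector" representation mentioned above. The helper names `pad_boxes` and `unpad_boxes` are hypothetical and not part of this repository; the idea is only that variable-length per-image box lists can be packed into one fixed-shape batch tensor plus a per-image count:

```python
import numpy as np

def pad_boxes(box_lists):
    """Pack per-image bounding boxes (each an (n_i, 4) array) into a
    single zero-padded tensor plus a count vector.

    Returns:
        padded: (batch, max_n, 4) array, zero-padded past each image's boxes
        counts: (batch,) array with the true number of boxes per image
    """
    batch = len(box_lists)
    max_n = max(len(b) for b in box_lists)
    padded = np.zeros((batch, max_n, 4), dtype=np.float32)
    counts = np.zeros(batch, dtype=np.int32)
    for i, boxes in enumerate(box_lists):
        padded[i, :len(boxes)] = boxes
        counts[i] = len(boxes)
    return padded, counts

def unpad_boxes(padded, counts):
    """Recover the per-image box lists, dropping the padding rows."""
    return [padded[i, :n] for i, n in enumerate(counts)]
```

A downstream per-image head could then slice out `padded[i, :counts[i]]` for each batch element, which is what makes splitting the batched output among replicated network tails possible.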
