This repository has been archived by the owner on Jun 15, 2022. It is now read-only.

Slow training speed #23

Open
ghost opened this issue Jan 5, 2018 · 3 comments

@ghost commented Jan 5, 2018

Hi @leetenki, thanks for the implementation.

I am trying to train the network from scratch on the COCO dataset using a Tesla K40m with the default parameters. However, the training speed seems rather slow, around 0.17 iters/sec. At this rate it would take several weeks to reach the 440,000 iterations.

Is this training speed considered normal? Thank you!
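
For reference, the "several weeks" estimate follows directly from the two numbers quoted above; a back-of-the-envelope sketch (not a measurement, just the reported rate and iteration count):

```python
# Rough wall-clock ETA from the figures reported above.
iters_total = 440000     # iteration count mentioned in the report
iters_per_sec = 0.17     # speed observed on the Tesla K40m

seconds = iters_total / iters_per_sec
print("hours:", seconds / 3600)    # ~719 hours
print("days:", seconds / 86400)    # ~30 days, i.e. "several weeks"
```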

@leetenki (Contributor) commented Feb 1, 2018

Hi, we changed the code. I think you can now train the model faster.

@cchamber commented

Same problem! Is it possible to train on multiple GPUs?
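
The thread does not show a multi-GPU path in this repository's training script, but Chainer itself supports data-parallel training. Below is a minimal sketch using `chainer.training.updaters.ParallelUpdater`, assuming a model Link whose `__call__` returns (and reports) the loss and a dataset yielding (image, ground-truth) pairs; `model`, `dataset`, and the hyperparameters are placeholders, not values taken from this repo.

```python
import chainer
from chainer import training
from chainer.training import extensions


def train_data_parallel(model, dataset, batch_size=16, n_iter=440000):
    # `model` is assumed to be a chainer.Link whose __call__ returns the loss
    # and reports it under 'loss'; `dataset` is any chainer dataset of
    # (image, ground-truth) pairs. Both are placeholders for this sketch.
    optimizer = chainer.optimizers.MomentumSGD(lr=1e-3, momentum=0.9)
    optimizer.setup(model)

    train_iter = chainer.iterators.SerialIterator(dataset, batch_size)

    # ParallelUpdater copies the model to every listed device, scatters each
    # batch across them, and accumulates gradients on the 'main' device.
    updater = training.updaters.ParallelUpdater(
        train_iter, optimizer,
        devices={'main': 0, 'second': 1},  # two GPUs; names other than 'main' are arbitrary
    )

    trainer = training.Trainer(updater, (n_iter, 'iteration'), out='result')
    trainer.extend(extensions.LogReport(trigger=(100, 'iteration')))
    trainer.extend(extensions.PrintReport(['iteration', 'main/loss']))
    trainer.run()
```

With this setup the batch is split across the devices in `devices`, so the per-GPU memory footprint stays roughly the same while each iteration processes the full batch in parallel.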

@NiteshBharadwaj commented

It's showing 0.28 iter/s on a GTX 1080 with an ETA of 12 days. What is the typical training setup?
