This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

a thought #7

Open
Duncanswilson opened this issue Nov 14, 2017 · 0 comments


@Duncanswilson

hey guys,

first I wanna say that it's so nice to see people sharing their thoughts and work like this.

I just wanted to ask, regarding the Keras distributed tests: are you scaling the batch size with the number of GPUs? Keras splits the given batch size across the cards, so for a batch size of 256 on 4 cards the real batch size is 64 per card. (I honestly think this should be changed, but c'est la vie.)

So this may be why you're seeing lower efficiency per card.
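To make the splitting behavior concrete, here's a quick sketch of the arithmetic in plain Python. The helper names are mine for illustration, not Keras APIs:

```python
# Sketch of the batch-size bookkeeping described above (illustrative
# helper names, not Keras APIs). Keras' multi-GPU training splits one
# global batch evenly across cards, so to keep per-card work constant
# you scale the global batch size by the number of GPUs.

def per_gpu_batch(global_batch: int, n_gpus: int) -> int:
    """Batch size each card actually sees when Keras splits a batch."""
    return global_batch // n_gpus

def scaled_global_batch(per_card_batch: int, n_gpus: int) -> int:
    """Global batch to request so that each card keeps its batch size."""
    return per_card_batch * n_gpus

# Example from the comment: batch size 256 on 4 cards -> 64 per card.
print(per_gpu_batch(256, 4))        # 64
# To keep 256 per card on 4 GPUs, pass a global batch of 1024.
print(scaled_global_batch(256, 4))  # 1024
```

So when comparing 1-GPU and 4-GPU runs at the same global batch size, the 4-GPU run is doing a quarter of the per-card work, which can look like lost efficiency.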

here's a plot from my tests, that shows quasilinear speedups on EC2 instances.

[image pasted 2017-11-13: plot of quasilinear speedups on EC2 instances]

hope this helps!!
