We now always enable auto mixed precision when the graph has this option in its metadata. However, when auto mixed precision is enabled during CPU prediction, prediction becomes very slow (even though TensorFlow reports that it does not enable mixed precision, because it is not supported on CPUs).

From a cursory look at htop, it seems that the processes become single-threaded.

Possible solutions:

- Disable mixed precision in the tagger (as opposed to the trainer), since we are usually running prediction on CPUs anyway.

@twuebi any opinions?
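For reference, a minimal sketch (assuming the TensorFlow 1.14+ Python API; this is not the project's own code) of how the auto mixed precision rewrite is requested through Grappler's rewriter options in the session configuration:

```python
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

# Request Grappler's auto mixed precision rewrite via the session config.
# On a CPU-only run, TensorFlow reports that the rewrite is not applied
# (it is only supported on GPUs), yet prediction still slows down.
config = tf.compat.v1.ConfigProto()
config.graph_options.rewrite_options.auto_mixed_precision = (
    rewriter_config_pb2.RewriterConfig.ON)

with tf.compat.v1.Session(config=config) as sess:
    # ... load the graph and run prediction with this config ...
    pass
```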
danieldk changed the title from "Attempting to enable auto mixed precision on CPU makes it slow" to "Attempting to enable auto mixed precision on CPUs makes it slow" on Nov 4, 2019.

danieldk changed the title from "Attempting to enable auto mixed precision on CPUs makes it slow" to "Attempting to enable auto mixed precision on CPUs makes prediction slow" on Nov 4, 2019.
> Disable mixed precision in the tagger (as opposed to the trainer), since we are usually running prediction on CPUs anyway.

Sounds like the least invasive solution. It may be interesting to test whether there are differences between mixed precision and no mixed precision when running tagging on a GPU.

Alternatively, we could write an inference graph that strips all training-related parts, which would also help to decouple train-binary-graph compatibility from tag-binary-graph compatibility (#144 (comment)).
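A rough sketch of such an export (assuming the TensorFlow 1.x Python API; the function name and node names are placeholders, not the project's actual code):

```python
import tensorflow as tf


def write_inference_graph(checkpoint_path, output_path, output_node_names):
    """Freeze a trained checkpoint into an inference-only GraphDef.

    Variables are folded into constants, and nodes that are not needed to
    compute `output_node_names` (optimizers, gradients, summaries) are
    dropped from the exported graph.
    """
    graph = tf.Graph()
    with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
        # Rebuild the training graph from its MetaGraph and restore weights.
        saver = tf.compat.v1.train.import_meta_graph(checkpoint_path + ".meta")
        saver.restore(sess, checkpoint_path)
        # Keep only what is reachable from the prediction outputs.
        graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), output_node_names)
    with tf.io.gfile.GFile(output_path, "wb") as f:
        f.write(graph_def.SerializeToString())
```

The tag binary would then only need this frozen GraphDef, decoupling it from how the training graph was built.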
When using CPU prediction, Grappler complains about mixed precision not being available and then performs single-threaded prediction. Since mixed precision is primarily useful for speeding up training, disable it during prediction.

Fixes #165.
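A minimal sketch of such a guard (the `session_config` helper and its `training` flag are illustrative names, not the project's actual API):

```python
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2


def session_config(mixed_precision, training):
    """Build a session config that only requests auto mixed precision
    while training.

    During (CPU) prediction the rewriter option is left at its default, so
    Grappler never attempts the unsupported rewrite and prediction keeps its
    normal multi-threaded execution.
    """
    config = tf.compat.v1.ConfigProto()
    if mixed_precision and training:
        config.graph_options.rewrite_options.auto_mixed_precision = (
            rewriter_config_pb2.RewriterConfig.ON)
    return config
```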