When I use `metrics.py` to evaluate a model with the same weights, I get different mIoU values across runs. I am using your DeepLab implementation as the backbone in another network, together with your evaluation code.

Below are three such runs, where `metrics.py` was used to evaluate the model on the same validation set with the same weights:

[RUN 1, RUN 2, RUN 3: screenshots of the three evaluation outputs]
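To rule out randomness on my side, I would expect a fully pinned-down evaluation loop along these lines to produce identical confusion matrices across runs. This is only a rough sketch: `model`, `val_loader`, `n_class`, and `fast_hist` stand in for my own network, validation pipeline, class count, and the repo's histogram function, and are not names from this repo.

```python
import random

import numpy as np
import torch

def evaluate_deterministically(model, val_loader, n_class, fast_hist):
    """Pin down every source of randomness I know of, then accumulate
    the confusion matrix. model/val_loader are placeholders for my own
    network and validation pipeline."""
    random.seed(0)
    np.random.seed(0)
    torch.manual_seed(0)
    torch.backends.cudnn.deterministic = True  # avoid nondeterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # benchmarking may pick different kernels per run

    model.eval()  # freeze dropout / use running BatchNorm statistics
    hist = np.zeros((n_class, n_class))
    with torch.no_grad():
        for images, labels in val_loader:
            logits = model(images.cuda())
            preds = logits.argmax(dim=1).cpu().numpy()
            hist += fast_hist(labels.numpy().flatten(), preds.flatten(), n_class)
    return hist
```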
It seems like an issue of numerical instability. In particular, I suspect that either the `_fast_hist` function or the division in the `scores` function in `utils/metric.py` is the root cause.
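For reference, here is the usual shape of those two functions in segmentation codebases (a paraphrased sketch of the widely copied pattern, which may differ in details from the exact `utils/metric.py` here): `_fast_hist` accumulates a confusion matrix with `np.bincount`, and `scores` derives per-class IoU from it.

```python
import numpy as np

def _fast_hist(label_true, label_pred, n_class):
    # Accumulate an n_class x n_class confusion matrix; labels outside
    # [0, n_class), e.g. a 255 "ignore" index, are masked out.
    mask = (label_true >= 0) & (label_true < n_class)
    hist = np.bincount(
        n_class * label_true[mask].astype(int) + label_pred[mask],
        minlength=n_class ** 2,
    ).reshape(n_class, n_class)
    return hist

def scores(label_trues, label_preds, n_class):
    hist = np.zeros((n_class, n_class))
    for lt, lp in zip(label_trues, label_preds):
        hist += _fast_hist(lt.flatten(), lp.flatten(), n_class)
    # Per-class IoU = TP / (TP + FP + FN); nanmean skips absent classes.
    iu = np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist))
    return {"Mean IoU": np.nanmean(iu)}
```

The division at the end is the only floating-point step, which is why I singled it out above.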
I would greatly appreciate any help here. Thank you!