Testing time takes much longer than reported in the repo!! #31
Hi, thanks for your interest in our work. Which of our pretrained models are you using? We have models that take in small, medium, or large crop sizes (see https://github.com/mseg-dataset/mseg-semantic#how-fast-can-your-models-run). Are you using single-scale or multi-scale inference? The numbers reported in the README here are for single-scale prediction with our lightweight model. You would need to use the following script instead:
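For context, multi-scale inference evaluates the network on several rescaled copies of the image and averages the predictions, so it can easily cost several times a single-scale run. A minimal sketch of the idea (not the repo's actual code; the scale values, `model`, and `image` below are illustrative placeholders):

```python
import torch.nn.functional as F

def multi_scale_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Average logits over several rescaled copies of `image` (NCHW).

    Each scale adds a full forward pass, so five scales cost roughly
    five times a single-scale run.
    """
    _, _, h, w = image.shape
    logits_sum = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)  # one forward pass per scale
        # Resize back to the original resolution before accumulating.
        logits_sum = logits_sum + F.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False)
    return logits_sum / len(scales)
```

This is why single-scale timings in the README and multi-scale timings measured locally are not directly comparable.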
Hi @johnwlambert, thanks for the reply. I will discuss the inference time further in this thread, but first I need some help running this.
It looks like there is a tensor size mismatch.
I'm late to this, but I figured out the issue after debugging it myself. In the default .yaml (in my case
By default they are -1, and apparently supplying them via the command line wasn't enough. Anyway, in the unlikely case this reply helps someone, make sure
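Since the inline code in the comment above was lost, the exact file and parameter names are unclear. As a hedged guess, the size-related test settings in mseg-semantic configs are typically named `base_size`, `test_h`, and `test_w`; a small sanity check along those lines (the file path and key names are assumptions, verify against your own config):

```python
import yaml  # pip install pyyaml

# Hypothetical path and keys -- check your actual test config.
with open("default_config_360.yaml") as f:
    cfg = yaml.safe_load(f)

for key in ("base_size", "test_h", "test_w"):
    if cfg.get(key, -1) == -1:
        # A -1 here means the size was never set; per the comment above,
        # command-line overrides may not take effect, so edit the .yaml.
        raise ValueError(f"{key} is -1; set it directly in the .yaml file")
```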
Nice work.
Hello, the dependencies MSeg-api and MSeg_semantic were already installed. I tried the Google Colab first and then copied the commands so I could also run the script on my Linux machine. The command is like this:
The weights I used were downloaded from the Google Colab, i.e. mseg-3m-1080.pth. But for me, it took 12 minutes for just 1 image... Do you know why, or where I should look further?
My setup:
Thanks for your help. I cannot go forward if 1 image takes 12 minutes :(
Hi @luda1013, I think you should use the batched script; you can find it at "/mseg-semantic/mseg_semantic/tool/universal_demo_batched.py". It seems the algorithm uses OpenCV instead of CUDA when executing "universal_demo.py".
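If you suspect inference is silently running on the CPU, a quick generic PyTorch sanity check (not specific to this repo) is shown below; note that the setup later in this thread pairs a `+cu101` PyTorch build with CUDA 10.2, which is worth verifying:

```python
import torch

print(torch.__version__)          # e.g. "1.6.0+cu101" (built against CUDA 10.1)
print(torch.version.cuda)         # CUDA version this PyTorch build expects
print(torch.cuda.is_available())  # False => everything runs on the CPU

# A model or input left on the CPU is a common cause of minutes-per-image
# inference; both the weights and the input tensors must live on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"inference would run on: {device}")
```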
It's taking 35-40s to process the segmentation of a single frame.
My test setup configuration:
i. Ubuntu 18.04 LTS, Core i7, 24 GB RAM
ii. Graphics Nvidia 1070M (Laptop version of 1070Ti)
iii. Cuda 10.2
iv. PyTorch version: 1.6.0+cu101
v. CUDA_HOME = /usr/local/cuda-10.2
This is the output from my terminal:
I think I am missing something. According to this repo, the detection speed should be around 16 FPS on a Quadro P5000, and I would expect something similar on an Nvidia GTX 1070, certainly not 1/40 FPS (i.e., ~40 s per frame).
Can anybody help?
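For anyone timing this themselves, a minimal per-frame timing sketch (hedged: `model` and `frame` are placeholders, and `torch.cuda.synchronize()` is required for honest GPU numbers because CUDA kernels launch asynchronously):

```python
import time
import torch

def time_one_frame(model, frame, device="cuda"):
    """Return seconds for one forward pass; `model`/`frame` are placeholders."""
    model = model.to(device).eval()
    frame = frame.to(device)
    with torch.no_grad():
        if device == "cuda":
            torch.cuda.synchronize()  # wait for any pending GPU work
        start = time.time()
        _ = model(frame)
        if device == "cuda":
            torch.cuda.synchronize()  # wait for this forward pass to finish
        return time.time() - start
```

If the measured time still matches ~40 s on the GPU, the bottleneck is likely multi-scale inference or the image size; if the device falls back to "cpu", the PyTorch/CUDA version mismatch noted above is the first thing to rule out.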