Adjust pred_scales for different input image size? #319
About non-square images, there are very interesting findings in #270 (comment).
I've done the black padding before and it's not that desirable for datasets with very variable image sizes like COCO, since you lose a lot of pixels that way. In #270 he was specifically trying to overfit onto one image, so it's not like the network needed that many pixels to be able to classify anyway. For non-square images right now, you can try adding that black pixel border, but the better implementation that I have on the TODO list is to just have everything at a fixed non-square aspect ratio. Note that I can't change the size of the image arbitrarily while training because of the way the prototypes create masks (the features expect a consistent image size, so the size has to be fixed at the start).

As for whether the changes to the scales should be done automatically, I don't think so. If you look at the im400 and im700 configs you can see what changes are necessary there, and it's quite simple to extrapolate those changes to your own config. I don't want to touch the scales automatically, because what scales you want depends on your dataset: some datasets tend to have bigger objects and others tend to have smaller ones.
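For concreteness, a minimal sketch of the kind of proportional rescaling being described. The base values below assume `yolact_base_config`'s defaults (`max_size=550`, `pred_scales=[[24], [48], [96], [192], [384]]`); the actual im400/im700 configs in `data/config.py` may round differently:

```python
# Sketch: rescale the base pred_scales in proportion to a new max_size.
BASE_MAX_SIZE = 550
BASE_PRED_SCALES = [[24], [48], [96], [192], [384]]  # assumed base values

def scaled_pred_scales(new_max_size, base_max_size=BASE_MAX_SIZE,
                       base_scales=BASE_PRED_SCALES):
    """Scale each anchor size by new_max_size / base_max_size."""
    ratio = new_max_size / base_max_size
    return [[round(s * ratio) for s in level] for level in base_scales]

print(scaled_pred_scales(400))  # -> [[17], [35], [70], [140], [279]]
```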
This would be great. I think typically the images come from one source (a camera, the same dataset), so they are all the same size and aspect ratio, just not square in most cases.

Looking at the code above, yes, I think this is already "quite automated"; I just think this could also be the case for base/resnet (?).

I'd welcome a comment in the config explaining this topic. If we agree on something, I can make a PR for it based on your comments.
Just an update on tuning the sizes and scales: we used smaller images. Unfortunately, … I don't know if …

EDIT: don't pick on the 550/400 (we used the 550 backbone, but I copied code here with 400 as an example).
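The actual snippet wasn't preserved here, but an im400-style override along these lines is presumably what was meant. A hypothetical sketch modeled on the config-copy pattern in `data/config.py` (the name and exact rounding are assumptions):

```python
from data.config import yolact_base_config

# Hypothetical im400-style override (illustrative only): shrink max_size and
# rescale every anchor scale by the same 400/550 ratio.
yolact_im400_like_config = yolact_base_config.copy({
    'name': 'yolact_im400_like',
    'max_size': 400,
    'backbone': yolact_base_config.backbone.copy({
        'pred_scales': [[int(s * 400 / 550) for s in level]
                        for level in yolact_base_config.backbone.pred_scales],
    }),
})
```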
Did you retrain with the smaller size? If you retrained: we also found im400 to be a little lacking, so I'd expect 300 to be much more lacking. I mentioned this in the paper, saying that perhaps instance segmentation just needs more pixels to classify, so blowing up even 320x240 images to 550x550 is necessary.
We retrained the model (YOLACT) with … Images for …
Yep, I remember this. Now I'm testing if scaling down from larger (640x480) …
The backbone is trained as well while you train YOLACT, so that shouldn't be an issue. I guess the problem was just that 320x320 is too small.

Those will be rescaled to 320x320 automatically, yeah.

max_size=640 should work better than 550. We tried 600 and it was better than 550, but we decided against it because it was too much slower for what little performance was gained.
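As context for the "rescaled automatically" point, a minimal sketch of the square resize that the preprocessing effectively performs; this is an illustration with OpenCV, not YOLACT's actual transform:

```python
import cv2
import numpy as np

def resize_to_square(image: np.ndarray, max_size: int = 550) -> np.ndarray:
    """Stretch an image of any aspect ratio to max_size x max_size,
    which is what the automatic rescaling described above amounts to."""
    return cv2.resize(image, (max_size, max_size))

# e.g. a 640x480 frame becomes a 550x550 (or 320x320, etc.) network input
frame = np.zeros((480, 640, 3), dtype=np.uint8)
assert resize_to_square(frame, max_size=550).shape == (550, 550, 3)
```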
Seems I've misunderstood the … It would be interesting to know the improvement (which depends on the dataset) vs. training-time tradeoff.
It is a "pre-processing" step, but the size of the input image determines the size of the backbone layers, since P2 for instance is input size // 2, P3 is input size // 4, etc. The issue with change after training is that the weights were trained expecting a certain image size, so they probably won't work on a different image size. |
The comment in #242 (comment) mentions that we should adjust pred_scales / set max_size if our images are not 550x550 (the backbone input size), mostly to avoid upscaling. (… when max_size is set?) Thank you.