How to create my own dataloader #26

Open
aquexel opened this issue Sep 8, 2023 · 9 comments

aquexel commented Sep 8, 2023

Hello guys!
Thank you for your work.

I have a question: how can I create my own DataLoader based only on lines stored in a JSON file (e.g. as (x1, x2, y1, y2) tuples)? What does DeepLSD need in order to be trainable?

Thank you in advance

rpautrat (Member) commented

Hi, DeepLSD is originally meant to be trained without ground truth lines, so we do not have any code to do what you want.

However, it should be feasible with a bit of work: convert your lines into a distance field and an angle field (e.g. using OpenCV's distanceTransform, or computing them yourself), and then supervise the network with these two fields; see the sketch below.
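As a rough illustration of this suggestion (not DeepLSD's actual ground-truth code), here is a minimal PyTorch Dataset that reads segments from a JSON file and rasterizes them into the two fields with OpenCV's labeled distance transform. It assumes endpoints stored as (x1, y1, x2, y2) (adapt the unpacking to your ordering) and grayscale images; the output key names ('df', 'line_level') are placeholders to be matched to whatever your training loss expects.

```python
import json

import cv2
import numpy as np
import torch
from torch.utils.data import Dataset


def fields_from_lines(lines, img_shape):
    """Convert segments [x1, y1, x2, y2] into a distance field and an angle field."""
    h, w = img_shape
    mask = np.ones((h, w), np.uint8)              # 0 where a line pixel lies
    angle_on_line = np.zeros((h, w), np.float32)  # orientation at line pixels
    for x1, y1, x2, y2 in lines:
        seg = np.ones((h, w), np.uint8)
        cv2.line(seg, (int(round(x1)), int(round(y1))),
                 (int(round(x2)), int(round(y2))), 0, 1)
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi  # undirected angle in [0, pi)
        angle_on_line[seg == 0] = theta
        mask &= seg
    # Distance to the nearest line pixel, plus the label of that pixel,
    # so each segment's angle can be propagated to the whole image.
    dist, labels = cv2.distanceTransformWithLabels(
        mask, cv2.DIST_L2, 5, labelType=cv2.DIST_LABEL_PIXEL)
    angle_of = np.zeros(labels.max() + 1, np.float32)
    zy, zx = np.nonzero(mask == 0)
    angle_of[labels[zy, zx]] = angle_on_line[zy, zx]
    return dist, angle_of[labels]


class JsonLineDataset(Dataset):
    """samples: list of (image_path, json_path); each JSON holds [[x1, y1, x2, y2], ...]."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img_path, json_path = self.samples[idx]
        img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.
        with open(json_path) as f:
            lines = np.array(json.load(f), np.float32)
        dist, angle = fields_from_lines(lines, img.shape)
        # Key names are placeholders; match them to what the training loss expects.
        return {'image': torch.from_numpy(img)[None],
                'df': torch.from_numpy(dist),
                'line_level': torch.from_numpy(angle)}
```

The labeled distance transform is what makes the angle field cheap here: every pixel inherits the orientation of its nearest rasterized line pixel, which is exactly the quantity the angle field is supposed to encode.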

aquexel commented Oct 24, 2023

Hi, I was able to successfully integrate my own lines in addition to the LSD detector. Currently, I am working with a dataset derived from 3D scan surveys, from which I obtain ortho-rectified images. These images are composed of thousands of colored points. Are there any tunable parameters to avoid detecting line fragments on image defects?
Here is an example (right side):
[image: Maison n°4]

rpautrat (Member) commented

Hi, are you using the pretrained DeepLSD, or are you retraining it on your own data? The pretrained model will certainly fail on this kind of data, but if you retrain it with these artifacts present in the data, it might learn to ignore them. For example, you could generate the line ground truth from your 3D scans (by fitting lines to aligned points), then reproject them to 2D to get a ground truth; see the sketch below.
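A minimal sketch of that pipeline, assuming the camera intrinsics K and the world-to-camera pose (R, t) are known from the scan registration (names and conventions here are illustrative, not from the DeepLSD codebase):

```python
import numpy as np


def fit_segment_3d(points):
    """Fit a 3D segment to roughly collinear scan points via PCA (SVD).
    Returns the two extreme endpoints along the principal direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]                      # main axis of the point cluster
    t = (points - centroid) @ direction    # 1D coordinate along the line
    return centroid + t.min() * direction, centroid + t.max() * direction


def project_points(pts_3d, K, R, t):
    """Pinhole projection of (N, 3) world points into the image.
    K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation (world -> camera)."""
    pts_cam = pts_3d @ R.T + t             # world frame -> camera frame
    uv = pts_cam @ K.T                     # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]          # perspective division
```

One would fit a segment to each cluster of aligned scan points, project both endpoints with project_points, and feed the resulting 2D segments to the distance/angle-field conversion sketched earlier in the thread.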

aquexel commented Oct 25, 2023

I am retraining it on my own data, but I don't know whether it is better to change some parameters (like using more homographies in homography_adaptation) or to freeze the backbone and only train the head; a sketch of the freezing option is below.
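For reference, freezing a backbone in PyTorch takes only a few lines. A hypothetical sketch, assuming the encoder submodule is registered under the name backbone (check the actual DeepLSD model definition for its real name):

```python
import torch


def freeze_backbone(model):
    # 'backbone' is a placeholder prefix; adapt it to the actual module name.
    for name, param in model.named_parameters():
        if name.startswith('backbone'):
            param.requires_grad = False
    # Hand only the still-trainable (head) parameters to the optimizer.
    return (p for p in model.parameters() if p.requires_grad)


# optimizer = torch.optim.Adam(freeze_backbone(model), lr=1e-4)
```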

rpautrat (Member) commented

I think you can keep the same parameters and retrain the whole backbone + head; this should be fine.

aquexel commented Oct 26, 2023

Did you try other backbones?

rpautrat (Member) commented

We also tried a few other networks: an FCN for segmentation, the backbone of DeepLabv3, and a simple VGG. In the end, the U-Net was the best in our case.

aquexel commented Oct 26, 2023

Hi! Okay, did you try ResNet as a backbone? Why is there such a difference between backbones? I had learned that the deeper the backbone, the better it is.
Thanks

rpautrat (Member) commented

We initially cast the task as a semantic segmentation task, which is why we tested semantic segmentation backbones.

Deeper is not always better. In DeepLSD, the task of the network is quite low-level, so a shallower network works better, and we did not try ResNet. But feel free to try it for your own task; in your case, a deeper network could potentially help to handle the artifacts in your data.
