Immediate stop at training progress 0% #24
Comments
Have you fixed it? I get the same bug.
Does this problem occur only with Tanks/Francis? Can you print self.P or check self.seq_len?
It occurs with all datasets (including the provided ones, Tanks and CO3D).
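Not one of the original posters, but for anyone debugging along these lines, here is a minimal sketch of the check suggested above. It assumes the dataset class stores its loaded poses in `self.P` and the number of frames in `self.seq_len`, as mentioned in the thread; where exactly you paste it depends on your setup.

```python
# Hypothetical sanity check to paste at the end of the dataset's __init__
# (or right after the dataset is constructed). If the data path or scene
# name is wrong, seq_len is often 0 and training "stops" at 0% immediately.
print("seq_len:", self.seq_len)
print("P:", None if self.P is None else tuple(self.P.shape))
assert self.seq_len > 0, "No frames were loaded; check the dataset path/scene name."
```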
I guess this error comes from a failed installation of Lietorch. You can check its official repo and see whether you can run the provided simple examples.
@OasisYang When I run the test examples provided by Lietorch, the immediate stop also happens during "Testing lietorch forward pass (GPU)". I guess it has something to do with a memory leak.
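If it helps anyone reproduce this outside the training code, here is a small standalone GPU check in the spirit of Lietorch's bundled tests. It is only a sketch under the assumption that the standard `SE3.exp` / `.log` / `.act` API is available; it is not the repo's own test script.

```python
# Minimal Lietorch CUDA sanity check (a sketch, not Lietorch's own test suite).
import torch
from lietorch import SE3

# Small tangent vectors keep exp/log well inside the injectivity radius.
phi = (0.2 * torch.randn(8, 6, device="cuda")).requires_grad_()
T = SE3.exp(phi)                          # batched SE(3) elements
pts = torch.randn(8, 3, device="cuda")

out = T.act(pts)                          # forward pass on the GPU
err = (T.log() - phi).abs().max()         # exp/log round trip
print("forward OK, exp/log max error:", err.item())

out.sum().backward()                      # exercises the CUDA backward kernels
print("backward OK, grad norm:", phi.grad.norm().item())
```

If this segfaults or hangs before printing anything, the problem is in the Lietorch build itself (as suggested above), not in this repo's training loop.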
I get the same issue. Have you solved it yet? |
I reinstalled Lietorch and that solved the problem; the issue was with the Eigen dependency.
I get the same error even though I installed lietorch successfully. Have you fixed it?
May I ask how you solved this? I successfully installed lietorch and passed the test, but it still reports this error. |
In the end I tried running it in WSL instead. During the installation phase, the instructions say we need to install CUDA with a specific command. Magically, it works!
Does anyone else face this problem?
I am currently trying to train with the provided "Tanks/Francis" dataset, but training stops immediately at 0%.