
Running N2V prediction on a GPU cluster fails #154

Open
Eddymorphling opened this issue May 6, 2024 · 0 comments
Eddymorphling commented May 6, 2024

Hi folks,
I have a trained 3D N2V model that I would like to use for prediction on a folder containing around 100 3D TIFF stacks. I am trying to run the predictions with GPU compute resources on my local HPC cluster, using the CLI script I found in the N2V wiki. Here is what I run:

python /home/N2V_predict.py --fileName=*.tif --dataPath='/Raw_data' --tile=4 --dims=ZYX --baseDir='/Analysis/N2V_model' --name=N2V-3D

Strangely, this job fails and I can see that my GPU runs out of memory. Does the script try to load all 100 files at once and therefore run out of memory? I am not sure. When I run the same job on an interactive GPU node using the napari-n2v plugin, it works well because it only runs prediction on one file at a time. Any clue how I can get this working in a non-interactive HPC cluster run? Thank you.
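
For reference, this per-file loop is roughly what I expect the script to do. It is only a sketch based on my understanding of the n2v Python API (the `N2V` constructor arguments, `predict` keyword names, and the output-file naming are my assumptions and may need adjusting):

```python
# Rough per-file prediction loop: load the trained model once,
# then denoise one 3D stack at a time so only a single volume
# is resident on the GPU.
import os
from glob import glob

from tifffile import imread, imwrite
from n2v.models import N2V

data_path = '/Raw_data'            # folder with the ~100 3D TIFF stacks
base_dir = '/Analysis/N2V_model'   # directory containing the trained model
model_name = 'N2V-3D'

# Passing config=None loads the already-trained model from base_dir (assumption).
model = N2V(config=None, name=model_name, basedir=base_dir)

for path in sorted(glob(os.path.join(data_path, '*.tif'))):
    img = imread(path)
    # n_tiles splits each volume into tiles to keep GPU memory bounded;
    # (1, 4, 4) for ZYX is a guess and may need tuning per stack size.
    pred = model.predict(img, axes='ZYX', n_tiles=(1, 4, 4))
    out = os.path.join(data_path, 'predicted_' + os.path.basename(path))
    imwrite(out, pred)
```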
