Hi folks,
I have a trained 3D N2V model that I would like to use for prediction on a folder containing around 100 3D TIFF stacks. I am trying to run the predictions using GPU compute resources on my local HPC cluster with the CLI script I found in the N2V wiki. Here is what I try to run:
python /home/N2V_predict.py --fileName=*.tif --dataPath='/Raw_data' --tile=4 --dims=ZYX --baseDir='/Analysis/N2V_model' --name=N2V-3D
Strangely, this job fails and my GPU runs out of memory. Does it try to load all 100 files at once and run out of memory? I am not sure. When I run the same prediction on an interactive GPU node using the napari-n2v plugin, it works well because it only runs predictions on one file at a time. Any clue how I can get it working in a non-interactive HPC cluster run? Thank you.
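One way to avoid loading everything at once is to loop over the files in a small Python script and predict one stack at a time, which is what the napari plugin effectively does. Below is a minimal sketch assuming the standard n2v Python API (n2v.models.N2V) and reusing the paths and model name from the command above; the output folder and the tile numbers are placeholders you would need to adjust.

```python
# Sketch of a per-file prediction loop (not a tested fix).
# Assumes the standard n2v Python API and the paths/model name from the command above.
# Processing one stack at a time keeps only a single volume in GPU memory,
# similar to how the napari-n2v plugin behaves.
import glob
import os

from tifffile import imread, imwrite
from n2v.models import N2V

data_path = '/Raw_data'           # input folder from the command above
out_path = '/Raw_data/denoised'   # hypothetical output folder
os.makedirs(out_path, exist_ok=True)

# Load the trained model once; config=None reads the saved config from basedir/name.
model = N2V(config=None, name='N2V-3D', basedir='/Analysis/N2V_model')

for file_name in sorted(glob.glob(os.path.join(data_path, '*.tif'))):
    img = imread(file_name)  # one 3D stack with axes Z, Y, X
    # n_tiles splits the volume so each prediction fits in GPU memory;
    # (2, 4, 4) is only an illustrative value for a large stack.
    pred = model.predict(img, axes='ZYX', n_tiles=(2, 4, 4))
    imwrite(os.path.join(out_path, os.path.basename(file_name)), pred)
    print('done:', file_name)
```

Something like this could be submitted as a normal batch job on the cluster in place of the wiki CLI call, since it never holds more than one stack (plus its tiles) on the GPU at a time.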