I was using neuralforecast for a forecasting task. My input and forecast sizes are both long (>1000), so I need to set the parameters carefully. Right now I'm tuning step_size. With everything else fixed, training runs with step_size=1; however, when I increase step_size to just 5, it fails with: "CUDA out of memory. Tried to allocate 5 GiB, 22 GiB total capacity, 12 GiB already allocated, 4 GiB free, 18 GiB reserved total by PyTorch."

Shouldn't increasing step_size reduce the number of windows? Why does it cause an out-of-memory error? I'm confused...

Appreciate the help in advance!
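To make my expectation concrete, here is a minimal sketch of the window arithmetic as I understand it (this is not neuralforecast's internal code; the window length of input_size + h and the stride semantics are my assumptions):

```python
# Hedged sketch (not neuralforecast internals): counting the rolling
# windows drawn from one series, assuming each window spans
# input_size + h points and consecutive windows are offset by step_size.

def n_windows(series_length: int, input_size: int, h: int, step_size: int) -> int:
    """Number of full windows of length input_size + h with stride step_size."""
    window_len = input_size + h
    if series_length < window_len:
        return 0
    return (series_length - window_len) // step_size + 1

# Example: a 10,000-point series with input_size = h = 1,000.
for step in (1, 5, 50):
    print(f"step_size={step}: {n_windows(10_000, 1_000, 1_000, step)} windows")
# step_size=1: 8001 windows
# step_size=5: 1601 windows
# step_size=50: 161 windows
```

Under these assumptions, a larger step_size strictly reduces the window count, so I'd expect memory usage to go down, not up.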