I was wondering if you could give me a sense of whether the memory use of a big_spLinReg run I have seems appropriate, and if not, whether there is a way to reduce the memory requirements. My dataset is 300 individuals x 23,626,816 sites stored as a double FBM, and to run efficiently with many cores I need ~600 GB of memory. Does this seem correct to you? I'm just wondering if I'm doing something wrong, or if there are ways to reduce memory usage without sacrificing efficiency here.
Thanks!
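For context, a minimal sketch of the kind of call I'm running (the file names and `pheno` object here are placeholders for my actual data):

```r
library(bigstatsr)

# X: 300 x 23,626,816 double FBM already stored on disk
# "methylation.rds" and "pheno.rds" are placeholder file names
X <- big_attach("methylation.rds")
pheno <- readRDS("pheno.rds")

# Penalized linear regression, parallelized over available cores
mod <- big_spLinReg(X, pheno, ncores = nb_cores())
```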
If my calculations are correct, the backingfile should take 53 GB on disk.
So, you should not need much more than that.
And using less memory should still be fine since the model should work on a subset of the data most of the time.
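For reference, the back-of-the-envelope arithmetic behind that figure, assuming 8 bytes per double entry:

```r
n <- 300        # individuals
m <- 23626816   # sites
bytes <- n * m * 8   # 8 bytes per double entry
bytes / 1024^3       # ~52.8 GiB on disk for the backing file
```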