BUSTED-PH with large number of sequences #39
Dear @rsiani,

You should be able to run BUSTED-PH on ~1,500 sequences in a reasonable amount of time (~1 day or so would be my guess). How long is your alignment, and which version of HyPhy are you using? I would suggest adding … One other possibility is that the program is running out of memory, but that should trigger earlier in the execution.

Best,
Dear @spond,

As soon as I have results from that as well, I will update you!

Best,
Dear @rsiani,

Generally, you get a ~3-5x performance hit with SRV on. Here it could be worse than that because of the additional memory overhead: each branch requires the storage of 9 transition matrices (default settings, with 3x3 rate classes), which is roughly 800MB for a tree of 1,500 sequences, so there's a lot of memory movement, which slows things down considerably. I'd be curious to learn how long it takes. Make sure to specify …

Also, for this many branches, you could consider increasing the number of rate classes (of course, this will slow performance down). Which CPUs do you have?

Best,
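The ~800MB figure above can be reproduced with quick back-of-the-envelope arithmetic. A sketch, assuming 61x61 codon transition matrices of double-precision floats and ~2N-3 branches in an unrooted binary tree of N taxa (the function name is illustrative, not part of HyPhy):

```python
def srv_matrix_memory_bytes(n_taxa: int, rate_classes: int = 3) -> int:
    """Rough memory needed to hold per-branch transition matrices for
    BUSTED[-PH] with synonymous rate variation (SRV) enabled."""
    codon_states = 61            # sense codons in the universal genetic code
    bytes_per_entry = 8          # double-precision float
    matrix_bytes = codon_states ** 2 * bytes_per_entry
    n_branches = 2 * n_taxa - 3  # unrooted binary tree with n_taxa tips
    # rate_classes x rate_classes (omega x synonymous) matrices per branch
    return n_branches * rate_classes ** 2 * matrix_bytes

print(f"{srv_matrix_memory_bytes(1500) / 1e9:.2f} GB")  # ~0.80 GB, i.e. ~800MB
```

With the default 3x3 rate classes this gives ~0.8GB for 1,500 taxa, matching the estimate in the comment; adding rate classes grows the footprint quadratically.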
Hello there,
I have previously used BUSTED-PH successfully on ~500 sequences. Now, while trying to improve the analysis for a publication, I managed to get up to ~1,500 sequences. However, after running for several hours, the program seems to quietly die without producing results. Unfortunately, I wasn't careful enough to redirect stdout to a file, so I can't see at what point it crashes...

Anyway, I just wanted to ask whether, at a theoretical level, there is any issue with this many sequences and whether I should consider an alternative method, or whether I could solve it by throwing more resources at the problem (I am currently limited to 14 threads).
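For the next run, capturing all console output makes a silent crash diagnosable. A hedged sketch of one way to do it (the path to BUSTED-PH.bf and the file names are placeholders that depend on your setup; here a harmless echo stands in for the real command):

```shell
# Real invocation would look something like (paths are placeholders):
#   cmd="hyphy CPU=14 hyphy-analyses/BUSTED-PH/BUSTED-PH.bf \
#        --alignment my_alignment.fasta --tree my_tree.nwk"
cmd="echo BUSTED-PH placeholder run"

# 2>&1 merges stderr into stdout, and tee both prints the output and keeps
# a copy in busted-ph.log, so the last messages before a crash are preserved.
$cmd 2>&1 | tee busted-ph.log
```

CPU=14 caps HyPhy at 14 threads, matching the limit mentioned above.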