Hello community,
I stumbled upon a problem with MrBayes when trying to run it in MPI mode on an HPC cluster. I have run it before on another cluster, where everything worked correctly. I have my executable set up and my partitioned analysis ready to run, and it actually does run, although really slowly for an MPI run. I set 3 million generations, and when the run reached the end, the sump and sumt commands could not summarize anything because there were no valid numerical values to sample from. I checked the run1.p and run2.p files and realized that the log likelihood started at -inf and stayed stationary from the beginning of the analysis, with no variation whatsoever; the chains were just iterating over the same values.

After this weird behaviour, I relaunched the analysis as a normal MrBayes run, without MPI and without BEAGLE entering into play, and it seems to be running fine: the log likelihoods are changing and being sampled. For the moment this is enough, since I know I'll get my analysis done, but I would like to understand what is happening, because in the future I will need to run heavier analyses, and this could become an annoying bottleneck. Has anybody had this issue with the BEAGLE library on HPC clusters before? What could be going on?
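For reference, the setup is essentially the standard MPI invocation plus a BEAGLE-enabled MrBayes block. The executable name, core count, file names, and MCMC settings below are placeholders rather than my exact values; it is just a sketch of the kind of run I mean:

    # launch on the cluster (executable name and core count are placeholders)
    mpirun -np 8 mb analysis.nex > mrbayes.log

    begin mrbayes;
        [illustrative settings, not the exact ones from my job script]
        set autoclose=yes nowarn=yes;
        set usebeagle=yes beagledevice=cpu beagleprecision=double;
        mcmc ngen=3000000 nruns=2 nchains=4 samplefreq=1000;
        sump;
        sumt;
    end;

The serial run that does work is essentially the same thing, just launched without mpirun and without the usebeagle line.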
Thank you in advance for any insight into this problem.
Best regards.