I think I solved the problem: you need to add the "-o" flag with an output file name. My guess is that, without it, bayenv2 tries to use some local environment variable that is undefined or restricted on the cluster but not when you run it on a desktop.
Good luck!
-edit: to be more specific, here's how I solved the problem. I suspected it might be an output issue because of this GitHub error page for a related program. The Bayenv manual explains how to run each of the programs: bayenv2 is the program they provide to run the analysis, and ./calc_bfs.sh is a helper script that splits your SNP file into a bunch of separate files and actually runs the analysis. Once you have your variance-covariance file (tab-separated!) as described in the manual, you need to run bayenv2 once for each locus for which you have genotypes. ./calc_bfs.sh splits the files up for you and then calls bayenv2 repeatedly. You can avoid this error by copying the concepts from ./calc_bfs.sh and implementing them yourself.
1) Load bayenv2 with module load or however you need to do it.
2) Split your genotype file into a bunch of small files, with one locus per file:
split -a 10 -l 2 <FILENAME> snp_batch    # -l 2: one two-line locus per file; -a 10: suffixes long enough for many loci
3) Loop over each of the smaller files and make the call to bayenv2. You can put this all in a bash script like the authors did, or add it to your job file if you're submitting to a cluster (see the job-array sketch after step 4). I also add in the sample file just to be safe, which is a one-line file listing the number of genotype calls (2 * number of individuals if diploid) for each of your populations. The output is saved to PREFIX_FOR_OUTPUT.bf.
for f in snp_batch*; do    # glob directly instead of parsing ls output
    bayenv2 -i "$f" -e <ENV_FILE> -m <VAR-COVAR_FILE> -k <NUMBER_OF_ITERATIONS> -r $RANDOM -p <NUMBER_OF_POPULATIONS> -n <NUMBER_OF_ENVIRONMENTS> -t -s <SAMPLE_FILE> -o <PREFIX_FOR_OUTPUT>
done
4) Then, you can delete all those extra files with the following command:
rm -f snp_batch*
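As promised in step 3: if you are submitting to a cluster, here is a rough sketch of how that loop could look as a SLURM job array so the batches run in parallel. This assumes SLURM and a bayenv2 module; the array size, resource requests, and module name are placeholders to adapt to your system:

#!/bin/bash
#SBATCH --job-name=bayenv2_bfs
#SBATCH --array=0-999            # one task per snp_batch file: set to (number of loci - 1)
#SBATCH --mem=2G
#SBATCH --time=02:00:00

module load bayenv2              # or point to your own ./bayenv2

# Pick the batch file belonging to this array task
files=(snp_batch*)
f=${files[$SLURM_ARRAY_TASK_ID]}

bayenv2 -i "$f" -e <ENV_FILE> -m <VAR-COVAR_FILE> -k <NUMBER_OF_ITERATIONS> -r $RANDOM -p <NUMBER_OF_POPULATIONS> -n <NUMBER_OF_ENVIRONMENTS> -t -s <SAMPLE_FILE> -o <PREFIX_FOR_OUTPUT>_${SLURM_ARRAY_TASK_ID}

Giving each task its own -o prefix keeps parallel tasks from appending to the same .bf file at once; you can concatenate the per-task .bf files afterwards.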
** If you are not loading bayenv2 as a module, but instead just have the executable downloaded from their repository online, make sure you make it executable first:
chmod +x bayenv2
And call it (i.e., in the above code snippet) like this:
./bayenv2
Good luck!
A segmentation fault can be produced by many things; the most common is not having enough memory to allocate variables.
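If you want to rule that out, it is cheap to check what limits are actually in effect on the node (standard Linux commands, nothing bayenv-specific):

ulimit -a    # per-process limits: virtual memory, stack size, etc.
free -h      # total and available memory on the machine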
Thanks for the suggestion. In this case, memory allocation doesn't seem to be the issue. I've increased it all the way to 64GB and it still fails immediately.
Hello,
I have the same problem. Have you found an answer? I can't seem to find one...
Same problem here, no answer found!
Have you found a solution by any chance?
It is weird. I am running two analyses with files created the same way. One works; the other gives me a segmentation fault. The files are structured the same; the only difference is the number of populations.
@Diesel - I have this exact same problem with BayEnv2. I have created the files in the exact same way for my two sets of data. One runs, but the second results in a segmentation error. I note that many say it happens if files are not tab-separated, but I am pretty sure mine are. How many populations and environmental variables are you testing? I am wondering if it may be something to do with this. Did you resolve your issue? Thanks.
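In case it helps with the tab-separation question, here is one quick way to verify that the matrix file really has one tab-separated column and one row per population (the placeholders are yours to fill in; note that an exported matrix sometimes carries a trailing tab, which shows up as an extra, empty column):

awk -F'\t' -v p=<NUMBER_OF_POPULATIONS> 'NF != p { print "row " NR ": " NF " columns, expected " p } END { if (NR != p) print NR " rows, expected " p }' <VAR-COVAR_FILE>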
I had a typo in the path to my file! There was no useful error such as "file not found", just "segmentation fault". Perhaps this is your problem too, @Diesel?
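A cheap guard against this class of error, since bayenv2 itself won't tell you: check that every input file exists before calling it (replace the placeholders with your real paths):

for f in <SNP_FILE> <ENV_FILE> <VAR-COVAR_FILE> <SAMPLE_FILE>; do
    [ -f "$f" ] || echo "missing: $f"
done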
Hello, I have also had this same problem. Did you find a solution to it?