Hi,
I want to remove human contaminants from gut metagenomes. For this I am using BBMap. The first step is to index the contaminant genome (the human genome). I am following the exact procedure described here, which I cross-validated by finding the same commands on other pages (Biostars, GitHub).
My command:
bbmap.sh -Xmx24G ref=hg19_main_mask_ribo_animal_allplant_allfungus.fa.gz
is simply not working. My job produces an output file and an error file. The output file is empty, and the error file contains only the command:
java -Djava.library.path=/exports/csce/eddie/biology/groups/mcnally/camille/programs/bbmap/jni/ -ea -Xmx24G -cp /exports/csce/eddie/biology/groups/mcnally/camille/programs/bbmap/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 -Xmx24G ref=hg19_main_mask_ribo_animal_allplant_allfungus.fa.gz
and I have no other output.
I can't figure out the problem. Am I using the command incorrectly? Or is it an issue with my Java installation? I am running this on my university cluster.
Thanks, Camille
Since you are using a cluster, are you requesting a corresponding amount of RAM from the job scheduler? Are you using Java 7/8? You should also look at the job scheduler's log files and post any errors you find here.
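As a rule of thumb (my assumption, not something from the BBMap docs): the JVM needs headroom beyond the -Xmx heap for its own overhead, so the scheduler request should be noticeably larger than the heap. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Rule-of-thumb sizing (assumption): request roughly the Java heap
# plus ~20% JVM overhead from the scheduler, rounded up.
HEAP_GB=24
REQUEST_GB=$(( HEAP_GB + HEAP_GB / 5 + 1 ))
echo "With -Xmx${HEAP_GB}g, request at least ${REQUEST_GB}G (e.g. -l h_vmem=${REQUEST_GB}G)"
```

If the heap and the scheduler limit are both 24G, the scheduler can kill the JVM the moment it allocates anything beyond the heap.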
Thanks for your reply.
Yes, I am specifying -l h_vmem=24G to the job scheduler.
I am using Java 8 (java/jdk/1.8.0).
And the log file of the job scheduler is here (if this is what you meant?)
That is not telling us much.
I am running an indexing operation with the same file as a test, and so far there is no problem. I am using 30 GB of RAM. I will update later as to what happens.
Ha, this is tricky: the thing is that I do not have any output or error to suggest what is wrong. I don't think it is a memory issue, since maxvmem (which, if I recall correctly, reports the maximum memory the job used) says 0. It looks like it did not run at all?
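For reference, this is how I read maxvmem, from the SGE accounting record of the finished job (JOB_ID stands for my job's id):

```shell
# Inspect the accounting record of a finished SGE job for its
# exit status and peak memory use. JOB_ID is a placeholder.
qacct -j JOB_ID | grep -E 'failed|exit_status|maxvmem'
```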
You could start by running it outside the job scheduler to make sure BBMap is working fine (be ready to kill the job). I was able to complete the indexing without any problems; I used 30 GB of RAM.
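If the direct run succeeds, a submission script along these lines might help. This is only a sketch assuming an SGE-style scheduler; the directive names and the 30G figure mirror what was discussed in this thread, not tested values:

```shell
#!/bin/sh
# Sketch of an SGE-style submission script (assumption: SGE syntax).
#$ -N bbmap_index
#$ -cwd
#$ -l h_vmem=30G   # request more than the Java heap (-Xmx24G) to leave JVM headroom
#$ -j y            # merge stderr into stdout so no message is lost

bbmap.sh -Xmx24G ref=hg19_main_mask_ribo_animal_allplant_allfungus.fa.gz
```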