pranjalarun10 • 19 hours ago
While running metaSPAdes I am getting this error. I am new to it and don't know whether it is a memory error or something else, because there is no "ran out of memory" message in the log file like I have seen in others' logs.
Kindly help.
SPAdes version: 4.0.0
Seg faults can result from running out of memory. It looks like you are running this on a cluster. How large is the dataset? How much memory are you assigning to this job? Are you using a job scheduler, or trying to run this on the head node?

Yes, I am using an HPC with the SLURM job scheduler, running through sbatch. I have assigned 900 GB of memory and 80 threads. The R1 and R2 files are 16 GB each, 32 GB in total.
The data files were trimmed together (I see `trim` in the names)? The R1/R2 reads are not out of sync? If so, I suggest dropping the number of threads and memory a bit and seeing if that helps. Try 32 threads and 400 GB RAM.

These are paired-end reads, trimmed to remove reads shorter than 35 bp and bases with Q below 30.
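The suggested limits (32 threads, 400 GB) need to be set both in the SLURM allocation and in the SPAdes command itself, since SPAdes enforces its own `-t`/`-m` limits independently of the scheduler. A minimal sketch of such a job script, with hypothetical file and directory names:

```shell
#!/bin/bash
#SBATCH --job-name=metaspades
#SBATCH --cpus-per-task=32
#SBATCH --mem=400G

# Keep the SPAdes limits in line with the SLURM allocation:
# -t = thread count, -m = memory cap in GB.
# Input/output names below are placeholders; substitute your own.
spades.py --meta \
    -1 sample_R1_trim.fastq.gz \
    -2 sample_R2_trim.fastq.gz \
    -t 32 -m 400 \
    -o metaspades_out
```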
"Trimmed together" was what I was asking, i.e. that you did not trim the R1 and R2 files independently.
Reads that met those criteria were kept? If you are trying to assemble reads shorter than 35 bp, that could cause the problem above.
Trimming was done independently for the R1 and R2 files, i.e. with options to discard reads shorter than 35 bp and bases with Q below 30.
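Since R1 and R2 were trimmed independently, the pairs may now be out of sync (different read counts or mismatched read order), which assemblers that expect synchronized paired files do not tolerate. A quick sanity check can be sketched in shell; the file names are hypothetical:

```shell
#!/bin/bash
# Sketch: check whether R1/R2 FASTQ files are still paired up.
# zcat -f handles both gzipped and plain files.

count_reads () {
    # FASTQ stores 4 lines per record.
    echo $(( $(zcat -f "$1" | wc -l) / 4 ))
}

first_id () {
    # First read ID, with any trailing comment and /1 or /2 suffix stripped.
    zcat -f "$1" | head -n 1 | cut -d' ' -f1 | sed 's#/[12]$##'
}

# Placeholder file names; substitute your own trimmed files.
r1_n=$(count_reads R1_trim.fastq.gz)
r2_n=$(count_reads R2_trim.fastq.gz)
[ "$r1_n" -eq "$r2_n" ] || echo "WARNING: read counts differ ($r1_n vs $r2_n)"
[ "$(first_id R1_trim.fastq.gz)" = "$(first_id R2_trim.fastq.gz)" ] \
    || echo "WARNING: first read IDs differ"
```

If the counts or IDs differ, the files should be re-paired (or re-trimmed together with a paired-end-aware trimmer) before assembly.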
Please do not paste screenshots of plain-text content; it is counterproductive. You can copy and paste the content directly here (using the code formatting option shown below), or use a GitHub Gist if the content exceeds the length allowed here.