Greg • 7 months ago
I'm trying to run Manta on RHEL 8. I built it with gcc 9.2.0 using --build-type=RelWithDebInfo.
The command below throws "Segmentation fault (core dumped)". Running it under gdb and issuing "bt" when it crashes consistently gives me a stack trace of 8733 frames, of which ~8700 are:
searchRepeats (opt=..., index=aNumber, word=..., wordIndices=..., wordStack=..., repeatWords=...) at manta-1.6.0.release_src/src/c++/lib/assembly/IterativeAssembler.cpp:586
Yes, always that line. The index value (aNumber above) ranges from 20 up to 33319.
manta/1.6.0/libexec/GenerateSVCandidates --threads 10 --align-stats alignmentStats.xml --graph-file svLocusGraph.bin --bin-index 0 --bin-count 1 --max-edge-count 10 --min-candidate-sv-size 8 --min-candidate-spanning-count 3 --min-scored-sv-size 50 --ref GCA_000001405.15_GRCh38_no_alt_short_headers_nonACTG_to_N.fa --candidate-output-file svHyGen/candidateSV.0000.vcf --tumor-output-file svHyGen/tumorSV.0000.vcf --chrom-depth chromDepth.txt --edge-runtime-log svHyGen/edgeRuntimeLog.0000.txt --edge-stats-log results/stats/svCandidateGenerationStats.xml --edge-stats-report results/stats/svCandidateGenerationStats.tsv --tumor-align-file sample.bam
These are the top stack frames, so I'm assuming the problem is running out of memory:
#0 0x0000155554637b4e in _int_malloc () from /lib64/libc.so.6
No symbol table info available.
#1 0x0000155554639982 in malloc () from /lib64/libc.so.6
No symbol table info available.
#2 0x000000000070ddd5 in operator new (sz=81) at /research/bsi/tools/shared/gcc-9.2.0/libstdc++-v3/libsupc++/new_op.cc:50
p = <optimized out>
#3 0x000000000078920d in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_mutate (this=this@entry=0x15555419d1f0, __pos=__pos@entry=40, __len1=<optimized out>, __s=__s@entry=0x0, __len2=__len2@entry=1)
at /research/bsi/tools/shared/build/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/basic_string.tcc:310
__how_much = 0
__new_capacity = 80
__r = <optimized out>
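For comparison, here is a minimal standalone toy program (my own sketch, nothing from Manta) that produces the same failure shape: thousands of repeated recursive frames, a SIGSEGV even though the machine has plenty of free RAM, and a top frame that is simply whichever code touches the exhausted stack's guard page first, which can be the allocator itself.

// toy_recursion.cpp -- illustration only, not Manta code.
// Build: g++ -O2 -g -std=c++11 toy_recursion.cpp -o toy_recursion
#include <cstdio>
#include <cstring>
#include <string>

static std::string recurse(unsigned depth)
{
    // Each frame costs a fixed chunk of stack (scratch) and does one small
    // heap allocation (a 64-char string), so the process is regularly inside
    // malloc while the stack keeps shrinking toward its guard page.
    char scratch[512];
    std::memset(scratch, 'A', sizeof(scratch));
    std::string local(scratch, 64);        // longer than SSO, so this calls operator new / malloc
    if (depth == 0) return local;
    return recurse(depth - 1) + local[0];  // not a tail call: 'local' is destroyed after the call returns
}

int main()
{
    // ~50,000 frames at a few hundred bytes each blows past an 8 MiB default
    // stack long before any heap limit is reached.
    std::printf("%zu\n", recurse(50000).size());
    return 0;
}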
- This is a loaded system. How do I give manta more memory?
- Has anyone seen this problem?
Oh, the BAM file is only 23 GB, so "overly large BAM file" is not the issue.
Describe "loaded" in more detail. How much memory is there? Are you the only user, or are there others using the system? Try reducing the thread count a bit to see if that helps.
"Describe loaded in more detail":
I ran it on the cluster with 10 cores, 10 threads, and 20 GB of memory. It crashed. So I upped it to 80 GB. Still crashed, same place.
If it wasn't throwing a seg fault in _int_malloc, I'd think the problem wasn't memory related. But since it is, it appears to me that the problem is Manta simply not taking advantage of the available memory.
I ran it under /usr/bin/time -v and looked at the output. Since there's a lot more than 1 GB available, this reinforces my belief that the problem is Manta just not using the available resources.
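If the limit being hit is a per-thread stack rather than the heap, giving the job more RAM would not move the crash: a thread's stack size is configured separately from the job's memory allocation. I don't know how Manta sizes its worker-thread stacks, so the following is only a generic sketch of that distinction (the names and the 64 MiB size are made up), not a Manta fix.

// stack_size_demo.cpp -- generic illustration, not Manta code.
// Build: g++ -O2 -g stack_size_demo.cpp -o stack_size_demo -pthread
#include <pthread.h>
#include <cstdio>

static void* worker(void*)
{
    // Any deep recursion running here is bounded by this thread's stack size,
    // not by how much RAM the batch job was granted.
    return nullptr;
}

int main()
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Hypothetical 64 MiB stack; the glibc default for new threads is usually 8 MiB.
    pthread_attr_setstacksize(&attr, 64UL * 1024 * 1024);

    pthread_t tid;
    pthread_create(&tid, &attr, worker, nullptr);
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);

    std::puts("worker finished on an enlarged stack");
    return 0;
}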
Cross-posted to Stack Overflow: https://stackoverflow.com/questions/78277029
and then to Bioinformatics SE: https://bioinformatics.stackexchange.com/questions/22354/debugging-manta-1-6-0-seg-fault
Greg, please stick to ONE place.