Hi,
I am getting the following error while doing a de novo assembly using SPAdes on a Linux machine with 15 GB RAM and more than 50 GB of free disk space. The two fastq files being used as input are about 2.4 GB and 2 GB.
ERROR K-mer Counting The reads contain too many k-mers to fit into available memory limit. Increase memory limit and restart
spades.log:
Command line: spades.py --careful -o WT_ -1 firstfile.fq -2 secondfile.fq -m 10
System information:
SPAdes version: 3.5.0
Python version: 2.7.12
OS: Linux-4.4.0-59-generic-x86_64-with-Ubuntu-16.04-xenial
Output dir: SPAdes-3.5.0-Linux/bin/WT_
Mode: read error correction and assembling
Debug mode is turned OFF
Dataset parameters:
Multi-cell mode (you should set '--sc' flag if input data was obtained with MDA (single-cell) technology
Reads:
Library number: 1, library type: paired-end
orientation: fr
left reads: ['firstfile.fq']
right reads: ['secondfile.fq']
interlaced reads: not specified
single reads: not specified
Read error correction parameters:
Iterations: 1
PHRED offset will be auto-detected
Corrected reads will be compressed (with gzip)
Assembly parameters:
k: automatic selection based on read length
Mismatch careful mode is turned ON
Repeat resolution is enabled
MismatchCorrector will be used
Coverage cutoff is turned OFF
Other parameters:
Dir for temp files: tmp
Threads: 16
Memory limit (in Gb): 10
======= SPAdes pipeline started. Log can be found here: SPAdes-3.5.0-Linux/bin/WT_/spades.log
===== Read error correction started.
== Running read error correction tool: SPAdes-3.5.0-Linux/bin/hammer SPAdes-3.5.0-Linux/bin/WT_/corrected/configs/config.info
0:00:00.000 4M / 4M INFO General (main.cpp : 82) Loading config from SPAdes-3.5.0-Linux/bin/WT_/corrected/configs/config.info
0:00:00.000 4M / 4M INFO General (memory_limit.hpp : 42) Memory limit set to 10 Gb
0:00:00.001 4M / 4M INFO General (main.cpp : 91) Trying to determine PHRED offset
0:00:00.001 4M / 4M INFO General (main.cpp : 97) Determined value is 33
0:00:00.002 4M / 4M INFO General (hammer_tools.cpp : 36) Hamming graph threshold tau=1, k=21, subkmer positions = [ 0 10 ]
=== ITERATION 0 begins ===
0:00:00.002 4M / 4M INFO K-mer Index Building (kmer_index.hpp : 467) Building kmer index
0:00:00.002 4M / 4M INFO K-mer Splitting (kmer_data.cpp : 127) Splitting kmer instances into 128 buckets. This might take a while.
0:00:00.002 4M / 4M INFO General (file_limit.hpp : 29) Open file limit set to 1024
0:00:00.002 4M / 4M INFO K-mer Splitting (kmer_data.cpp : 145) Memory available for splitting buffers: 0.416504 Gb
0:00:00.002 4M / 4M INFO K-mer Splitting (kmer_data.cpp : 153) Using cell size of 436736
0:00:00.857 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 167) Processing firstfile.fq
0:00:18.381 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 813597 reads
0:00:38.048 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 1673452 reads
0:00:57.634 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 2519299 reads
0:01:16.964 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 3305418 reads
0:01:37.462 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 4168421 reads
0:01:50.922 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 4493764 reads
0:01:50.922 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 167) Processing secondfile.fq
0:02:08.666 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 5591263 reads
0:02:35.935 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 6636651 reads
0:03:20.730 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 176) Processed 8752362 reads
0:03:25.466 3G / 3G INFO K-mer Splitting (kmer_data.cpp : 181) Processed 8987528 reads
0:03:25.620 32M / 3G INFO General (kmer_index.hpp : 345) Starting k-mer counting.
0:03:53.603 32M / 3G INFO General (kmer_index.hpp : 351) K-mer counting done. There are 418968448 kmers in total.
0:03:53.603 32M / 3G INFO General (kmer_index.hpp : 353) Merging temporary buckets.
0:04:11.857 32M / 3G INFO K-mer Index Building (kmer_index.hpp : 476) Building perfect hash indices
0:06:34.813 160M / 7G INFO General (kmer_index.hpp : 371) Merging final buckets.
0:06:50.161 160M / 7G INFO K-mer Index Building (kmer_index.hpp : 515) Index built. Total 144936940 bytes occupied (2.7675 bits per kmer).
0:06:50.161 160M / 7G ERROR K-mer Counting (kmer_data.cpp : 261) The reads contain too many k-mers to fit into available memory limit. Increase memory limit and restart
== Error == system call for: "['SPAdes-3.5.0-Linux/bin/hammer', 'SPAdes-3.5.0-Linux/bin/WT_/corrected/configs/config.info']" finished abnormally, err code: 255
In case you have troubles running SPAdes, you can write to spades.support@bioinf.spbau.ru
Please provide us with params.txt and spades.log files from the output directory.
params.txt
Command line: spades.py --careful -o WT_ -1 firstfile.fq -2 secondfile.fq -m 10
System information:
SPAdes version: 3.5.0
Python version: 2.7.12
OS: Linux-4.4.0-59-generic-x86_64-with-Ubuntu-16.04-xenial
Output dir: SPAdes-3.5.0-Linux/bin/
Mode: read error correction and assembling
Debug mode is turned OFF
Dataset parameters:
Multi-cell mode (you should set '--sc' flag if input data was obtained with MDA (single-cell) technology
Reads:
Library number: 1, library type: paired-end
orientation: fr
left reads: ['firstfile.fq']
right reads: ['secondfile.fq']
interlaced reads: not specified
single reads: not specified
Read error correction parameters:
Iterations: 1
PHRED offset will be auto-detected
Corrected reads will be compressed (with gzip)
Assembly parameters:
k: automatic selection based on read length
Mismatch careful mode is turned ON
Repeat resolution is enabled
MismatchCorrector will be used
Coverage cutoff is turned OFF
Other parameters:
Dir for temp files: SPAdes-3.5.0-Linux/bin/WT_/tmp
Threads: 16
Memory limit (in Gb): 10
Thanks for the help.
Cheers, Ambi.
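One thing that stands out in the log above: the run was capped at 10 GB ('-m 10') even though the machine has 15 GB of RAM, and it used 16 threads, each of which holds its own k-mer splitting buffer. A likely first step is therefore to raise the memory limit closer to the physical RAM and reduce the thread count. A sketch only; the exact values here are assumptions, not from this thread:

```shell
# Re-run with a higher memory limit (-m, in GB) and fewer threads (-t).
# Leaving ~1 GB for the OS on a 15 GB machine; values are illustrative.
spades.py --careful -o WT_ -1 firstfile.fq -2 secondfile.fq -m 14 -t 4
```

If that still exceeds memory, normalising the reads to reduce the number of distinct k-mers before assembly is another option.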
Hi Brian,
I have now managed to do the de novo assembly on my local machine by first normalising the reads using your BBNorm package. It required far less memory (around 6 GB, compared to 15 GB without normalisation).
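For anyone finding this thread later, the normalisation step looks roughly like the following. This is a sketch: the target and minimum depth values are illustrative assumptions, not the exact settings used here.

```shell
# Digital normalisation with BBNorm (part of BBTools) to cap read depth
# before assembly; target= and min= values below are illustrative.
bbnorm.sh in=firstfile.fq in2=secondfile.fq \
    out=norm_1.fq out2=norm_2.fq \
    target=100 min=5
```

The normalised norm_1.fq / norm_2.fq files are then passed to spades.py in place of the originals.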
Many thanks for recommending BBNorm. I just noticed that you are the developer of this package as well, which is a great contribution to the community. Very well done, and thanks once again.
PS: I have another question regarding this project, but I will post it separately as it is not related to de novo assembly.
Cheers, Ambi.
Hi Ambi,
I'm happy BBNorm was helpful in this case! To close this thread as resolved, if you feel that it has in fact been resolved, please accept the answer.
Thanks, Brian
Just accepted the answer. Thanks again for your help.
Ambi.