CY ▴ 750 · 7.5 years ago
I think we all agree that RAM / cache consumption is an issue when launching a bioinformatics pipeline. I would like to open a discussion on which tools / steps demand large amounts of RAM. I know STAR is pretty RAM-hungry when loading a genome (~30 GB for human), and indexing and sorting probably are as well (see the sketch below).
Can anyone share some general insight on this topic? Any ideas are appreciated!
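For the sorting step, as a rough sketch on my side (assuming samtools is the sorter, which the post above does not specify), peak memory is roughly threads times the per-thread buffer, so it can be capped explicitly:

```bash
# Sketch only: coordinate-sort with 8 threads and a 2 GB buffer per thread,
# so total sort memory is roughly 8 x 2 GB = 16 GB.
samtools sort -@ 8 -m 2G -o sample.sorted.bam sample.bam
```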
Assemblies of large genomes/datasets invariably need large amounts of RAM. Trinity, for example, will need roughly 1 GB of RAM per million paired-end reads.
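As a hedged illustration of that rule of thumb (the file names are placeholders; check the options against your Trinity version), a ~100-million-read-pair dataset would be budgeted at around 100 GB:

```bash
# Sketch only: ~100 million read pairs -> ~100 GB RAM budget,
# per the "1 GB per million paired-end reads" rule of thumb above.
Trinity --seqType fq \
        --left  reads_1.fq.gz \
        --right reads_2.fq.gz \
        --max_memory 100G \
        --CPU 16 \
        --output trinity_out
```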
clumpify.sh from the BBMap suite can deduplicate data in RAM with an option. I have used Clumpify with 50G NovaSeq data and have seen memory consumption as high as 1.3 TB. Note: Clumpify can also use disk storage and temp files, so a huge amount of RAM is not an absolute requirement. In general, some software can fall back on workarounds (like using temp storage), but for other tasks (assembly) there may be no valid alternative to gobs of RAM.
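A minimal sketch of both modes (parameters as I remember them from the BBTools documentation; verify with your install's clumpify.sh help output):

```bash
# In-RAM deduplication; -Xmx sets the Java heap (sized generously here,
# in line with the NovaSeq example above)
clumpify.sh in=reads.fq.gz out=clumped.fq.gz dedupe -Xmx1000g

# Lower-memory alternative: split the data into temporary groups on disk
clumpify.sh in=reads.fq.gz out=clumped.fq.gz dedupe groups=16 -Xmx64g
```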