Nextflow memory issues custom config -c
2.9 years ago

Hi all, I am trying to run Nextflow on my laptop:

nextflow run nf-core/rnaseq \
    --input samplesheet.csv \
    --genome mm10 \
    -profile docker

I am having issues with memory:

Error executing process > 'NFCORE_RNASEQ:RNASEQ:FASTQC_UMITOOLS_TRIMGALORE:FASTQC (KO_3)'

Caused by:
  Process requirement exceed available memory -- req: 36 GB; avail: 12.4 GB

Command executed:

  [ ! -f  KO_3_1.fastq.gz ] && ln -s KO_3_1.fq.gz KO_3_1.fastq.gz
  [ ! -f  KO_3_2.fastq.gz ] && ln -s KO_3_2.fq.gz KO_3_2.fastq.gz
  fastqc --quiet --threads 6 KO_3_1.fastq.gz KO_3_2.fastq.gz

  cat <<-END_VERSIONS > versions.yml
  "NFCORE_RNASEQ:RNASEQ:FASTQC_UMITOOLS_TRIMGALORE:FASTQC":
      fastqc: $( fastqc --version | sed -e "s/FastQC v//g" )
  END_VERSIONS

Command exit status:
  -

Command output:
  (empty)

Work dir:
  /mnt/d/mygene/work/2b/06decf838b4cea52d84929d929c65f

Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`

I am not sure how to provide a custom config file using -c. I created a text file with the following contents:

process {
    withName: FASTQC {
        memory = 36.GB
    }
}

I passed it to the command like this:

nextflow run nf-core/rnaseq \
    --input samplesheet.csv \
    --genome mm10 \
    -profile docker \
    -r 3.5 \
    -c fastqc_config.txt

But it does not work. I am not sure how to solve this issue. I guess there is a problem with the -c configuration, but I haven't found any examples online to follow. I would appreciate any insights.
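(For reference: the config above is syntactically what -c expects, but it requests 36.GB, which is exactly what the FASTQC process already asks for by default and still more than the 12.4 GB the error reports as available, so the same failure persists. A minimal sketch of a config that would actually fit; the 10.GB figure is an assumed value for a ~12 GB laptop, not a recommendation:

process {
    withName: FASTQC {
        // Assumption: request less than the 12.4 GB reported as available
        memory = 10.GB
    }
})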


Your machine does not have enough memory to run this pipeline with the defaults, probably because of the STAR alignment step. Try --aligner hisat2 instead. For specific help, join the nf-core Slack (https://nf-co.re/join) and ask in the #rnaseq channel.
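For example, something along these lines (a sketch: --aligner hisat2 switches to a much lighter aligner, and the standard nf-core --max_memory / --max_cpus parameters cap what any single process may request; the 12.GB and 4 values are guesses for the laptop above):

nextflow run nf-core/rnaseq \
    --input samplesheet.csv \
    --genome mm10 \
    --aligner hisat2 \
    --max_memory '12.GB' \
    --max_cpus 4 \
    -profile docker \
    -r 3.5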


A word of caution: it is better to ask for help on the Slack than to spend time trying to modify or add configs to nf-core pipelines. That is, imho, borderline impossible due to the complexity of those pipelines and all the internal checks and validation they perform that are not obvious to the end user.

20 months ago
zimazamiz ▴ 10

Maybe you can try to modify this file: ~/.nextflow/assets/nf-core/smrnaseq/nextflow.config

I went in and changed these lines in nextflow.config (shown here with their original default values):

// Max resource options
// Defaults only, expecting to be overwritten
max_memory                  = '128.GB'
max_cpus                    = 16
max_time                    = '240.h'

And then when the job started, the console reported the new values I'd put in:

Max job request options
  max_cpus   : 3
  max_memory : 15.GB

The "same" file nf-core/smrnaseq/nextflow.config should be available for all/most of the nf-core processes = nf-core/*/nextflow.config

Best of luck!


Be sure, though, that you have evidence that the tools in the pipeline still run with these params. While not 100% accurate, the defaults usually make at least some sense, and reducing memory settings by hand may cause OOM errors at some point.
