I am running a Snakemake workflow, and I am confused about how my memory and thread requests translate into resource allocation on my SLURM partition. In my rule I request 8 threads and 10 GB of memory, yet each job is provided 16 cores. I also have --cores 46 set in the .sh script that runs my Snakefile, but five of these jobs run at once, which gives me the impression that my --cores 46 limit is being exceeded.

I am also interested in the relationship between the memory allocated per job and the number of cores allocated. For example, each job had 10 GB of memory but was hitting an OOM-kill error until I raised the thread request to 8. Before that increase, my snakejobs were only being allocated 2 cores each. So by adjusting threads up to 8 while keeping memory the same, I went from 2 to 16 cores allocated per job. Why would that be? Are the additional cores being used as memory? And if all 5 jobs are running with 16 cores each, does that mean my --cores 46 setting in the .sh was somehow superseded? I was having trouble working this out from the Snakemake website. Any help would be greatly appreciated!
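For reference, the rule in question looks roughly like this (paths shortened to match the log below; the bwa command line is a stand-in for my actual shell directive, not the exact call):

```python
# Snakefile (sketch): the threads/resources lines are exactly what I set;
# the shell command is an assumption based on the bwa_index output below.
rule index_genome:
    input:
        "/mypath/genomic.fna"
    output:
        "/mypath/genomic.fna{bwa_extension}"  # e.g. .fa.ann, per the log
    threads: 8
    resources:
        mem_mb=10000  # 10 GB per job
    shell:
        "bwa index {input}"

# The .sh wrapper then launches it with something like:
#   snakemake --cores 46 ...
```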
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 16
Rules claiming more threads will be scaled down.
Job counts:
	count	jobs
	1	index_genome
	1

[Tue Feb 2 10:15:51 2021]
rule index_genome:
    input: /mypath/genomic.fna
    output: /mypath/genomic.fna.fa.ann
    jobid: 0
    wildcards: bwa_extension=.fa.ann
    threads: 8
    resources: mem_mb=10000

[bwa_index] Pack FASTA... 1.96 sec
[bwa_index] Construct BWT for the packed sequence...
[bwa_index] 172.44 seconds elapse.
[bwa_index] Update BWT... 1.23 sec
[bwa_index] Pack forward-only FASTA... 0.93 sec
[bwa_index] Construct SA from BWT and Occ...
Thank you for clearing up the memory-per-thread question; that makes sense to me and will help in the future. Concerning the cores: Snakemake actually has a separate --local-cores flag. I think the problem may have been that I had set both the --jobs and --cores options. With just --cores 48, my Snakefile ran only three jobs at a time, which makes more sense, since I allocated 16 threads per job for this rule. However, each job was still allocated 32 cores according to my SLURM outputs.
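In case it helps anyone who finds this later, my takeaway is that --cores caps threads used on the node Snakemake itself runs on, while --jobs caps how many cluster jobs are in flight at once. This is a minimal sketch of the launch script I am moving to, assuming a plain sbatch-based --cluster setup (the exact sbatch flags here are my assumption, not a prescription for every cluster):

```bash
#!/bin/bash
# --jobs caps concurrent SLURM submissions; {threads} and {resources.mem_mb}
# forward each rule's own requests to sbatch, so SLURM allocates what the
# rule asked for rather than a partition default.
snakemake \
    --jobs 5 \
    --local-cores 1 \
    --cluster "sbatch --cpus-per-task={threads} --mem={resources.mem_mb}"
```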