slurm error
3.4 years ago by rheab1230 ▴ 140

Hello everyone,

I submitted an R job via sbatch on a SLURM cluster, and the job was terminated with this error:

Error: cannot allocate vector of size 7.8 Gb
Execution halted
Warning message:
system call failed: Cannot allocate memory

Can anyone tell me how to solve this?

Tags: TWAS • slurm

You need to provide more details:

What are your SLURM job specifications? How big are your data?

This error looks like R complaining that it cannot allocate enough memory for the required data. Increasing your SLURM batch job memory request to something like 20G might help, but there is no way for us to know unless you provide more information.
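
If you are not sure how much memory the job actually used before it was killed, SLURM's accounting can tell you. A minimal sketch, assuming accounting is enabled on your cluster and <jobid> is the ID of the failed job:

# Requested vs. peak memory for the failed job (replace <jobid> with the real job ID)
sacct -j <jobid> --format=JobID,JobName,ReqMem,MaxRSS,State

# Some sites also install the seff helper, which summarizes memory efficiency
seff <jobid>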

#!/bin/bash
#SBATCH -J Variant_Map_GR37           # Job name
#SBATCH -o Variant_Map_GR37.o%j       # Name of stdout output file
#SBATCH -e Variant_Map_GR37.e%j       # Name of stderr error file
#SBATCH -p normal      # Queue (partition) name
#SBATCH -N 1               # Total # of nodes (must be 1 for serial)
#SBATCH -n 1               # Total # of mpi tasks (should be 1 for serial)
#SBATCH -t 48:00:00        # Run time (hh:mm:ss)
#SBATCH --mail-user=xxxxx
#SBATCH --mail-type=all    # Send email at begin and end of job

The file size is 37,587,249 KB (about 37.6 GB).

3.4 years ago by GenoMax 147k

You are not specifying any memory allocation in your SLURM file. You need to add the following (change the number as needed):

#SBATCH --mem=20g 

Without this directive you are likely getting the default memory allocation, which is smaller than the ~8 GB your program wants.
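
For reference, a sketch of the submission script posted above with the memory directive added; the 20g value is only a starting point, adjust it to what your data actually needs:

#!/bin/bash
#SBATCH -J Variant_Map_GR37           # Job name
#SBATCH -o Variant_Map_GR37.o%j       # Name of stdout output file
#SBATCH -e Variant_Map_GR37.e%j       # Name of stderr error file
#SBATCH -p normal                     # Queue (partition) name
#SBATCH -N 1                          # Total # of nodes (must be 1 for serial)
#SBATCH -n 1                          # Total # of tasks (should be 1 for serial)
#SBATCH -t 48:00:00                   # Run time (hh:mm:ss)
#SBATCH --mem=20g                     # Memory per node; raise this if the job is killed again
#SBATCH --mail-user=xxxxx
#SBATCH --mail-type=all               # Send email at begin and end of job

On most SLURM installations you can also check what the default allocation is with scontrol show config | grep -i mem and look for DefMemPerNode or DefMemPerCPU.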


I included this line: #SBATCH --mem=20g, and now when running sbatch scriptname I am getting this error:

sbatch: error: Memory specification can not be satisfied
sbatch: error: Batch job submission failed: Requested node configuration is not available


You will need to check with your local cluster admins to see how you can get more memory for your job. Depending on your SLURM setup you may need to submit to a specific partition, ask for more cores, etc.
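
One way to see roughly what is available before asking: a quick sketch, assuming sinfo is accessible to you, that lists the node count, memory (in MB), and CPUs configured for each partition:

# Partition, time limit, node count, memory per node (MB), CPUs per node
sinfo -o "%P %l %D %m %c"

If the nodes in the normal partition report less memory than you are requesting, that would explain the "Requested node configuration is not available" message, and you would need to target a larger-memory partition or lower the request.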


Okay, I will do that. Thank you so much for helping.
