Hello everyone, I'm having trouble submitting this script with sbatch on a Linux Slurm cluster. After I run sbatch followed by the script path and check with squeue, I don't see any job ID, and no .err file appears in the specified folder. I've double-checked the paths multiple times, and they are correct. I don't understand why, since it has always worked before. I hope you can help me. Thank you.
#!/bin/bash
#SBATCH --job-name=trimming
#SBATCH --mem=64GB # amount of RAM required (and max RAM available).
##SBATCH --mem-per-cpu=5000 # amount of RAM per core (see ntasks; total memory = this value times ntasks)
#SBATCH --time=INFINITE ## e.g. --time=10:00 means 10 minutes, --time=01:00:00 means 1 hour
#SBATCH --ntasks=10 # number of required cores
#SBATCH --nodes=1 # not really useful for non-MPI jobs
##SBATCH --partition=work ## work is the default and only queue; you do not need to specify it.
#SBATCH --error="/home/barresi.m/RNAseq/RNAseq11/RNAseq_ERR/trimming.err"
#SBATCH --output="/home/barresi.m/RNAseq/RNAseq11/RNAseq_OUT/trimming.out"
source /opt/common/tools/besta/miniconda3/bin/activate
conda activate aligners
for i in $(cat /home/barresi.m/RNAseq/RNAseq11/patients_list1.txt)
do trimmomatic PE -threads 6 -phred33 \
/home/barresi.m/RNAseq/RNAseq11/Fastq/$i\_R1.fastq.gz \
/home/barresi.m/RNAseq/RNAseq11/Fastq/$i\_R2.fastq.gz \
/home/barresi.m/RNAseq/RNAseq11/Trimming/$i\_R1_paired.fq.gz \
/home/barresi.m/RNAseq/RNAseq11/Trimming/$i\_R1_unpaired.fq.gz \
/home/barresi.m/RNAseq/RNAseq11/Trimming/$i\_R2_paired.fq.gz \
/home/barresi.m/RNAseq/RNAseq11/Trimming/$i\_R2_unpaired.fq.gz \
ILLUMINACLIP:/datasets/adapters/trimmomatic/NexteraPE-PE.fa:2:30:10 \
TRAILING:20 \
MINLEN:30; \
done
This would be a question better addressed to your local HPC team, as none of us are likely to know the particulars of your specific cluster.
I see in your script that you've commented out the --partition flag. At least on my cluster, that would lead to job submission failures, but I can't say whether that's the case for your system.

Also, there's an extra \ before the done. I'm not sure if that could be interfering with the script, but these things often lead to invisible failures.

My guess is that the double hash on line four (the --mem-per-cpu line) messes things up. Slurm is picky about these header lines; try removing it.
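For what it's worth, a quick way to test all three of these guesses at once is to validate the script before submitting it. This is just a rough sketch, and trimming.sh is a placeholder for whatever your script file is actually called:

# Syntax-check the shell code without executing anything
bash -n trimming.sh
# Ask Slurm to validate the batch script without actually submitting a job
sbatch --test-only trimming.sh
# Submit for real and watch for an error message or a non-zero exit code
sbatch trimming.sh; echo "sbatch exit status: $?"
# Check whether the job is queued/running, or whether it started and died immediately
squeue -u $USER
sacct -u $USER -S today

If sbatch prints nothing, exits 0, and yet nothing shows up in squeue or sacct, that would point at the cluster rather than the script, and your local HPC team really is the right place to ask.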