Hello,
I am running the following script in a Linux-based cluster.
for FILE in "$FASTQ_DIR"/*.fastq
do
    BASENAME=$(basename "$FILE" .fastq)
    bwa mem -t 4 -M "$REFERENCE" "$FILE" > "$BAM_DIR/$BASENAME.sam"
done
When I check the status after running this code, it shows that I am using only 1 CPU and 0.4% of the memory. The cluster has 256 CPUs and 512 Gb RAM.
Is there a way to allocate more CPUs and RAM to specific commands? I do not have SLURM installed on the cluster.
Previously I tried the following command, but it did not appear to do anything.
systemd-run --scope -p CPUQuota=60% -p MemoryMax=300M --user ./test.bash
Thank you
You must mean 512 GB of RAM.
If you are working on a cluster then presumably it has some kind of job scheduling system. This question is best addressed to your local systems tech support.
Ah, sorry. Yes, it is 512 GB of RAM. I asked our tech support about this and was told that they are working on building a job scheduling system, so at the moment there is no scheduling option on this cluster.
Looks like you already know how to limit system resources (systemd-run) based on your original post. Please keep in mind that aligners may not use all cores/threads at all times. As long as you are specifying the right options, things should work.
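Since there is no scheduler, the usual route is to raise bwa's own thread count (`bwa mem -t`) and, optionally, run several files at once. A minimal dry-run sketch under assumptions: the variable names mirror the original script but default to placeholder paths here, the demo FASTQ files are created only so the loop has input, and the `xargs -P` driver mentioned in the comment is one possible stand-in for a scheduler, not something from this thread:

```shell
#!/usr/bin/env bash
# Sketch, not a definitive pipeline. FASTQ_DIR, BAM_DIR and REFERENCE
# are placeholders mirroring the variables in the original script.
set -u

FASTQ_DIR=${FASTQ_DIR:-fastq}
BAM_DIR=${BAM_DIR:-bam}
REFERENCE=${REFERENCE:-ref.fa}
THREADS=${THREADS:-32}   # bwa mem -t: threads per alignment
JOBS=${JOBS:-4}          # concurrent alignments; JOBS * THREADS <= total CPUs

# Build (but do not run) the shell command that aligns one FASTQ file.
bwa_cmd() {
    local file=$1 base
    base=$(basename "$file" .fastq)
    printf 'bwa mem -t %s -M %s %s > %s/%s.sam\n' \
        "$THREADS" "$REFERENCE" "$file" "$BAM_DIR" "$base"
}

# Demo: list the commands a parallel run would launch. To actually run
# them, pipe this loop into:  xargs -P "$JOBS" -I CMD sh -c CMD
mkdir -p "$FASTQ_DIR" "$BAM_DIR"
: > "$FASTQ_DIR/sampleA.fastq"   # empty demo inputs
: > "$FASTQ_DIR/sampleB.fastq"
for FILE in "$FASTQ_DIR"/*.fastq; do
    bwa_cmd "$FILE"
done
```

Whether 32 threads per process actually helps depends on the aligner and the I/O; on a 256-CPU machine the main point is that the thread count times the number of concurrent jobs should stay at or below the CPU count.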
Read the doc: https://blog.ronin.cloud/slurm-intro/
There is no SLURM in the cluster I am working on.
Oops, my bad: I saw the keyword slurm in the OP's title and added it as a tag. The OP mentions "without SLURM".
You deleted a bunch of posts that had all received feedback. Please do not do this again, or your account will be suspended.