I have installed BEAST using conda on an HPC system:
$ conda install -c bioconda beast
Now I'm trying to run it through a Slurm script with 10 million generations:
#!/bin/sh
#SBATCH --job-name=Beast_10Mn    # Job name
#SBATCH --ntasks=16              # Number of CPU cores
#SBATCH --ntasks-per-node=4      # Cores to spread across distinct nodes
#SBATCH --output=BST_log_%j      # Standard output log
#SBATCH --error=job.%J.err       # Standard error log
for i in dipTime_10mn.xml
do
    echo "${i}"
    java -jar /lib/beast.jar -threads 16 "${i}"
done
But it shows:
Error: Unable to access jar file /lib/beast.jar
It would be an immense help if someone could kindly assist. Thanks.
If you are using SLURM on a cluster, it is unlikely that you have permission to write to the /lib directory. Your beast.jar was installed into a local directory by conda; find that path and use it in place of /lib/beast.jar.
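A minimal sketch for locating the jar from a login node, assuming the conda environment is activated (the exact install location depends on your conda setup and BEAST version):

# The executable conda puts on your PATH:
which beast
# With the environment activated, $CONDA_PREFIX points at its root,
# so search it for the jar itself:
find "$CONDA_PREFIX" -name beast.jar
# It typically turns up somewhere like
# .../envs/<your-env>/opt/beast-<version>/lib/beast.jar

Running the beast executable directly, rather than via java -jar, is often the simpler option.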
You would also want to move the for loop outside the script and submit individual SLURM jobs to make this efficient. Submitting a single job with a for loop inside it does not take advantage of your cluster's resources.
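As a hedged sketch of that restructuring (run_beast.sh is a hypothetical name, and the jar path is a placeholder for the one you found above), the job script can take the XML file as an argument:

#!/bin/sh
#SBATCH --job-name=Beast_10Mn
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16    # BEAST threads all run within one node
#SBATCH --output=BST_log_%j
#SBATCH --error=job.%J.err
BEAST_JAR=/path/to/env/opt/beast/lib/beast.jar   # replace with the path you found
java -jar "$BEAST_JAR" -threads 16 "$1"

Then submit one job per XML file from the login node:

for i in *.xml
do
    sbatch run_beast.sh "${i}"
done

A SLURM job array (#SBATCH --array) is another common way to express the same pattern.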