I know how to run the computation by breaking the input files into pieces and parallelizing them with SGE or SLURM. What I want is to use the whole computing power of the cluster (20 nodes, 80 cores) for a single job without breaking it up. If I just submit it (a single Perl script that runs a large computation with the AutoFact program) from the master node, it occupies the cluster but does not actually compute with the 20 nodes and 80 cores.
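To make it concrete, this is roughly the kind of submission I mean (a minimal sketch; the `autofact.pl` invocation and file name are placeholders, not my exact command):

```bash
#!/bin/bash
#SBATCH --job-name=autofact
#SBATCH --nodes=20            # request all 20 nodes
#SBATCH --ntasks=80           # 80 cores in total
#SBATCH --time=48:00:00

# The whole allocation is reserved, but this single serial Perl
# process only ever runs on one core of one node.
perl autofact.pl my_sequences.fasta
```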
Hi,
I have a feeling more details (specifics on what you're trying to accomplish, what the Perl script does, etc.) will help you get an answer best suited to your scenario. The question in its current form is pretty vague and offers little detail on your *exact* current procedure. Please help us help you better :)
"for a single job without breaking it" seems incompatible with "computing with 20 node and 80 cores."