One option is to use eHive, which is free and open source:
http://www.biomedcentral.com/1471-2105/11/240
Job processing can range from a very simple list of commands to very complex pipelining, like the pipelines used in Ensembl and other projects. A simple example of piping command lines into a queueing system, with fault tolerance and resource management (number of CPUs, memory, etc.), all in one script, is here:
ensembl-hive/scripts/cmd_hive.pl
Also have a look at InputFile_SystemCmd:
init_pipeline.pl Bio::EnsEMBL::Hive::PipeConfig::InputFile_SystemCmd_conf -ensembl_cvs_root_dir $HOME $dbdetails -inputfile very_long_list_of_blast_jobs.txt
beekeeper.pl -url $dburl -loop
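For illustration, the input file in the example above (very_long_list_of_blast_jobs.txt) would typically contain one complete shell command per line, each line run as an independent job. A hypothetical few lines (the file names are made up; the blastp options are standard NCBI BLAST+ ones) could look like:

blastp -query chunk_0001.fa -db uniprot_sprot -evalue 1e-5 -out chunk_0001.blast
blastp -query chunk_0002.fa -db uniprot_sprot -evalue 1e-5 -out chunk_0002.blast
blastp -query chunk_0003.fa -db uniprot_sprot -evalue 1e-5 -out chunk_0003.blast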
There are a few Perl dependencies to get it working. The backend can be a no-frills SQLite database, which works fine for tens to a few hundred concurrent jobs, or a MySQL database, which usually works well for hundreds up to around a thousand concurrent jobs.
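As a sketch of the two backends (the exact values depend on your setup, so treat these as placeholders), the connection URL passed to beekeeper.pl with -url would look something like:

sqlite:///path/to/my_hive.db
mysql://user:password@mysql-host:3306/my_hive_db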
LSF support comes out of the box in eHive, and there is also support for some other queueing systems, like SGE. The same script you run on your farm can be tested first on your workstation without any queueing system, just by using the '-local' option.
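So, as a sketch (again with placeholder values), a workstation test and the farm run differ only in that one flag:

beekeeper.pl -url $dburl -local -loop    # test on your workstation, no queueing system needed
beekeeper.pl -url $dburl -loop           # same pipeline, submitted through the farm's queueing system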
Paracel seems like the thing I was looking for, but I doubt we'll want to pay for it. It would be great if you could share your script. Mine is really simple: it just divides the work and submits the jobs to LSF. I'm not sure how yours involves MPI?
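For context, a "divide and submit to LSF" script of that kind is usually little more than the following sketch (a generic illustration, not the actual script mentioned above; the file name, chunk size and queue name are placeholders):

#!/bin/bash
# Split a long list of shell commands into chunks of 100 lines
# and submit each chunk as one LSF job.
split -l 100 jobs.txt chunk_
for f in chunk_*; do
    bsub -q normal -o "$f.log" "bash $f"
done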
OK, contact me and I will help you.