2.4 years ago
tsomakiank
Hello everyone. I am trying to set up a set of pipelines from GitHub (HemTools) locally. The problem is that the developer wrote the pipelines to run on IBM LSF systems, so every pipeline comes with a .lsf file that I assume is responsible for submitting and running it. My question is whether it is possible to somehow make them work on my server (not an LSF system), and if so, whether you have any clue how to do that.
Thanks in advance!
The README and ReadTheDocs both say that this is tailored for a specific HPC. Seriously, use anything else unless you want to waste time figuring out what they hardcoded and how. There are plenty of NGS pipelines. Check nf-core, the Snakemake wrappers, or just google what you need and add "pipeline" to your search string. Anything is better than hardware-specific pipelines.
Thanks for replying @atpoint. It seems a very tempting set of tools to use and I thought I'd give it a shot. Maybe it's time to let it go :). Thanks again for replying!!
Yeah, I mean it is not just the scheduler but also the availability of resources, scratch quota, allowed walltimes, available nodes, etc. All kinds of obstacles come with a pipeline tailored for a specific HPC. nf-core pipelines are great at being scalable and working everywhere, whether on a workstation, an HPC, or a cloud instance, given some minimal hardware resources are available.
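To give a concrete idea, running an nf-core pipeline on a plain workstation usually boils down to a single command; the pipeline name and file names below are just placeholders for illustration, so check the docs of whichever pipeline you pick:

nextflow run nf-core/rnaseq -profile docker --input samplesheet.csv --outdir results

Nextflow plus Docker (or Singularity) handles the software environment, so nothing is tied to a particular scheduler.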
Thanks ATpoint, got it. I thought maybe I could figure it out, but that's probably because I know nothing about LSF systems.
Generally, an LSF job is submitted with something like the following:
bsub -n 8 someExecutable
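For comparison, here is roughly what that submission could look like outside LSF; the SLURM flags are an assumption about your scheduler, and someExecutable is just the placeholder from above:

# Roughly equivalent SLURM submission, if your server runs SLURM
sbatch --cpus-per-task=8 --wrap="someExecutable"

# On a standalone server with no scheduler, you would just run the command directly
someExecutable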
But in this case, the code is tightly tied to that particular LSF system; see this file for example:
https://github.com/YichaoOU/HemTools/blob/master/utils/lsf_utils.py
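If you want to gauge how deep the coupling goes before committing, something like this quick heuristic (not part of HemTools) lists every place the code touches the LSF tooling:

grep -rniE "bsub|bjobs|lsf" HemTools/

Every hit is a spot you would have to patch or replace to run on a different scheduler or none at all.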
I would avoid these pipelines for now.
Thanks for replying colindaven! Maybe it isn't worth the struggle.