We have a large WGS cohort to run through variant calling and are currently assessing different pipeline options. One of the pipelines we are evaluating is the GATK4 pipeline. The Broad provides a workflow defined in WDL + JSON, but it assumes cloud execution (the reference files live in Google Cloud Storage). That does not work for us because, for privacy and security reasons, we are not allowed to access the cloud in any form. Beyond that, I have found the Google-Storage-based setup hard to run -- I keep hitting errors related to Google Storage access.
With that said, I am wondering whether anyone has run the Broad-recommended GATK4 workflow using only local files and would be willing to share it (as .wdl, .cwl, or something similar).
Your help would be greatly appreciated!
(P.S. This is a not-for-profit project that should benefit the research community and the general public, so your contribution would go a long way -- you would be helping humankind, not just a small group of people.)
You are the best!
I did post my questions on the GATK forum, but have not received any answers recently.
A quick question: what do 2T, 56T, 20k, HDD, on-prem, throughputs, and FPGA refer to?
My two-cent suggestion: it would be nice to have an option to automatically download all auxiliary datasets into designated directories as defined in the WDL/JSON inputs file, or alternatively to pre-bundle everything.
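To make that suggestion (and the local-files request above) concrete, here is a minimal sketch of the kind of helper I have in mind. It is not part of the Broad workflow; the script name, directory layout, and file names are hypothetical, and it assumes the gs:// reference files have already been mirrored into one designated local directory by whatever means a site permits. It simply rewrites the gs:// URIs in the inputs JSON to those local paths and reports anything that still needs to be staged.

```python
#!/usr/bin/env python3
"""Rewrite gs:// URIs in a WDL inputs JSON to pre-staged local paths (sketch).

Assumes the auxiliary/reference files are already copied into one designated
directory, mirroring the bucket layout that follows the gs:// prefix.
"""
import json
import sys
from pathlib import Path


def localize(value, data_dir, missing):
    """Recursively replace gs:// URI strings with paths under data_dir."""
    if isinstance(value, str) and value.startswith("gs://"):
        # gs://bucket/some/key -> <data_dir>/bucket/some/key
        local_path = data_dir / value[len("gs://"):]
        if not local_path.exists():
            missing.append(local_path)
        return str(local_path)
    if isinstance(value, list):
        return [localize(v, data_dir, missing) for v in value]
    if isinstance(value, dict):
        return {k: localize(v, data_dir, missing) for k, v in value.items()}
    return value


def main(inputs_json, data_dir, out_json):
    data_dir = Path(data_dir)
    missing = []
    with open(inputs_json) as fh:
        inputs = json.load(fh)
    localized = localize(inputs, data_dir, missing)
    with open(out_json, "w") as fh:
        json.dump(localized, fh, indent=2)
    # List anything that still has to be pre-staged locally.
    for path in missing:
        print(f"WARNING: not found locally yet: {path}", file=sys.stderr)
    print(f"Wrote localized inputs to {out_json}")


if __name__ == "__main__":
    # Example (hypothetical paths):
    #   python localize_inputs.py hg38.inputs.json /data/gatk-bundle hg38.local.inputs.json
    main(*sys.argv[1:4])
```

Something along these lines, shipped alongside the workflow (or replaced by a pre-bundled tarball of the references), would let strictly on-premises users run the pipeline without ever touching Google Storage.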
Thanks so much!