Managing large data sets
8.8 years ago
umn_bist ▴ 390

Storage has become an issue due to the volume of TCGA data sets - I'm easily hitting 30 TB. I would like to revise my pipeline so that I can slim down the intermediate output files (if possible).

My current pipe:

  1. Collect TCGA data with CGHub (paired-end files are .tar.gz compressed into one file)
  2. Extract
  3. cutadapt: trim adapters and low-quality bases (intermediate .fastq output)
  4. PRINSEQ: trim poly-A/T/G/C reads (intermediate .fastq output; steps 3-4 can be chained, see the sketch after this list)
  5. STAR align (intermediate .bam output)
  6. picard add RG info (intermediate .bam output)
  7. picard mark duplicate (intermediate .bam output)
  8. GATK trim N CIGAR (intermediate .bam output)
  9. MuTect call variants (intermediate .vcf output)
  10. snpSift filter variants (intermediate .vcf output)
  11. snpEff annotate variants (final .vcf output)
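A small saving, independent of the larger redesign below, is to chain steps 3 and 4 above so the trimmed FASTQ from cutadapt never hits disk. This is only a rough sketch under the assumption that your versions behave as documented (cutadapt writes to stdout when no -o is given; prinseq-lite.pl accepts "stdin"/"stdout" as file names); adapters, cutoffs, and file names are placeholders, and you'd want to verify that trimming-only PRINSEQ options keep the interleaved pairing intact.

# Hypothetical sketch: cutadapt -> PRINSEQ with no intermediate .fastq on disk.
# The stream is interleaved paired-end; only trimming options are used here, so
# no reads are dropped and pairing in the interleaved stream is preserved.
cutadapt --interleaved -q 20 -a AGATCGGAAGAGC -A AGATCGGAAGAGC \
    R1.fastq.gz R2.fastq.gz \
  | prinseq-lite.pl -fastq stdin -trim_tail_left 5 -trim_tail_right 5 \
    -out_good stdout -out_bad null \
  > trimmed_interleaved.fastq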

My ideal setup:

  1. Collect TCGA data with CGHub (paired-end files are .tar.gz compressed into one file)
  2. Use the compressed input to trim adapters, low quality, and poly-A/T/G/C - output to stdout
  3. Stream that stdout as input for alignment (preferably STAR; see the sketch after this list) - output to a .bam file
  4. Ideally, add RG and mark duplicates at the same time using Picard, if possible (output .bam file)
  5. GATK trim N CIGAR (intermediate .bam file)
  6. MuTect call variants (intermediate .vcf file)
  7. snpSift filter variants (intermediate .vcf file)
  8. snpEff annotate variants (final .vcf output)
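As a sketch of what steps 2-4 could look like if you stay with cutadapt and STAR (all paths, adapter sequences, and RG fields below are placeholders): cutadapt can write each mate to a named pipe, STAR can read those pipes like ordinary FASTQ files, and STAR's --outSAMattrRGline option attaches the read group at alignment time, which may let you drop the separate Picard AddOrReplaceReadGroups pass. Marking duplicates would still be a separate step on the resulting BAM.

# Hypothetical streamed front end: trim to named pipes, align from the pipes,
# and set the read group during alignment (names and paths are placeholders).
mkfifo trimmed_R1.fq trimmed_R2.fq

cutadapt -q 20 \
    -a AGATCGGAAGAGC -A AGATCGGAAGAGC \
    -a "A{100}" -A "A{100}" \
    -o trimmed_R1.fq -p trimmed_R2.fq \
    R1.fastq.gz R2.fastq.gz > cutadapt.log &

STAR --runThreadN 8 \
    --genomeDir /path/to/star_index \
    --readFilesIn trimmed_R1.fq trimmed_R2.fq \
    --outSAMattrRGline ID:sample1 SM:sample1 LB:lib1 PL:ILLUMINA \
    --outSAMtype BAM SortedByCoordinate \
    --outFileNamePrefix sample1_

rm trimmed_R1.fq trimmed_R2.fq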

Question:

  1. Which adapter trimmer can take compressed paired-end files (as one file) as input - and trim adapters, low quality, and poly-A/T/G/C? Is this Trimmomatic or FASTX?
  2. Which splice-aware aligner can take a stdout stream as input?
TCGA RNAseq

I'm working on a script that transparently wraps a bam file so it can be simultaneously read and written, provided that:

  1. you read the input BAM from stdin (no index requirement)
  2. you write the output BAM to stdout

You use it like:

bam2sql.py --make ./input.bam ./output.bam.sql
bam2sql.py --out ./output.bam.sql | picard MarkDuplicates | bam2sql.py --in ./output.bam.sql
... many different jobs like the one above ...
bam2sql.py --out ./output.bam.sql > final.bam # optional

It handles everything else and reads/writes without any blocking, so it's not as if the whole BAM is just stored in memory. It does use memory, though, if you're reordering or deleting reads. Perhaps with FUSE it could even be made to do random reads/writes by having an attached process constantly updating the indexes; however, most bioinformatics software probably reads the index into memory once and then caches it, which makes random reads/writes more complicated. But for now the above fits 90% of my needs.

It will be done by tomorrow. It would have been done today if there weren't multiple different ways to sort a BAM by QNAME.... grr.
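For comparison, the "read the BAM from stdin, write the BAM to stdout" pattern the script relies on is also what plain samtools streaming looks like; the sketch below uses samtools markdup instead of Picard, purely as an illustration of a duplicate-marking step running mid-stream (file names are placeholders):

# Hypothetical samtools-only stream: no index needed, each stage reads stdin
# and writes stdout ("-"), with compression kept low between stages.
samtools sort -n -l 0 input.bam \
  | samtools fixmate -m - - \
  | samtools sort -l 0 - \
  | samtools markdup - marked.bam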

8.8 years ago

I would recommend the BBMap tools (part of the BBTools package) for both.

1) BBDuk can be used for both quality and adapter trimming, and you could handle polynucleotide runs by including those in the adapter reference file. It works on compressed data (.gz), but you'd have to check with Brian Bushnell about .tar files.

2) BBMap is a splice-aware aligner, and pipes using in=stdin/out=stdout syntax.
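For illustration only, a BBDuk-into-BBMap pipe could look roughly like the sketch below; file names, references, and trimming parameters are placeholders, and the exact options (e.g., the poly-A/T/G/C reference, memory flags) are worth checking against the BBTools guides.

# Hypothetical BBDuk -> BBMap pipe (paths and parameters are placeholders).
# BBDuk writes an interleaved FASTQ stream to stdout; BBMap reads it from
# stdin and writes a BAM (BAM output requires samtools on the PATH).
bbduk.sh in1=R1.fastq.gz in2=R2.fastq.gz out=stdout.fq \
    ref=adapters.fa,polyA.fa ktrim=r k=23 mink=11 hdist=1 \
    qtrim=rl trimq=10 \
  | bbmap.sh in=stdin.fq interleaved=t ref=genome.fa \
    maxindel=200k out=sample1.bam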

8.8 years ago

P.S. For steps that cannot be piped (e.g., Picard MarkDuplicates), you can always rm the intermediate file after it has been used. Inelegant, but it works...
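As a concrete (placeholder) illustration, tying the rm to the success of the following step keeps at most one or two intermediates on disk at a time:

# Hypothetical cleanup pattern: delete each intermediate BAM as soon as the
# next step has finished successfully (paths and RG fields are placeholders).
picard AddOrReplaceReadGroups I=aligned.bam O=with_rg.bam \
    RGID=sample1 RGSM=sample1 RGLB=lib1 RGPL=ILLUMINA RGPU=unit1 \
  && rm aligned.bam
picard MarkDuplicates I=with_rg.bam O=dedup.bam M=dup_metrics.txt \
  && rm with_rg.bam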
