We are trying to establish a standard relationship between sequencing coverage (depth) and CNV sliding-window size.
For example, some studies have shown that CNVs can be detected at 0.25x WGS sequencing depth with a 50 kb window size.
My question is: if 0.25x depth with a 50 kb window size is valid, can we in principle infer that 0.5x depth with a 25 kb window size gives approximately the same sensitivity (same sample, same sample prep, and same sequencing protocol)?
My underlying assumption is that CNV detection sensitivity (significance) depends on the number of reads within the sliding window (0.25 × 50 == 0.5 × 25).
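To make that assumption concrete, here is a minimal sketch (not from any paper) of the expected reads per window under a uniform-coverage model; the 100 bp read length is an illustrative assumption:

    # Rough sketch: expected reads per window under a uniform-coverage model.
    # The 100 bp read length is an assumed illustrative value.
    def expected_reads_per_window(depth, window_bp, read_len_bp=100):
        """Expected number of reads falling in a window of size window_bp
        at a given mean sequencing depth, assuming uniform coverage."""
        return depth * window_bp / read_len_bp

    # 0.25x over 50 kb and 0.5x over 25 kb give the same expected read count,
    # which is exactly the depth * window == constant assumption.
    print(expected_reads_per_window(0.25, 50_000))  # 125.0
    print(expected_reads_per_window(0.5, 25_000))   # 125.0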
Any comments are appreciated.
Thanks Chris. I briefly read your paper. readDepth calculates the smallest bin size after correcting for a number of biases.
If I ignore those biases (mappability, GC content) and only ask for a rough estimate, can I assume >12,500 reads per window as a standard (based on the example above: 0.25x sequencing depth × 50 kb)?
If this number (12,500 reads per window) is valid, do you have any idea why it is intrinsically this number and not 10,000, 5,000, or 20,000?
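For what it's worth, a quick back-of-envelope sketch under a uniform-coverage assumption shows where the 12,500 figure comes from (0.25 × 50,000 is the number of sequenced bases per window) and how the corresponding read count depends on read length; the read lengths below are illustrative assumptions, not values from this thread:

    # Back-of-envelope only: 0.25x depth over a 50 kb window gives
    # 0.25 * 50,000 = 12,500 sequenced bases per window. How many reads that
    # corresponds to depends on read length (assumed values below).
    depth = 0.25
    window_bp = 50_000
    bases_per_window = depth * window_bp              # 12,500 bases
    for read_len in (36, 100, 150):
        print(read_len, bases_per_window / read_len)  # ~347, 125, ~83 reads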
1) Look at Figure 2a in that paper.

2) There isn't a simple conversion factor of that type. You're going to be taking data that is somewhat noisy, testing different bin sizes, then (possibly) running it through correction for GC bias/mappability, then (definitely) using a segmentation algorithm to group adjacent windows. If you want to model this, you'll need to generate some data (preferably by downsampling real and "clean" BAMs), then introduce some events into it and see how well you can detect them. My point is, it's complex, and the "right" answer will depend entirely on the downstream algorithms, as well as your tolerance for false positives vs. false negatives.
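As a much-simplified stand-in for that kind of simulation, here is a toy sketch: Poisson counts per bin replace downsampled real BAMs, and a per-bin z-score replaces a real correction/segmentation pipeline; read_len, copy_ratio, and z_cut are illustrative assumptions, and a proper evaluation should use the actual downstream caller:

    import numpy as np

    # Toy sketch only: Poisson counts per bin stand in for downsampled BAMs,
    # and a per-bin z-score stands in for GC/mappability correction plus
    # segmentation. All parameters are illustrative assumptions.
    rng = np.random.default_rng(0)

    def per_bin_power(depth, window_bp, read_len=100, copy_ratio=1.25,
                      n_trials=20_000, z_cut=3.0):
        """Fraction of simulated altered bins flagged by a simple z-score test."""
        lam = depth * window_bp / read_len        # expected reads per diploid bin
        counts = rng.poisson(lam * copy_ratio, n_trials)
        z = (counts - lam) / np.sqrt(lam)
        return float((z > z_cut).mean())

    for depth, window in [(0.25, 50_000), (0.5, 25_000), (0.25, 25_000)]:
        print(f"{depth}x, {window // 1000} kb bins -> power {per_bin_power(depth, window):.2f}")

In this toy model, 0.25x/50 kb and 0.5x/25 kb come out roughly equivalent (same expected reads per bin) while 0.25x/25 kb loses power, but with real data, bias correction, and a segmentation algorithm the trade-off can look quite different, which is the point of 2) above.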