Hello!
When using bamCoverage to generate coverage tracks for our ChIP-seq data, I chose BPM as the normalization method. With this setting, I found that bamCoverage divided the raw read count per bin by one common denominator, for example, 10.15 for one library.
Could anybody explain how this denominator is derived for each library? I couldn't figure it out after reading the deepTools tutorial - I guess it may be related to library size?
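To make my guess concrete, here is a small sketch of what I suspect is happening (this is just my assumption, not deepTools' actual code; the function names are mine):

```python
# Sketch of BPM-style normalization as I understand it: BPM is analogous
# to TPM in RNA-seq, so the denominator may simply be the library's read
# count expressed in millions. All numbers below are made up for
# illustration, not taken from deepTools itself.

def bpm_denominator(total_reads):
    """Library size in millions - the per-library divisor I observed."""
    return total_reads / 1e6

def bpm(bin_counts, total_reads):
    """Divide each bin's raw count by the common denominator."""
    denom = bpm_denominator(total_reads)
    return [count / denom for count in bin_counts]

# A library with 10,150,000 reads would give the denominator 10.15
# that I saw in my track:
print(bpm_denominator(10_150_000))   # 10.15
print(bpm([203, 0, 51], 10_150_000))
```

If that is right, then a denominator of 10.15 would just mean the library contributed about 10.15 million reads (after whatever filtering bamCoverage applies) - but I'd appreciate confirmation of which read count is actually used.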
Thank you!
Patrick