Hi,
I am analyzing human cell line RNA-seq data. I have generated gene-level read densities by dividing each gene's total read count by its gene length.
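For reference, the normalization step amounts to roughly the following (a simplified sketch; the file name and column names are placeholders, not my actual pipeline):

import pandas as pd

# One row per gene: total read count, gene length in bp, and number of exons
# (illustrative columns: gene_id, read_count, gene_length, n_exons).
genes = pd.read_csv("gene_counts.tsv", sep="\t")

# Read density = total reads assigned to the gene / gene length in bp
genes["read_density"] = genes["read_count"] / genes["gene_length"]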
Now I am seeing an extremely strong correlation (P < 1e-100) between this read density and the number of exons a gene contains. Something seems wrong here. Is this normal? Any ideas where or how I could have messed up the normalization? Has anybody seen this relationship before?
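The check I'm doing is essentially just a correlation test along these lines (continuing from the table above, same placeholder column names):

from scipy.stats import pearsonr

# Correlation between per-gene read density and exon count
r, p = pearsonr(genes["read_density"], genes["n_exons"])
print(f"Pearson r = {r:.3f}, P = {p:.2e}")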