I want to compare two runs of a similar experiment that ranks genes by an arbitrary score. Any standard correlation metric (e.g. Spearman rank correlation) shows that the runs are not very similar. However, this is because of how the experiment is set up: the top scores are measured very confidently, but further down the list the results become mostly noise. So let's say I measured 1000 genes, and only the top ~30 or so from each run are the ones I care about.
Is there any similarity metric that gives a higher penalty for differences in rank between high-confidence genes (so a gene at rank 10 in one experiment and rank 100 in the other would be punished heavily), with the penalty dropping off as the ranks being compared get lower (so I don't particularly care if something is rank 500 in one experiment and rank 1000 in the other)?
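To make the behaviour I'm after concrete, here is a toy sketch in Python. I'm assuming scipy.stats.weightedtau, a top-weighted variant of Kendall's tau whose default hyperbolic weigher 1/(rank + 1) makes disagreements near the top of the list count far more than disagreements near the bottom; I don't know whether it's the right tool, but it has roughly the penalty profile I described (the scores below are made up):

```python
import numpy as np
from scipy.stats import spearmanr, weightedtau

rng = np.random.default_rng(0)

# Hypothetical scores for 1000 genes from two runs of the experiment:
# the top of the list is measured confidently, the tail is mostly noise.
n = 1000
signal = np.linspace(10.0, 0.0, n)
noise_sd = 0.1 + 2.0 * np.arange(n) / n   # noise grows down the list
run_a = signal + rng.normal(scale=noise_sd)
run_b = signal + rng.normal(scale=noise_sd)

# Spearman weighs a disagreement at rank 500 the same as one at rank 10.
rho, _ = spearmanr(run_a, run_b)

# Weighted Kendall's tau down-weights exchanges deep in the list: with the
# default 1/(rank + 1) weigher, swapping ranks 10 and 100 costs far more
# than swapping ranks 500 and 1000.
tau, _ = weightedtau(run_a, run_b)

print(f"Spearman rho: {rho:.3f}")
print(f"Weighted tau: {tau:.3f}")
```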
I could simply use a top-N overlap approach, but this seems a little simplistic, and I don't know how to pick N in an unbiased fashion.
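One way around picking a single N might be to compute the overlap at every N and look at the whole curve; I believe this is essentially the "correspondence at the top" (CAT) curve that the matchBox package linked below computes. A rough sketch, assuming two lists of the same 1000 gene IDs sorted best-first in each run (cat_curve and the gene IDs here are my own made-up illustration):

```python
import numpy as np

def cat_curve(ranked_a, ranked_b):
    """Fraction of shared genes among the top N of both lists, for every N.

    ranked_a / ranked_b: unique gene identifiers sorted best-first per run.
    """
    overlap = []
    seen_a, seen_b = set(), set()
    for gene_a, gene_b in zip(ranked_a, ranked_b):
        seen_a.add(gene_a)
        seen_b.add(gene_b)
        overlap.append(len(seen_a & seen_b) / len(seen_a))
    return np.array(overlap)

# Hypothetical gene lists: identical top 30, randomly shuffled tail.
genes = [f"gene{i}" for i in range(1000)]
rng = np.random.default_rng(1)
ranked_a = genes[:30] + list(rng.permutation(genes[30:]))
ranked_b = genes[:30] + list(rng.permutation(genes[30:]))

curve = cat_curve(ranked_a, ranked_b)
print(curve[9], curve[99], curve[999])   # overlap at N = 10, 100, 1000
```

Plotting this curve against N shows how deep the agreement extends without committing to a single cutoff, though it still doesn't give me one summary number.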
I just thought: in the vignette (http://www.bioconductor.org/packages/release/bioc/vignettes/matchBox/inst/doc/matchBox.pdf), is the correlation between dataSetA.t and dataSetB.t higher than that between dataSetA.t and dataSetC?