3.8 years ago
Jay-Run
Why exactly is TMM normalization (implemented in edgeR) more robust than RLE normalization (implemented in DESeq2)?
Yes, @ATpoint was asking for a reference to support your supposition that "TMM normalization is more robust than RLE". You can read about the differences in "In Papyro Comparison of TMM (edgeR), RLE (DESeq2), and MRN Normalization Methods" and "Why are normalization methods not interchangeable?". Both contain descriptions of situations that might favor one or the other.
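To make the comparison concrete, here is a minimal Python sketch of the two ideas. This is an illustration only, not the actual edgeR/DESeq2 code: both packages add refinements (precision weighting, reference-sample selection, handling of degenerate cases), and the function names and trim fractions below are my own simplifications.

```python
import math
from statistics import median

def rle_size_factors(counts):
    """RLE / median-of-ratios (the idea behind DESeq2's size factors):
    build a pseudo-reference from per-gene geometric means, then take
    each sample's median ratio to that reference."""
    n_samples = len(counts[0])
    geo_means = [
        math.exp(sum(math.log(c) for c in gene) / n_samples) if all(gene) else 0.0
        for gene in counts  # genes with any zero count are excluded
    ]
    return [
        median(counts[i][j] / g for i, g in enumerate(geo_means) if g > 0)
        for j in range(n_samples)
    ]

def tmm_factor(obs, ref, trim_m=0.3, trim_a=0.05):
    """Simplified, unweighted TMM (the idea behind edgeR's calcNormFactors):
    trimmed mean of per-gene log2 fold changes (M-values) between a sample
    and a reference, after discarding the most extreme M and A values."""
    n_obs, n_ref = sum(obs), sum(ref)
    pairs = [(o, r) for o, r in zip(obs, ref) if o > 0 and r > 0]
    m = [math.log2((o / n_obs) / (r / n_ref)) for o, r in pairs]
    a = [0.5 * math.log2((o / n_obs) * (r / n_ref)) for o, r in pairs]

    def untrimmed(values, frac):
        # indices of values that survive symmetric trimming by rank
        order = sorted(range(len(values)), key=lambda i: values[i])
        cut = int(len(values) * frac)
        return set(order[cut:len(values) - cut])

    keep = untrimmed(m, trim_m) & untrimmed(a, trim_a)
    return 2.0 ** (sum(m[i] for i in keep) / len(keep))
```

When one sample is an exact scaled copy of the reference, both methods recover the pure library-size difference. Their behavior diverges when a subset of genes changes strongly in one direction (composition bias): TMM's trimming of extreme M-values is designed to resist exactly that, while RLE relies on the median ratio being dominated by non-changing genes.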
Care to elaborate please? Is this a "citation request" to edgeR and DESeq? :)
Who said it was?
Most real world explorations of normalization methods find that RLE and TMM perform more or less the same.
Thanks! Maybe I should have asked: "Is TMM more robust than RLE?" If yes, in what situations?