A colleague saw this method presented at a recent conference and mentioned it to me:
As a beginner in RNA-Seq, currently working with TopHat/Cufflinks, I am wondering if anyone else has heard of this or has any comments?
Whoever greenlighted that poster should be given the Spanish Inquisition.
Sounds like they are basically claiming you need 500M reads to get an unbiased analysis for lowly expressed genes; that shouldn't be a problem in 2011.
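To put rough numbers on that claim: under the simple assumption that reads hit a gene in proportion to its share of the library, expected counts scale linearly with depth:

    # Back-of-the-envelope: expected count for a gene capturing a given
    # fraction of all reads, at a given sequencing depth.
    expected_count <- function(frac, depth) depth * frac
    expected_count(1e-6, 45e6)   # 45 reads for a gene at ~1 read per million
    expected_count(1e-6, 500e6)  # 500 reads at the depth they seem to want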
If they have a bone to pick about normalization, then they should just discuss that, devise another model, and also discuss the underlying biology (i.e., whether the most highly expressed genes in plants and animals are shorter for efficiency, whether structural proteins throw the numbers off, etc.).
The good thing is that since this is written in R, you can just compare it to the results from edgeR or DESeq without much effort.
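For what it's worth, that cross-check might look something like the sketch below, assuming a shared gene-by-sample count matrix `counts` and a simple two-group design (I'm comparing edgeR against DESeq here; the new package's significant-gene list could be intersected the same way):

    # Sketch: run edgeR and DESeq on the same count matrix and compare
    # the resulting significant gene lists.
    library(edgeR)
    library(DESeq)

    group <- factor(c("A", "A", "B", "B"))  # assumed two-group design

    # edgeR, classic exact test
    d <- DGEList(counts = counts, group = group)
    d <- calcNormFactors(d)
    d <- estimateCommonDisp(d)
    d <- estimateTagwiseDisp(d)
    tt <- topTags(exactTest(d), n = Inf)$table
    edger_hits <- rownames(tt)[tt$FDR < 0.05]

    # DESeq (the era-appropriate original package)
    cds <- newCountDataSet(counts, group)
    cds <- estimateSizeFactors(cds)
    cds <- estimateDispersions(cds)
    res <- nbinomTest(cds, "A", "B")
    deseq_hits <- res$id[!is.na(res$padj) & res$padj < 0.05]

    length(intersect(edger_hits, deseq_hits))  # agreement between methods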
Not only was it a poster, it was also a talk (given twice). As an experimentalist, it was really difficult for me to hear the speaker tell the (rather large) audience that no replicates were required. They made a point of saying that the word NO is built into the title, for No replicates and No assumptions (i.e., it's a non-parametric method). There are many ways in which a data set can get funky during an experiment, and I don't know how one can assess variation, in a way that I trust, without observing something more than once.

If replicates are present, they compare them to generate a background distribution (noise). If they are not present, they simulate replicates using a multinomial distribution. (So in the first case replicates are required after all, and in the second they assume that their simulation will accurately account for experimental noise. So much for no replicates and no assumptions.)
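To make the multinomial point concrete, here is a minimal sketch (my illustration, not their code) of what simulating a pseudo-replicate from a single library looks like in R:

    # Illustration only: fabricate a "replicate" from one observed library
    # by resampling reads with probabilities equal to the observed per-gene
    # proportions (the multinomial assumption described in the talk).
    set.seed(1)
    observed <- c(geneA = 5000, geneB = 120, geneC = 3, geneD = 0)
    pseudo_rep <- rmultinom(1, size = sum(observed),
                            prob = observed / sum(observed))
    cbind(observed, simulated = pseudo_rep[, 1])
    # This reproduces only sampling (shot) noise; any biological
    # variability between samples is, by construction, absent.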
Anyway, they showed some promising results comparing Brain and Universal Human Reference (by the way, the UHR is a completely wacky, artificially well-represented sample with regard to gene expression; it's like describing how easy it is to fish by fishing in a stock pond) from data sets with about 45 million reads, putting their method up against others (edgeR, DESeq, etc.) and using RT-PCR data from previous experiments to assess the results (not exactly the tightest experimental setup).
I would be interested to know how it works in the real world, with real samples (the kind that people actually compare to each other), on multiple organisms, and with data sets of different depths, before boldly going forward and eschewing any requirement for replication. But it's another tool, and sometimes, like it or not, we don't have replicates, so it might be useful (as described, however, it sounds too much like magic, which, unfortunately, some people like).
Especially ironic in light of this, I guess: http://rna-seqblog.com/publications/sequencing-technology-does-not-eliminate-biological-variability/

When I first saw that headline, "Sequencing technology does not eliminate biological variability," I did a double take (and thought it was from The Onion), since it states the completely obvious. "Despite the invention of the airplane, scientists conclude that gravity still exists."