Hi everyone,
I have been using DEXSeq to perform differential exon usage analysis across two conditions (3 samples each), but some of the results seem confusing to me. (I am including DESeq2 in the title because, from my understanding, DEXSeq and DESeq2 use very similar methods, except that DESeq2 does differential gene expression analysis while DEXSeq focuses on alternative splicing.) For example, the DEXSeq result for one of the exon bins looks like:
groupID featureID exonBaseMean dispersion stat pvalue padj mut wt log2fold_wt_mut
ENSG00000000003 E004 172.7 0.009791388 14.07 0.0001760253 0.0007018868 13.32384 13.31194 -0.004006433
You can see that it is highly significant, but the estimated exon usage/splicing coefficients are basically the same for the mut and wt conditions (13.32384 vs. 13.31194), so log2fold_wt_mut is essentially zero. I thought the likelihood ratio test comparing the full and reduced models (i.e., with and without the condition:exon interaction term) is significant only when condition actually has an effect on exon usage. In this case, though, why isn't the variance due to condition absorbed into the mean exon usage level across all samples? Should I filter on fold change as well?
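For reference, this is the model comparison I mean — a minimal sketch of the standard DEXSeq workflow, assuming dxd is an already-constructed DEXSeqDataSet with a condition column (mut vs. wt) in its sample table:

```r
library(DEXSeq)

## Assumes dxd is a DEXSeqDataSet (e.g. from DEXSeqDataSetFromHTSeq)
## with a "condition" column in colData.
dxd <- estimateSizeFactors(dxd)
dxd <- estimateDispersions(dxd)

## Likelihood ratio test: full model including the condition:exon
## interaction vs. a reduced model without it.
dxd <- testForDEU(dxd,
                  fullModel    = ~ sample + exon + condition:exon,
                  reducedModel = ~ sample + exon)

## Per-condition exon usage coefficients (the "mut"/"wt" columns
## and log2fold_wt_mut in the results table).
dxd <- estimateExonFoldChanges(dxd, fitExpToVar = "condition")
dxr <- DEXSeqResults(dxd)
```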
Thank you very much!
I guess testing whether the full model fits better is, in general, different from testing whether the fold change differs from 1, but under what conditions would the fold change be close to 1 while the full model still fits significantly better? Also, how can I visualize the model fit in DEXSeq?
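In case it helps frame the visualization question, here is a sketch (assuming dxr is the DEXSeqResults object, and using an arbitrary |log2 fold change| > 1 cutoff) of filtering on both significance and effect size, then plotting the fitted values for the gene above with plotDEXSeq:

```r
library(DEXSeq)

## Assumes dxr is a DEXSeqResults object from DEXSeqResults(dxd).
## Keep bins that are both significant and show a non-trivial
## effect size; the cutoff of 1 is an illustrative choice.
sig <- dxr[which(dxr$padj < 0.05 & abs(dxr$log2fold_wt_mut) > 1), ]

## Fitted expression values per condition for one gene:
plotDEXSeq(dxr, "ENSG00000000003",
           fitExpToVar = "condition",
           legend = TRUE, cex.axis = 1.2, cex = 1.3, lwd = 2)

## The same gene on the exon usage (splicing) scale, with the
## normalized per-sample counts overlaid on the fit:
plotDEXSeq(dxr, "ENSG00000000003",
           expression = FALSE, splicing = TRUE,
           norCounts = TRUE, legend = TRUE)
```

Plotting expression and splicing side by side is useful here, since a bin can track overall gene expression (expression panel changes) without any change in relative usage (splicing panel flat).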