WGCNA adjust p value
3.4 years ago

Hi everyone,

I have a very general question about WGCNA:

One critical step in the WGCNA pipeline is to calculate the correlation between module eigengenes and traits. Besides the correlation, we also calculate a p-value for each correlation using corPvalueStudent(). That looks like a typical multiple hypothesis testing scenario, since we are testing many traits against many eigengenes. So the question is: should we adjust the p-values?
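For context, this is the step I mean. A minimal sketch along the lines of the official tutorial, assuming MEs (samples x module eigengenes) and datTraits (samples x traits) have already been computed:

    library(WGCNA)

    # Number of samples used in the correlations
    nSamples <- nrow(datTraits)

    # Pearson correlation between each module eigengene and each trait
    moduleTraitCor <- cor(MEs, datTraits, use = "p")

    # Student asymptotic p-value for each correlation -- unadjusted
    moduleTraitPvalue <- corPvalueStudent(moduleTraitCor, nSamples)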

I haven't seen many people discuss this, even though WGCNA is very popular software. Even the official tutorial uses the unadjusted p-values. Isn't this problematic?

Best,
Taotao

statistics WGCNA
3.4 years ago
adam.faranda ▴ 110

I don't think so. How many traits are you planning to test?

If your alpha is 0.05, the family-wise error rate for 10 traits is:

    1 - (1 - alpha)^num_traits = 1 - (1 - 0.05)^10 ≈ 0.401

or about a 40% probability that, for at least one of your significant traits, the correlation with the eigengene is due to chance rather than to your experimental manipulation. The simplest (and strictest) correction is Bonferroni's method, which for 10 traits means using an alpha of 0.005 for each test, giving an alpha of ~0.05 for the whole family of tests.
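That arithmetic as a quick sketch in R (num_traits = 10 is just the example above):

    alpha <- 0.05
    num_traits <- 10

    # Family-wise error rate across 10 independent tests at alpha = 0.05
    fwer <- 1 - (1 - alpha)^num_traits   # ~0.401

    # Bonferroni: test each correlation at alpha / num_traits
    bonferroni_alpha <- alpha / num_traits   # 0.005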

I suppose you could correct the p-values, but to be honest, if you have enough samples to be doing WGCNA and the correlation is strong enough to be worth looking at, it will likely have a very low p-value anyway.

What you want to avoid is a situation where you are using a nominally "significant" p-value to justify a weak correlation that likely has no biological relevance to the trait.

My recommendation is that, instead of emphasizing p-values, you look at the modules that correlate most strongly with the traits in your study and try to develop a deeper understanding of how those genes might interact with one another, or otherwise influence the physiology of your system.

Once you've identified module(s) of interest, you can test which genes in the module are differentially expressed with respect to the given trait using edgeR/DESeq2/limma etc., even if your trait is a continuous variable (see the sketch below). I think testing for significant genes associated with an interesting module will be more robust than the correlation p-value anyway.
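A minimal sketch of the limma route, assuming hypothetical objects exprMat (a genes x samples log-expression matrix), trait (a continuous per-sample covariate), and moduleGenes (IDs of the genes assigned to the module of interest):

    library(limma)

    # Model expression as a linear function of the continuous trait
    design <- model.matrix(~ trait)

    # Fit only the genes belonging to the module of interest
    fit <- lmFit(exprMat[moduleGenes, ], design)
    fit <- eBayes(fit)

    # Genes whose expression is associated with the trait, BH-adjusted
    topTable(fit, coef = "trait", adjust.method = "BH", number = Inf)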


Thanks Adam. It's common to get 5-20 eigengenes from WGCNA, and the metadata may include 5-20 columns (especially in observational clinical studies). Given the number of correlations we are calculating, I think it's always better to correct the p-values.

Also, I don't think the multiple hypothesis testing problem has anything to do with sample size. It's analogous to DEG analysis: a "10 treatment vs 10 control" study is always better than a "3 treatment vs 3 control" study, but increasing the sample size won't solve the multiple testing problem. In other words, a small sample size doesn't increase the type I error rate.


I found a Stack Exchange thread that discusses this type of problem in general. The answer there seems clear: we should use FDR to correct correlation p-values.


I think that makes sense; thanks for the clarification on type I error. In the mouse liver WGCNA paper, on which Steve Horvath is an author, it looks like they used a random sub-sampling strategy to validate module discovery, while in the Stack Exchange post, user whuber recommends shuffling values to destroy existing correlations (a rough sketch of that idea is below). I still have more to learn about this type of analysis, and would be interested to know which method of FDR correction you think is most appropriate.
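For reference, here is how the shuffling idea might look, assuming the MEs and datTraits objects from above: permute the sample rows of the traits to break any real eigengene-trait association, then use the permuted correlations as an empirical null.

    set.seed(1)

    # Observed module-trait correlations
    obs <- cor(MEs, datTraits, use = "p")

    # Null distribution: shuffle trait rows to destroy real associations
    nPerm <- 1000
    nullMax <- replicate(nPerm, {
      shuffled <- datTraits[sample(nrow(datTraits)), , drop = FALSE]
      max(abs(cor(MEs, shuffled, use = "p")))
    })

    # Family-wise empirical p-value for the strongest observed correlation
    mean(nullMax >= max(abs(obs)))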


Thanks for pointing out Steve Horvath's paper; I will definitely read it.

I do not have a clear answer on which specific FDR correction to use, but I am leaning towards Benjamini-Hochberg (a.k.a. FDR). Permutation and sub-sampling are too much of a hassle, while Bonferroni correction is too strict. In practice that would just be p.adjust() over the module-trait p-value matrix; a sketch is below.
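A minimal sketch, assuming the moduleTraitPvalue matrix from corPvalueStudent() above and treating all module-trait pairs as one family:

    # BH/FDR-adjust all module-trait p-values together,
    # refilling the result into a matrix of the same shape
    adjP <- matrix(
      p.adjust(as.vector(moduleTraitPvalue), method = "BH"),
      nrow = nrow(moduleTraitPvalue),
      dimnames = dimnames(moduleTraitPvalue)
    )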


Hi Jason. I have been having the same discussion for a while; may I ask how you solved the adjustment problem in the end?


You're correct that Bonferroni is too strict for WGCNA module-trait correlations, but the Benjamini-Hochberg correction is too liberal and often will not effectively control the FDR in this case. B-H assumes the p-values come from independent (or positively dependent) tests, but module eigengenes are often highly correlated with each other, so their trait-correlation p-values are correlated as well. You either need the Benjamini-Yekutieli correction (still pretty conservative) or a novel correction that accounts for the correlation matrix of the module eigengenes. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4894362/ and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2276357/
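Benjamini-Yekutieli is the same p.adjust() call with a different method; a quick sketch comparing how many module-trait pairs survive each correction (again assuming the moduleTraitPvalue matrix from above):

    p <- as.vector(moduleTraitPvalue)

    # BY is valid under arbitrary dependence between tests,
    # so it is more conservative than BH
    sum(p.adjust(p, method = "BH") < 0.05)
    sum(p.adjust(p, method = "BY") < 0.05)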

