p.adjust: adjusted p-values are all 1
3.8 years ago
camillab. ▴ 160

Hi!

I want to calculate the adjusted p-value for all the genes in my dataset. I have a total of 3294 genes with p-value <= 0.05, but when I try to calculate the adjusted p-values, all I get is 1 for every gene. What am I doing wrong?

Here is my code (shown only for those 3294 genes, but it gives the same result if I run it on the unfiltered dataset):

# load dataset (not shown)
library(dplyr)

# filter to p-value <= 0.05
test_pvalue <- filter(test, p_value <= 0.05)  # 3294 genes with p-value <= 0.05

# extract the p-value column and apply the Bonferroni correction
pvalue <- test_pvalue$p_value
f1 <- p.adjust(pvalue, method = "bonferroni")

and this is the result I got (only the first 30 values printed):

 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[23] 1 1 1 1 1 1 1 1

I followed this guide: padjusted

What am I doing wrong? Is there something I am missing?

thank you for the help!

Camilla

r pvalue FDR RNA-Seq

Is there something I am missing?

Yes. You shouldn't assume that because you have p-values below 0.05, the adjusted ones will still be below 0.05. It is perfectly possible for adjusted p-values to all be 1, in particular if you choose the most conservative correction method. Also, before you adjust your p-values, check the distribution of the original p-values. See How to interpret a p-value histogram.
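A quick way to do that check, as a sketch: assuming your full, unfiltered table is test with a p_value column (names taken from the code in the question), something like:

# Plot the raw, UNFILTERED p-values: a spike near 0 suggests real signal,
# while a roughly flat histogram suggests mostly null results.
hist(test$p_value, breaks = 50,
     main = "Raw p-value distribution", xlab = "p-value")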


Thank you very much! I will!

3.8 years ago
ATpoint 85k

What you see is expected. The Bonferroni correction works by multiplying the p-values by the number of tests, here 3294 (any product above 1 is capped at 1).

# Example (four values, so everything is multiplied by 4, and 1 stays 1 of course):
p.adjust(c(0.05, 0.0005, 0.1, 1), "bonferroni")
# [1] 0.200 0.002 0.400 1.000

That means that in order to get an adjusted p-value of 0.05 or smaller, your uncorrected p-values must be at most 0.05/3294 = 0.00001517911. All larger p-values become non-significant at a 0.05 cutoff. It may well be that your experiment does not have the statistical power to reach that kind of significance. Bonferroni is quite a stringent correction method. You might want to look into the Benjamini-Hochberg method, but if you lack statistical background knowledge it is generally best either to work with a statistician who can advise you, or to use software packages that handle the statistics internally.
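For illustration, a minimal sketch contrasting Bonferroni with Benjamini-Hochberg on simulated p-values (the 3294 tests match your dataset; the mix of 300 true effects and 2994 nulls is an arbitrary assumption for the example):

set.seed(1)
# Assumed mix: 300 tests with real signal (small p-values) plus 2994 null tests
p <- c(rbeta(300, 1, 1000), runif(2994))

bonf <- p.adjust(p, method = "bonferroni")  # multiplies by 3294, caps at 1
bh   <- p.adjust(p, method = "BH")          # controls the false discovery rate

sum(bonf <= 0.05)  # only raw p-values below 0.05/3294 survive
sum(bh <= 0.05)    # typically far more survive at the same cutoff

In a mix like this, Bonferroni keeps only a handful of the very smallest p-values, while BH recovers most of the simulated true effects at the same 0.05 cutoff.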

What kind of experiment is it that you are analyzing?


Thank you. I have analysed bulk RNA-seq data from human samples, identifying 500 interesting genes to follow up, but considering the intrinsic variability of human samples (quality, age and status of the donor, etc.) I want to try to set a more stringent threshold for significance (for example using corrected p-values, or q-values) and see if, and how many of, the genes I identify overlap with the ones I originally found.


Hmm, it is troublesome that you are working with the non-adjusted p-values at all. Usually, a standard RNA-seq differential analysis goes like this:

First, you obtain your count matrix, e.g. with featureCounts, salmon/tximport, or any other method. Then you get DEGs by feeding it into established differential analysis software, most commonly but not exclusively edgeR, DESeq2 or limma, with an FDR threshold of e.g. 0.05. Based on this you perform whatever downstream analysis you like. There is no argument here for using the uncorrected p-values, and I hope that 1) you did not obtain DE genes with a custom approach and 2) you used the FDR right away; otherwise your results will most likely be riddled with false calls. Actually, an FDR of 0.05 is not really stringent at all. Stringency could be added by testing against certain fold-change thresholds, but let's first discuss how you did your analysis so far, if you want.
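For reference, a minimal sketch of that standard route with DESeq2 (counts and coldata are placeholder objects, not from this thread: a raw integer count matrix of genes x samples, and a sample table with a condition column):

library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData = coldata,
                              design = ~ condition)
dds <- DESeq(dds)                  # normalization, dispersion estimation, testing
res <- results(dds, alpha = 0.05)  # the padj column is BH-adjusted (FDR)
sig <- subset(res, padj <= 0.05)   # significant genes at FDR 0.05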


I know the approach you described is the gold standard, but at the moment I do not have access to the raw data, only to FPKM values (normalized reads). Obviously you/bioinformaticians will say that my results are not valid if I do not apply the approach you described, and I know I should get the raw data (I really do), but at the moment I do not have access to the FASTQ files, so I have to work with the data I have. Apologies to disappoint you.

That's the reason I am trying to validate/confirm the results I obtained initially with different approaches. We are not going to validate all 500 genes but a small, significant subset, so my hypothesis is that if X genes come out significantly different using different tests/approaches, those genes have potentially really changed in my dataset (treated vs. untreated).


Then at least use the limma-trend pipeline on the normalized data on the log2 scale. How many samples do you have per group? How did you do the testing, something like pairwise t-tests? It is OK to use workarounds when you do not have the raw data, but be careful not to be overly liberal with your approaches. I personally think that creating loads of false positives does not help at all; it is rather chasing ghosts. Something like this, maybe: https://support.bioconductor.org/p/125144/#125156 (and do not get distracted by Gordon Smyth calling the first option "bad" there; it is still probably the best possible choice given that you only have FPKMs available. Be sure to put them on the log scale, though. It is not as good as raw data at all, but I'd give the pipeline in that link a try).
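Roughly like this, as a sketch (fpkm is a placeholder for your FPKM matrix of genes x 6 samples, and the group labels are assumptions for the example; the +1 offset just avoids log2(0)):

library(limma)

logfpkm <- log2(fpkm + 1)  # put the FPKMs on the log2 scale
group <- factor(rep(c("untreated", "treated"), each = 3),
                levels = c("untreated", "treated"))
design <- model.matrix(~ group)

fit <- lmFit(logfpkm, design)
fit <- eBayes(fit, trend = TRUE)  # limma-trend: models the mean-variance trend
tt <- topTable(fit, coef = 2, number = Inf)  # adj.P.Val is BH-adjusted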


Thank you! I will definitely try it! (I have 6 samples in total, 3 per condition.) And hopefully, when I am able to get the FASTQ files, I will run the gold-standard pipeline with DESeq2 or limma.


That sounds like a good plan :)
