Demonstrating lack of difference between two peaksets
3
8 months ago
Aspire ▴ 370

Hello, I have ChIP-Seq data from two different cell types, with two ChIP samples and an Input sample for each. There is a batch effect (each specific sample from cell-type A was processed together with a specific sample from cell-type B).

It seems that there is not much difference between the peak sets across the cell-types.

  1. Would demonstrating that DiffBind finds no significantly differentially bound sites be sufficient and acceptable for publication?

  2. If not, what statistical procedures would test for lack of difference?

DiffBind ChIP-seq
2
8 months ago

Generally, I'd say that DiffBind showing no differentially bound sites, plus a set of deepTools heatmaps for each group in the supplement, would probably be sufficient. For each cell type's peaks, plot the signal in both groups; it should be pretty apparent that they're more or less identical if things are properly normalized.
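deepTools itself is command-line (computeMatrix followed by plotHeatmap over the bigWigs). If you'd rather do a quick sanity check on the same idea in R, you can compare mean signal per consensus peak between the two cell types. This is only a minimal sketch under made-up assumptions: a hypothetical peaks-by-samples matrix `counts` of normalized counts, with columns 1-2 from cell type A and columns 3-4 from cell type B.

```r
# Hypothetical 'counts': peaks x samples matrix of normalized counts
# (columns 1-2 = cell type A ChIPs, columns 3-4 = cell type B ChIPs).
a_mean <- rowMeans(log2(counts[, 1:2] + 1))  # mean log2 signal per peak, cell type A
b_mean <- rowMeans(log2(counts[, 3:4] + 1))  # mean log2 signal per peak, cell type B

smoothScatter(a_mean, b_mean,
              xlab = "Cell type A (mean log2 signal)",
              ylab = "Cell type B (mean log2 signal)")
abline(0, 1, col = "red")  # identical binding falls on the diagonal
```

If the two cell types really do bind the same regions, the cloud should hug the diagonal with no obvious off-diagonal arms.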

2
8 months ago
ATpoint 85k

If not, what statistical procedures would test for lack of difference?

DESeq2 has a test against difference, the altHypothesis = "lessAbs" option in results(). I do not know whether DiffBind has an interface for this; if not, just pull out the matrix of raw counts and do the analysis yourself. I suspect, though, that 2 vs 2 is hardly powerful enough for this. I second jared.andrews07's suggestion to just show a heatmap, and if the big picture looks reasonably similar, call it a day.
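For reference, a minimal sketch of the lessAbs route, assuming you have already pulled a hypothetical peaks-by-samples matrix of raw counts (`counts`) out of the DiffBind object, and assuming a log2 fold-change threshold of 1 as the "no meaningful difference" margin:

```r
library(DESeq2)

# Hypothetical sample sheet matching the design described above:
# two cell types, with samples paired across cell types by batch.
coldata <- data.frame(
  cellType  = factor(c("A", "A", "B", "B")),
  batch     = factor(c("1", "2", "1", "2")),
  row.names = colnames(counts)
)

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ batch + cellType)
dds <- DESeq(dds)

# altHypothesis = "lessAbs" flips the usual test: H0 is |LFC| >= 1,
# and small p-values are evidence that |LFC| < 1, i.e. "no big change".
res <- results(dds, contrast = c("cellType", "B", "A"),
               lfcThreshold = 1, altHypothesis = "lessAbs")
summary(res)
```

Note that lessAbs requires you to pick the threshold yourself, and with n = 2 per group few peaks may reach significance even if they really are unchanged.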

Strictly speaking, absence of evidence for a difference is not the same as evidence for no difference, since the former can simply be due to noise and low power. Maybe show the heatmap here, e.g. per-peak Z-scores for each of the four ChIP samples (ignore the IgG/inputs), and then we can see how it looks.
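For instance, a minimal sketch of such a heatmap, reusing the hypothetical `dds` object from the sketch above (variance-stabilised counts, per-peak Z-scores via row scaling):

```r
library(DESeq2)
library(pheatmap)

# Variance-stabilised counts for the four ChIP samples (inputs excluded).
vmat <- assay(vst(dds, blind = TRUE))
vmat <- vmat[apply(vmat, 1, sd) > 0, ]   # drop invariant peaks

pheatmap(vmat,
         scale = "row",                  # per-peak Z-scores across samples
         show_rownames = FALSE,
         clustering_distance_cols = "correlation")
```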

0
8 months ago
LauferVA 4.5k

I'd like to frame things a bit more generally.

Overall, what you're describing falls under null hypothesis significance testing. These tests have been criticized (mostly in the psychology/psychiatry literature) for being prone to misinterpretation. However, if used carefully, they enable testing a variety of phenomena without necessarily even changing the statistical test of choice - i.e. you could still use a Mann-Whitney, an ANOVA, or a good old t-test. Rather, the difference is in how the hypotheses are framed:

Null Hypothesis (H0): mu1 - mu2 = 0.

Alternative Hypothesis (H1): mu1 - mu2 != 0 (two-tailed)

or if one-tailed:

H1: mu1 - mu2 < 0 or H1: mu1 - mu2 > 0

Depending on the exact nature of the comparison desired, people go with different approaches. One is called the two one-sided tests (TOST) procedure. With TOST, you first specify an equivalence margin - a range of differences small enough to be considered practically negligible - and then conduct two one-sided tests to evaluate whether the true difference between groups is both greater than the lower bound and less than the upper bound of that margin. If both tests are significant, you can conclude that the groups are equivalent within the specified margin.
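A minimal sketch of TOST using two one-sided t-tests in R; the vectors `a` and `b` and the margin `eps` are hypothetical placeholders (e.g. normalised signal for one peak, or some per-sample summary score):

```r
# Two one-sided tests (TOST) for equivalence of means within +/- eps.
# 'a', 'b' and 'eps' are hypothetical placeholders.
tost <- function(a, b, eps) {
  p_lower <- t.test(a, b, mu = -eps, alternative = "greater")$p.value  # diff > -eps?
  p_upper <- t.test(a, b, mu =  eps, alternative = "less")$p.value     # diff < +eps?
  max(p_lower, p_upper)  # TOST p-value: small => equivalent within the margin
}

tost(a = c(5.1, 4.9), b = c(5.0, 5.2), eps = 1)  # toy example
```

With only two replicates per group, the power to declare equivalence will be very low unless the margin is generous.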

Another route is confidence interval analysis. If the 95% CI for the difference between the two groups falls entirely within a practically insignificant range of values, an argument can be made that the groups are not meaningfully different (which is not the same as saying the two are similar or the same).
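In R that check is just the confidence interval of the difference from t.test, compared against the same hypothetical margin `eps` as above:

```r
# If the 95% CI of the difference in means sits entirely inside (-eps, eps),
# the observed difference is smaller than anything you'd call meaningful.
ci <- t.test(a, b, conf.level = 0.95)$conf.int
equivalent <- ci[1] > -eps && ci[2] < eps
```

(Checking the 90% CI against the margin corresponds exactly to TOST at alpha = 0.05; the 95% CI is slightly more conservative.)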

Finally, where matters of belief are concerned, a Bayesian framework is frequently helpful. Here, evidence is expressed on a single continuum, so you can move fluidly from believing a difference is plausible to rejecting that belief, and quantify support for the null rather than merely failing to reject it. This can help with a variety of non-standard hypothesis tests.
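One concrete option - again only a sketch on the same hypothetical `a`/`b` vectors - is a Bayes factor from the BayesFactor package:

```r
library(BayesFactor)

# Bayes factor comparing "means differ" against "means are equal".
bf <- ttestBF(x = a, y = b)
extractBF(bf)$bf   # BF10: values well below 1 favour "no difference"
```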

These tests, though criticized, were originally proposed for (and are still used for) controlling the type I error rate, which I think fairly closely resembles your use case. For looking specifically at ChIP-seq data, I might therefore recommend a package that has been developed and benchmarked specifically for this purpose - consider, for example, RECAP.

