You can't evaluate assemblies this way, because errors in reads can cause mis-assemblies whose effects aren't captured by simple probability calculations.
The probability that most reads end up with the same error at the exact same position by sheer chance is so small that it is not worth accounting for.
This is not to say that this never happens; it is just that when it does, it won't be due to random chance but to a systematic problem, in which case probabilistic estimation does not help.
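To put a rough number on that, here is a back-of-the-envelope sketch under assumptions of my own (independent errors, a flat 1% per-base error rate, 100X coverage, and an error being equally likely to produce any of the three wrong bases). It asks how likely it is that random errors alone would make at least half of the reads agree on the same wrong base at one position:

```python
from math import comb

coverage = 100        # reads covering one position
p_error = 0.01        # flat per-base error rate (assumed)
p_same = p_error / 3  # chance an error yields one specific wrong base

# Binomial tail: probability that at least half of the reads show the
# same wrong base at this position purely by chance.
p_majority = sum(
    comb(coverage, k) * p_same**k * (1 - p_same)**(coverage - k)
    for k in range(coverage // 2, coverage + 1)
)
print(f"P(>=50 of 100 reads share the same random error) ~ {p_majority:.2e}")
```

Under those assumptions the tail probability comes out on the order of 10^-95, which is the sense in which random coincidence isn't worth worrying about; systematic artifacts are a different story.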
That's right: there is only a weak relationship between minor sequencing errors and assembly errors -- even with hundreds of X of coverage, a "simple" repeat can significantly degrade the quality of the resulting assembly. Thus, I would also guess that minor sequencing error (100X coverage and a 1% error rate) is negligible.
Wouldn't it be 1%^100? Of course, the error rate actually changes as a function of base position along the read, and then there are the Phred scores to think of, so I suspect the proper equation would be quite messy.
Edit: Err, 1%^100 would be the naive probability that all of the reads covering a base contain an error. Of course, you don't actually need all of them to contain an error, and even if they did, they wouldn't all contain the same error. Mea culpa!
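For what it's worth, here is a small sketch of both points: the naive all-reads-wrong figure, and the way Phred scores make the error rate position-dependent rather than a flat 1% (the quality values below are made up purely for illustration):

```python
# Naive figure: every one of the 100 reads is wrong at the same base.
naive_all_wrong = 0.01 ** 100
print(f"Naive P(all 100 reads wrong at one base) = {naive_all_wrong:.1e}")

# Phred quality Q maps to an error probability of 10^(-Q/10),
# so Q20 corresponds to 1% and Q30 to 0.1%.
def phred_to_prob(q: int) -> float:
    return 10 ** (-q / 10)

# Made-up qualities that decay toward the 3' end of a 100 bp read.
qualities = [35 - (i // 10) for i in range(100)]
per_base_error = [phred_to_prob(q) for q in qualities]
print(f"Error rate at read start: {per_base_error[0]:.4f}, "
      f"at read end: {per_base_error[-1]:.4f}")
```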