It's widely accepted that, pound for pound, using multiple short-read libraries with different insert sizes is more effective than a single-insert-size library for generating a _de novo_ assembly from short whole-genome shotgun (WGS) reads. Is there a coherent, intuitive explanation for why that is so? Does the effectiveness vary between de Bruijn graph (Eulerian path) methods and overlap-layout-consensus (Hamiltonian path) methods? Is there any published research that discusses this with empirical results (e.g., simulations under varying parameters)?
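To make concrete the kind of simulation I have in mind: my working intuition is that a read pair can only resolve a repeat shorter than its insert size, so a mix of insert sizes should cover more of a genome's repeat spectrum than any single size. Below is a minimal toy sketch of that intuition in Python; the repeat-length distribution and all parameters are made up for illustration, and it ignores real effects like coverage splitting across libraries and chimeric pairs.

```python
import random

# Toy model: a read pair "resolves" a repeat if its insert size exceeds the
# repeat length, so both reads can anchor in unique flanking sequence.
# The distribution and all parameters below are hypothetical, for illustration.

def fraction_resolved(repeat_lengths, insert_sizes):
    """Fraction of repeats spanned by at least one of the insert sizes."""
    resolved = sum(1 for r in repeat_lengths
                   if any(ins > r for ins in insert_sizes))
    return resolved / len(repeat_lengths)

random.seed(42)
# Hypothetical repeat-length distribution (bp) with a long tail.
repeats = [int(random.expovariate(1 / 800)) for _ in range(10_000)]

single = [500]              # one paired-end library, 500 bp inserts
mixed = [250, 500, 3000]    # three libraries with different insert sizes

print(f"single 500 bp library : {fraction_resolved(repeats, single):.1%}")
print(f"mixed 250/500/3000 bp : {fraction_resolved(repeats, mixed):.1%}")
```

On this toy model the mixed libraries resolve a much larger fraction of repeats simply because the largest insert size bounds what pairing information can span; whether that intuition holds up in practice (and how it interacts with the de Bruijn vs. OLC distinction) is exactly what I'm asking about.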
Can we use just one library for genome assembly?
Hi buttonwood, your post does not look like an answer to this question; it is another question entirely. You should try posting it as a separate question, as long as it does not appear to be a duplicate.