This question is more on the experimental side of sequencing. I'm a bioinformatician and just curious about this problem: how do we guarantee that a sample can be sequenced to a given depth? That is, how do we control the total number of reads produced when sequencing? Answers for both next-generation sequencing and third-generation sequencing are appreciated.
There is no exact control. The depth is basically the total number of reads obtained divided by the number of pooled samples. Since pooling and loading are never perfectly balanced, the per-sample yield is not exactly total/number_of_samples, but it is usually somewhat close. The total output of a run is determined by the flowcell; see e.g. the Illumina documentation for that, here for the NovaSeq.
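As a rough illustration of that arithmetic, here is a minimal back-of-the-envelope sketch (the flowcell output, sample count, read length, and genome size below are made-up placeholders, not vendor specs):

```python
# Naive per-sample read and coverage estimate, assuming an evenly balanced pool.
flowcell_reads = 10_000_000_000   # total read pairs the flowcell yields (check vendor specs)
n_samples = 96                    # samples pooled on the flowcell
read_length = 150                 # bp per read
genome_size = 3.1e9               # bp, e.g. roughly human

reads_per_sample = flowcell_reads / n_samples                  # ignores pooling imbalance
coverage = reads_per_sample * 2 * read_length / genome_size    # paired-end coverage

print(f"~{reads_per_sample:,.0f} read pairs/sample, ~{coverage:.1f}x coverage")
```

In practice the real numbers will deviate from this because of pooling and loading noise, which is exactly why there is no way to guarantee an exact read count.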
So if you want, say, 50 million reads for a sample and you have n samples, you need to load on a flowcell that is large enough, or sequence the same pool of DNA over several flowcells or lanes. Often you do a first run and then resequence to get more reads if the yield is not sufficient. There is no button to click to get exactly a certain number of reads. One can, however, be creative during pooling: if some samples need a lot of depth and others just shallow sequencing for QC, you can pool accordingly, with more of the former and less of the latter samples by molarity. All of the above applies to Illumina short-read sequencing.
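To make the unequal-pooling idea concrete, here is a hedged sketch of how you might allocate molar fractions in proportion to the reads each sample should receive (sample names, read targets, and the flowcell output are hypothetical):

```python
# Allocate pooling fractions proportional to each sample's target read count.
target_reads = {"deep_sample": 200e6, "shallow_qc_1": 5e6, "shallow_qc_2": 5e6}

total_target = sum(target_reads.values())
pool_fraction = {s: r / total_target for s, r in target_reads.items()}

# If the flowcell delivers, say, 250M reads in total, the expected per-sample
# yield follows the molar fraction (before any pooling/loading noise).
flowcell_output = 250e6
for sample, frac in pool_fraction.items():
    print(f"{sample}: pool at {frac:.1%} -> expect ~{frac * flowcell_output / 1e6:.0f}M reads")
```

The actual yields will still scatter around these expectations, so facilities typically build in some headroom, as the next comment notes.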
Just to add to that: a good sequencing facility will have a feel for how variable their quantification and pooling is, and therefore how much extra capacity to allow in order to be fairly confident of meeting your minimum coverage spec. Our sequencing providers rarely have to sequence more to deliver the minimum we require.