Hi everyone, I'm quite new to metagenomics and I have a question about the methodological correctness of a read-assembly strategy. I have downloaded several datasets from NCBI that were obtained from the same environment, and from those I am interested in studying the occurrence of, and reconstructing the genome of, a class of microorganisms that is present at low abundance in this environment.
For this reason I was wondering whether it is methodologically sound to perform a single co-assembly that pools all the downloaded reads (after QC, of course). I understand that single-sample assembly is better at recovering microdiversity (e.g., strain-level variation), which tends to be "averaged out" during co-assembly. However, I don't know whether the differences in sampling methods, DNA extraction protocols, and even sequencing technologies among the datasets might negatively affect the result of their co-assembly.
I'm also wondering how this could affect the downstream binning of the resulting contigs, since the coverage profiles will certainly be influenced by the different analytical strategies used to generate each dataset.
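To make the workflow I have in mind concrete, here is a minimal sketch (a dry run: commands are echoed rather than executed) of a pooled co-assembly followed by per-dataset read mapping, so the binner sees one coverage column per dataset. The tool choices (MEGAHIT, Bowtie2, samtools, MetaBAT2) and sample names are only illustrative assumptions, not a recommendation.

```shell
#!/usr/bin/env bash
# Dry-run sketch: co-assembly + differential-coverage binning.
# All tool invocations are echoed via run(); swap 'echo' for the real
# command to execute. Sample names below are hypothetical placeholders.
set -euo pipefail

run() { echo "+ $*"; }   # replace 'echo "+ $*"' with "$@" to actually run

SAMPLES=(sampleA sampleB sampleC)   # hypothetical QC'd datasets

# 1) Co-assembly: pool the QC'd reads from every dataset into one assembly.
R1=$(IFS=,; echo "${SAMPLES[*]/%/_R1.fq.gz}")
R2=$(IFS=,; echo "${SAMPLES[*]/%/_R2.fq.gz}")
run megahit -1 "$R1" -2 "$R2" -o coassembly

# 2) Map EACH dataset back to the co-assembly separately, so per-sample
#    coverage differences are preserved as a binning signal rather than
#    being collapsed into a single averaged depth.
run bowtie2-build coassembly/final.contigs.fa contigs_idx
for s in "${SAMPLES[@]}"; do
  run "bowtie2 -x contigs_idx -1 ${s}_R1.fq.gz -2 ${s}_R2.fq.gz | samtools sort -o ${s}.bam"
done

# 3) Binning with one depth column per dataset.
run jgi_summarize_bam_contig_depths --outputDepth depth.txt ./*.bam
run metabat2 -i coassembly/final.contigs.fa -a depth.txt -o bins/bin
```

My worry, in terms of this sketch, is whether the depth columns in `depth.txt` are still meaningful for binning when the BAMs come from datasets produced with different extraction methods and sequencing technologies.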
Thank you in advance!