I know that technical replicates are never acceptable replacements for biological replicates. How about the reverse? Is a large number of biological replicates ever an acceptable replacement for technical replicates?
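To make the question concrete, here is a minimal simulation sketch (Python, with made-up variance components that are purely illustrative) of the situation I have in mind: each biological replicate is measured exactly once, so technical noise is folded into the observed between-sample variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variance components (assumptions for illustration only):
mu = 10.0          # true mean expression of some gene
sigma_bio = 2.0    # between-subject (biological) SD
sigma_tech = 1.0   # within-subject measurement (technical) SD

n_subjects = 200   # many biological replicates, one measurement each
true_levels = rng.normal(mu, sigma_bio, n_subjects)
measurements = true_levels + rng.normal(0, sigma_tech, n_subjects)

# The group mean is still estimated reasonably well: technical noise just
# adds to the total variance, which shrinks with 1/n like any other noise.
print(measurements.mean(), measurements.std(ddof=1))

# But with a single measurement per subject, the observed variance is
# sigma_bio**2 + sigma_tech**2; the two components are not separately
# identifiable without repeated measurements of the same sample.
```

So, at least in this toy setup, piling on biological replicates does average out technical noise for estimating group-level quantities, but it never lets you separate biological from technical variability. Is that the right way to frame the trade-off?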
In my own experience in personalized medicine/biomedical big data, there is often a push to be able to say that we have "catalogued over X unique transcriptomes/proteomes/etc. from Population Y (e.g., cancer, autism, multiple sclerosis)". If there is money to sequence more samples, it is a fair bet that the priority will be new subjects rather than running existing ones in quadruplicate. The attitude of several generalist data scientists/machine learning specialists I have worked with is that, as the data set grows, the quality of any single sample matters less and less.
Is this an issue that contemporary biostatisticians have an opinion on? Given that I am seeing increasing numbers of people whose background is more in building deep ANNs to identify cats than in transcriptomics cavalierly diving into this field, this worries me a bit. What does it imply for the field?