I'm analysing an experiment where the samples were sequenced in four batches. The first batch had five samples and the other three had six, multiplexed using barcoded antibodies (cell hashing). Each batch overloaded the 10x well with ~30k cells. The outcome can be seen below:
The 400-500 genes/cell doesn't make much sense given the samples that were sequenced. The sample composition was also similar between the batches, so such a huge between-batch difference is unexpected as well. We've been blaming overloading gone wrong for every batch except the first one (there were many issues with clogging). I know this is only tangentially related to bioinformatics, but I'd appreciate any ideas about what could have gone wrong here. We've been discussing repeating the experiment, but I have no idea how the difference could have arisen in the first place, which makes it hard to give suggestions for a do-over.
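For anyone wanting to quantify a gap like this before deciding on a re-run, a quick per-batch summary of genes detected per cell makes the difference concrete. This is a minimal sketch with made-up numbers standing in for the real per-cell QC values (the actual figures from this experiment are not shown here); the batch labels and counts are purely illustrative.

```python
import numpy as np

# Hypothetical genes-per-cell values; in practice these would come from
# your CellRanger / Scanpy QC output. Batch 1 looks normal, batches 2-4
# sit in the suspicious 400-500 range described above.
genes_per_cell = np.array([2100, 1950, 2300,   # batch 1
                           450, 520, 480,      # batch 2
                           410, 390, 505,      # batch 3
                           470, 430, 460])     # batch 4
batch = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4])

# Median genes/cell per batch -- a robust one-number summary of the gap.
medians = {int(b): float(np.median(genes_per_cell[batch == b]))
           for b in np.unique(batch)}
print(medians)
```

A >4x drop in median genes/cell between batches, as in this toy example, usually points at a wet-lab or loading problem rather than anything downstream in the analysis.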
Would appreciate any tips!
@Igor why did you delete this post?
Hi Ram,
I've been talking to several people offline (including a 10x Genomics FAS) and realised it's impossible to give an answer here.
In that case, you should add an answer with that information and accept that answer. "This question is impractical to answer" is a valid answer to your question; deleting it is not a valid response, especially when you've already received feedback from a number of people.