We performed a CRISPR screen in mice injected with tumor cells transduced with our sgRNA library. Hits were identified using MAGeCK.
We are now discussing how best to validate our hits in the same mouse models. One approach we considered is to pool 3 targeting gRNAs for a single gene with 7 non-targeting gRNAs into a mini-library, inject it into each mouse to validate each gene individually, and let the tumors grow for a few weeks, just as in the initial screen, before harvesting and sending the samples for NGS.
So one question is: assuming individual gene validation in each mouse, what percentage of the total library should be neutral sgRNAs? Is 30% targeting and 70% non-targeting OK?
Another question concerns how to analyze such small libraries. We are not sure MAGeCK would be appropriate for the 10-gRNA mini-library we have in mind, since we don't know whether it could fit its model well on such a small dataset. One approach I considered is a simple t-test comparing the mean reads of targeting vs. non-targeting gRNAs in each mouse. Or a chi-square test comparing targeting/non-targeting reads at baseline vs. endpoint. But I'm open to suggestions.
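For what it's worth, both tests are a couple of lines in SciPy. A minimal sketch for one mouse's mini-library, assuming a 3 targeting + 7 NTC design and a 30/70 baseline split; all counts here are invented for illustration, not real data:

```python
import numpy as np
from scipy import stats

# Hypothetical endpoint read counts for one mouse (3 targeting gRNAs, 7 NTCs)
targeting = np.array([120, 95, 140])
ntc = np.array([480, 510, 450, 530, 495, 470, 505])

# Welch's t-test on log2-transformed counts (log helps stabilize variance
# of count data; Welch avoids assuming equal variance between groups)
t_stat, t_p = stats.ttest_ind(np.log2(targeting + 1), np.log2(ntc + 1),
                              equal_var=False)

# Chi-square test on a 2x2 table of summed reads: endpoint vs. baseline,
# with the baseline reflecting a hypothetical 30%/70% input representation
baseline_targeting, baseline_ntc = 300, 700
table = np.array([[targeting.sum(), ntc.sum()],
                  [baseline_targeting, baseline_ntc]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"Welch t-test p = {t_p:.3g}; chi-square p = {chi_p:.3g}")
```

One caveat with the t-test sketched here: with only 3 targeting gRNAs per group, power is low, so it may be worth pooling replicate mice per gene rather than testing each mouse alone.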
I have been searching for studies, but no luck so far beyond studies that performed the screen in vitro and validated in KO mice, which is not the same as our setup, as we are making the KO in the tumor cells prior to injection into mice. Does anyone have experience with, or input on, how to validate CRISPR screen hits in mice?
Our original library was around 650 genes. Around 30 mice total for different genotypes/timepoints.
Engraftment was assessed by tumor formation as judged by physiological symptoms. MRI was not feasible for the scale of our experiment.
The validation output we are considering is the proportion of reads from targeting vs. non-targeting gRNAs at a later timepoint (i.e., which cells survived better or worse: those with the targeting KO or the NTCs). We could use a targeting negative control to account for the effect of cutting itself, but that would go into a different mouse, as we are planning just one target gene per mouse, so each pool would contain only the targeting gRNAs plus NTCs.
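As a concrete (made-up) example of that output, the targeting-read fraction and its shift from baseline could be summarized like this; the helper name and all counts are mine, for illustration only:

```python
import math

def targeting_fraction(targeting_reads, ntc_reads):
    """Fraction of all library reads coming from targeting gRNAs."""
    return targeting_reads / (targeting_reads + ntc_reads)

# Hypothetical summed counts: input cells at injection vs. tumor at harvest
baseline = targeting_fraction(300_000, 700_000)  # e.g., 30% targeting in input
endpoint = targeting_fraction(90_000, 910_000)   # tumor at endpoint

# log2 change in the targeting fraction; negative means cells carrying the
# targeting KO were depleted relative to NTC-carrying cells
log2_shift = math.log2(endpoint / baseline)
print(f"baseline {baseline:.2f}, endpoint {endpoint:.2f}, "
      f"log2 shift {log2_shift:.2f}")
```

Comparing that per-tumor log2 shift for the target gene against the same quantity from cutting-control mice would be one way to separate depletion from the cutting effect itself.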
Besides the experimental design itself, another issue for us is how to analyze such output, as I mentioned in the original post. MAGeCK seemed to work fine for the original screen, as it was developed exactly for that context: detecting promising hits in larger screens with many gRNAs and many genes. We still need to figure out how to validate our hits in vivo and analyze the results.
I will check the CRISPRball package, I'm always looking for different ideas on how to visualize and assess our results! Thanks!
This is roughly similar to what we've done proportionally (~400 genes, 20 mice).
I guess I meant: did you do anything to discern how many unique injected cells actually contribute to the tumor? In brain, we definitely see instances where a very small number of clones (e.g., <100 by random barcoding) dominate the final tumor. Unsurprisingly, this scales with the number of cells injected (the fewer cells injected, the more likely massive outgrowth). Even with 250k cells injected, we see cases where either engraftment seems poor or there is massive clonal outgrowth, just from the read distribution of each tumor being non-normal (orange in the plot below is one such example):
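A quick way to flag that kind of skewed tumor from the per-barcode (or per-gRNA) counts alone is to ask what fraction of reads the top few barcodes capture. A sketch, with the helper name, threshold, and numbers all made up rather than any established method:

```python
import numpy as np

def top_fraction(counts, k=5):
    """Fraction of all reads captured by the k most abundant barcodes."""
    counts = np.sort(np.asarray(counts, dtype=float))[::-1]
    return counts[:k].sum() / counts.sum()

# Hypothetical tumors: one with even barcode representation, one where a
# single clone has massively outgrown the rest
even_tumor = np.full(100, 1000)
skewed_tumor = np.array([500_000] + [50] * 99)

print(top_fraction(even_tumor))    # near k/n for an even library
print(top_fraction(skewed_tumor))  # close to 1 under clonal outgrowth
```

Tumors where this fraction approaches 1 could then be excluded or down-weighted before any targeting-vs-NTC comparison.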
I guess I just really don't see the benefit of a small library here rather than just straight-up comparisons of a negative targeting control and your target gene in two sets of mice. So long as you validate cutting in your input cells, this feels more straightforward to me.
If you sequence your input cells, it seems totally valid to just say that 90% of reads from input cells show cutting in the target gene and only 10% of reads in the tumor do (or whatever the numbers are), and then say the same about the negative targeting control, which would presumably be roughly similar. I do not think the NTCs are useful controls in this instance.