Hi Amy,
LChart and bk11 have discussed the issue of inflated test statistics (e.g., LChart's answer); however, I want to take a bit of a step back before jumping into that. Specifically, I'll frame my response in a slightly different way: through the lens of mega-analysis versus meta-analysis for GWA studies. What the others have described are best practices for mega-analysis. Mega-analysis refers to combining studies into one harmonized dataset, which is then imputed and tested for association as a single unit (jointly).
The alternative is to conduct a meta-analysis: here, you run your GWAS pipeline end to end for each study (each genotyping chip) separately, including imputation, post-imputation QC, and association testing. You then combine the results at the level of summary test statistics, not at the level of raw or processed data; a sketch of what that combination looks like is below.
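As a concrete illustration, here is a minimal sketch of the classic fixed-effect, inverse-variance-weighted combination of per-study results (one of the schemes implemented by standard tools such as METAL). The `betas` and `ses` inputs are hypothetical per-study effect sizes and standard errors, already harmonized to the same effect allele:

```python
# Minimal sketch: fixed-effect, inverse-variance-weighted meta-analysis
# of one SNP's results across K independent studies.
import numpy as np
from scipy import stats

def ivw_meta(betas, ses):
    """Pool per-study (beta, se) pairs into one estimate and p-value."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                          # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)      # pooled effect size
    se = np.sqrt(1.0 / np.sum(w))             # pooled standard error
    z = beta / se
    return beta, se, z, 2.0 * stats.norm.sf(abs(z))  # two-sided p-value

# Toy example: three studies with broadly consistent effects.
print(ivw_meta(betas=[0.12, 0.08, 0.15], ses=[0.05, 0.07, 0.06]))
```

In practice you'd run this per SNP over the full set of summary statistics, after checking that alleles, strands, and genome builds line up across studies.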
The degree to which this is possible depends on the content of each study: e.g., does each of your chips include both cases and controls, are the studies ancestry-matched, and so on. Unfortunately, the study design you propose is nothing short of atrocious. I am not trying to be mean, and I know you cannot change this, but you have what is essentially a problem of perfect separation here: because genotyping chip/study segregates perfectly with case-control status, teasing out whether the thorny data-merging steps have been done well is problematic; however, without additional data, a meta-analytic framework will also run into problems...
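To make that concrete, here is a toy simulation (all numbers invented for illustration) of why this design is so dangerous: a small chip-specific allele-frequency artifact is statistically indistinguishable from a true association at every affected SNP when chip coincides exactly with case-control status.

```python
# Toy simulation: batch artifact perfectly confounded with phenotype.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_snps, n_cases, n_controls = 5_000, 1_000, 1_000

maf = rng.uniform(0.1, 0.5, n_snps)   # true population allele frequencies
artifact = 0.02                        # chip-specific frequency shift

# Alternate-allele counts: all cases on chip A (frequencies nudged by the
# artifact), all controls on chip B (true frequencies). No SNP has a real
# effect, so any "signal" below is pure batch confounding.
a = rng.binomial(2 * n_cases, np.clip(maf + artifact, 0.0, 1.0)).astype(float)
c = rng.binomial(2 * n_controls, maf).astype(float)
b = 2 * n_cases - a                    # reference-allele counts, cases
d = 2 * n_controls - c                 # reference-allele counts, controls

# Vectorized allelic chi-square (1 df) on each SNP's 2x2 table.
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Genomic inflation factor: ~1.0 for a clean null, far above 1 here.
lam = np.median(chi2) / stats.chi2.ppf(0.5, df=1)
print(f"lambda_GC = {lam:.2f}")
```

With no true effects anywhere, a clean design would give lambda_GC near 1.0; here it comes out well above that, and nothing in the data itself can tell you whether the signal is biology or batch.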
While it is an oversimplification, generally speaking mega-analysis outperforms meta-analysis when done very well. However, the increase in statistical power is not always appreciable, and there are many thorny issues with a mega-analytic approach (as you seem to have discovered and as LChart describes). Thus, while mega-analysis is in theory usually better, in practice it tends (1) to be labor intensive and (2) to introduce the possibility that a kind of error unlikely to occur in meta-analysis hampers your results (through exactly the kind of imperfect processing of the disparate studies that others describe).
In your study, if you do end up pooling any raw data, I'd also recommend controlling for study/batch ID as a covariate during association testing, over and above the imputation best practices others have mentioned (see the sketch below). There is a slim chance this would actually "fix" your problem without much other work, but of course you'd need to confirm that...
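For concreteness, here is a minimal sketch of what "control for batch" looks like at the testing step, assuming pooled individual-level data where batch does not coincide exactly with phenotype. All variable names are hypothetical, and in a real pipeline you would pass the batch indicator to your association tool (e.g., via PLINK's --covar) rather than looping over SNPs in Python:

```python
# Minimal sketch: logistic association test with batch dummies as covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def test_snp(genotype, phenotype, batch):
    """Regress case status on genotype dosage plus batch indicator dummies."""
    X = pd.get_dummies(pd.DataFrame({"g": genotype, "batch": batch}),
                       columns=["batch"], drop_first=True, dtype=float)
    X = sm.add_constant(X)
    fit = sm.Logit(phenotype, X).fit(disp=0)
    return fit.params["g"], fit.pvalues["g"]  # SNP effect and p-value

# Toy data: two batches, each containing a mix of cases and controls.
rng = np.random.default_rng(0)
n = 400
batch = rng.integers(0, 2, n)
phenotype = rng.integers(0, 2, n)
genotype = rng.binomial(2, 0.3, n).astype(float)
print(test_snp(genotype, phenotype, batch))

# Caveat: in the perfectly confounded design described above, the batch
# dummy IS the phenotype, the model is unidentifiable, and the fit will
# fail -- which is exactly why this is only a slim-chance fix here.
```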
--Personal opinion only--
For myself, if I am publishing a dedicated paper that is just a GWAS, I do the legwork for a mega-analysis. However, if the GWAS is instead one of many analyses that will be cross-indexed against other omics assays, functional studies, etc., then the modest bump in statistical power may not be worth the more rigorous data-prep and QC steps.
Note that even these measures will very often not fix the problem in the context of the kind of perfect separation OP describes.