If you look at the genotype quality (GQ) distribution from a VCF, you often see a small peak of genotypes with zero quality.
This means that multiple genotypes (HOM_REF, HET, HOM_ALT) are equally likely, and one of them has been chosen more or less arbitrarily.
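To make the "equally likely" point concrete: GQ is commonly reported as the difference between the second-smallest and smallest Phred-scaled genotype likelihood (PL), capped at 99, so a tie between two genotypes collapses GQ to zero. A minimal sketch of that convention (the function name and cap are illustrative, not from any particular caller):

```python
def genotype_quality(pls):
    """Return (index of best genotype, GQ) from a list of PL values.

    PLs are Phred-scaled: lower is more likely. GQ is taken as the gap
    between the best and second-best PL, capped at 99 by convention.
    """
    ranked = sorted(range(len(pls)), key=lambda i: pls[i])
    best, second = ranked[0], ranked[1]
    gq = min(pls[second] - pls[best], 99)
    return best, gq

# A confident HOM_REF call: large gap to the next-best genotype.
print(genotype_quality([0, 60, 200]))   # (0, 60)

# HET and HOM_ALT are tied: GQ collapses to 0 and the reported
# genotype is effectively an arbitrary pick between the two.
print(genotype_quality([120, 0, 0]))    # (1, 0)
```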
Are downstream applications (tools / libraries) generally tolerant of low-quality genotypes? Do they understand the difference between a quality-100 and a quality-0 genotype, and do they actually make use of this genotype quality information?
Or does it make more sense to set the quality-0 genotypes to missing for some downstream purposes, since you don't have any conclusive evidence for that genotype, and just have the downstream tool / application work with high-quality genotypes?
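For what the masking itself would look like: in practice this is usually done with an existing tool (for example the bcftools +setGT plugin), but here is a minimal pure-Python sketch of the idea, assuming a simple VCF record with GT and GQ present in the FORMAT column:

```python
def mask_zero_gq(vcf_line):
    """Set the GT of any sample with GQ == 0 to missing ("./.").

    Assumes a tab-separated VCF data line whose FORMAT column
    (field 9) lists both GT and GQ; other fields are left untouched.
    """
    fields = vcf_line.rstrip("\n").split("\t")
    fmt_keys = fields[8].split(":")
    gt_idx = fmt_keys.index("GT")
    gq_idx = fmt_keys.index("GQ")
    for i, sample in enumerate(fields[9:], start=9):
        values = sample.split(":")
        if values[gq_idx] == "0":
            values[gt_idx] = "./."
        fields[i] = ":".join(values)
    return "\t".join(fields)

line = "1\t1000\t.\tA\tG\t50\tPASS\t.\tGT:GQ\t0/1:0\t1/1:99"
# The first sample (GQ 0) becomes ./.:0; the second is kept as-is.
print(mask_zero_gq(line))
```

Whether masking is the right choice still depends on the downstream analysis; this only shows the mechanics.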
It would be so nice if there were a simple answer to this question! It depends very much on what you want to do with the data. For example, some analyses are very sensitive to missing heterozygotes while others do not mind at all. Sometimes you want the full genome represented, other times just a much smaller but very reliable set of variants. In practice, you usually have to try and compare different filtering strategies.
Just to add that there are also tools like ANGSD that model genotype uncertainty within a probabilistic framework - they don't rely on hard calls / no-calls at all.