You first have to understand that the HapMap project started back in 2002-2003, when high-throughput genotyping technologies were just emerging. When it was defined at the very beginning, the aim was to acquire the most variation with the least effort. Bear in mind that TaqMan had become somewhat established a few years earlier for single-SNP typing, then Sequenom led the way in medium-throughput genotyping, then SNPlex thought it was going to eat the whole high-throughput genotyping cake, but suddenly Illumina appeared, and a couple of years later came Affymetrix with its current gold standard for large-scale genotyping.
When it all started, the HapMap project only had a list of potential candidate sites for variation coming from the recent human genome assembly, and typing was very expensive, so the main goal was to discover not only variants but haplotypes: groups of variants that move as blocks, so that typing one of them is enough to describe the rest of the block (here you may want to read about Linkage Disequilibrium and tag SNPs; see the sketch just after this paragraph). As the number of samples and the number of variants they were aiming for were limited, they definitely had to focus their effort on common variation. And they did it great, in my honest opinion: they evolved their goals along with the technologies, they genotyped more samples (10x what was first planned), they reached ~4M variants typed in some of the original populations (CEU, CHB, JPT and YRI), which is ~15% of the variation currently stored in dbSNP, and among many other things they did a pretty nice job with the data analysis, spotting LD blocks throughout the genome, finding selective sweeps, and so on.
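To make the tag SNP idea concrete, here is a minimal sketch (in Python, with made-up haplotype counts rather than real HapMap data) of how pairwise LD between two SNPs is usually summarized with D and r²; a high r² is what lets one SNP "tag" its neighbour:

```python
# Minimal illustration of LD and tag SNPs: the haplotype counts below are
# invented for the example, they are not real HapMap data.
from collections import Counter

# Phased haplotypes for two nearby biallelic SNPs, alleles coded 0/1.
# Each tuple is one chromosome: (allele at SNP A, allele at SNP B).
haplotypes = [(0, 0)] * 49 + [(1, 1)] * 49 + [(0, 1)] * 1 + [(1, 0)] * 1

n = len(haplotypes)
p_a = sum(a for a, _ in haplotypes) / n            # frequency of allele 1 at SNP A
p_b = sum(b for _, b in haplotypes) / n            # frequency of allele 1 at SNP B
p_ab = Counter(haplotypes)[(1, 1)] / n             # frequency of the 1-1 haplotype

d = p_ab - p_a * p_b                               # linkage disequilibrium D
r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))  # squared allelic correlation

print(f"D = {d:.3f}, r^2 = {r2:.3f}")              # here r^2 is ~0.92
# When r^2 is close to 1, typing SNP A tells you SNP B almost for free,
# which is why a handful of tag SNPs can capture a whole haplotype block.
```

That, in a nutshell, is why focusing on a limited set of well-chosen common SNPs was such an efficient strategy at the time.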
I know things are much easier now: several databases are already full of data (I even remember when we had to pay yearly to access Celera's database, since they had a different source of variation that was sometimes needed to complete a genotyping design, until they "released it to the world" by bulk-loading it into dbSNP), the technologies are able to genotype quickly and relatively cheaply, next-generation sequencing is getting close to replacing those genotyping techniques when regions need to be deeply characterized or are too large, and so on.
The original HapMap spirit was somehow resurrected a couple of years ago: it is called the 1000 Genomes Project. The goal of both projects is the same (finding human variation), but the first one was aware of the technology limitations and dealt with them very intelligently, while the second one knows that ultrasequencing will soon deliver millions of variants that need to be characterized quickly and easily, so it also aims for rare variants by analyzing thousands of samples, trying to end up with solid statistics that can be used as a reference.
I started doing bioinformatics when HapMap was born, so I wanted to share some of my thoughts here (sorry for the huge answer). Regarding your exact question: HapMap does not intend to characterize more than what they typed, and those were common variants at known positions, and since they have studied them deeply they can make such an estimation. Rare variants or individual mutations are more unpredictable, and for that reason the estimate for functional locations can't include that 5%. I hope I didn't bother you too much, and that I shed some light on the issue ;)
Thank you, Jorge Amigo, for that excellent summary. Some of the interesting papers are linked here: http://bit.ly/gciL9Y
Jorge, thank you for your great answer. I feel very bad now, but I asked the question wrong :(. I meant to say the 1000 Genomes Project in the title but wrote HapMap by mistake. I was referring to the recently published paper on human variation predictions. I thought it would be easier to open a new question rather than confuse this one. However, your answer on HapMap was really helpful in providing some context and history for the past 10 years.