I think FastQC is a nice tool for graphically displaying various aspects of a library's quality. Several times, when someone was having difficulty with an analysis, I've asked them to post the FastQC output to help diagnose the problem. Presumably, if they had run FastQC in the first place, they could have avoided some of the time spent attempting the analysis and failing.
That said, I don't consider FastQC to be sufficient alone. Here are a few things I find useful in data QC:
1) Insert-size distribution. This may tell you, for example, why base frequencies in the base-composition histogram diverge toward the end of the read: inserts shorter than the read length cause adapter read-through, which skews those frequencies (a minimal SAM-parsing sketch follows this list).
2) Synthetic/spike-in contaminant metrics: PhiX, molecular-weight markers, and the like.
3) Organism-hit metrics. E.g., the results of BLASTing 1000 reads against nt/RefSeq, or mapping all reads to custom databases of known organismal contaminants, such as human when working on non-human genomes (a per-reference tally sketch also follows this list).
2 and 3 will help you spend far less time figuring out why only 80% of your reads map if you already know 18% of your reads are Delftia.
4) True quality metrics. Illumina quality scores are not accurate; to know the real quality of the data, you need to look further, e.g., map the reads and count matches and mismatches (see the error-rate sketch below).
5) Library-complexity metrics. This is situational; a high duplicate rate, for instance, is a bigger problem for quantitative experiments than for assembly.
6) K-mer-frequency histogram, GC-content histogram, or even both combined. Along with 2 and 3, these can let you spot contamination early and decontaminate before assembling and performing an incorrect analysis (a GC-histogram sketch follows).
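For item 1, here's a minimal sketch of how an insert-size histogram could be pulled from aligned data, assuming paired-end reads and a SAM stream on stdin (e.g., from `samtools view aligned.bam`); in practice a dedicated tool such as Picard's CollectInsertSizeMetrics does this more robustly:

```python
#!/usr/bin/env python3
"""Insert-size histogram from a SAM stream on stdin.
Hypothetical usage: samtools view aligned.bam | python insert_hist.py"""
import sys
from collections import Counter

hist = Counter()
for line in sys.stdin:
    if line.startswith("@"):               # skip SAM header lines
        continue
    fields = line.rstrip("\n").split("\t")
    flag, tlen = int(fields[1]), int(fields[8])
    # Count each proper pair once: first-in-pair (0x40) with a positive TLEN.
    if flag & 0x2 and flag & 0x40 and tlen > 0:
        hist[tlen] += 1

for size in sorted(hist):
    print(f"{size}\t{hist[size]}")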
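For items 2 and 3, once reads have been mapped against a combined reference (target genome plus suspected contaminants like PhiX or human), a simple per-reference tally gives the contamination breakdown. A sketch, again assuming SAM on stdin; the combined-reference setup is my assumption here, not a prescribed workflow:

```python
#!/usr/bin/env python3
"""Tally primary alignments per reference sequence from a SAM stream,
e.g. after mapping against target-plus-contaminant references."""
import sys
from collections import Counter

counts = Counter()
for line in sys.stdin:
    if line.startswith("@"):
        continue
    fields = line.split("\t")
    flag, rname = int(fields[1]), fields[2]
    if flag & (0x100 | 0x800):             # skip secondary/supplementary records
        continue
    counts["*unmapped*" if flag & 0x4 else rname] += 1

total = sum(counts.values()) or 1
for name, n in counts.most_common():
    print(f"{name}\t{n}\t{100.0 * n / total:.2f}%")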
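For item 4, one rough way to estimate empirical quality is to sum alignment edit distances (the standard NM tag) over aligned bases. This is a simplification: NM also counts indels and true variants, so a careful estimate would filter those out:

```python
#!/usr/bin/env python3
"""Rough empirical error rate from a SAM stream: sum NM edit-distance
tags and divide by total aligned bases (a simplification; see lead-in)."""
import sys, re

edits = bases = 0
cigar_re = re.compile(r"(\d+)([MIDNSHP=X])")
for line in sys.stdin:
    if line.startswith("@"):
        continue
    fields = line.rstrip("\n").split("\t")
    if int(fields[1]) & 0x4:               # skip unmapped reads
        continue
    # Aligned bases = sum of match/mismatch CIGAR operations (M, =, X).
    aligned = sum(int(n) for n, op in cigar_re.findall(fields[5]) if op in "M=X")
    nm = next((f for f in fields[11:] if f.startswith("NM:i:")), None)
    if nm:
        edits += int(nm.split(":")[2])
        bases += aligned

if bases:
    print(f"error rate ~ {edits / bases:.4%} over {bases} aligned bases")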
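And for item 6, a per-read GC-content histogram needs nothing more than the FASTQ itself; a k-mer histogram works the same way but with a much larger table keyed by k-mer:

```python
#!/usr/bin/env python3
"""Per-read GC-content histogram (integer-percent bins) from FASTQ on stdin."""
import sys
from collections import Counter

hist = Counter()
for i, line in enumerate(sys.stdin):
    if i % 4 != 1:                         # sequence is line 2 of each 4-line record
        continue
    seq = line.strip().upper()
    if seq:
        gc = seq.count("G") + seq.count("C")
        hist[round(100.0 * gc / len(seq))] += 1

for pct in sorted(hist):
    print(f"{pct}\t{hist[pct]}")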
Because I think these things are important, I've written tools to calculate most of them. They are generated automatically by our pipelines and available as graphs whenever an analyst wants to look at a library, which saves a lot of time. Some are computed from a random subsample of the reads to save compute time (see the subsampling sketch below), while others (like how much PhiX or human is present) are generated as a side effect of removing the artifact while processing all reads.
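As an aside, uniform subsampling of the kind mentioned above can be done in one streaming pass with reservoir sampling; this standalone sketch (the K = 1000 default is arbitrary) keeps memory proportional to the subsample, not the library. Off-the-shelf tools like seqtk sample or BBTools' reformat.sh do the same job:

```python
#!/usr/bin/env python3
"""Uniform random subsample of FASTQ records via reservoir sampling,
so QC can run on, say, 1000 reads instead of the whole library."""
import sys, random

K = 1000                                   # subsample size (hypothetical default)
reservoir, record, n = [], [], 0
for line in sys.stdin:
    record.append(line)
    if len(record) == 4:                   # one complete FASTQ record
        n += 1
        if len(reservoir) < K:
            reservoir.append(record)
        else:
            # Keep the n-th record with probability K/n.
            j = random.randrange(n)
            if j < K:
                reservoir[j] = record
        record = []

for rec in reservoir:
    sys.stdout.write("".join(rec))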
That statement, while true from your colleague's perspective, immediately brings to mind the famous story of the "blind men and an elephant".
Nice analogy, and appropriate for this case. Thanks for your answer, @genomax2.
Do you disagree because you think the Illumina quality report (I assume from SAV or BaseSpace) is not sufficient, or because you think that everyone should check the QC metrics themselves?