This distinction has more to do with machine learning algorithm categories. While clustering is considered a subcategory of "machine learning," in your case what you're doing is mostly considered linear algebra.
Pre-filtering does not affect the category: the algorithm sees only the data, in this case points in an N-dimensional space from which some pairwise sample distance is calculated. You can influence the way that clustering happens within pheatmap by using a different distance metric (e.g. "euclidean", "maximum", "manhattan", "canberra", "binary" or "minkowski") or by changing algorithm parameters (pheatmap clusters hierarchically via hclust by default, so you can change the linkage method; there is also a kmeans_k argument if you want k-means pre-aggregation of the rows).
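As a small illustration (a sketch in base R; the toy points are made up), the choice of metric changes the distances that the clustering sees:

```r
# Two points: (0,0) and (3,4). Different metrics give different distances.
x <- rbind(c(0, 0), c(3, 4))

as.numeric(dist(x, method = "euclidean"))  # sqrt(3^2 + 4^2) = 5
as.numeric(dist(x, method = "manhattan"))  # |3| + |4| = 7
as.numeric(dist(x, method = "maximum"))    # max(3, 4) = 4

# In pheatmap the same choice is made via, e.g.:
# pheatmap(mat, clustering_distance_rows = "manhattan",
#          clustering_method = "ward.D2")
```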
You can also read more about the different hierarchical linkage methods in the documentation for hclust, which is the function underlying pheatmap:
Ward's minimum variance method aims at finding compact, spherical clusters. The complete linkage method finds similar clusters. The single linkage method (which is closely related to the minimal spanning tree) adopts a ‘friends of friends’ clustering strategy. The other methods can be regarded as aiming for clusters with characteristics somewhere between the single and complete link methods. Note however, that methods "median" and "centroid" are not leading to a monotone distance measure, or equivalently the resulting dendrograms can have so called inversions or reversals which are hard to interpret, but note the trichotomies in Legendre and Legendre (2012).
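For example (a minimal sketch using base R; the toy matrix is invented), the joining strategy is selected through hclust's method argument:

```r
set.seed(1)
# Toy matrix: 6 rows in two well-separated groups
m <- rbind(matrix(rnorm(9, mean = 0), nrow = 3),
           matrix(rnorm(9, mean = 5), nrow = 3))
d <- dist(m)  # Euclidean distances by default

# Same distance matrix, different joining strategies
hc_complete <- hclust(d, method = "complete")
hc_single   <- hclust(d, method = "single")
hc_ward     <- hclust(d, method = "ward.D2")

# Cutting any of these trees into two clusters recovers the two groups
cutree(hc_ward, k = 2)
```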
Supervised in most machine learning contexts means using prior information (prior data) in order to inform a decision about new data, given some category of algorithm.
Unsupervised means using only the data itself to make some decisions about the data, again given some category of algorithm.
Don't worry too much about this distinction for practical purposes, unless you're curious about the subject matter itself.
Since you are using outside knowledge (differences between two known groups of samples), this would fall under supervised or semi-supervised clustering. However, in a paper you could describe it as unsupervised clustering of differentially expressed genes and everyone would understand that it was semi-supervised.
Sorry for re-upping this post (it is better than creating a new thread, I guess). So, if I got it right:
when you ask how these genes cluster together, you are doing unsupervised hierarchical clustering, correct?
I've moved your post to a comment since it is not an answer. Use the "add comment" button to request clarifications.
Clustering is typically an unsupervised approach. Unsupervised means you don't use external information to group your data points/items, i.e. grouping is based only on the data. In supervised learning, you make use of external information to form the groups, typically category labels to train a classifier. There are also intermediate situations called semi-supervised learning in which clustering for example is constrained using some external information.
So if you apply hierarchical clustering to genes represented by their expression levels, you're doing unsupervised learning.
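A minimal sketch of that in base R (the expression matrix and gene names are invented for illustration): no sample labels are passed to the clustering, so the grouping emerges from the expression values alone.

```r
# Rows = genes, columns = samples. Two genes are high in samples s1-s3,
# two are high in samples s4-s6, but no group labels are given to hclust.
expr <- rbind(geneA = c(8, 9, 8, 1, 2, 1),
              geneB = c(9, 8, 9, 2, 1, 2),
              geneC = c(1, 2, 1, 8, 9, 8),
              geneD = c(2, 1, 2, 9, 8, 9))
colnames(expr) <- paste0("s", 1:6)

hc <- hclust(dist(expr))      # unsupervised: only the data is used
clusters <- cutree(hc, k = 2)
# geneA and geneB land in one cluster, geneC and geneD in the other
```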
Thanks so much, Jean-Karim. Very helpful.
regards,