Rationale For PCA Analysis
13.8 years ago
Deena ▴ 280

Hello, I would appreciate comments/advice on when to use Principal Component Analysis and what PCA data represents. My understanding of the algorithm is that a set of correlated variables are represented as uncorrelated variables, from which one can derive an understanding of variation in the data. But:

  1. Once you have represented your data as a set of principal components, is there some way to determine which features are actually represented in each principal component? In other words, what does each principal component (PC) actually represent? If I understand correctly, the first 2 PCs will always be the most important to show the variation of the dataset, but how can I tie that back to the actual variables/features that I was mapping in the first place?

  2. I wish to determine which features, from a set of features (ex: hydrophobicity, amino acid composition, etc.) are the "best" to predict whether a protein sequence will adopt a desired protein fold (a specific fold I have in mind). Accordingly, if PCA does not do that, what is the best technique to use? I have heard of "feature selection" but I am not very familiar with it. If anyone can elaborate on if/how it differs from PCA that would be very appreciated. Are there known examples (ex: articles, reviews) in protein structure prediction that address this?

I am intending to use R for this analysis, so any suggestions for R libraries that will do the job are most welcome! Thank you very much for your advice and responses!

-Deena

pca feature • 7.1k views

Thank you all very much for your fantastic and detailed responses!

13.8 years ago

"Is there some way to determine which features are actually represented in each principal component?"

Yes. Each PC is basically a linear combination of the original variables. A loading plot is typically used to show the original variables in the new space; when combined with a score plot (where the original objects are plotted in the new space), you get a so-called biplot.
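
For example, a minimal sketch in R, using the built-in iris measurements as a stand-in for your own feature table:

    # PCA on four numeric features, scaled to unit variance
    pca <- prcomp(iris[, 1:4], scale. = TRUE)

    # Score plot and loading plot overlaid in one figure: the biplot
    biplot(pca)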

"If I understand correctly, the first 2 PCs will always be the most important to show the variation of the dataset"

Correct. This is the whole purpose of PCA. The method finds orthogonal axes that explain the most variation: it finds the first PC by locating a line in the original space along which the variation in the data is maximal. This line is the first principal component. Each subsequent PC is the line that again maximizes the variance, under the constraint that it must be orthogonal to all previous components. This is the graphical explanation; the matrix-operation one is equivalent and is what software actually uses.
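
To illustrate, the variance explained by each PC can be read straight off a prcomp fit (iris again serving as placeholder data):

    pca <- prcomp(iris[, 1:4], scale. = TRUE)

    # Squared standard deviations are the variances of the PCs
    prop_var <- pca$sdev^2 / sum(pca$sdev^2)
    round(prop_var, 3)             # proportion of variance per PC
    screeplot(pca, type = "lines") # same information, as a scree plot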

"how can I tie that back to the actual variables/features that I was mapping in the first place?"

Via the loading plot or biplot.

"if PCA does not do that, what is the best technique to use?"

The regression variant of PCA is PCR, Principal Component Regression. Mind you, though, there is no single best way: the optimal approach cannot be determined beforehand on theoretical grounds alone, as it depends on your data, representation, preprocessing, etc.

"I have heard of "feature selection" but I am not very familiar with it."

There are very many feature selection methods: step-forward selection, backward elimination, and genetic algorithms, to name just a few that perform the selection independently of the modeling. Again, your best choice depends on your data.
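
As a minimal sketch of one of these, backward elimination can be done in base R with step(); here df is a hypothetical data frame whose column y holds the property to predict:

    # Backward elimination guided by AIC; df and y are placeholders
    full <- lm(y ~ ., data = df)
    reduced <- step(full, direction = "backward")
    summary(reduced)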

"If anyone can elaborate on if/how it differs from PCA that would be very appreciated."

PCA does feature selection only in the sense that it weights each variable by how much it contributes to the variance of the input data (your sequence-derived features). With feature selection proper, however, you are usually more interested in how important a variable is with respect to some property you want to predict (your structures).

"any suggestions for R libraries that will do the job are most welcome"

I would recommend the pls package, which implements both Partial Least Squares Regression (PLSR) and Principal Component Regression (PCR).
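
A hedged sketch of PCR with that package, again assuming a hypothetical data frame df with response column y:

    library(pls)

    # Principal Component Regression with cross-validation over ncomp
    fit <- pcr(y ~ ., data = df, ncomp = 10, validation = "CV")
    summary(fit)
    validationplot(fit, val.type = "RMSEP") # choose ncomp where error flattens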


Nice response, very helpful.

13.8 years ago

PCA aims to capture the maximal variance in a dataset in a single variable, the first principal component. Variance that cannot be captured there is then put into the 2nd PC, and so on. Thus your statement that

My understanding of the algorithm is that a set of correlated variables are represented as uncorrelated variables

is exactly right.

Now, PCA is one of the so-called "exploratory" techniques of multivariate analysis. You can easily plot a quick overview of the variance in your dataset, but only insofar as it is captured in the first two PCs (for a 2D plot). PCA by itself cannot be used for classification, but there are PCA-based classification algorithms.
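
For instance, that quick 2D overview is just the first two columns of the scores matrix (iris as placeholder data):

    pca <- prcomp(iris[, 1:4], scale. = TRUE)

    # Score plot: the objects in the space of the first two PCs
    plot(pca$x[, 1], pca$x[, 2],
         xlab = "PC1", ylab = "PC2", col = iris$Species)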

There is, however, a way to visualize how much your features contribute to the PCs: plot columns of the loadings matrix (analogous to plotting the PCs from the scores matrix). In the biplot below, the features are represented by the red arrows: their directions and magnitudes along PC1 and PC2 are their respective contributions. Again, this is a 2D plot, so only the first two PCs are shown.

[Figure: PCA biplot; feature loadings shown as red arrows over the PC1/PC2 score plot]
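
In R's prcomp, the loadings live in the rotation matrix, so the same per-feature contributions can be inspected numerically (a small sketch, continuing the hypothetical prcomp fit above):

    # Each column of $rotation is one PC; rows are the original features
    pca$rotation[, 1:2]  # contributions of each feature to PC1 and PC2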

For the 2nd part, "feature selection" algorithms are also what you should be looking for. Generally, they run a classification algorithm (e.g. simple Bayesian statistics or SVMs) on a subset of your features and compare its performance to that on the whole set or on another subset, in order to find the optimal features. There seem to be quite a few R libraries that can do this, and as always, there is also a Wikipedia article with helpful references.
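
As one hedged illustration of this wrapper idea, the caret package provides recursive feature elimination; x (a feature matrix) and y (class labels) are placeholders here:

    library(caret)          # rfFuncs below uses random forests internally

    # Recursive feature elimination: repeatedly drop the weakest features
    ctrl <- rfeControl(functions = rfFuncs, method = "cv", number = 5)
    res  <- rfe(x, y, sizes = c(2, 4, 8), rfeControl = ctrl)
    predictors(res)         # the selected feature subset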

edit: here are some articles employing SVMs (there are other possibilities as well!):

Very good introduction to classification using SVMs

Feature selection experiment for determining catalytic residues

R library for feature selection by penalizing

edit2: addressing Egon's point


I probably should not try to answer questions after 2am. You are right on both points, although I did not say that PCA is limited to 2 components, just that the 2D plot is. I fixed the paragraph.


PCA is not limited to two PCs, and it has been used a lot for classification (and regression too), using multiple PCs.
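
For example, one common pattern (a sketch, not tied to any particular paper) is to feed several PCs into an ordinary classifier:

    library(MASS)

    pca <- prcomp(iris[, 1:4], scale. = TRUE)

    # Use the first three PCs as inputs to a classifier (here LDA)
    dat <- data.frame(pca$x[, 1:3], Species = iris$Species)
    fit <- lda(Species ~ ., data = dat)
    table(predict(fit)$class, dat$Species) # resubstitution confusion matrix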

13.8 years ago
Rajarshi Guha ▴ 880

If you're looking for variable importance, random forests can be useful; I use the randomForest package in R. Its variable importance measure is derived by scrambling descriptors individually and looking at how much predictive performance degrades. In that sense, the importance is defined in the context of the predictive ability of the model.
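
A minimal sketch, with x (a descriptor matrix) and y (the response) as placeholders:

    library(randomForest)

    # importance = TRUE enables the permutation-based measure described above
    rf <- randomForest(x, y, importance = TRUE)
    importance(rf)  # per-descriptor importance scores
    varImpPlot(rf)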


Rajarshi, please add a link for your favorite RF package; perhaps with a link to a vignette? I guess it is based on how often variables show up in the RF?

13.8 years ago

The best layman's explanation of PCA I've read was recently posted on CrossValidated.

13.8 years ago
Stephen 2.8k

I'll second Rajarshi's comment on random forests (randomForest package in R). I think it will help with what you're after. I don't think there's a vignette included with the package, but here's a very short demo of randomForest.

This paper offers the best explanation I've come across of exactly what RF is doing.

This is a more lay explanation of what RF is doing.

