# How to select the components that show the most variance in PCA

129 views (last 30 days)
Faraz on 27 February 2016

I have a huge data set that I need for training (32000*2500). This seems to be too much for my classifier, so I decided to do some reading on dimensionality reduction, and specifically on PCA.
From my understanding, PCA takes the current data and re-plots it in another (x,y) domain/scale. The new coordinates don't mean anything by themselves, but the data is rearranged so that the first axis captures the maximum variation. Once I have these new coefficients, I can drop the ones with the minimum variation.
Now I am trying to implement this in MATLAB and am having trouble with the output provided. MATLAB always considers rows as observations and columns as variables. So my input to the pca function would be my matrix of size (32000*2500). This would return the PCA coefficients in an output matrix of size 2500*2500.
The help for pca states:
Each column of coeff contains coefficients for one principal component, and the columns are in descending order of component variance.
In this output, which dimension holds the observations of my data? I mean, if I have to give this to the classifier, will the rows of coeff represent my data's observations, or is it now the columns of coeff?
And how do I remove the coefficients having the least variation, and thus effectively reduce the dimension of my data?


### Accepted Answer

the cyclist on 27 February 2016

Here is some code I wrote to help myself understand the MATLAB syntax for PCA.
```matlab
rng 'default'
M = 7; % Number of observations
N = 5; % Number of variables observed
X = rand(M,N);
% De-mean (MATLAB will de-mean inside of PCA, but I want the de-meaned values later)
X = X - mean(X); % Use X = bsxfun(@minus,X,mean(X)) if you have an older version of MATLAB
% Do the PCA
[coeff,score,latent,~,explained] = pca(X);
% Calculate eigenvalues and eigenvectors of the covariance matrix
covarianceMatrix = cov(X);
[V,D] = eig(covarianceMatrix);
% "coeff" are the principal component vectors.
% These are the eigenvectors of the covariance matrix.
% Compare "coeff" and "V". Notice that they are the same,
% except for column ordering and an unimportant overall sign.
coeff
V
```

```
coeff = 5×5
   -0.5173    0.7366   -0.1131    0.4106    0.0919
    0.6256    0.1345    0.1202    0.6628   -0.3699
   -0.3033   -0.6208   -0.1037    0.6252    0.3479
    0.4829    0.1901   -0.5536   -0.0308    0.6506
    0.1262    0.1334    0.8097    0.0179    0.5571

V = 5×5
    0.0919    0.4106   -0.1131   -0.7366   -0.5173
   -0.3699    0.6628    0.1202   -0.1345    0.6256
    0.3479    0.6252   -0.1037    0.6208   -0.3033
    0.6506   -0.0308   -0.5536   -0.1901    0.4829
    0.5571    0.0179    0.8097   -0.1334    0.1262
```

```matlab
% Multiply the original data by the principal component vectors to get the
% projections of the original data on the principal component vector space.
% This is also the output "score". Compare ...
dataInPrincipalComponentSpace = X*coeff
score
```

```
dataInPrincipalComponentSpace = 7×5
   -0.5295    0.0362    0.5630    0.1053   -0.0428
    0.2116    0.6573   -0.1721   -0.0306   -0.1559
    0.6427   -0.0017    0.2739   -0.1635    0.2203
   -0.6273    0.0239   -0.3678   -0.0710    0.2214
    0.1332    0.0507   -0.0708    0.2772    0.0398
    0.3145   -0.4825   -0.2080    0.1496   -0.0842
   -0.1451   -0.2840   -0.0182   -0.2670   -0.1987

score = 7×5
   -0.5295    0.0362    0.5630    0.1053   -0.0428
    0.2116    0.6573   -0.1721   -0.0306   -0.1559
    0.6427   -0.0017    0.2739   -0.1635    0.2203
   -0.6273    0.0239   -0.3678   -0.0710    0.2214
    0.1332    0.0507   -0.0708    0.2772    0.0398
    0.3145   -0.4825   -0.2080    0.1496   -0.0842
   -0.1451   -0.2840   -0.0182   -0.2670   -0.1987
```

```matlab
% The columns of X*coeff are orthogonal to each other.
% This is shown with ...
corrcoef(dataInPrincipalComponentSpace)
```

```
ans = 5×5
    1.0000   -0.0000    0.0000   -0.0000   -0.0000
   -0.0000    1.0000    0.0000   -0.0000    0.0000
    0.0000    0.0000    1.0000    0.0000    0.0000
   -0.0000   -0.0000    0.0000    1.0000   -0.0000
   -0.0000    0.0000    0.0000   -0.0000    1.0000
```

```matlab
% The variances of these vectors are the eigenvalues of the covariance matrix,
% and are also the output "latent". Compare these three outputs
var(dataInPrincipalComponentSpace)'
latent
sort(diag(D),'descend')
```

```
ans = 5×1
    0.2116
    0.1250
    0.1009
    0.0357
    0.0286

latent = 5×1
    0.2116
    0.1250
    0.1009
    0.0357
    0.0286

ans = 5×1
    0.2116
    0.1250
    0.1009
    0.0357
    0.0286
```
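[Editor's note] For readers working outside MATLAB, the same three facts can be checked with NumPy. This sketch (my addition, not part of the original answer) uses stand-in random data: the scores are the centered data times the covariance eigenvectors, the score variances equal the eigenvalues, and the score columns are mutually uncorrelated.

```python
import numpy as np

# Stand-in data: rows are observations, columns are variables.
rng = np.random.default_rng(0)
X = rng.random((7, 5))
Xc = X - X.mean(axis=0)                      # de-mean, as pca() does internally

# Eigendecomposition of the sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]            # sort to descending variance
latent = eigvals[order]                      # like MATLAB's "latent"
coeff = eigvecs[:, order]                    # like MATLAB's "coeff"
score = Xc @ coeff                           # like MATLAB's "score"

# Variances of the projections equal the covariance eigenvalues.
assert np.allclose(score.var(axis=0, ddof=1), latent)
# The projected columns are mutually uncorrelated.
assert np.allclose(np.corrcoef(score, rowvar=False), np.eye(5), atol=1e-8)
```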
The first figure on the Wikipedia page for PCA (which I have copied) is helpful in understanding the method.
There is variation along the original (x,y) axes. The superimposed arrows show the principal axes. The long arrow is the axis that has the most variation; the short arrow captures the rest of the variation.
Before thinking about dimension reduction, the first step is to redefine a coordinate system (x',y'), such that x' is along the first principal component, and y' along the second component (and so on, if there are more variables).
In my code above, those new variables are dataInPrincipalComponentSpace. As in the original data, each row is an observation, and each column is a dimension.
These data are just like your original data, except it is as if you measured them in a different coordinate system -- the principal axes.
Now you can think about dimension reduction. Take a look at the variable explained. It tells you how much of the variation is captured by each column of dataInPrincipalComponentSpace. Here is where you have to make a judgement call. How much of the total variation are you willing to ignore? One guideline is that if you plot explained, there will often be an "elbow" in the plot, where each additional variable explains very little additional variation. Keep only the components that add a lot more explanatory power, and ignore the rest.
In my code, notice that the first 3 components together explain 87% of the variation; suppose you decide that that's good enough. Then, for your later analysis, you would only keep those 3 dimensions -- the first three columns of dataInPrincipalComponentSpace. You will have 7 observations in 3 dimensions (variables) instead of 5.
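[Editor's note] The "keep enough components" step can also be automated with a cumulative threshold instead of eyeballing the elbow. A NumPy sketch of that idea (my addition; the 90% threshold and the random stand-in data are assumptions, not from the answer):

```python
import numpy as np

# Choose the number of components whose cumulative explained variance
# first reaches a threshold (90% here -- an assumption; the right cutoff
# is the judgement call discussed above).
rng = np.random.default_rng(1)
X = rng.random((7, 5))                       # stand-in data, rows = observations
Xc = X - X.mean(axis=0)

eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]            # descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = 100 * eigvals / eigvals.sum()    # like MATLAB's "explained"
k = int(np.searchsorted(np.cumsum(explained), 90.0) + 1)

# Keep only the first k principal-component columns of the scores.
X_reduced = Xc @ eigvecs[:, :k]
print(k, X_reduced.shape)
```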
I hope that helps!
##### 38 comments
the cyclist on 15 September 2021
I did not get any notification, and can't find that question. Please post a link to the new question here, if you'd like me to take a look. (But hopefully someone else will have helped.)
Tom on 16 September 2021
I already solved my first issue on my own after a few hours of trial and error. Now I've got another one. Maybe you can take a look.


### More Answers (3)

naghmeh moradpoor on 1 July 2017
Dear Cyclist,
I used your code and successfully found all the PCs for my dataset. Thank you! On my dataset, PC1, PC2 and PC3 explained more than 90% of the variance. I would like to know how to find which variables from my dataset are related to PC1, PC2 and PC3.
Could you please help me with this? Regards, Ngh
##### 1 comment
Abdul Haleem Butt on 3 November 2017
In dataInPrincipalComponentSpace, as in the original data, each row is an observation and each column is a dimension.


Sahil Bajaj on 12 February 2019
Dear Cyclist,
Thanks a lot for your helpful explanation. I used your code and successfully found 4 PCs explaining 97% of the variance for my dataset, which initially had 14 components in total. I was just wondering how to find which variables from my dataset are related to PC1, PC2, PC3 and PC4, so that I can ignore the others and know which parameters I should use for further analysis?
Thanks !
Sahil
##### 9 comments
the cyclist on 2 February 2021

I'm happy to hear you have found my answer to be helpful.
The way you are trying to interpret the results is a little confusing to me. Using your example of school subjects, I'll try to explain how I would interpret them.
Let's suppose that the original dataset variables (X) are scores on a standardized exam:
1. Math (column 1)
2. Writing
3. History
4. Art
5. Science
[Sorry I changed up your subject ordering.]
Each row is one student's scores. Row 3 is the 3rd student's scores, and X(3,4) is the 3rd student's Art score.
Now we do the PCA, to see what combination of variables explains the variation among observations (i.e. students).
coeff contains the coefficients of the linear combinations of the original variables. coeff(:,1) are the coefficients to get from the original variables to the first new variable (which explains the most variation between observations):
-0.5173*Math + 0.6256*Writing - 0.3033*History + 0.4829*Art + 0.1262*Science
At this point, the researcher might try to interpret these coefficients. For example, because Writing and Art are very positively weighted, maybe this variable -- which is NOT directly measured! -- is something like "Creativity".
Similarly, maybe the coefficients coeff(:,2), which weight Math very heavily, correspond to "Logic".
And so on.
So, interpreting that single value of 0.6256, I think you can say, "Writing is the most highly weighted original variable in the new variable that explains the most variation."
But, it also seems to me that to answer a couple of your questions, you actually want to look at the original variables, and not the PCA-transformed data. If you want to know which school subject had the largest variance -- just calculate that on the original data. Similarly for the correlation between subjects.
PCA is (potentially) helpful for determining if there is some underlying variable that explains the variation among multiple variables. (For example, "Creativity" explaining variation in both Writing and Art.) But factor analysis and other techniques are more explicitly designed to find those latent factors.
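[Editor's note] To answer the recurring comment question ("which original variables relate to PC1, PC2, ...?"), you can rank the magnitudes of the loadings within each column of coeff. A NumPy sketch (my addition), reusing the coeff matrix and the school-subject labels from this answer:

```python
import numpy as np

# coeff copied from the worked example in the accepted answer:
# rows = original variables, columns = principal components.
coeff = np.array([
    [-0.5173,  0.7366, -0.1131,  0.4106,  0.0919],
    [ 0.6256,  0.1345,  0.1202,  0.6628, -0.3699],
    [-0.3033, -0.6208, -0.1037,  0.6252,  0.3479],
    [ 0.4829,  0.1901, -0.5536, -0.0308,  0.6506],
    [ 0.1262,  0.1334,  0.8097,  0.0179,  0.5571],
])
subjects = ["Math", "Writing", "History", "Art", "Science"]

# For each component, the variable with the largest |loading| is the one
# that contributes most to it (the sign only gives the direction).
top = [subjects[i] for i in np.argmax(np.abs(coeff), axis=0)]
print(top)   # Writing dominates PC1, Math dominates PC2, ...
```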
Darren Lim on 3 February 2021
Thanks @the cyclist !
Crystal clear! I think many others will find this answer helpful as well. Thanks again for your insights and time!
Darren


Salma Hassan on 18 September 2019
I still don't understand.
I need an answer to my question: how many eigenvectors do I have to use, judging from these figures?
##### 3 comments
Salma Hassan on 19 September 2019
If we say that the first two components, which explain about 44%, are enough for me, what does this mean for latent and coeff? How can this lead me to the number of eigenvectors?
the cyclist on 20 September 2019

It means that the first two columns of coeff are the coefficients you want to use.

