Does the pca function restrict the number of components to be kept?

elid latf on 25 Jun 2018
Commented: elid latf on 26 Jun 2018
Hi there,
I'm using the pca function to reduce the number of variables in a huge dataset. It works well, but when I want to change the number of components to keep, I can't go beyond the number of observations.
To put it differently: my dataset is 500-by-24300, and I want to reduce it to 500-by-16100. However, the function only works for up to 499 components and gives an error otherwise.
I'm using MATLAB R2016a.
Does anyone have an idea? Thanks for your help.
  2 Comments
John D'Errico on 26 Jun 2018
Please get used to using comments, instead of adding answers for every response you make.
elid latf on 26 Jun 2018
Yes, you're right. I hadn't realized it until after posting. Thanks.


Accepted Answer

Anton Semechko on 25 Jun 2018
The eigenvectors computed by PCA (and its generalized version, probabilistic PCA) only span the subspace of the ambient space containing the sample data, and are therefore linear combinations of the sample data points. If N and D are the number of samples and the dimensionality of the data, respectively, then min(N-1,D) is the maximum number of principal components (PCs) you will be able to extract. The number of PCs will be even smaller if the data points are linearly dependent.
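To see this limit concretely, here is a minimal sketch with synthetic data (the sizes mirror the question; the random matrix is purely illustrative):
% Minimal sketch of the min(N-1,D) cap, using synthetic data.
N = 500;                           % observations (rows)
D = 24300;                         % variables (columns)
X = randn(N, D);                   % stand-in for the real dataset
coeff = pca(X);                    % default economy-size output
size(coeff)                        % D-by-499: at most min(N-1,D) = 499 PCs
% Requesting more components than min(N-1,D) reproduces the error
% described in the question:
% pca(X, 'NumComponents', 16100)   % 16100 > 499, so this fails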
In principle, you can always find the complement of the PCA subspace (i.e., the set of directions orthogonal to the PCs), but that is very rarely done in practice, especially in high-dimensional spaces like yours (24300 dimensions).
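For completeness, one way such complementary directions could be obtained is with null(), sketched here on a deliberately tiny example (at D = 24300 this would be impractical):
% Sketch: directions orthogonal to the PCs via null(), tiny example.
X = randn(5, 8);                % N = 5, D = 8, so min(N-1,D) = 4
coeff = pca(X);                 % 8-by-4 coefficient matrix
complement = null(coeff');      % 8-by-4 orthonormal basis of the complement
norm(coeff' * complement)       % ~0: orthogonal to every PC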
  5 Comments
Anton Semechko on 26 Jun 2018
That min(N-1,D) is the maximum number of PCs that can be extracted from an N-by-D data matrix is a "theoretical" limitation. It is simply not possible to extract more than min(N-1,D) PCs that contain ANY information whatsoever about your data.
Note that even though the maximum number of PCs may be much smaller than the dimensionality of the data, when put together they represent the original data with 100% accuracy. However, real data often contains noise, and the information carried by the "higher-order" PCs will be increasingly dominated by it. When performing dimensionality reduction, which is what I assume you want to do, you:
1) Select the first K < min(N-1,D) PCs that retain as much of the underlying structure of the data as possible; the remaining min(N-1,D)-K PCs will be dominated by noise.
2) Project the observed data (after centering it on the mean) onto the K PCs to get the so-called "feature vectors" (or scores in the statistics literature). These K-dimensional feature vectors are low-dimensional representations of your data; see the sketch after this list.
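A short sketch of these two steps (the 500-by-2000 matrix and K = 10 are arbitrary stand-ins, and bsxfun is used because R2016a predates implicit expansion):
% Hedged sketch of steps 1) and 2): keep K PCs, project centered data.
X = randn(500, 2000);                           % stand-in data matrix
K = 10;                                         % arbitrary choice of K
[coeff, score] = pca(X, 'NumComponents', K);    % coeff: 2000-by-K, score: 500-by-K
mu = mean(X, 1);
Xc = bsxfun(@minus, X, mu);                     % center on the mean
featureVecs = Xc * coeff;                       % equals score up to round-off
Xhat = bsxfun(@plus, featureVecs * coeff', mu); % approximate reconstruction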
Various methods have been developed to determine the optimal value of K (e.g., Horn's rule, cross-validation), but none of them work 100% of the time, because real data rarely meet the underlying assumptions of the PCA model (see [1] and [2] for details).
[1] Roweis, 1998, EM algorithms for PCA and SPCA
[2] Tipping & Bishop, 1999, Probabilistic principal component analysis
elid latf on 26 Jun 2018
Again, thank you all so much for the time you've given to my question. It was so interesting.


More Answers (0)

