relation between principal eigenvector and principal generalized eigenvector
Hello,
why do these two calls:
[v, d] = eigs(A, B, 1);
and
[v2, d2] = eigs(B\A, 1);
return different results? I thought that the difference between v and v2 should only be a scaling factor.
Thank you!
0 Comments
Accepted Answer
David Goodmanson
22 Jul 2018
Hi vr,
When all the eigenvalues are distinct, the sets of eigenvectors v and v2 indeed differ only by some scaling factors. A complication is that for eigs and eig, the eigenvalues (which I will denote by lambda and not d) are identical but may not be in the same order. If that's the case, the order of the eigenvectors (columns) of v and v2 will not be the same either. But if you sort the eigenvalues in each case, you can compare the corresponding eigenvectors:
A = rand(5,5) + 1i*rand(5,5);
B = rand(5,5) + 1i*rand(5,5);
[v, lambda] = eigs(A, B);
[v2, lambda2] = eig(B\A);
[~, ind] = sort(diag(lambda));
v = v(:,ind);
[~, ind2] = sort(diag(lambda2));
v2 = v2(:,ind2);
R = v2./v
R =
0.1464 + 0.7369i -0.0218 - 0.6930i 0.7180 - 0.3831i 0.4606 + 0.7239i 0.6188 - 0.7948i
0.1464 + 0.7369i -0.0218 - 0.6930i 0.7180 - 0.3831i 0.4606 + 0.7239i 0.6188 - 0.7948i
0.1464 + 0.7369i -0.0218 - 0.6930i 0.7180 - 0.3831i 0.4606 + 0.7239i 0.6188 - 0.7948i
0.1464 + 0.7369i -0.0218 - 0.6930i 0.7180 - 0.3831i 0.4606 + 0.7239i 0.6188 - 0.7948i
0.1464 + 0.7369i -0.0218 - 0.6930i 0.7180 - 0.3831i 0.4606 + 0.7239i 0.6188 - 0.7948i
Each column of v2 is a constant multiple of the corresponding column of v (a different factor for each column). The eig function scales its eigenvectors to norm 1, but it's hard to say what the eigs function is doing.
sum(abs(v).^2)
ans = 1.7717 2.0803 1.5100 1.3584 0.9855
sum(abs(v2).^2)
ans = 1.0000 1.0000 1.0000 1.0000 1.0000
With repeated eigenvalues the situation is more complicated. It's only necessary that the eigenvectors span the same space, so for N repeated eigenvalues, each eigenvector of eigs is some linear combination of the corresponding N eigenvectors of eig.
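A minimal sketch of that span check (the matrix below is made up purely for illustration):
Q = orth(randn(3));                 % random orthogonal basis
C = Q*diag([5 5 2])*Q';             % symmetric, with eigenvalue 5 repeated twice
[V, D] = eig(C);
S = V(:, abs(diag(D) - 5) < 1e-8);  % the two eig eigenvectors for eigenvalue 5
w = eigs(C, 1);                     % one eigs eigenvector for the (repeated) largest eigenvalue
norm(w - S*(S\w))                   % ~0, so w lies in the span of the columns of S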
1 Comment
Walter Roberson
23 Jul 2018
"A complication is that for the eigs and eig, the eigenvalues (which I will denote by lambda and not d) are identical"
I would not expect them to be identical. A different algorithm is used for eigs(), which tends to find one eigenvalue at a time and "factor it out", a process that is going to have different numerical properties. When the eigenvalues are small in magnitude and close together, the round-off effects could potentially result in large relative error.
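As a rough illustration of that point (the dense test matrix below is made up), one can compare a few eigenvalues from both routines; they typically agree only up to roundoff, not exactly:
A = randn(200) + 1i*randn(200);     % made-up dense test matrix
lamAll = eig(A);                    % all eigenvalues via eig
lamFew = eigs(A, 3);                % 3 largest-magnitude eigenvalues via eigs
relErr = arrayfun(@(z) min(abs(lamAll - z))/abs(z), lamFew);
max(relErr)                         % tiny, but generally not exactly zero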
More Answers (1)
Christine Tobler
9 Aug 2018
What may be causing the differences you see is that eigs(A, B, k) first checks if the matrix B is symmetric positive definite. In that case, it computes the Cholesky factorization R'*R = B, and solves the eigenvalue problem R^(-T)*A*R^(-1)*x = lambda*x instead. The advantage of this is that, if A is symmetric, that symmetry is preserved. If B is not SPD, EIGS solves B^(-1) * A * x = lambda * x.
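A minimal sketch of that transformation, with made-up matrices (B constructed to be symmetric positive definite) and eig used for the check:
n = 6;
A = randn(n);  A = A + A';            % symmetric A
B = randn(n);  B = B*B' + n*eye(n);   % symmetric positive definite B
R = chol(B);                          % B = R'*R
lamGen = sort(eig(A, B));             % generalized eigenvalues of (A, B)
M = R'\A/R;  M = (M + M')/2;          % R^(-T)*A*R^(-1), symmetrized against roundoff
lamStd = sort(eig(M));
max(abs(lamGen - lamStd))             % the two problems have the same eigenvalues, up to roundoff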
Independent of this, one difference is that EIGS doesn't compute (B\A)*x; it instead computes B\(A*x), since this is typically much cheaper (for sparse matrices A and B, B\A will often be a dense matrix). This results in slight numerical differences between the two cases, and the scaling of the eigenvectors can easily be affected by these small differences.
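A small sketch of that difference (made-up sparse matrices):
n = 100;
A = sprandn(n, n, 0.1);
B = sprandn(n, n, 0.1) + 10*speye(n);  % keep B comfortably nonsingular
x = randn(n, 1);
y1 = (B\A)*x;       % form B\A explicitly (causes fill-in for sparse inputs)
y2 = B\(A*x);       % what eigs effectively does: one matrix-vector product, then one solve
norm(y1 - y2)       % tiny but nonzero floating-point difference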
As long as A*v - B*v*d is small, the result is still correct, even though each column of v may be scaled differently.
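For example, reusing the A and B from the question, that residual can be checked directly:
[v, d] = eigs(A, B, 1);
norm(A*v - B*v*d)   % small residual means the eigenpair is valid, regardless of scaling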
0 Comments