Why does eig return different eigenvalues depending on whether eigenvector outputs are specified?

13 views (last 30 days)
I have the 11x11 matrix below (it is stored in the attached P_sym.mat file). I get different eigenvalues from eig depending on how I call it: if it computes eigenvectors, it returns different eigenvalues than if I call eig with only the eigenvalues specified as outputs.
For the following code, where i = 4:
[v, d] = eig(P_sym,'vector');   % eigenvectors requested; eigenvalues returned as a vector
dxx = eig(P_sym);               % eigenvalues only
p_sym_is_positive_definite(i) = issymmetric(P_sym) && all(d > 0.0);
if (min(d) > 0 && min(dxx) < 0) % the two calls disagree about the sign
    save('P_sym.mat','P_sym');
    fprintf(1,'P_sym : %15.8e %15.8e %15.8e %15.8e %15.8e %15.8e %15.8e %15.8e %15.8e %15.8e %15.8e\n',squeeze(P_sym));
    fprintf(1,'\n');
    p_sym_is_positive_definiteXX = issymmetric(P_sym) && all(dxx > 0.0);
    fprintf(1,[' P_sym(%d) SPD %d, symmetric %d, min_eig %15.8e, eig: ' repmat(' %15.8e',1,length(dxx)) '\n'],i,p_sym_is_positive_definiteXX, issymmetric(P_sym), min(dxx), dxx);
    fprintf(1,'\n');
    keyboard
end
I get the following results:
P_sym(1) SPD 1, symmetric 1, min_eig 8.90819724e-23, eig: 8.90819724e-23 8.01292127e-16 7.71071265e-12 1.83508980e-11 2.71554045e-09 3.24460252e-09 4.31101082e-03 4.04842334e-02 2.91492831e-01 1.00000000e+00 9.47720644e+00
P_sym : 2.86443726e-01 1.25605442e-01 -3.05677616e-02 3.17320309e-05 -1.41410366e-07 1.37827270e-05 0.00000000e+00 0.00000000e+00 -1.93544065e-02 -4.36768322e-07 -3.54926546e-12
P_sym : 1.25605442e-01 7.12565604e+00 -4.08080640e+00 4.76436787e-04 8.15871206e-04 -2.98861979e-04 0.00000000e+00 0.00000000e+00 -7.64780930e-02 -5.97081660e-07 -2.54538005e-12
P_sym : -3.05677616e-02 -4.08080640e+00 2.38619512e+00 -2.69887758e-04 -4.67183802e-04 1.83109375e-04 0.00000000e+00 0.00000000e+00 2.29324963e-02 -4.59895084e-07 -6.02605228e-12
P_sym : 3.17320309e-05 4.76436787e-04 -2.69887758e-04 3.61675592e-08 5.22270400e-08 -1.98733607e-08 0.00000000e+00 0.00000000e+00 -8.24258261e-06 -2.22346600e-10 -1.93099984e-15
P_sym : -1.41410366e-07 8.15871206e-04 -4.67183802e-04 5.22270400e-08 9.67373610e-08 -3.56665330e-08 0.00000000e+00 0.00000000e+00 -8.10157851e-06 -2.75926708e-11 -5.27412767e-17
P_sym : 1.37827270e-05 -2.98861979e-04 1.83109375e-04 -1.98733607e-08 -3.56665330e-08 1.78452469e-08 0.00000000e+00 0.00000000e+00 -6.41479546e-07 -6.32077553e-11 -5.51581323e-16
P_sym : 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
P_sym : 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e-18 0.00000000e+00 0.00000000e+00 0.00000000e+00
P_sym : -1.93544065e-02 -7.64780930e-02 2.29324963e-02 -8.24258261e-06 -8.10157851e-06 -6.41479546e-07 0.00000000e+00 0.00000000e+00 1.51994891e-02 6.47038249e-07 6.14697206e-12
P_sym : -4.36768322e-07 -5.97081660e-07 -4.59895084e-07 -2.22346600e-10 -2.75926708e-11 -6.32077553e-11 0.00000000e+00 0.00000000e+00 6.47038249e-07 4.01852424e-11 4.26210532e-16
P_sym : -3.54926546e-12 -2.54538005e-12 -6.02605228e-12 -1.93099984e-15 -5.27412767e-17 -5.51581323e-16 0.00000000e+00 0.00000000e+00 6.14697206e-12 4.26210532e-16 4.76317051e-21
P_sym(1) SPD 0, symmetric 1, min_eig -3.56878675e-17, eig: -3.56878675e-17 8.91986312e-23 7.71020397e-12 1.83488208e-11 2.71554106e-09 3.24460241e-09 4.31101082e-03 4.04842334e-02 2.91492831e-01 1.00000000e+00 9.47720644e+00
This is rather perplexing: the minimum eigenvalue in the first call is positive, whereas in the second call it is negative, and that sign determines whether the matrix is judged positive definite. The change in output is caused not by the 'vector' output option to eig, but by whether eigenvectors are requested in the return.
Why the differing values? Which eig call can be used with assurance to determine SPD (symmetric positive definite) matrices?
P_sym : 3.91971950e-01 -1.84707196e-02 9.64010847e-02 1.49471956e-05 -2.95043186e-05 8.41762906e-06 0.00000000e+00 0.00000000e+00 -2.63518244e-02 -8.89506787e-07 -7.63950369e-12
P_sym : -1.84707196e-02 1.10109548e+01 -5.54536247e+00 7.12947029e-04 5.49185331e-04 9.97089694e-06 0.00000000e+00 0.00000000e+00 -1.00683513e-01 -8.04754882e-07 -3.84179497e-12
P_sym : 9.64010847e-02 -5.54536247e+00 2.97740884e+00 -3.66348740e-04 -2.70925828e-04 2.66675852e-05 0.00000000e+00 0.00000000e+00 1.37117248e-02 -7.52066065e-07 -8.08110708e-12
P_sym : 1.49471956e-05 7.12947029e-04 -3.66348740e-04 4.89069796e-08 3.30828980e-08 -1.44659120e-09 0.00000000e+00 0.00000000e+00 -8.18935868e-06 -1.82559412e-10 -1.48511352e-15
P_sym : -2.95043186e-05 5.49185331e-04 -2.70925828e-04 3.30828980e-08 3.19074916e-08 1.18059503e-09 0.00000000e+00 0.00000000e+00 -4.85944277e-06 -1.22536049e-11 -4.65286216e-17
P_sym : 8.41762906e-06 9.97089694e-06 2.66675852e-05 -1.44659120e-09 1.18059503e-09 6.54728043e-09 0.00000000e+00 0.00000000e+00 -4.94291834e-06 -1.19356169e-10 -9.10766108e-16
P_sym : 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
P_sym : 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e-18 0.00000000e+00 0.00000000e+00 0.00000000e+00
P_sym : -2.63518244e-02 -1.00683513e-01 1.37117248e-02 -8.18935868e-06 -4.85944277e-06 -4.94291834e-06 0.00000000e+00 0.00000000e+00 1.90931002e-02 7.98240784e-07 7.33246401e-12
P_sym : -8.89506787e-07 -8.04754882e-07 -7.52066065e-07 -1.82559412e-10 -1.22536049e-11 -1.19356169e-10 0.00000000e+00 0.00000000e+00 7.98240784e-07 4.49272102e-11 4.51931653e-16
P_sym : -7.63950369e-12 -3.84179497e-12 -8.08110708e-12 -1.48511352e-15 -4.65286216e-17 -9.10766108e-16 0.00000000e+00 0.00000000e+00 7.33246401e-12 4.51931653e-16 4.76317051e-21

Answers (1)

John D'Errico on 27 Jun 2024
Edited: 27 Jun 2024
Gosh, the eigenvalue questions are coming fast today. You have an 11x11 matrix. The eigenvalues of that matrix are equivalent to solving for the roots of a polynomial, here of degree 11.
Can you know those eigenvalues to better than double precision limits, given that the matrix is composed of numbers that certainly had double precision origins? Sorry, but no.
load P_sym
whos
  Name           Size       Bytes  Class           Attributes
  P_sym          11x11        968  double
  ans            1x33          66  char
  cmdout         1x33          66  char
  gdsCacheDir    1x14          28  char
  gdsCacheFlag   1x1            8  double
  i              0x0            0  double
  managers       1x0            0  cell
  managersMap    0x1            8  containers.Map
P_sym
P_sym = 11x11
    0.2864    0.1256   -0.0306    0.0000   -0.0000    0.0000         0         0   -0.0194   -0.0000   -0.0000
    0.1256    7.1257   -4.0808    0.0005    0.0008   -0.0003         0         0   -0.0765   -0.0000   -0.0000
   -0.0306   -4.0808    2.3862   -0.0003   -0.0005    0.0002         0         0    0.0229   -0.0000   -0.0000
    0.0000    0.0005   -0.0003    0.0000    0.0000   -0.0000         0         0   -0.0000   -0.0000   -0.0000
   -0.0000    0.0008   -0.0005    0.0000    0.0000   -0.0000         0         0   -0.0000   -0.0000   -0.0000
    0.0000   -0.0003    0.0002   -0.0000   -0.0000    0.0000         0         0   -0.0000   -0.0000   -0.0000
         0         0         0         0         0         0    1.0000         0         0         0         0
         0         0         0         0         0         0         0    0.0000         0         0         0
   -0.0194   -0.0765    0.0229   -0.0000   -0.0000   -0.0000         0         0    0.0152    0.0000    0.0000
   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000         0         0    0.0000    0.0000    0.0000
   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000         0         0    0.0000    0.0000    0.0000
What is that polynomial? Simple enough.
syms lambda
expand(det(P_sym - lambda*eye(11)))
ans = (degree-11 symbolic polynomial in lambda; rendered output not captured)
vpa(expand(det(P_sym - lambda*eye(11))))
ans = (the same polynomial with vpa coefficients; rendered output not captured)
Now, can you compute the roots of that polynomial? Of course. You will get a result that is equivalent to what eig tells you.
vpa(eig(P_sym))
ans = (the 11 eigenvalues displayed to 32 digits; rendered output not captured)
Do you see the trash down in the 16th and 17th significant digits? That comes from the fact that your matrix has its origins in double precision. Is the matrix positive definite? That is essentially impossible to say with any degree of reliability, since the matrix is numerically singular in double precision arithmetic. And remember, the matrix appears to have started out life as a matrix of doubles.
cond(P_sym)
ans = 1.0627e+23
I'd suggest the only thing you can say is that the matrix is numerically positive semi-definite.
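To make that concrete, here is a minimal check in plain MATLAB (my sketch, not part of the original answer; the tolerance choice is an assumption):
% Any eigenvalue whose magnitude is below round-off level relative to the
% largest eigenvalue is numerically indistinguishable from zero.
tol = eps(max(abs(eig(P_sym))));  % spacing of doubles near the top eigenvalue (~2e-15 here)
abs(min(eig(P_sym))) < tol        % true: the sign of the smallest eigenvalue is noise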
11 comments
M Biggs on 29 Jun 2024
Moved: Torsten on 29 Jun 2024
Thanks for the continued info!
My comments about the nearest SPD are because I need to come up with an algorithm that can "repair" these nearly singular matrices without failing or introducing huge errors when going through the Cholesky decomposition process. Most algorithms out there find the nearest SPD matrix for inputs that are reasonably well conditioned. I am finding that, with small modifications, they can work with my unstable matrices too. But bounding the error introduced into the final reconstructed covariance, U' * U, is the most difficult part.
Right now, I am getting some really good results from the fact that A*V = V*D, which yields A = V*D*V^-1. Taking some partials of A with respect to D, I find that dA = V*dD*V^-1. When I use dD to cancel the smallest negative eigenvalue, the resulting change to the input covariance, dA, is very good at making a new matrix A + dA that is SPD, and the changes to the matrix A are very small. If there are multiple negative eigenvalues, multiple iterative calls to the algorithm may be necessary. The resulting problem is probably best suited to a linear programming solution, but the commercial C application I am working on can't afford all that new code. I am currently thinking through this part for multiple negative eigenvalues.
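A minimal sketch of that repair idea as I read it (my own MATLAB code, not the commenter's; the iteration cap and the eps-based lift are assumptions):
function A = repair_spd(A, maxIter)
% Shift the most negative eigenvalue back above zero via dA = V*dD*V',
% repeating when several eigenvalues are negative, as described above.
for k = 1:maxIter
    A = (A + A')/2;                 % keep the matrix exactly symmetric
    [V, d] = eig(A, 'vector');
    [dmin, idx] = min(d);
    if dmin > 0, return; end        % already numerically SPD
    dD = zeros(size(d));
    dD(idx) = -dmin + eps(max(d));  % cancel the worst eigenvalue, plus a small lift
    A = A + V*diag(dD)*V';          % V' acts as inv(V) since A is symmetric
end
end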
Christine Tobler on 1 Jul 2024
Thank you for the suggestion, I've passed it on to our doc writer.
There is a different algorithm used for the one-output case and the multi-output case for the simple eigenvalue problem when A is Hermitian. This is the case where the algorithms for EIG and SVD are extremely similar, so it reflects what we already document for SVD.
The difference in behavior was introduced some years ago, when the LAPACK library that we call into introduced a more accurate algorithm for computing the eigenvalues which doesn't have an option to also compute the eigenvectors. The function that we call in LAPACK was modified so that if we don't request eigenvectors, the eigenvalues are computed using this higher-accuracy algorithm.
So are the one-output eigenvalues more accurate than the multi-output eigenvalues? Only in some very specific cases. Basically, if your matrix is symmetric tridiagonal and has values on some very different scales, the one-output case will avoid cutting off the eigenvalues at eps*max(eig(A)) like the multi-output case does.
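For anyone who wants to see the two code paths side by side, a quick check (my sketch, assuming P_sym.mat from the question is on the path):
load P_sym                       % the matrix attached to the question
d1 = eig(P_sym);                 % one output: values-only LAPACK routine
[~, d2] = eig(P_sym, 'vector');  % eigenvectors requested: different routine
[min(d1), min(d2)]               % the tiny eigenvalue can differ in sign between the two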
For the original question, it's a numerically singular matrix, so whether it's detected to have all-positive eigenvalues or a few tiny negative ones is going to be down to luck. I would use whatever decomposition is going to be needed in the remainder of the algorithm - if it succeeds, use that factorization. If it fails, knowing that another factorization has better luck with round-off error won't help.
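Following that advice, the usual practical test is to attempt the Cholesky factorization itself (a sketch; the explicit symmetrization is my addition):
[R, p] = chol((P_sym + P_sym')/2);  % chol is what most downstream SPD uses need anyway
isSPD = (p == 0);                   % p == 0 means chol succeeded: numerically SPD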

