
Why is det(x) better than rcond(x) in determining non-singularity here?

Asked by Darren Wethington on 25 Mar 2019
Edited by John D'Errico on 26 Mar 2019
Accepted Answer: John D'Errico
The code below demonstrates the following problem:
Analytically, the matrix x is NOT singular. MATLAB believes it is singular due to the limits of floating point precision. The documentation says that using det(x) to determine whether a matrix is singular is poor practice, and that using rcond(x) (for instance) is better. However, in this case, det(x) yields the analytical value of the determinant. Why does det(x) accurately determine that x is not singular while rcond(x) does not?
a = -1.3589e-10;
b = -1.7108e-9;
c = 12.5893;
d = -1e11;
x = [a b; c d];
analytical_determinant = a*d - b*c; % determinant from the 2x2 formula
matlab_determinant = det(x);
eigens = eig(x);
eigen_determinant = eigens(1)*eigens(2); % the product of the eigenvalues equals the determinant
rcond_result = rcond(x);
disp('Let''s try to take an inverse:')
inv(x);
fprintf('rcond result: %g\nanalytical det: %g\ndet(x) result: %g\ndeterminant from eig: %g\n',rcond_result,analytical_determinant,matlab_determinant,eigen_determinant); % %g instead of %f, so the tiny rcond value is not displayed as 0.000000

David Goodmanson on 25 Mar 2019
Hi Darren,
It's not the case that rcond is saying that x is singular. The warning message merely says it's close to singular.
Since a*d is about 13.6 and b*c is tiny (about 2e-8), det is easy to compute accurately. But eig has the tougher job of calculating two eigenvalues so their product can be taken. Eig can't manage that here and comes up with 0 for the smaller eigenvalue. Actually, both rcond and cond do much better than eig in this regard.
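David's point can be sketched directly (an editorial illustration using the matrix from the question; exact digits may vary by release):

```matlab
% eig underflows the small eigenvalue at this extreme scale disparity,
% while cond and rcond still report a finite (if terrible) conditioning.
a = -1.3589e-10; b = -1.7108e-9; c = 12.5893; d = -1e11;
x = [a b; c d];
eig(x)      % the smaller eigenvalue comes back as 0 at this scale
cond(x)     % large but finite: ill conditioned, not reported singular
rcond(x)    % tiny reciprocal condition estimate, ~1.36e-21
```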


R2016a

2 Answers

Answer by John D'Errico on 25 Mar 2019
Accepted Answer

Let me suggest that you misunderstand det, and why it is a terribly poor tool to use to learn if a matrix is singular or not. Yes, we are taught to use det by teachers, in school. Hey it works there, so why not use it in practice? The problem is those same teachers stopped short of explaining the real issues. I've often said that the use of det is a bad meme, one that just keeps propagating forever. One person learns about it, so they teach others. That it is a bad idea? Nobody ever told them. So det lives on - IT'S ALIVE! (With my apologies to the movie where that line was used. Thanks Mel.)
a = -1.3589e-10;
b = -1.7108e-9;
c = 12.5893;
d = -1e11;
x = [a b; c d];
So, is this 2x2 matrix singular or not? It is NUMERICALLY singular: in double precision floating point arithmetic it is risky to use it to solve a system of linear equations reliably, even though an inverse exists in theory.
A big problem with det is that it is sensitive to scaling. What result would indicate to you that a matrix is singular based on the determinant? Remember that a numerically computed determinant will virtually NEVER be exactly zero. For example, consider this matrix, A:
A = rand(4,5);
A = [A;sum(A,1)];
A
A =
0.031833 0.82346 0.034446 0.7952 0.64631
0.27692 0.69483 0.43874 0.18687 0.70936
0.046171 0.3171 0.38156 0.48976 0.75469
0.097132 0.95022 0.76552 0.44559 0.27603
0.45206 2.7856 1.6203 1.9174 2.3864
det(A)
ans =
3.0549e-18
Is A singular? It darn tootin is! The last row of A is the sum of the other 4 rows. But wait a second, det(A) is not returned as zero. Wwwwhhhhaaaatttt happened?
And, if you claim that 3e-18 is close enough to zero, then just multiply A by 10000.
det(10000*A)
ans =
5005.1
Gosh, 10000*A is surely not singular based on what det just told us. But I thought that we decided A was singular? Mathematically, A is indeed singular. We know that from theory. Theory would never lie, would it?
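The scale sensitivity here follows from the identity det(c*A) = c^n * det(A) for an n-by-n matrix; a short sketch of my own (not from the original answer):

```matlab
% Scaling a 5x5 matrix by 10000 multiplies its determinant by 10000^5 = 1e20,
% so "near zero" or "huge" says more about units than about singularity.
n = 5;
B = rand(n);   % a generically nonsingular random matrix
relerr = abs(det(10000*B) - 10000^n*det(B)) / abs(det(10000*B))
% relerr is on the order of eps: the identity holds up to rounding
```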
Or try this:
A = eye(1200);
det(A)
ans =
1
det(2*A)
ans =
Inf
det(A/2)
ans =
0
Hmm. While I am sure you know that A is remarkably well-posed for solving ANY system of equations, or computing a matrix inverse, for that matter, the same applies to A/2 as well as 2*A. So why does det tell us such a confusing story on such simple matrices?
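The identity example comes down to the range of double precision, not conditioning, as this small check (my addition) shows:

```matlab
% det(2*eye(1200)) = 2^1200 overflows double precision (realmax ~ 1.8e308),
% and det(eye(1200)/2) = 2^-1200 underflows past the smallest denormal,
% yet the matrices themselves are perfectly conditioned.
2^1200                % Inf
2^-1200               % 0
rcond(2*eye(1200))    % exactly 1: as well conditioned as a matrix can be
```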
Or, how about this matrix? Let's see, I think I recall that magic squares of even order are all singular matrices. It is certainly true for magic(4), at least.
A = magic(4)
A =
16 2 3 13
5 11 10 8
9 7 6 12
4 14 15 1
rank(A)
ans =
3
cond(A)
ans =
8.148e+16
So rank and cond both think that A is singular. Since A is composed purely of integers, its determinant must be an exact integer. The Symbolic Math Toolbox can compute it for us exactly.
det(sym(A))
ans =
0
det(A)
ans =
-1.4495e-12
But det fails here on A itself. The reason is that det does not use the cofactor-expansion formula you learned in school: for anything beyond a tiny matrix that would be hopelessly slow. Instead, det uses linear algebra to compute the determinant. If we were to look at the internal code for det, it would do something like this:
[L,U,P] = lu(A);      % partial pivoting permutes rows: P*A = L*U
det(P)*prod(diag(U))  % det(P) = +/-1 restores the sign pivoting may flip
ans =
-1.4495e-12
Here we can compute the determinant of a matrix using no more than a call to LU. And the LU factorization is an O(n^3) operation in terms of flops, whereas computing the determinant using traditional methods as you were taught in school is an O(factorial(n)) operation. So det probably uses the high school formula for a 2x2 or 3x3 determinant as special cases. But it should use LU for matrices of size 4x4 or larger. The end result is that det is not something you can even trust to compute the determinant you so very much desire - to see if your matrix is singular.
The problem is that det is simply NOT a good tool to identify whether a matrix is well-posed for such a problem. Yes, again, I know your teacher said that, mathematically, if a matrix is singular then the determinant is zero. But in practice det is a bad tool for identifying a singular matrix, because we cannot trust what it reports. Just because a tool works in theory does not make it a good tool for practical use, and here practical use involves floating point arithmetic. Whether a computed determinant is near zero or not tells you nothing reliable about whether the matrix is actually singular.
Instead, rely on tools like cond, rcond, svd, and rank to identify singular matrices, and avoid det like the plague on mathematics it rightfully is. (There are some circumstances where it is arguably appropriate to use det. However, testing for singularity is a problem where you should essentially NEVER use det.)
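As one concrete (editorial) sketch of that advice, a det-free numerical singularity test might look like this; rank uses an svd-based tolerance internally, and rcond can be compared against eps:

```matlab
% Test numerical singularity with rank and rcond instead of det.
A = magic(4);
rank(A) < min(size(A))    % true: rank is 3 for this 4x4 matrix
rcond(A) < eps            % true: the reciprocal condition is far below eps
```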

Matt J on 25 Mar 2019
@Darren,
Note that neither of your determinant calculations can rightly be called "analytical". The command,
analytical_determinant = a*d - b*c
is in fact a floating point calculation just like any other, with the same vulnerabilities. For example, consider the alternative data
a=10, d=1e15+0.1, b=1e15, c=10
We can see here that the theoretical determinant is 1, but executing the calculation as you've done gives the wrong answer,
>> a*d-b*c
ans =
2
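The failure is a matter of floating point spacing: near 1e16, adjacent doubles are already 2 apart, so the true difference of 1 cannot survive the rounding of the two products. A quick check (my addition):

```matlab
% Near 1e16 the gap between adjacent doubles is 2, so a difference of 1
% between two products at that scale is unrepresentable.
eps(1e16)          % spacing of doubles at magnitude 1e16: 2
a = 10; d = 1e15 + 0.1; b = 1e15; c = 10;
a*d - b*c          % must land on a multiple of 2, not the true value 1
```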
Darren Wethington on 26 Mar 2019
Wonderful point, thank you. Interestingly, det([a b; c d]) using these values generates a value of -1.25, so it must not be as straightforward as using the formula I used.
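That -1.25 is consistent with det taking an LU route rather than evaluating a*d - b*c directly; there the cancellation happens at scale 1e15, where the spacing of doubles is 0.125. A sketch (editorial; the sign depends on how the pivot tie is broken):

```matlab
% det factors the matrix first, so the cancellation occurs in forming
% U(2,2) = (1e15 + 0.1) - 1e15, at a scale where doubles are 0.125 apart.
a = 10; b = 1e15; c = 10; d = 1e15 + 0.1;
[L,U,P] = lu([a b; c d]);
det(P)*prod(diag(U))   % magnitude 1.25; sign depends on the pivoting
eps(1e15)              % 0.125: the resolution of the subtraction above
```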
John D'Errico on 26 Mar 2019
Another important point to make is that there are different ways a matrix can appear as numerically singular. For example, the matrix
A = [1e-20, -1e-20; 1, 1];
is not singular mathematically.
Yet both det and rcond will think it is poorly conditioned, even though really, it is just a problem of scaling. If you just multiply the first row by 1e20, things improve greatly in the eyes of either tool.
The lesson is that when row or column rescaling is appropriate in the context of how a matrix will be used, it is often a good idea. Such rescaling is equivalent to pre- or post-multiplying your matrix by a diagonal matrix, so it can be viewed as a simple change of units.
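To make the rescaling remark concrete (an editorial sketch, not part of the original comment):

```matlab
% Row rescaling is pre-multiplication by a diagonal matrix. Here it turns
% an apparently near-singular matrix into a well-conditioned one.
A = [1e-20, -1e-20; 1, 1];
D = diag([1e20, 1]);   % a "change of units" for the first row
rcond(A)               % on the order of 1e-20: looks numerically singular
rcond(D*A)             % D*A is essentially [1 -1; 1 1]: rcond is O(1)
```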
Other matrices are not so easily dealt with.
B = [1e20, 1e20 + 1;1, 1];
Again, neither det nor rcond will be happy with this matrix.
Simple rescaling of the rows of B will not help, as the fundamental problem with B lies in the limits of a double precision number.
But, now change the first row of B only slightly.
C = [3e15, 3e15 + 1;1, 1];
det(C)
ans =
-0.66613
det(sym(C))
ans =
-1
rcond(C)
ans =
3.7007e-32
The matrix C is still nearly as bad as it can possibly be in terms of using that matrix in double precision arithmetic, yet det suddenly would have you think all is good in the world. And the computed determinant is not even that far off from the symbolic one.
Not all problems can be solved the same way, but det is too risky to use for this purpose.
Anyway, thanks for the opportunity for an interesting discussion. :-)


Answer by Matt J on 25 Mar 2019

rcond is telling you how reliably you can solve a set of linear equations based on x; det() does not give you that kind of information. To get a sense of this, observe in the example below what a poor job the MATLAB linear solver does with your proposed matrix x when I add just a tiny bit of noise:
>> noise=1e-12*rand(2,1);
>> x\(x*[1;1]+noise)
ans =
0.9946
1.0000
>> inv(x)*(x*[1;1]+noise)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.
RCOND = 1.358900e-21.
ans =
0.9946
1.0000
Now contrast that with a different matrix with approximately the same determinant, but much better rcond,
>> x=[26,13;1,1]; det_x =det(x), rcond_x=rcond(x),
det_x =
13
rcond_x =
0.0123
>> inv(x)*(x*[1;1]+noise)
ans =
1.0000
1.0000

0 comments
