Accuracy of eig in support of complex step differentiation: Derivative estimate accuracy degrades when step gets too small

Consider the use of complex step differentiation to estimate the derivative of an eigenvalue of a real non-symmetric matrix, using eig in MATLAB double precision for the calculation. The traditional recommendation is to use step = 1e-20 * 1i. But perhaps extended precision calculation of eig is needed to support this?
I noticed that for steps smaller than about 1e-14 * 1i (or perhaps even less), the accuracy of the complex step differentiation estimate seems to degrade and become unstable. Is this due to an accuracy limitation of eig in MATLAB double precision? The matrix in question has various norms in the ballpark of 1. Furthermore, the same calculation showed greater degradation in MATLAB R2013a Win32 than in R2014a Win64. Is there any reason to think the accuracy of eig should differ between these configurations?
The calculation is carried out as follows, with d = step length (say 1e-14 or 1e-20). For simplicity, I'll show this for the maximum eigenvalue. Consider a real matrix A. Then, to compute the (i,j) component of the gradient of the maximum eigenvalue with respect to A:
B = A;                                       % A is real
B(i,j) = A(i,j) + d*1i;                      % purely imaginary perturbation of one entry
derivative_estimate = imag(max(eig(B)))/d    % d is real, so dividing inside or outside imag() is equivalent
Complex step differentiation is not "supposed to" have a problem with small step sizes; in fact, that is supposed to be its advantage over forward or central differences, whose accuracy is limited by subtractive cancellation below some optimal step size.
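For concreteness, here is a minimal self-contained sketch of the kind of test I mean. The matrix, the index (i,j), and the step sizes are arbitrary illustrative choices (I use a positive matrix so that the maximum-modulus eigenvalue is real and simple, by Perron-Frobenius):
% Sketch: complex step vs. central difference for d(max eig)/dA(i,j)
% as the step d shrinks. All choices below are illustrative.
rng(0);
n = 8;
A = rand(n);                  % positive entries => Perron eigenvalue is real and simple
i = 2; j = 5;
for d = 10.^(-4:-4:-20)
    B = A;  B(i,j)  = A(i,j) + d*1i;         % complex step
    cs = imag(max(eig(B)))/d;
    Bp = A; Bp(i,j) = A(i,j) + d;            % central difference
    Bm = A; Bm(i,j) = A(i,j) - d;
    cd = real(max(eig(Bp)) - max(eig(Bm)))/(2*d);
    fprintf('d = %8.1e   complex step %.15g   central diff %.15g\n', d, cs, cd);
end
For d = 1e-20, A(i,j) + d rounds back to A(i,j), so the central difference collapses to zero, while the complex step stores d exactly in the imaginary part.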
Thanks.
3 Comments
Matt J on 19 May 2015
Edited: Matt J on 19 May 2015
The traditional recommendation is to use step = 1e-20 * 1i. But perhaps extended precision calculation of eig is needed to support this?
Or extended precision in the addition operation,
B(i,j) = A(i,j) + d*1i;
It seems to me that, at the very least, the step magnitude needs to be chosen adaptively based on eps(A(i,j)). If d is less than that, for example, the operation above will merely give B = A due to floating point precision thresholds.
Mark Stone on 19 May 2015
Edited: Mark Stone on 19 May 2015
Matt,
I stated that the matrix A is real. Therefore, the machine precision limitation is not encountered when adding a small imaginary number: imag(B(i,j)) = d exactly, and there is no loss of precision in this addition. That's the only reason anyone would ever recommend a step magnitude of 1e-20 when calculations are being performed in double precision. Therefore, I believe the key matter here is the accuracy of eig, and what that implies about the smallest usable step size.
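A two-line check makes the distinction concrete (values arbitrary):
% Illustrative check: a tiny real addend is absorbed by rounding,
% but a tiny imaginary addend is stored exactly alongside the real part.
a = 0.3;  d = 1e-20;
(a + d) == a              % true: the real addition rounds d away
imag(a + d*1i) == d       % true: the imaginary part is exact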


Accepted Answer

Cleve Moler on 19 May 2015
I have to admit that I am unsure about what is going on here. Complex step differentiation relies on the unperturbed function values for real arguments being real. Then the complex perturbation causes the software to produce a complex result, so numerical differencing is not a concern. But eig of a real nonsymmetric matrix can be complex, and taking max just makes things worse.
Consider A = gallery(5). The exact eigenvalues are all zero, but eig(A) doesn't reveal that, and a complex step isn't any help. The eigenvalue condition numbers are crucial. For multiple eigenvalues without a full set of eigenvectors, such as gallery(5), the condition numbers are infinite. Taking a complex step, or any step, in the vicinity of a badly conditioned eigenvalue will have numerical difficulties. -- Cleve
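A quick check of the gallery(5) example (a sketch; the exact numbers will vary by release):
% The exact eigenvalues of gallery(5) are all zero, but the computed
% eigenvalues lie on a small circle in the complex plane, with magnitudes
% on the order of (eps*norm(A))^(1/5) -- nowhere near zero.
A = gallery(5);
e = eig(A)           % nonzero complex values
max(abs(e))          % roughly 1e-2, not 0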

More Answers (3)

Cleve Moler on 19 May 2015
But you should not have to resort to complex step differentiation, because an analytic expression for the derivative of an eigenvalue is known. If y and x are the left and right eigenvectors of a matrix A corresponding to a particular eigenvalue lambda, then
d(lambda) = (y'*d(A)*x)/(y'*x)
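As a sketch of how that formula can be applied to get the full gradient with respect to the entries of A (this uses the three-output eig syntax, which returns left eigenvectors only in newer releases; see the comments below about R2013a; the choice of which eigenvalue to track is illustrative):
% Sketch: gradient of one eigenvalue via dlambda = (y'*dA*x)/(y'*x).
[V, D, W] = eig(A);              % columns of W are left eigenvectors (newer releases)
[~, k] = max(real(diag(D)));     % pick an eigenvalue of interest (illustrative)
x = V(:,k);
y = W(:,k);
% For dA = e_i*e_j' the formula gives dlambda/dA(i,j) = conj(y(i))*x(j)/(y'*x),
% so the whole gradient matrix is an outer product:
G = conj(y) * x.' / (y' * x);    % G(i,j) = dlambda/dA(i,j)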
3 Comments
Cleve Moler on 20 May 2015
I pretty much agree with your assessment of the left eigenvector situation and don't have much more to offer. -- Cleve
Mark Stone on 21 May 2015
Edited: Mark Stone on 21 May 2015
Well, after additional consideration, I guess that
[V,D] = eig(A);
U = inv(V)';
will work, with U as a perfectly matched left eigenvector matrix, so long as V is nonsingular (i.e., A is not defective), so this is not really a general purpose approach.
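A quick sanity check of that identity (a sketch; it requires V nonsingular):
% With U = inv(V)', the columns of U are left eigenvectors: U'*A = D*U'.
[V, D] = eig(A);
U = inv(V)';
norm(U'*A - D*U')    % ~ eps*norm(A) when A is diagonalizable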
Prior to my accidental discovery that left eigenvectors are not supported by eig in R2013A, I had been thinking that they were supported since time immemorial, for instance back to when I first used the Fortran-based MATLAB as a student in early 1982. I guess this slipped through the cracks of my knowledge because I hadn't really started dealing with left eigenvectors until fairly recently.



Cleve Moler on 20 May 2015
I had forgotten about Nick Higham's comment after my blog post referencing his paper, "The complex step approximation to the Fréchet derivative of a matrix function", http://link.springer.com/article/10.1007/s11075-009-9323-y. Unfortunately, it's behind a Springer paywall. But I think it says that Mark is on the right track.
1 Comment
Mark Stone on 20 May 2015
The as-submitted (but not as-revised) tech report version of that paper is at http://eprints.ma.man.ac.uk/1260/01/covered/MIMS_ep2009_31.pdf .
It contains this interesting quote in the concluding remarks: "The main weakness of the CS approximation is that it is prone to damaging cancellation when the underlying method for evaluating f employs complex arithmetic. But for many algorithms, such as those based on real polynomial and rational approximations or matrix iterations, this is not a concern." So I guess this is what you were referring to about eig: its use of complex arithmetic makes it subject to non-trivial cancellation error at small step sizes, effectively limiting the minimum step size. I now have a deeper understanding of complex step differentiation and its limitations. Actually, in the scalar case anyway, as neat as it seems, I'm not really sure what the big attraction is, since, in the best case, it seems to amount essentially to automatic differentiation, which does not suffer from these little quirks. Of course, back before automatic differentiation (such as in 1967), it would have been a different story.



Matt J on 15 May 2015
I don't think eigenvalues are, in general, differentiable as a function of the matrix elements. However, even where they are, shouldn't your derivative estimate involve the difference eig(B) - eig(A)?
3 Comments
Matt J on 19 May 2015
Edited: Matt J on 19 May 2015
I should have stated that I was performing these calculations only for non-repeated eigenvalues.
And you're sure that the eigenvalues are non-repeated in modulus as well? Even if eig() is differentiable at non-repeated eigenvalues, the combined operation max(eig()) may not be. When the maximum modulus is attained by multiple (even if distinct) eigenvalues, the max() function will face an ambiguity on how to sort them.
In any case, it will probably be worth posting your tests so that we can reproduce what you are seeing. It defies my intuition a bit that any differentiation method can allow arbitrarily small step sizes, and certainly that the step needn't depend on the matrix A. The Taylor remainder term in the derivation of the method at the link you posted should depend on A, it seems to me.
Mark Stone on 20 May 2015
In the case I was looking at, the eigenvalues in question were in fact real and unique in magnitude. There was no tie or ambiguity in the ordering, even though there were ties among other eigenvalues. Such an eigenvalue is supposed to be infinitely differentiable.

