How to efficiently compute A^(-1)*(1-exp(-A*h))?

Wouter on 9 Dec 2019
Commented: Wouter on 11 Dec 2019
I'm experimenting with exponential integrators for ODEs and SDEs, and I try to avoid the built-in MATLAB solvers. As defined there, I need to compute the object A^(-1)*(1 - exp(-A*h)) efficiently, where
  • A is a large (up to 1e4 x 1e4) sparse, complex, and symmetric matrix.
  • h is the small numerical timestep, with typical value 1e-4.
What would be the best way to do so?
  • Explicit 'A\(eye(size(A)) - expm(-A*h))' takes about 2000 sec on my desktop (and I believe it is unable to handle singular A).
  • Diagonalizing ([V,D] = eig(full(A))) and working with 'V*D^(-1)*(I - exp(-D*h))*V^(-1)' is another approach. I tried it before and don't remember the computing time, but it won't be much faster than the first alternative. In this method, zero eigenvalues can also be regularized manually with l'Hôpital's rule.
Which approach is recommended? Is there perhaps a better alternative? Ideally, the approach should be able to handle singular A, but even without this feature, other approaches would be interesting.
I somehow have the feeling there must be a faster way, because the series expansion of the expression of interest only contains non-negative matrix powers, so there should be no need to invert matrices. It seems to me that the complexity of computing this object should inherently not be worse than that of expm(-A*h) alone.
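(For concreteness, and as my own addition for illustration: the expansion in question follows directly from the exponential series,
$A^{-1}\left(I - e^{-Ah}\right) = h I - \frac{h^2}{2!} A + \frac{h^3}{3!} A^2 - \dots = \sum_{k \ge 1} \frac{(-1)^{k+1} h^k}{k!} A^{k-1}$,
so only non-negative powers of A appear and no inverse is needed.)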
2 Comments
Steven Lord on 11 Dec 2019
I'm experimenting with exponential integrators for ODEs and SDEs, and try to avoid the built-in matlab solvers.
I'm curious why you're trying to avoid the built-in ODE solvers. What do they do that you don't need or want? Or do they not do what you need or want?
Wouter on 11 Dec 2019
Edited: Wouter on 11 Dec 2019
Hi, there are a few reasons.
  • Writing down the problem in the proper form to be an input argument to the solvers: I'm studying a system with a large number of (complex) variables. The most convenient way to represent them is as one vector of length N and two square NxN matrices (their complex conjugates also enter the equations). In terms of these matrices, I can write down the evolution in a quite readable way. Plain forward Euler worked well in the parameter regimes considered so far, but now I'm studying a case where there is some stiffness arising from particular linear terms. Luckily, these linear terms don't couple the different matrices, so that after a (:) of these matrices, the exponential part of the integration can be done separately. (Some further study suggests that the quadratic term in dt is actually sufficient.) I believe writing the problem in the right format for the MATLAB solvers would be hard and messy, because there are complex numbers and a large number of variables, but even more so because N is variable: I am interested in cases with different N up to 100, and I think properly stating the problem would alone need some kind of metaprogramming...
  • In the end, I'm doing SDEs with multiple complex noise processes, not ODEs. In many cases it might be possible to account for this noise by event handling in an ODE solver, but this seems somewhat hacky. I have some bad experience with using toolboxes with very fancy algorithms for a similar system (although not in MATLAB), so I prefer to keep it simple and have full control, at the expense of potential performance.
If you are interested: the equations considered are the ones on page two of the Supplemental Material of this paper (with \eta = 0).


Accepted Answer

Christine Tobler on 9 Dec 2019
For most sparse matrices expm(A) will be dense, so that should be expected to be expensive with a 1e4-by-1e4 matrix. If you are computing A^(-1)*(1-exp(-A*h)) with the goal of multiplying it with a vector, as I remember there are Krylov subspace methods for computing expm(A)*x directly without building the matrix.
To avoid the singularities, you may be able to use the Taylor expansion explicitly, in a function handle passed to funm. There is a helper function expm1(x) which computes exp(x) - 1 with smaller round-off error; however, there is no such helper for (exp(x)-1)/x. An alternative to a Taylor expansion would be to check for cases where x is close to zero and just modify the value for those cases.
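As an illustration only (not part of the original answer): a minimal sketch of the "modify the value near zero" idea via an eigendecomposition, assuming A is diagonalizable. The example matrix, the tolerance 1e-8, and the variable names are my own choices, standing in for the A and h from the question.

% Hypothetical stand-in data for the question's A (sparse tridiagonal) and h:
n = 500; h = 1e-4;
A = spdiags([ones(n,1), -2*ones(n,1), ones(n,1)], -1:1, n, n);

[V, D] = eig(full(A));          % dense eigendecomposition
d = diag(D);
p = -expm1(-h*d) ./ d;          % expm1(x) = exp(x)-1, so this is (1 - exp(-h*d))./d
p(abs(h*d) < 1e-8) = h;         % near-zero eigenvalues: substitute the limit value h
P = V * diag(p) / V;            % P approximates A^(-1)*(I - expm(-A*h))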
3 Comments
Christine Tobler on 11 Dec 2019
Dear Wouter,
Yes, the combination of function names expm and expm1 can be really confusing, unfortunately.
I'm wondering about the goal of keeping the resulting matrix sparse: for some special matrices it could be close to sparse, but in general I would expect there to be very few elements that are close to zero, just like usually happens for the inverse or the eigenvectors of a sparse matrix. So before spending too much time on dropping small elements to zero, I would suggest computing the value for a moderately-sized dense matrix similar to your example and checking how many elements are close to zero.
Using a Taylor expansion of the function (1-exp(-h*x))/x seems like a good approach; it will certainly remove the problems with singularities.
An excellent book on the topic of computing matrix functions is "Functions of Matrices: Theory and Computation" by N. J. Higham. You can also type "edit expm" or "edit funm" to see how these are implemented in MATLAB code.
Wouter on 11 Dec 2019
Dear Christine,
Thanks for your answer. The original matrix A is only tridiagonal, and the contributions of higher powers in the series expansion are suppressed because h is small, so remote diagonals also have exponentially small values here. The first term corresponds to the standard forward Euler method; adding the $O(h^2)$ contributions already solves my stiffness problem.
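(Added for illustration, my own restatement of the truncated-series idea above; A and h are the sparse tridiagonal matrix and the timestep from the question, e.g. as set up in the sketch under the accepted answer:)

% Truncated series: A^(-1)*(I - expm(-h*A)) is approximately h*I - (h^2/2)*A,
% which stays banded for tridiagonal A and needs no inversion or dense expm.
I = speye(size(A));
P = h*I - (h^2/2)*A;            % forward-Euler term plus the O(h^2) correction
% Higher-order terms can be appended if needed, e.g. + (h^3/6)*(A*A).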


More Answers (0)

Release

R2019a
