How to efficiently solve a matrix equation when the matrix is updated?

7 views (last 30 days)
Benson Gou, 31 January 2021
Edited: John D'Errico, 24 March 2021
Dear All,
I need to solve a matrix linear equation, Ax = b. I know the fastest way to solve for x is x = A\b. My question is: for an updated matrix A1 = A + deltaA, how can I make use of x = A\b to solve the updated equation A1*x = b? The size of A is 5000 x 5000, and deltaA has very few nonzero elements.
Thanks a lot.
Benson
7 comments
Benson Gou, 2 February 2021
Hi, Stephen,
Thanks for your suggestions. I tried linsolve. For Bruno's test case, it took 0.92 s while \ took 0.98 s. It really is faster. Thanks a lot again.
Benson
Benson Gou, 4 March 2021
Hi, Stephen,
I found that linsolve cannot handle sparse matrices; I have to convert my sparse matrix into a full matrix first, and then linsolve is much slower than \ for sparse matrices.
Do you know how to handle this issue?
Thanks a lot.
Benson


Answers (1)

John D'Errico, 24 March 2021
Edited: 24 March 2021
The problem is, it very much depends. If you can write the change in A as a rank-1 update, AND you write the code well, then you can probably gain by using the infamous Sherman-Morrison formula. It will depend on the sparsity of the problem. Updating a sparse matrix in place can itself be slow, so that alone can be a significant cost in time if new nonzero elements are created.
However, you state that the changes are few, but you do NOT state that they can be written in the form of a rank-1 update. If that is not the case, then you won't gain much from a rank-1 update formula; you would instead need Sherman-Morrison-Woodbury, a generalization (see the sketch below). Where the break-even point in efficiency lies is difficult to know. Again, it will be problem dependent: it will depend on the sparsity of your matrices, and on their structure.
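For concreteness, here is a minimal sketch of the rank-1 case, assuming the change can be written as the outer product deltaA = u*v' (the vectors u and v and the test matrix here are hypothetical, purely for illustration):
n = 5000;
A = speye(n) + sprandn(n, n, 1e-3);   % hypothetical sparse test matrix
b = rand(n, 1);
u = zeros(n, 1); u(17) = 2;           % assumed rank-1 change: deltaA = u*v'
v = zeros(n, 1); v(42) = 1;
dA = decomposition(A);                % factor A once, reuse it for every solve
y = dA \ b;                           % solution for the ORIGINAL matrix
z = dA \ u;                           % one extra solve in the update direction
x = y - z * ((v'*y) / (1 + v'*z));    % Sherman-Morrison: x solves (A + u*v')*x = b
Both solves reuse the same factorization of A; only the triangular solves are repeated, never the factorization itself. The Woodbury form is the same idea with u and v replaced by n-by-k matrices, and a small k-by-k solve in place of the scalar denominator.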
You may well be better off NOT computing the matrix inverse at all, instead computing a factorization of the matrix.
For example, consider this matrix:
n = 1000;
A = sparse(eye(n) + diag(rand(n-1,1),1) + diag(rand(n-1,1),-1));  % a sparse tridiagonal test matrix
A is tridiagonal. But the inverse?
spy(inv(A))
The inverse of A is full. And that will make it very slow to compute for large matrices.
However, if we look at an LU factorization of A, both factors are very well behaved, and still very sparse.
[L,U] = lu(A);
spy(L)
spy(U)
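To quantify what the spy plots show (my addition; the exact counts will vary with the random data):
nnz(A)            % roughly 3*n nonzeros: the tridiagonal matrix is very sparse
nnz(inv(A))       % roughly n^2 nonzeros: the inverse is essentially full
nnz(L) + nnz(U)   % the triangular factors stay very sparse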
It is almost always the case that you should at least consider NOT computing a matrix inverse, as there are almost always better alternatives.
timeit(@() inv(A))
ans = 0.0136
timeit(@() lu(A))
ans = 1.8138e-04
And even for a matrix as small as 1Kx1K, the LU is far faster. The greater speed of the sparse LU may make it fast enough that you won't even bother with an update formula at all. And for much of what you may do that seems to need the matrix inverse, you can use an LU factorization just as easily, as the sketch below shows. (We have not been told why you THINK you need the inverse, only that you do. Many times, the real solution to a problem is understanding why you think you need a particular tool, so that someone can point you to the better tools out there.)
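For example, solving A*x = b from the factors takes only two triangular solves (a sketch; for sparse matrices the four-output form of lu also returns fill-reducing permutations P and Q with P*A*Q = L*U):
[L,U,P,Q] = lu(A);           % P*A*Q = L*U for sparse A
b = rand(n, 1);              % hypothetical right-hand side
x = Q * (U \ (L \ (P*b)));   % no inverse is ever formed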
My point is: while you may think you need to compute the inverse, understanding the linear algebra will, in many cases, let you avoid computing it at all.
