What is the space complexity of the built-in eigs function in MATLAB?
16 views (last 30 days)
Hi Everyone,
I am computing the dominant eigenvalues and eigenvectors with eigs(). The matrix is very large, so eigs(fun, N) is used, where fun(x) returns A*x. The vector length is nearly 24,000,000, about 200 MB of memory per vector. However, after 3 iterations the job crashes on the cluster, with an error message indicating that it exceeds the memory limit (2 GB).
I want to know the space complexity of eigs(). Is it linear in the vector size N, and does the memory usage grow linearly with the number of iteration steps? For reference, the call pattern looks roughly like the sketch below.
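Rough sketch of the call (applyA is a placeholder for the actual A*x routine; values are illustrative):
N = 24000000;              % vector length, roughly 200 MB per double-precision vector
fun = @(x) applyA(x);      % returns A*x without forming A explicitly
d = eigs(fun, N, 1);       % dominant (largest-magnitude) eigenvalue by default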
Thanks
0 Comments
Accepted Answer
Christine Tobler
26 Feb 2018
The largest memory requirement for eigs is for an array that stores a subspace of vectors, which has size(A, 1) rows and a number of columns automatically determined based on the requested number of eigenvalues. You can set this number of columns directly using the 'SubspaceDimension' Name-Value pair in MATLAB R2017b, or using the p field of the options structure in older versions.
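As a rough sketch of the memory implications (assuming the R2017b default 'SubspaceDimension' of max(2*k, 20) for k requested eigenvalues, and double-precision storage), the subspace array alone for the problem size in the question already exceeds a 2 GB limit:
n = 24e6;                % rows of A (the vector length from the question)
p = 20;                  % default subspace dimension when k = 1
bytes = n * p * 8;       % 8 bytes per double-precision entry
fprintf('Subspace array: about %.1f GB\n', bytes / 1e9);   % roughly 3.8 GB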
After allocating this array, there should be only small amounts of extra memory necessary in each iteration. I would typically expect eigs to go out of memory when allocating the workspace, not later in the middle of the iteration. Is it possible that your function handle requires extra memory every time it is applied?
If you have MATLAB R2017b, could you run eigs with the 'Display' option turned on, and post the results here? This gives some additional information about which algorithm was used.
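A minimal sketch of both interfaces, assuming Afun and N as in the question (the values here are illustrative, not recommendations):
% R2017b and newer, name-value interface:
d = eigs(Afun, N, 1, 'largestabs', 'SubspaceDimension', 10, 'Display', true);
% Older versions, options structure:
opts.p = 10;       % subspace dimension
opts.disp = 1;     % print diagnostic output
d = eigs(Afun, N, 1, 'lm', opts);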
2 Comments
Christine Tobler
27 Feb 2018
Great to hear that you could work around the problem.
In the newest version (R2017b), the display has been changed to give some more information about the algorithm than is shown here - happily, that won't be needed, since you have already fixed the problem.
More Answers (1)
Chandani Madnani
26 Feb 2018
We don't generally document the order of complexity of MATLAB functions, and for eigs we don't think it is possible to give a simple order of complexity. For eig, the complexity is O(n^3), but we cannot guarantee that eigs stays within O(n^3).
The main problem is that eigs uses an iterative method, which is supposed to decrease the residual of the eigenvalues and eigenvectors in each step, until a tolerance tol is reached. So the computational complexity depends on the convergence speed of the method.
The basic Arnoldi method for computing the largest eigenvalues of A*x = lambda*x has a cost of O(n*p^2 + p^3) + p*[cost of A*x], where p is the subspace dimension. How large p needs to be depends on the convergence speed, which in turn depends on the relative gaps between the eigenvalues; if p needs to be large, this typically becomes prohibitively expensive because of the O(p^3) term. In eigs, we use the "implicitly restarted Arnoldi" method as implemented in ARPACK, which uses several tricks that make the algorithm faster in the general case, but also make its convergence behavior and computational complexity much more difficult to analyze.
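As a back-of-the-envelope illustration of how the cost terms scale with the subspace dimension p (purely illustrative numbers):
n = 24e6;                        % problem size from the question
for p = [20 100 500]
    fprintf('p = %3d: n*p^2 = %.1e, p^3 = %.1e\n', p, n*p^2, p^3);
end
% Restarting keeps p small, at the price of extra A*x applications.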
Additionally, if the smallest-magnitude eigenvalues are requested, UMFPACK is used to factorize A, which is O(n^3) in the general case; you would need to know the sparsity structure of A to establish a smaller bound.
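If A is actually available as a sparse matrix (not only through a function handle), one common pattern (a sketch, not the only option) is to factorize once and pass eigs a handle that applies the solve; for a function-handle input with 'smallestabs', eigs expects the handle to return A\x:
dA   = decomposition(A);             % reusable factorization object (R2017b and newer)
Afun = @(x) dA \ x;                  % for 'smallestabs', the handle must return A\x
d    = eigs(Afun, size(A,1), 3, 'smallestabs');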
If you want to investigate further, the following two papers are relevant:
[1] Lehoucq, R.B. and D.C. Sorensen, "Deflation Techniques for an Implicitly Re-Started Arnoldi Iteration," SIAM J. Matrix Analysis and Applications, Vol. 17, 1996, pp. 789–821.
[2] Sorensen, D.C., "Implicit Application of Polynomial Filters in a k-Step Arnoldi Method," SIAM J. Matrix Analysis and Applications, Vol. 13, 1992, pp. 357–385.
There is also the ARPACK and UMFPACK documentation, which should be available online.
0 Comments