
How to make FMINUNC work faster

10 views (last 30 days)
Karthik on 8 Jan 2015
Edited: Matt J on 11 Jan 2015
Hello, is there any way to make the FMINUNC program run faster than normal? I am using the following command.
c(:,i) = fminunc(@(c) (chi1(c, K0_com)), cint, options);
chi1 is the objective function, defined in a function file. Thanks.

Accepted Answer

Matt J on 8 Jan 2015
Edited: Matt J on 8 Jan 2015
There are lots of potential ways, but only broad recommendations are possible given the minimal info provided about your problem. You could use the 'UseParallel' option in conjunction with the Parallel Computing Toolbox, if you have it. You could also supply your own calculation of the gradient and Hessian with the 'GradObj' and, if applicable, the 'Hessian' options. Not only can this speed convergence, but you can often recycle intermediate quantities from your objective-function calculation, making the derivative calculations more efficient than the default finite-differencing method.
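For example, here is a minimal sketch of supplying an analytical gradient. The least-squares objective myObj and the data A, b are illustrative assumptions, not your chi1; 'GradObj','on' tells fminunc that the objective's second output is the gradient.

% myObj.m -- illustrative objective that also returns its gradient
function [f, g] = myObj(x, A, b)
r = A*x - b;              % residual, computed once and reused
f = 0.5*(r.'*r);          % objective value
if nargout > 1
    g = A.'*r;            % analytical gradient, recycled from the same residual
end
end

% driver script
options = optimoptions('fminunc', 'Algorithm','trust-region', 'GradObj','on');
x = fminunc(@(x) myObj(x, A, b), x0, options);

With the trust-region algorithm you can additionally return the Hessian as a third output and set the 'Hessian' option to 'on', though whether that pays off depends on the problem.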
4 Comments
John D'Errico on 9 Jan 2015
Edited: John D'Errico on 9 Jan 2015
Often, supplying a gradient is as computationally expensive as it would cost fminunc to estimate that same gradient, and it often gives little gain in accuracy. This is not always the case, but you should look at that gradient and do some timing tests. Does one gradient call cost roughly as much as n calls to the basic objective function? If so, you may be getting no gain. The point is: do some timing tests. Too many people simply assume that because they supply the analytical gradient, the solver will run faster and more accurately. That is not always true.
Far more likely to give a gain is optimizing the function itself, or finding better starting values. If a better start point cuts the number of function evaluations, you come out ahead.
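A rough way to run the timing test suggested above (fun, gradfun, and x0 below stand in for your own objective, your analytical gradient, and your start point):

n = numel(x0);                      % number of variables
t_f = timeit(@() fun(x0));          % cost of one objective evaluation
t_g = timeit(@() gradfun(x0));      % cost of one analytical gradient call
fprintf('one gradient call: %.3g s, ~n objective calls: %.3g s\n', t_g, n*t_f);
% if the two numbers are comparable, the analytical gradient buys little speed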
Matt J on 9 Jan 2015
Edited: Matt J on 11 Jan 2015
That doesn't look right. In your grad and hess calculation, you are pretending that G is constant, independent of x, and that fun() is therefore quadratic. In fact, however, G depends on x through Hv. Moreover, the dependence is non-differentiable, since the heaviside function is non-differentiable. That's a problem, I'm afraid. FMINUNC is a derivative-based solver, so the function has to be totally differentiable.
In any case, there are things you can do to vectorize the code better, e.g.,
K0_com_t = K0_com.';          % avoid repeated transposition
Hv = heaviside(K0_com_t*x);   % step weights, reusing the transposed matrix
Also, instead of creating the really big matrix diag(Hv), you could do
G = bsxfun(@times, K0_com, Hv(:).')*K0_com_t;   % same as K0_com*diag(Hv)*K0_com_t
Finally, you should be using speye() instead of eye().
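A quick sanity check of that equivalence (the sizes and the 0/1 weight vector below are made-up stand-ins for the real K0_com and heaviside(K0_com.'*x)):

m = 2000;  n = 500;                              % made-up sizes
K0_com = randn(m, n);
Hv = double(rand(n, 1) > 0.5);                   % stand-in for heaviside(K0_com.'*x)
K0_com_t = K0_com.';
G1 = K0_com*diag(Hv)*K0_com_t;                   % forms the big n-by-n diag(Hv)
G2 = bsxfun(@times, K0_com, Hv(:).')*K0_com_t;   % same result, no diag(Hv)
max(abs(G1(:) - G2(:)))                          % agreement up to round-off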


More Answers (0)
