How to prevent or catch errors in fmincon while satisfying nonlinear constraints?

Chris Baker on 20 Dec 2016
Edited: Matt J on 22 Dec 2016
I am performing a minimisation using fmincon. I am minimising the sum of squared errors between a pricing surface and market prices. The parameters that fmincon varies form a correlation matrix, so I require not only that the elements lie between -1 and 1, but also that the matrix is positive definite, i.e. all its eigenvalues are positive. I have imposed the positive-definiteness constraint via the `nonlcon` argument of fmincon, but I suspect that fmincon perturbs the parameters (to estimate gradients) in such a way that the resulting correlation matrix is no longer positive definite, and then my code errors out.
x0 = 0.5.*ones(3,1);
A = [];
b = [];
Aeq = [];
beq = [];
lb = -1.*ones(size(x0));
ub = ones(size(x0));
soln = fmincon(@(x)f(x),x0,A,b,Aeq,beq,lb,ub,@mycon,[]);
The function f will involve the following (which is where I am erroring out):
corr_mat = zeros(3);
corr_mat(1,:) = [0 x(1) x(2)];
corr_mat(2,:) = [0 0 x(3)];
corr_mat = corr_mat + corr_mat.' + eye(3);
chol_decomp = chol(corr_mat,'upper');
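One way to make the objective robust to non-positive-definite trial points is to use chol's second output, which flags failure instead of throwing an error. A sketch of that guard, assuming the 3x3 setup above (the penalty value 1e10 is an arbitrary choice, not from the original post):

```matlab
% Sketch: guard the Cholesky step with chol's second output.
% If p > 0, corr_mat is not positive definite; return a large
% penalty instead of letting chol() error out.
corr_mat = zeros(3);
corr_mat(1,:) = [0 x(1) x(2)];
corr_mat(2,:) = [0 0 x(3)];
corr_mat = corr_mat + corr_mat.' + eye(3);
[chol_decomp, p] = chol(corr_mat, 'upper');
if p > 0
    fval = 1e10;   % arbitrary large penalty; tune for your problem
    return
end
```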
My constraint is as follows:
function [c,ceq] = mycon(x)
corr_mat = zeros(4);
corr_mat(1,:) = [0, x(1), x(2), x(3)];
corr_mat(2,:) = [0, 0, 0.0275, x(4)];
corr_mat(3,:) = [0, 0, 0, x(5)]; %vd
corr_mat = corr_mat + corr_mat.' + eye(4);
c = -eig(corr_mat); % require all eigenvalues >= 0 (fmincon enforces c <= 0)
ceq = [];
end
Any help is appreciated.

Answers (2)

Matt J on 20 Dec 2016
Edited: Matt J on 21 Dec 2016
I don't think you can impose constraints on the eigenvalues explicitly, because eig() is not (I believe) differentiable. I also don't think you can count on the ordering of eig's output to be continuous in x.
I would instead parametrize the Cholesky decomposition directly and constrain L.'*L to have the form you want:
function [c,ceq] = mycon(x)
L = [x(1), x(2), x(3), x(4);
     0,    x(5), x(6), x(7);
     0,    0,    x(8), x(9);
     0,    0,    0,    x(10)];
corr_mat = L.'*L; % positive semidefinite by construction (L is the Cholesky factor)
ceq = [diag(corr_mat)-1; corr_mat(2,3)-0.0275];
c = [];
end
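If you adopt this parametrization, the driver changes too: x now has 10 elements (the entries of the upper-triangular factor), and the element-wise bounds on the correlations no longer apply directly. A hypothetical sketch of the corresponding call, where x0 and the objective wrapper f_chol are illustrative names not in the answer above:

```matlab
% Hypothetical driver for the Cholesky-factor parametrization.
% Start from L = eye(4), i.e. corr_mat = identity: ones in the
% positions of x that map to the diagonal of L, zeros elsewhere.
x0 = [1; 0; 0; 0; 1; 0; 0; 1; 0; 1];
soln = fmincon(@(x) f_chol(x), x0, [],[],[],[],[],[], @mycon);

% Rebuild the correlation matrix from the solution:
L = [soln(1), soln(2), soln(3), soln(4);
     0,       soln(5), soln(6), soln(7);
     0,       0,       soln(8), soln(9);
     0,       0,       0,       soln(10)];
corr_mat = L.'*L;   % positive semidefinite by construction
```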
4 comments
Walter Roberson on 21 Dec 2016
"Suppose nonlinear constraints are not satisfied, and an attempted step causes the constraint violation to grow. The sqp algorithm attempts to obtain feasibility using a second-order approximation to the constraints. The second-order technique can lead to a feasible solution. However, this technique can slow the solution by requiring more evaluations of the nonlinear constraint functions. "
This does not require differentiability of the nonlinear constraints, since they are being approximated anyhow.
"If either the objective function or a nonlinear constraint function returns a complex value, NaN, Inf, or an error at an iterate xj, the algorithm rejects xj. The rejection has the same effect as if the merit function did not decrease sufficiently: the algorithm then attempts a different, shorter step. "
Again this does not require differentiability.
Matt J on 21 Dec 2016
Edited: Matt J on 22 Dec 2016
For SQP, it is shown in equation (6-30) that you need the gradients of the nonlinear constraints g_i(x) to set up the QP sub-problem. The constraint gradients are also used in equation (6-48) to initialize penalty parameters. Also, the excerpt you have quoted refers to "a second-order approximation" to the constraints. This is presumably a 2nd order Taylor approximation, so derivatives of the constraints would need to be evaluated to compute that.
The section "Barrier Function" that you have cited is in reference to fmincon's interior point algorithm. The quote talks about how the algorithm shortens its step magnitude if it hits NaNs or Infs. However, the calculation of the step direction is discussed a few paragraphs further down (under Direct Step and Conjugate Gradient Step). Expressions involving gradients and Hessians of the constraints g_i(x) abound in those calculations.
Finally, remember that, in addition to making iterative steps, the Optimization Toolbox solvers need to evaluate stopping criteria like the First Order Optimality Measure to know when to stop. Equation (3-7) at this link shows that these depend on derivative calculations of both the objective and the nonlinear constraints.



Walter Roberson on 20 Dec 2016
If I recall correctly, during the initial gradient estimation phase, fmincon() ignores the nonlinear constraint function -- it simply does not call it.
You can put a try/catch into your routine and return a large value.
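That try/catch suggestion could look like this inside the objective (a sketch only; the penalty constant is arbitrary and sum_squared_errors is a placeholder for the poster's actual pricing-error computation):

```matlab
function fval = f(x)
    corr_mat = zeros(3);
    corr_mat(1,:) = [0 x(1) x(2)];
    corr_mat(2,:) = [0 0 x(3)];
    corr_mat = corr_mat + corr_mat.' + eye(3);
    try
        chol_decomp = chol(corr_mat, 'upper');
        % ... compute the sum of squared pricing errors here ...
        fval = sum_squared_errors(chol_decomp);   % placeholder name
    catch
        fval = 1e10;   % arbitrary large value so fmincon shortens the step
    end
end
```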
