# fmincon with sqp algorithm but without including a gradient

92 views (last 30 days)
Ali Aghaeifar on 28 Feb 2019

I used fmincon with the sqp algorithm to optimize a constrained nonlinear problem. However, I did not include a gradient evaluation in the objective function (it is not possible to calculate one).
I am wondering whether sqp needs the gradient at all, or whether the gradient is estimated numerically inside the solver (and if so, which method is used to estimate it?).
I have already read the description provided by MATLAB, but I am still not sure what is happening with sqp.


### Accepted Answer

John D'Errico on 28 Feb 2019

SQP does use the gradient, but it is not necessary to provide one. When no gradient is provided, a finite difference approximation is often sufficiently good, although not always. We can even verify what fmincon does, like this:
```matlab
opts = optimset('fmincon');
opts.Algorithm = 'sqp';
```
And now a really simple objective function:
```matlab
function y = MySQPTestFun(X)
disp(X)
y = abs(X - 2).^3*[1 2 3]';
end
```
So, a simple quasi-parabolic shape, with elliptical contours. The min will lie at [2 2 2] of course. My start point will be [3 4 5].
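As a cross-check, here is the same objective written in Python with NumPy (a hypothetical helper name, just for illustration). It confirms the value 0 at the minimizer [2 2 2] and 98 at the start point [3 4 5]:

```python
import numpy as np

def my_sqp_test_fun(x):
    """Python equivalent of MySQPTestFun: weighted sum of |x_i - 2|^3."""
    x = np.asarray(x, dtype=float)
    return np.abs(x - 2.0) ** 3 @ np.array([1.0, 2.0, 3.0])

print(my_sqp_test_fun([2, 2, 2]))  # 0.0 at the minimizer
print(my_sqp_test_fun([3, 4, 5]))  # 98.0 = 1*1 + 2*8 + 3*27 at the start point
```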
Now watch the iterations, and think about what fmincon is doing at each objective call.
```matlab
format long g
[X,fval] = fmincon(@MySQPTestFun,[3 4 5],[],[],[],[],[],[],[],opts)
```
```
3 4 5
3.00000004470348 4 5
3 4.00000005960464 5
3 4 5.00000007450581
```
So the first 4 calls are a finite difference approximation to the gradient. fmincon gets the objective at the start point. Then it tweaks each unknown by a small amount: a simple forward finite difference approximation for each partial derivative.
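The perturbations printed above are consistent with a relative step of sqrt(eps) (about 1.49e-8) times each coordinate, the classic forward-difference choice. A small Python sketch of that scheme (an illustration of the idea read off from the printed points, not fmincon's internal code):

```python
import numpy as np

def forward_diff_grad(f, x):
    """Forward finite differences with relative step sqrt(eps) * max(|x_i|, 1)."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        h = np.sqrt(np.finfo(float).eps) * max(abs(x[i]), 1.0)
        xh = x.copy()
        xh[i] += h                 # perturb one coordinate, as in the trace
        g[i] = (f(xh) - f0) / h
    return g

def f(x):
    return np.abs(np.asarray(x, dtype=float) - 2.0) ** 3 @ np.array([1.0, 2.0, 3.0])

# The analytic gradient at [3 4 5] is [3, 24, 81]; the approximation agrees closely.
print(forward_diff_grad(f, [3.0, 4.0, 5.0]))
```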
Given that information, now we see a line search.
```
0 -20.0000007152557 -76.0000019073486
0.9 -12.800000500679 -51.700001335144
1.53 -7.76000035047531 -34.6900009346008
1.971 -4.23200024533272 -22.7830006542206
2.2797 -1.7624001717329 -14.4481004579544
2.49579 -0.0336801202130301 -8.61367032056808
2.647053 1.17642391585088 -4.52956922439765
2.7529371 2.02349674109562 -1.67069845707836
2.82705597 2.61644771876693 0.33051108004515
```
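The trial points above are x0 + t*d with d = -[3 24 81] (the negative finite-difference gradient) and t = 1, 0.7, 0.49, ...: a backtracking line search with contraction factor 0.7. A minimal Python sketch of that pattern (the 0.7 factor is read off the trace, and the plain decrease test here is a simplification; fmincon uses a more sophisticated acceptance criterion):

```python
import numpy as np

def f(x):
    return np.abs(np.asarray(x, dtype=float) - 2.0) ** 3 @ np.array([1.0, 2.0, 3.0])

def backtracking(f, x, d, contraction=0.7, max_trials=20):
    """Shrink the step by `contraction` until the objective decreases."""
    f0 = f(x)
    t = 1.0
    for _ in range(max_trials):
        trial = x + t * d
        print(trial)            # mirrors the disp(X) output in the answer
        if f(trial) < f0:       # simple decrease test (a stand-in for the
            return trial        # solver's sufficient-decrease condition)
        t *= contraction
    return x

x0 = np.array([3.0, 4.0, 5.0])
d = -np.array([3.0, 24.0, 81.0])   # negative finite-difference gradient
x1 = backtracking(f, x0, d)        # accepted near [2.827, 2.616, 0.331]
```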
It looks like fmincon has done as much as it can given the chosen search direction. What now?
```
2.82705601212642 2.61644771876693 0.33051108004515
2.82705597 2.61644775775504 0.33051108004515
2.82705597 2.61644771876693 0.330511094946311
0.582939009207905 -4.25615707365741 2.82873455774357
1.25617409744553 -2.19437563593011 2.07926751443404
1.72743865921187 -0.751128629520994 1.55464058411737
2.05732385244831 0.259144274965383 1.18740173289571
2.05732388310483 0.259144274965383 1.18740173289571
2.05732385244831 0.259144289866545 1.18740173289571
2.05732385244831 0.259144274965383 1.18740175058937
0.513678914322123 2.68874883027502 1.6472208919233
0.513678929223284 2.68874883027502 1.6472208919233
0.513678914322123 2.6887488703405 1.6472208919233
0.513678914322123 2.68874883027502 1.64722091646881
...
2.0003348451581 2.00030710570102 1.99991255777073
```
Eventually, it arrives at/near the point we all knew it should find in the first place. Dumb computers. It could have just asked me.
So as I said, fmincon just does a finite difference gradient approximation. You can see that happening, if you look carefully enough.
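For anyone replicating the experiment without MATLAB, SciPy's SLSQP solver behaves analogously: when no `jac` argument is supplied, `scipy.optimize.minimize` estimates the gradient by finite differences internally. A short sketch (assuming SciPy is available):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # same objective as MySQPTestFun
    return np.abs(np.asarray(x, dtype=float) - 2.0) ** 3 @ np.array([1.0, 2.0, 3.0])

# No jac= argument: SLSQP falls back to a finite-difference gradient.
res = minimize(f, x0=[3.0, 4.0, 5.0], method="SLSQP")
print(res.x)   # close to [2 2 2], the known minimizer
```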


### More Answers (1)

Alan Weiss on 28 Feb 2019
See Benefits of Including Derivatives in the documentation.
Alan Weiss
MATLAB mathematical toolbox documentation


R2018a
