(OPTIMIZATION) Initial point is a local minimum - PROBLEM!!

Daniel Valencia, 3 October 2021
Commented: Alan Weiss, 21 October 2021
Greetings everybody,
My code fits two curves (one from calculated values and the other from laboratory data) by optimizing one parameter. The problem is that the result is always the same:
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the value of the optimality tolerance.
<stopping criteria details>
Optimization completed: The final point is the initial point.
The first-order optimality measure, 0.000000e+00, is less than
options.OptimalityTolerance = 1.000000e-05.
After that result, the parameter never changes, so the optimization never completes and the curves never fit, and I don't really know what to do. A summary of the code follows (I have tried both Trust-Region-Reflective and Levenberg-Marquardt, but the results are the same):
%% Optimization command
x0=0.20; %% Initial value of the parameters [u, X, br]
% Vectors of lower and upper bounds on the design variables in x. Then, lb <= x <= ub
% For Levenberg-Marquardt algorithm use lb=[-inf] and ub=[inf]
lb=-inf;
ub=inf;
% Optimization options: the first (commented out) selects Levenberg-Marquardt;
% the second uses the default Trust-Region-Reflective
% options=optimset('Algorithm','levenberg-marquardt', 'LargeScale','off', 'DiffMaxChange',0.01, 'DiffMinChange',0.0001, 'TolFun',1e-5, 'TolX',0.001);
options=optimset('LargeScale','off', 'DiffMaxChange',0.01, 'DiffMinChange',0.0001, 'TolFun',1e-5, 'TolX',0.001);
[x,resnorm,residual,exitflag,output,lambda,jacobian]=lsqnonlin(@HSfun,x0,lb,ub,options);
%% Show Results and Optimization Details
dim=1; % make sure to use the same value as in the fun file
X=x.*dim;
resnorm   % squared 2-norm of the residual
exitflag  % reason the solver stopped
output    % details of the optimization
HSfun takes the two vectors (the curves mentioned above) and computes their difference, the residual that lsqnonlin tries to minimize:
%% Read Calculated raw data from CalculatedData.xls file
calc_sh_strain=xlsread('C:\Users\Afuribeh\Desktop\UNAL-MED-DFV\U TXC TEST - Only u\CalculatedData.xls','C46:C235')*100; % strain in percentage
calc_q=xlsread('C:\Users\Afuribeh\Desktop\UNAL-MED-DFV\U TXC TEST - Only u\CalculatedData.xls','D46:D235'); % calc q
%% Interpolate Calculated data to get values at same strain as observed data
sh_strain_transp=transpose(lab_sh_strain); % same strain values as the observed data
q_calculated=interp1(calc_sh_strain,calc_q,sh_strain_transp,'spline');
%% calculate the difference between observed and calculated response
F=lab_q-q_calculated';
I would really appreciate it if somebody could help me; maybe I'm not using the best optimization tool, or my stopping criteria are not correct.
Thanks in advance.

Accepted Answer

Alan Weiss, 4 October 2021
I suggest that you plot the square of the objective function over a reasonable range of the parameter and over a small range such as from 1 to 1.5. It is possible that the objective function is either flat or stair-stepped, meaning locally flat. In either case you will not get a good solution using lsqnonlin. If the objective function is stair-stepped, use fminbnd on the square of the function as the optimizer, recognizing that there are many local minima. If the objective function is flat, well, you'll have to debug it.
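A minimal sketch of that diagnostic, assuming HSfun is on the path and returns the residual vector (the coarse range below is a placeholder; adjust it to whatever is physically reasonable for your parameter):
% Plot the sum of squares of HSfun over a coarse and a fine parameter range
% to see whether it is flat or stair-stepped (ranges are illustrative)
ssq = @(x) sum(HSfun(x).^2);      % scalar sum-of-squares objective
xcoarse = linspace(0.01,2,200);   % "reasonable" range (placeholder)
xfine = linspace(1,1.5,200);      % small range suggested above
subplot(2,1,1), plot(xcoarse,arrayfun(ssq,xcoarse)), title('Coarse range')
subplot(2,1,2), plot(xfine,arrayfun(ssq,xfine)), title('Range 1 to 1.5')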
Alan Weiss
MATLAB mathematical toolbox documentation
  16 Comments
Daniel Valencia, 21 October 2021
Greetings Mr. Weiss,
The result of the evaluation you asked for is D1 = 0 and D2 different from zero, so our function is stair-stepped.
How should this be interpreted?
Alan Weiss, 21 October 2021
Finally, we understand what is happening. Your objective function does not change at all when your optimization parameters change by a tiny amount. That is what D1 = 0 means.
The way that lsqnonlin works (in one dimension; higher dimensions are analogous) is that it starts at x0 and evaluates F(x0). It then adds a tiny amount delta, evaluates F(x0+delta), and computes (F(x0+delta) - F(x0))/delta, which is a finite-difference estimate of the gradient of the function. If this estimate is exactly zero, which you have now established, then the solver cannot move: it concludes that the function is flat near x0, because every function value it samples near x0 is the same.
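For reference, a sketch of the check that produced D1 and D2 earlier in this thread (the step sizes and the sum-of-squares form are my assumptions; the full exchange is not shown here):
% Forward-difference estimates at x0 with two step sizes
x0 = 0.20;
delta1 = sqrt(eps);   % tiny step, comparable to the solver's default
delta2 = 1e-3;        % much larger step
F0 = sum(HSfun(x0).^2);
D1 = (sum(HSfun(x0+delta1).^2) - F0)/delta1   % 0 => locally flat
D2 = (sum(HSfun(x0+delta2).^2) - F0)/delta2   % nonzero => stair-stepped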
How can this be? It is a typical result of a simulation; many simulations are not sensitive to small changes in the evaluation point. It also arises in many other circumstances. I don't know why it is happening with your objective function, but it is what I suspected from the start because it happens in many circumstances.
The point is, what can you do about it? You can use a different solver such as patternsearch from Global Optimization Toolbox. In that case you will have to change your objective function to an explicit sum of squares:
fun = @(x)sum(HSfun(x).^2,'all');
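A sketch of the corresponding call, reusing x0, lb, and ub from the question (the empty matrices skip the unused linear constraints):
% patternsearch does not rely on gradient estimates, so a locally flat
% (stair-stepped) objective does not stall it the way it stalls lsqnonlin
[x,fval] = patternsearch(fun,x0,[],[],[],[],lb,ub);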
Another thing you can try is to set the finite difference step size to a larger-than-default value, such as
options = optimoptions('lsqnonlin','FiniteDifferenceStepSize',1e-3);
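and then pass the options to the same lsqnonlin call as in the question, for example:
% A larger step lets the finite differences see past the flat spots
[x,resnorm,residual,exitflag] = lsqnonlin(@HSfun,x0,lb,ub,options);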
For a discussion of all this, see Optimizing a Simulation or Ordinary Differential Equation, particularly the first sections about problems with finite differences.
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation
