Problems with fminsearch giving start values as result

16 views (last 30 days)
Marc Laub on 25 Apr 2022
Edited: Matt J on 28 Apr 2022
Hey,
I am trying to minimize the Gibbs enthalpy, which depends on the phase fraction and the phase compositions. So I set up an equation that contains all the dependencies and is a function of 2 variables.
The problem is that fminsearch does nothing: it always gives me my start values back as the result. From the output I can see that it did 39 iterations and it tells me that the result lies within TolX and TolFun, but that's not the case. With a simple parameter sweep I get better results than with fminsearch. I also changed TolX and TolFun to very small values, but that didn't help either. No matter how poor my start values are, they come back as the result, no matter how bad they are.
I also had this phenomenon when doing fits with custom functions; sometimes the start values were returned there as fit parameters as well, without any improvement.
Does anybody know what I am doing wrong?
Many thanks in advance.
Best regards.
  4 Comments
John D'Errico on 25 Apr 2022
Edited: John D'Errico on 25 Apr 2022
Very often this happens because people don't understand optimization tools. For example, is your function discrete in some way, quantized? Fminsearch CANNOT solve such a problem, because it assumes the objective is a well-behaved function of the parameters (essentially, smooth). This will cause it to terminate, despite there being better solutions elsewhere, since in the vicinity of your start point the function is essentially constant.
Similarly, even if the function is indeed well defined and everywhere differentiable, your function might just be so flat that it cannot see a way to move that is any improvement, to within the tolerance. So it gives up, returning your start point.
Another common failure is when people use random numbers in the objective. Doing so makes the function not smooth in any respect. And again, fminsearch will almost certainly fail to converge to a good solution. (It might work for a bit if there is a sufficiently large signal beyond the random component in the function, but it will eventually get hung up.)
All of these cases will cause fminsearch, or indeed most optimization tools, to fail to iterate. Is your problem among the general classes I mentioned? Who knows? You may just have a bug in your code, and be calling the optimization tool incorrectly.
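To make the first failure mode concrete, here is a minimal sketch with a made-up quantized objective (not the poster's Gibbs-energy function), showing how fminsearch can return the start point:
% Hypothetical quantized objective: round() makes it piecewise constant,
% so every point in the small initial simplex has the same function value.
quantized = @(x) sum(round(x).^2);
x0 = [2.3, -1.2];                              % arbitrary start point
[x,fval,exitflag,output] = fminsearch(quantized, x0);
% x comes back essentially equal to x0 with exitflag = 1: the simplex sees
% no improvement, shrinks below TolX, and fminsearch declares convergence.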
Walter Roberson on 26 Apr 2022
Are you truly working with polynomials? Or are you working with multinomials? Do you have any terms which end up using variable_1 * variable_2, or could it be separated out into the sum of two polynomials each in a single variable?


Accepted Answer

Matt J on 26 Apr 2022
Edited: Matt J on 27 Apr 2022
With a simple parameter sweep I get better results than fminsearch.
I don't know how you've implemented the sweep, but I don't see why you don't use that as your solution, or at least use it to initialize fminsearch. Since you know a local region where the minimum is located, I picture the sweep done in a vectorized fashion like below. It should be easy to vectorize the operations in fun() if they are just polynomial operations.
[var1,var2] = ndgrid(linspace(__), linspace(___));
Fgrid = fun(var1,var2);   % vectorize fun() to accept array-valued input
[~,iopt] = min(Fgrid(:));
var1_optimal = var1(iopt);
var2_optimal = var2(iopt);
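For concreteness, here is a runnable version of the same idea with a made-up objective standing in for the real Gibbs-energy expression (the function, sweep range, and grid resolution below are assumptions, not the poster's values):
fun = @(v1,v2) (v1-0.3).^2 + (v2-0.7).^2 + 0.1*v1.*v2;     % placeholder objective, elementwise operators
[var1,var2] = ndgrid(linspace(0,1,200), linspace(0,1,200)); % assumed sweep range and resolution
Fgrid = fun(var1,var2);
[Fmin,iopt] = min(Fgrid(:));            % linear index of the best grid point
var1_optimal = var1(iopt);
var2_optimal = var2(iopt);
% Optionally refine the grid winner with fminsearch, using it as the start point:
x = fminsearch(@(x) fun(x(1),x(2)), [var1_optimal, var2_optimal]);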
  18 Comments
Marc Laub on 28 Apr 2022
Yeah, I end up with log() of the unknowns.
That's also why I have to go with fminsearchbnd: fminsearch finds minima at negative values, because log() of some negative values easily gives highly negative, and therefore supposedly minimal, values... but negative values of the unknowns don't make sense physically, hence the need for a bound.
The ...T^(-1) and T^(-3) terms only depend on T and are therefore constant for each individual case. They are pre-calculated and then just inserted. The results of the ...T^(-1) and T^(-3) terms are only weighted by the unknowns, either simply by multiplication with one or more unknowns, by multiplication with the difference of unknowns, or by the difference of unknowns to the power of 2.
Matt J on 28 Apr 2022
Edited: Matt J on 28 Apr 2022
Just make the change of variables var1-->var1^2, var2-->var2^2 to ensure they only present positive values to the objective function. That's what fminsearchbnd does anyway.
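A minimal sketch of that substitution, with a placeholder objective fun standing in for the real Gibbs-energy expression (the start values are arbitrary examples):
fun = @(v1,v2) (v1-0.3).^2 + (v2-0.7).^2;   % stand-in for the real objective
funSquared = @(u) fun(u(1)^2, u(2)^2);      % fminsearch works on unconstrained u
u0 = sqrt([0.5, 0.5]);                      % start values mapped into u-space
u = fminsearch(funSquared, u0);
var1_opt = u(1)^2;                          % map back to the physical (nonnegative) variables
var2_opt = u(2)^2;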


Other Answers (1)

Torsten on 25 Apr 2022
Edited: Torsten on 25 Apr 2022
polynom_1 = @(variable_1,variable_2) polynom(variable_1,variable_2,input_1,input_2,..,input_n);
polynom_2 = @(variable_1,variable_2) different_polynom(variable_1,variable_2,input_1,input_2,..,input_n);
fun = @(variable_1,variable_2) polynom_1(variable_1,variable_2) - polynom_2(variable_1,variable_2);
fun = @(x) fun(x(1),x(2));
x0 = [startvalue_1, startvalue_2];
x = fminsearch(fun,x0,options)
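To see why fminsearch stops where it does, it can also help to request the full diagnostics. A sketch building on the code above (the tolerance values are arbitrary examples):
options = optimset('Display','iter','TolX',1e-10,'TolFun',1e-10);
[x,fval,exitflag,output] = fminsearch(fun, x0, options);
% exitflag = 1 means the simplex diameter and function-value spread fell below
% TolX/TolFun; output.iterations and output.funcCount show how much work was done.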
  3 Comments
Walter Roberson on 26 Apr 2022
File Exchange has fminsearchbnd, by John D'Errico if I recall correctly.
Marc Laub on 26 Apr 2022
Unfortunately it did not get the correct answer. The difference between the f(x,y) solution it found and the known value was more than 10^5, whereas the start value had a difference of only 90.
