Problems with fminsearch returning start values as result
16 views (last 30 days)
Hey,
I am trying to minimize the Gibbs enthalpy, which depends on the phase fraction and the phase compositions. So I set up an equation that contains all of these dependencies and is a function of 2 variables.
The problem is that fminsearch does nothing: it always gives me my start values back as the result. From the output I can see that it did 39 iterations and reports that the result lies within TolX and TolFun, but that is not the case. With a simple parameter sweep I get better results than with fminsearch. I also changed TolX and TolFun to very small values, but that didn't help either. No matter how poor my starting values are, those are what it returns, however bad they are.
I have also seen this phenomenon when doing fits with custom functions; there too, the start values were sometimes returned as fit parameters without any improvement.
Does anybody know what I am doing wrong?
Many thanks in advance.
Best regards.
4 Comments
John D'Errico
25 Apr 2022
Edited: John D'Errico
25 Apr 2022
Very often this happens because people don't understand optimization tools. For example, is your function discrete in some way, or quantized? fminsearch CANNOT solve such a problem, because it assumes the objective is a well-behaved function of the parameters (essentially, smooth). A quantized objective will cause it to terminate despite better solutions existing elsewhere, since in the vicinity of your start point the function is essentially constant.
Similarly, even if the function is well defined and everywhere differentiable, it might simply be so flat that fminsearch cannot find any move that improves the objective to within the tolerance. So it gives up and returns your start point.
Another common failure is using random numbers inside the objective. Doing so makes the function not smooth in any respect, and again fminsearch will almost certainly fail to converge to a good solution. (It may work for a while if the signal is sufficiently large compared to the random component, but it will eventually get hung up.)
All of these cases will cause fminsearch, or indeed most optimization tools, to fail to iterate. Is your problem among the general classes I mentioned? Who knows? You may just have a bug in your code and be calling the optimization tool incorrectly.
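As a minimal sketch of the quantization case (the objective below is a made-up placeholder, not the Gibbs-energy function from the question): rounding inside the objective makes it piecewise constant, so the initial simplex sees no change in function value and fminsearch returns the start point essentially untouched.
smooth_obj    = @(x) (x(1)-3).^2 + (x(2)+1).^2;   % well-behaved: fminsearch converges to roughly [3 -1]
quantized_obj = @(x) round(smooth_obj(x));        % piecewise constant near the start point
x0 = [0 0];
x_smooth    = fminsearch(smooth_obj, x0)     % moves away from x0 toward the true minimum
x_quantized = fminsearch(quantized_obj, x0)  % typically terminates at (or very near) x0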
Walter Roberson
26 Apr 2022
Are you truly working with polynomials? Or are you working with multinomials? Do you have any terms which end up using variable_1 * variable_2, or could it be separated out into the sum of two polynomials each in a single variable?
Accepted Answer
Matt J
26 Apr 2022
Edited: Matt J
27 Apr 2022
With a simple parameter sweep I get better results than fminsearch.
I don't know how you've implemented the sweep, but I don't see why you don't use that as your solution, or at least use it to initialize fminsearch. Since you know a local region where the minimum is located, I picture the sweep being done in a vectorized fashion, like below. It should be easy to vectorize the operations in fun() if they are just polynomial operations.
[var1,var2] = ndgrid(linspace(__), linspace(___));   % grid over the region containing the minimum
Fgrid = fun(var1,var2);   % vectorize fun() to accept array-valued input
[~,iopt] = min(Fgrid(:));
var1_optimal = var1(iopt);
var2_optimal = var2(iopt);
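A concrete sketch of the "sweep, then refine" idea, using a placeholder objective and placeholder ranges/tolerances rather than the actual Gibbs-energy expression:
fun = @(v1,v2) (v1-0.3).^2 + (v2-0.7).^2 + 0.1*v1.*v2;        % hypothetical stand-in objective
[var1,var2] = ndgrid(linspace(0,1,101), linspace(0,1,101));    % assumed region of interest
Fgrid = fun(var1,var2);
[~,iopt] = min(Fgrid(:));
x0 = [var1(iopt), var2(iopt)];                                 % best grid point initializes the local search
options = optimset('TolX',1e-10,'TolFun',1e-10,'Display','iter');
x = fminsearch(@(x) fun(x(1),x(2)), x0, options)
Even if fminsearch then stalls, the grid minimum itself is already a usable answer, and you control its quality through the grid resolution.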
18 Comments
More Answers (1)
Torsten
25 Apr 2022
Edited: Torsten
25 Apr 2022
polynom_1 = @(variable_1,variable_2) polynom(variable_1,variable_2,input_1,input_2,..,input_n);
polynom_2 = @(variable_1,variable_2) different_polynom(variable_1,variable_2,input_1,input_2,..,input_n);
fun = @(variable_1,variable_2) polynom_1(variable_1,variable_2) - polynom_2(variable_1,variable_2);
fun = @(x) fun(x(1),x(2));   % wrap into a single vector argument for fminsearch
x0 = [startvalue_1, startvalue_2];
options = optimset('TolX',1e-8,'TolFun',1e-8);   % adjust tolerances as needed
x = fminsearch(fun,x0,options)
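For reference, a runnable instance of this pattern with made-up polynomials and coefficients (purely illustrative, not the Gibbs expressions from the question):
a = 2; b = 3; c = 0.5;                               % hypothetical coefficients standing in for input_1 .. input_n
polynom_1 = @(v1,v2) a*v1.^2 + b*v2.^2 + c*v1.*v2;   % first placeholder polynomial
polynom_2 = @(v1,v2) v1 + v2;                        % second placeholder polynomial
fun = @(v1,v2) polynom_1(v1,v2) - polynom_2(v1,v2);  % objective to minimize
fun = @(x) fun(x(1),x(2));                           % vector-argument wrapper for fminsearch
x0 = [0.1, 0.1];
x = fminsearch(fun, x0, optimset('Display','iter'))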
3 Comments
Walter Roberson
26 Apr 2022
File Exchange has fminsearchbnd. By John D'Errico if I recall correctly.
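If the variables need bounds (for example a phase fraction between 0 and 1), a sketch of the usual call, assuming the standard File Exchange interface fminsearchbnd(fun,x0,LB,UB,options) and the fun/x0 defined in the answer above:
LB = [0 0];   % hypothetical lower bounds on the two variables
UB = [1 1];   % hypothetical upper bounds
x = fminsearchbnd(fun, x0, LB, UB)   % same calling pattern as fminsearch, with bound constraints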