Question on optimization problem and fminsearch,fminunc,lsqnonlin
8 views (last 30 days)
Hey all, I am trying to solve an optimization problem where I import real-life data and find the best combination of 6 unknown variables that describe that data. The function being run in the optimization call is a series of if/then statements and equations, and the output evaluation is based on the distance between the real data and the simulated data. There are as many equations as variables, plus the if/then statements. When I use fminsearch the program works, but it is not ideal at finding the minimum. When I try fminunc or lsqnonlin, the output basically repeats the initial guess, which is not really close to the actual solution. Why are these functions so dependent on the initial guess? Which of these functions should I be using? Any ideas on how I could solve this optimization problem?
2 Comments
Sargondjani
6 Jun 2012
And as Sean notes: fminunc assumes your problem is differentiable. If it is not, then take his advice.
But if your problem is differentiable and fminunc exactly returns the initial guess, then something is wrong. You should check the exit message; it could be that the maximum number of function evaluations was reached, or something like that.
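A quick way to check this is to request the solver's exit flag and output structure. This is a minimal sketch; myObjective and x0 are placeholders for your own objective function and 6-element initial guess.

```matlab
% Ask fminunc why it stopped, and raise the evaluation budget.
opts = optimset('Display', 'iter', ...   % print progress each iteration
                'MaxFunEvals', 5000, ... % default budget is often too small
                'MaxIter', 2000);
[x, fval, exitflag, output] = fminunc(@myObjective, x0, opts);

disp(exitflag)        % <= 0 means the solver stopped before converging
disp(output.message)  % human-readable reason the solver stopped
```

If exitflag is 0, the solver simply ran out of iterations or function evaluations; if the objective is non-smooth, fminunc's finite-difference gradients may be meaningless and it can stall at the initial guess.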
Answers (2)
Sean de Wolski
5 Jun 2012
The initial guess is important because the above-mentioned optimizers are trying to find a local minimum, i.e. the one closest to the initial guess that can be reached using derivatives. From your description, it sounds like there is a good chance that your function is not differentiable, and thus a genetic algorithm, global search, or patternsearch is required to find the global minimum. These functions are in the Global Optimization Toolbox.
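For a non-smooth objective like the one described, a derivative-free call might look like the sketch below, assuming the Global Optimization Toolbox is installed; myObjective and x0 are placeholders for your simulation-error function (e.g. sum of squared distances between real and simulated data) and your 6-element initial guess.

```matlab
objfun = @(x) myObjective(x);   % distance between real and simulated data

% patternsearch: derivative-free, still uses an initial guess.
x0 = zeros(1, 6);               % placeholder starting point
[xps, fps] = patternsearch(objfun, x0);

% ga: a genetic algorithm that needs only the number of variables,
% so it is much less sensitive to any single starting point.
[xga, fga] = ga(objfun, 6);
```

patternsearch tends to refine a reasonable starting point, while ga explores the space more broadly at higher cost; running ga first and polishing its result with patternsearch or fminsearch is a common combination.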
0 Comments
Geoff
5 Jun 2012
Depending on how localised your minima are, you can sometimes get around this with a simplex-based solver like fminsearch. I start with a large simplex, run the solver and let it converge. Then I reduce the size of the simplex, "shake up" the result (offsetting by the simplex) and let the solution converge again, repeating this several times. That said, I don't know whether fminsearch does this already; the caveat is that I was using a Nelder-Mead implementation in C++, not MATLAB. I think you may be able to use optimset to configure fminsearch with a bit more of a manual feel.
If you're not time-constrained, you may want to generate a large number of random initial guesses sampled across your solution space, solve from each one, and choose the best. But given that you have 6 unknowns, it doesn't take much partitioning before the problem blows up, and if some variables are unconstrained this can become quite impractical.
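The random-restart idea above can be sketched as follows; the bounds lb/ub and the objective myObjective are assumptions you would replace with your own problem's search box and error function.

```matlab
% Random-restart fminsearch: keep the best of many local solutions.
lb = -10 * ones(1, 6);          % assumed lower bounds on the 6 unknowns
ub =  10 * ones(1, 6);          % assumed upper bounds
nStarts = 50;                   % number of random starting points

best.f = Inf;
for k = 1:nStarts
    x0 = lb + rand(1, 6) .* (ub - lb);      % uniform random start in the box
    [x, f] = fminsearch(@myObjective, x0);  % local Nelder-Mead solve
    if f < best.f                           % keep the best result so far
        best.f = f;
        best.x = x;
    end
end
disp(best.x)
```

This is embarrassingly parallel, so if you have the Parallel Computing Toolbox the loop can become a parfor to spread the restarts across workers.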
0 Comments