fmincon with and without parallel computing yields different results

6 views (last 30 days)
Carsten Asmanoglo on 27 Mar 2018
Commented: Alan Weiss on 28 Mar 2018
Hi all, I have a question concerning the use of fmincon (with nonlinear constraints) together with the Parallel Computing Toolbox. I use the following code:
options = optimset('Display','iter','Algorithm','active-set','UseParallel','always');   % 'always'/'never' is the optimset syntax in R2013b
[man_var,f,eflag,outpt] = fmincon(@calculate_obj,man_var_0,[],[],[],[],man_var_lb,man_var_ub,@calculate_ineqc,options);
In the functions calculate_obj and calculate_ineqc I evaluate a process model and use its results to calculate the nonlinear constraints and the objective value. My problem is that when I add 'UseParallel','always' to the options, I get a different result from fmincon, and it seems that fmincon with 'UseParallel','always' solves the problem differently compared to fmincon without parallelisation. In the parallel case, fmincon starts to oscillate and does not find a solution. As far as I understand the Parallel Computing Toolbox, it should only distribute the computations to different workers, but the final solution should remain the same. I am currently using R2013b, because the model was written in that version of MATLAB.
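In case it helps, this is roughly how I compare the two runs (a sketch only; in R2013b a matlabpool has to be open for the parallel case):
matlabpool open                          % R2013b; use parpool in newer releases
opts_serial   = optimset('Display','iter','Algorithm','active-set','UseParallel','never');
opts_parallel = optimset('Display','iter','Algorithm','active-set','UseParallel','always');
[x_s,f_s] = fmincon(@calculate_obj,man_var_0,[],[],[],[],man_var_lb,man_var_ub,@calculate_ineqc,opts_serial);
[x_p,f_p] = fmincon(@calculate_obj,man_var_0,[],[],[],[],man_var_lb,man_var_ub,@calculate_ineqc,opts_parallel);
max(abs(x_s - x_p))                      % difference between the two solutions
abs(f_s - f_p)                           % difference between the two objective values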
It would be nice if someone could give me a hint on how to solve this problem. Thanks a lot.
Best regards Carsten

1 Answer

Alan Weiss on 27 Mar 2018
The only difference between using parallel computing and serial computing is that, in parallel, finite difference approximations to the gradient are done in parallel.
So I believe that your calculate_obj or calculate_ineqc function uses random numbers in some way, writes to files that get overwritten in parallel, or some such thing. I can guarantee that the underlying fmincon algorithm is 100% identical in parallel and serial.
For caveats and similar issues, see Improving Performance with Parallel Computing, particularly the section on Factors That Affect Results.
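A quick sanity check along these lines (just a sketch using your function names; man_var_0 is your start point):
x0 = man_var_0;                          % your start point
f1 = calculate_obj(x0);                  % evaluate twice at the same point
f2 = calculate_obj(x0);
[c1,ceq1] = calculate_ineqc(x0);
[c2,ceq2] = calculate_ineqc(x0);
isequal(f1,f2)                           % should be 1 (true)
isequal(c1,c2) && isequal(ceq1,ceq2)     % should be 1 (true)
If either answer is 0, your functions are not deterministic, and that would explain why the parallel finite-difference gradients, and hence the iterates, differ.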
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation
5 Comments
Carsten Asmanoglo on 28 Mar 2018
Hi Alan,
thanks again for your fast answer. Yes, the problem currently has only one manipulated variable, for debugging purposes. However, I would like to do an optimization with about 7-10 manipulated variables.
I know that I can embed the functions to speed up the code, but during debugging I am not using this option.
I have now clearly located the problem. It is caused during the use of ode15s. I compared the data exchanged between ode15s and my function (the output of my function, which calculates the derivatives for ode15s, and the state variables that ode15s passes back to my function) for the cases with and without parallelisation. Do you have any idea why, given the same output of my function at time step x, ode15s passes a different input to my function at time step x+1? (The input/output behaviour before time step x is exactly the same.)
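For completeness, this is roughly how I record the exchange between ode15s and my derivative function (a sketch only; my_rhs, tspan, y0 and rhs_log.mat are placeholder names):
function dy = logged_rhs(t, y)
% Wrapper around my derivative function: records every call from ode15s
% so that the logs of the serial and the parallel run can be diffed to
% find the first time step at which the inputs diverge.
persistent history
dy = my_rhs(t, y);                          % original derivative evaluation
history(end+1,:) = [t, y(:).', dy(:).'];    % one row per call
save('rhs_log.mat', 'history');             % debugging only, slow
end
Inside calculate_obj I then integrate with [t,y] = ode15s(@logged_rhs, tspan, y0) instead of calling my_rhs directly.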
Best regards Carsten
Alan Weiss on 28 Mar 2018
Sorry, I really don't know what is going on. The only idea I have is that you might be running in parallel on a network of computers, rather than on a multicore machine, and the minuscule error comes from an answer being off by machine eps, which is to be expected whenever you run software on different machines or software versions. These kinds of errors can grow quickly on some ODEs, as the well-known chaos examples show.
If I am right about this, then the answer in parallel is not wrong, any more than the answer in serial is wrong. It just means that your problem is sensitive to minuscule differences in computations, which is not a fault of the solution computations, but an inherent feature of your problem.
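To illustrate, here is a standard chaos demo (the Lorenz system, not your model): two ode15s runs whose initial conditions differ only by machine eps, with identical solver settings:
lorenz = @(t,y) [10*(y(2)-y(1)); y(1)*(28-y(3))-y(2); y(1)*y(2)-(8/3)*y(3)];
opts   = odeset('RelTol',1e-8,'AbsTol',1e-10);
[t1,y1] = ode15s(lorenz, [0 50], [1; 1; 1],     opts);
[t2,y2] = ode15s(lorenz, [0 50], [1+eps; 1; 1], opts);
[y1(end,:); y2(end,:)]     % by the end the final states typically no longer agree in their leading digits
Your ODE may not be chaotic, but the same amplification of tiny differences can occur over the course of an optimization run.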
Sorry, that's all I know.
Alan Weiss
MATLAB mathematical toolbox documentation
