Optimize two objective functions using fmincon

I have two objective functions that optimize the same variables. I have optimized each one individually and they work fine. Now I need to combine those two objective functions in fmincon. Is there any way to do that?

3 Comments

Torsten, 26 Jul 2017
You must specify how you form the new objective function from the two individual objectives within the "mixed" optimization.
Best wishes
Torsten.
INDREEYAJEET SINGH, 21 Jan 2022
I have the same challenge. Do you have a solution for this now?
Torsten, 21 Jan 2022
If you want to buy a meal of high quality, you get a different food composition than if you want a low-priced meal.
If you define some standard for the quality, you can get the lowest-priced meal with that prescribed quality.
So there is a curve that, for a given quality of the meal, gives you the meal with the lowest price for that quality. This is called the "Pareto optimum".
So you must first decide how to weigh the two criteria from your two optimizations to get a common optimum.
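The weighting idea can be sketched numerically. Below is a small Python/scipy analogue (fmincon itself is MATLAB-only; `f1`, `f2`, and `pareto_point` are made-up toy examples, not the poster's functions): sweeping the weight w traces out different Pareto-optimal compromises between the two criteria.

```python
# Weighted-sum scalarization: combine two objectives into one weighted
# objective, then sweep the weight to trace Pareto-optimal trade-offs.
import numpy as np
from scipy.optimize import minimize

def f1(x):          # toy objective 1, e.g. "price" of the meal
    return (x[0] - 1.0) ** 2

def f2(x):          # toy objective 2, e.g. "lack of quality"
    return (x[0] + 1.0) ** 2

def pareto_point(w, x0=np.array([0.0])):
    """Minimize the weighted sum w*f1 + (1-w)*f2 for one weight w."""
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0)
    return res.x[0]

# Each weight picks a different compromise between the two criteria:
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w, pareto_point(w))   # optimum moves from x = -1 to x = +1
```

For these quadratics the weighted optimum is 2w - 1, so the choice of weight fully determines which point on the trade-off curve you get.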


Accepted Answer

Walter Roberson, 26 Jul 2017


23 Comments

safi58, 27 Jul 2017
I understand what you are trying to say. But in my case it is theoretically possible to combine the two objective functions. I am just not sure how to do the coding.
safi58, 27 Jul 2017
eta = po/(po + loss)
This is my objective function for both methods, though the loss equation is different.
Walter Roberson, 27 Jul 2017
Can po or loss be negative? Are po and loss both functions of some variable(s) or is po possibly constant?
safi58, 27 Jul 2017
Neither po nor loss can be negative. po and loss are both functions of the same variable(s), and po is constant.
Walter Roberson, 27 Jul 2017
If po is constant, then maximizing eta is the same as minimizing loss, and there is no need to calculate eta. (If you are trying to minimize eta, that would be the same as maximizing loss, which seldom makes sense unless you are trying to produce something like Vantablack.)
So... you have two loss functions and you want to minimize both of them. But how important is minimizing one compared to the other? Are the two even on the same scale? Is the scale linear?
You do not seem to be concerned about that, though, only about getting some formula. So just minimize the sum of the squares of the two losses.
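The equivalence claimed above can be checked numerically. A minimal Python/scipy sketch, with an arbitrary toy loss function and po value (not the poster's actual functions): the x that minimizes the loss is the same x that maximizes eta = po/(po + loss).

```python
# With po constant, argmin(loss) == argmax(eta), since eta = po/(po+loss)
# is a strictly decreasing function of loss.
import numpy as np
from scipy.optimize import minimize

po = 5.0                             # arbitrary positive constant

def loss(x):
    return (x[0] - 2.0) ** 2 + 1.0   # toy loss, minimized at x = 2

def neg_eta(x):                      # maximize eta by minimizing -eta
    return -po / (po + loss(x))

x_min_loss = minimize(loss, np.array([0.0])).x[0]
x_max_eta = minimize(neg_eta, np.array([0.0])).x[0]
print(x_min_loss, x_max_eta)         # both converge to x = 2
```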
safi58, 28 Jul 2017
Edited: Walter Roberson, 28 Jul 2017
Yes, minimizing loss would be the main issue. These are my two individual optimization calls:
[x,fval,exitflag] = fmincon(@(x)OPTM_1(x),x0,A,b,Aeq,beq,lb,ub,@(x)Constraints_1(x),options);
[x,fval,exitflag] = fmincon(@(x)OPTM_2(x),x0,A,b,Aeq,beq,lb,ub,@(x)Constraints_2(x),options);
I am not sure how to go on from there.
Walter Roberson, 28 Jul 2017
If each of those functions OPTM_1 and OPTM_2 emits an eta value (rather than a loss directly), then they will be larger when the loss is least, so you would minimize their negatives when minimizing individually. Squaring does not remove the need for the negatives, though: minimizing OPTM_1(x).^2 + OPTM_2(x).^2 would drive the etas toward 0. Since eta = po/(po+loss) is at most 1, for the joint minimization you can instead minimize the squared distance from 1:
joint_obj = @(x) (1 - OPTM_1(x)).^2 + (1 - OPTM_2(x)).^2;
joint_constraint = @(x) merge_constraints(x, @Constraints_1, @Constraints_2);
[x,fval,exitflag] = fmincon(joint_obj, x0, A, b, Aeq, beq, lb, ub, joint_constraint, options)
together with
function [merged_c, merged_ceq] = merge_constraints(x, cfun1, cfun2)
[c1, ceq1] = cfun1(x);
[c2, ceq2] = cfun2(x);
merged_c = [c1(:); c2(:)];
merged_ceq = [ceq1(:); ceq2(:)];
end
In the special case where you have no nonlinear equality constraints, instead of the merge_constraints function shown here you could use
VEC = @(M) M(:);
joint_constraint = @(x) deal([VEC(Constraints_1(x)); VEC(Constraints_2(x))], []);
This assumes that your constraint functions might return arrays or vectors of uncertain orientation. In some cases you might be able to reduce down to
joint_constraint = @(x) deal([Constraints_1(x), Constraints_2(x)], [])
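The same merging pattern can be sketched in Python with scipy.optimize, assuming toy constraint functions in place of the poster's Constraints_1/Constraints_2. One sign caveat: fmincon treats c(x) <= 0 as feasible, while scipy's 'ineq' constraints treat fun(x) >= 0 as feasible, so the merged vector is negated.

```python
# Merge two fmincon-style constraint vectors into one scipy constraint.
import numpy as np
from scipy.optimize import minimize

def constraints_1(x):                # fmincon-style: feasible iff <= 0
    return np.array([x[0] + x[1] - 4.0])

def constraints_2(x):
    return np.array([x[0] - 3.0, x[1] - 3.0])

def merged(x):
    # concatenate both constraint vectors, flipped to scipy's >= 0 form
    return -np.concatenate([constraints_1(x), constraints_2(x)])

joint_obj = lambda x: (x[0] - 5.0) ** 2 + (x[1] - 5.0) ** 2

res = minimize(joint_obj, np.array([0.0, 0.0]),
               method="SLSQP",
               constraints={"type": "ineq", "fun": merged})
print(res.x)   # pushed onto the boundary x[0] + x[1] = 4
```

The unconstrained optimum (5, 5) is infeasible, so the solver lands at (2, 2), the closest point satisfying all merged constraints.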
safi58, 28 Jul 2017
Edited: Walter Roberson, 28 Jul 2017
But I want to minimize the loss individually first, then calculate eta from that for each case, and do something like this:
f(x) = eta1^2 + eta2^2
The optimization problem then becomes
min f(x)
where f(x) is again a function of the same variables.
Walter Roberson, 28 Jul 2017
If you minimize the losses individually first and calculate eta values from the minimized losses, then you will get two numeric values, not a function. Notice that you write
f(x) = eta1^2 + eta2^2
but x does not appear on the right hand side, because eta1 and eta2 are constant in x at that point.
safi58, 31 Jul 2017
What I am trying to do is:
eta1(x) = po/(po + loss1(x))
eta2(x) = po/(po + loss2(x))
then
optm1 = arg(max(eta1))
optm2 = arg(max(eta2))
and after finding those, the objective function becomes
f(x) = [optm1 - eta1(x)]^2 + [optm2 - eta2(x)]^2
so the optimization problem becomes
min f(x)
with the constraint set.
Walter Roberson, 31 Jul 2017
[optm1, foptm1, exitflag] = fmincon(@(x) -eta1(x), x0, A, b, Aeq, beq, lb, ub, @(x)Constraints_1(x), options);
[optm2, foptm2, exitflag] = fmincon(@(x) -eta2(x), x0, A, b, Aeq, beq, lb, ub, @(x)Constraints_2(x),options);
f = @(x) (optm1 - eta1(x)).^2 + (optm2 - eta2(x)).^2
Look at that more carefully. arg(max(eta1)) is the argument that maximizes eta1, and your eta1 is a function of several variables, so the argument that maximizes it is going to be a vector. Then in f, you are subtracting a function value from that vector of locations. This should suggest to you that you are doing the wrong thing.
It would make more sense if you had
optim1 = eta1( arg(max(eta1)) )
that is, the function value at the place that it is maximized. For the code I gave on the first two lines, that would correspond to
f = @(x) (foptm1 - eta1(x)).^2 + (foptm2 - eta2(x)).^2
which would be a distance from the maximum points.
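That two-stage scheme can be sketched in Python/scipy, with toy eta functions standing in for po/(po + loss_i(x)): first maximize each eta individually and keep the best function values (foptm1, foptm2), then minimize the squared distance from those individual maxima.

```python
# Two-stage scheme: individual maxima first, then a compromise point.
import numpy as np
from scipy.optimize import minimize

def eta1(x):
    return 1.0 / (1.0 + (x[0] - 1.0) ** 2)   # toy eta, peaks at x = +1

def eta2(x):
    return 1.0 / (1.0 + (x[0] + 1.0) ** 2)   # toy eta, peaks at x = -1

# Stage 1: maximize each eta on its own (minimize the negative) and
# keep the best *function values*, not the maximizing locations.
foptm1 = -minimize(lambda x: -eta1(x), np.array([0.5])).fun
foptm2 = -minimize(lambda x: -eta2(x), np.array([-0.5])).fun

# Stage 2: minimize the squared distance from those individual maxima.
f = lambda x: (foptm1 - eta1(x)) ** 2 + (foptm2 - eta2(x)) ** 2
res = minimize(f, np.array([0.3]))
print(foptm1, foptm2, res.x)   # compromise lands between the two peaks
```

Because the two toy peaks are symmetric, the compromise sits halfway between them; with asymmetric losses it would tilt toward the easier objective.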
safi58, 1 Aug 2017
I will try to do that and will let you know if I am having any problem.
safi58, 3 Aug 2017
Hi Walter, just a quick query. Should I put these two equations in two different .m files?
[optm1, foptm1, exitflag] = fmincon(@(x) -eta1(x), x0, A, b, Aeq, beq, lb, ub, @(x)Constraints_1(x), options);
[optm2, foptm2, exitflag] = fmincon(@(x) -eta2(x), x0, A, b, Aeq, beq, lb, ub, @(x)Constraints_2(x), options);
And where should I put this one?
f = @(x) (foptm1 - eta1(x)).^2 + (foptm2 - eta2(x)).^2
Walter Roberson, 3 Aug 2017
All of those can go in the same file.
safi58, 4 Aug 2017
That became too cumbersome, so I went back to the earlier approach:
joint_obj = @(x) OPTM_1(x).^2 + OPTM_2(x).^2;
joint_constraint = @(x) merge_constraints(x, @Constraints_1, @Constraints_2);
[x,fval,exitflag] = fmincon(joint_obj, x0, A, b, Aeq, beq, lb, ub, joint_constraint, options)
but it is giving me this error:
FMINCON requires all values returned by user functions to be of data type double.
Walter Roberson, 4 Aug 2017
I will need to see your current code.
safi58, 4 Aug 2017
It seems to be working now. In my original version I wrote
[x,fval,exitflag] = fmincon(@(x) joint_obj, x0, A, b, Aeq, beq, lb, ub, joint_constraint, options)
That is why it was not working: @(x) joint_obj is an anonymous function that returns the handle joint_obj itself instead of evaluating it, which is not a double. The correct call is
[x,fval,exitflag] = fmincon(joint_obj, x0, A, b, Aeq, beq, lb, ub, joint_constraint, options)
safi58, 4 Aug 2017
After the optimization, is it possible to find eta from the equation
eta = po/(po + loss)
where the objective function is
joint_obj = @(x) loss_1(x).^2 + loss_2(x).^2;
Walter Roberson, 4 Aug 2017
eta1 = po/(po + loss_1(joint_obj));
eta2 = po/(po + loss_2(joint_obj));
safi58, 7 Aug 2017
While I was doing that, it gave me this error:
Function 'subsindex' is not defined for values of class 'function_handle'
Walter Roberson, 7 Aug 2017
That error comes from passing the function handle joint_obj where the solution vector belongs; use the optimized x instead:
eta1 = po/(po + loss_1(x));
eta2 = po/(po + loss_2(x));
safi58, 9 Aug 2017
Another question on that: how would I know which of the two methods the optimization is following?
Walter Roberson, 9 Aug 2017
Sorry, I do not understand the question?


More Answers (0)
