Optimising Limits of Chained Integrals

3 views (last 30 days)
MATLAB_Soldier on 26 January 2023
Edited: John D'Errico on 26 January 2023
Hi All,
I am trying to solve an optimisation problem: maximise the sum of the integrals of several functions, where the integrals are linked through a shared limit of integration.
I have written code based on examples I have seen online, but I can't get past the error message shown below.
I have seen solutions to this problem online, but they require toolboxes that I don't have. Currently, I have only these:
  • Curve Fitting Toolbox Version 3.8 (R2022b)
  • Global Optimization Toolbox Version 4.8 (R2022b)
  • Optimization Toolbox Version 9.4 (R2022b)
  • Parallel Computing Toolbox Version 7.7 (R2022b)
  • Signal Processing Toolbox Version 9.1 (R2022b)
  • Simscape Version 5.4 (R2022b)
  • Simscape Driveline Version 3.6 (R2022b)
  • Simulink Design Optimization Version 3.12 (R2022b)
  • Stateflow Version 10.7 (R2022b)
Here is the code I have written so far. It contains only two functions, as it is just a test:
close all
fun1 =@(g1) 208.4*exp(-0.02704.*g1)-293.9*exp(-0.1286.*g1);
fun2 =@(g2) 108.3*exp(-0.01137.*g2)-622.8*exp(-0.1578.*g2);
options = optimoptions('fminunc','Algorithm','quasi-newton');
A = optimvar('A',1,1);
prob.Objective = integral(fun1,10,A(1)) + integral(fun2,A(1),80);
Error using integral
Limits of integration must be double or single scalars.
cons1 = 20 <= A(1);
prob.Constraints.cons1 = cons1;
options.Display = 'iter';
Can someone help me overcome this problem?
Many thanks.


Torsten on 26 January 2023
Edited: Torsten on 26 January 2023
fun1 =@(g1) 208.4*exp(-0.02704.*g1)-293.9*exp(-0.1286.*g1);
fun2 =@(g2) 108.3*exp(-0.01137.*g2)-622.8*exp(-0.1578.*g2);
A = optimvar('A',1,1);
fun = @(A)integral(fun1,10,A) + integral(fun2,A,80);
obj = fcn2optimexpr(fun,A);
prob = optimproblem('Objective',obj,'ObjectiveSense','maximize');
cons1 = 20 <= A(1);
prob.Constraints.cons1 = cons1;
options = optimoptions('fminunc','Algorithm','quasi-newton');
options.Display = 'iter';
x0.A = 30;
sol = solve(prob,x0,'Options',options)
Warning: You have passed FMINUNC options to FMINCON. FMINCON will use the common options and ignore the FMINUNC options that do not apply.
To avoid this warning, convert the FMINUNC options using OPTIMOPTIONS.
Solving problem using fmincon.

                                           First-order   Norm of
 Iter F-count           f(x)  Feasibility   optimality      step
    0       2  -4.775178e+03    0.000e+00    1.487e+01
    1       4  -4.789149e+03    0.000e+00    1.323e+01  9.950e-01
    2       6  -4.846253e+03    0.000e+00    3.651e+00  6.971e+00
    3       8  -4.851941e+03    0.000e+00    6.749e-01  2.662e+00
    4      10  -4.852157e+03    0.000e+00    3.773e-02  6.078e-01
    5      12  -4.852158e+03    0.000e+00    4.500e-04  3.688e-02
    6      14  -4.852158e+03    0.000e+00    9.357e-06  4.266e-04
    7      16  -4.852158e+03    0.000e+00    9.945e-07  9.930e-06

Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields:
A: 41.2725
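As a quick sanity check (not from the original thread): differentiating the objective with respect to A gives fun1(A) - fun2(A), so an interior maximiser must sit where the two integrands cross. A small Python/SciPy sketch, with brentq standing in for MATLAB's fzero, recovers the solver's A:

```python
from math import exp
from scipy.optimize import brentq

# The two integrands from the question.
fun1 = lambda g: 208.4 * exp(-0.02704 * g) - 293.9 * exp(-0.1286 * g)
fun2 = lambda g: 108.3 * exp(-0.01137 * g) - 622.8 * exp(-0.1578 * g)

# d/dA [ integral(fun1,10,A) + integral(fun2,A,80) ] = fun1(A) - fun2(A),
# so the stationary point is a crossing of the two integrands.
A_star = brentq(lambda a: fun1(a) - fun2(a), 20, 80)
print(A_star)  # close to the solver's 41.2725
```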
  1 Comment
MATLAB_Soldier on 26 January 2023
This is exactly what I was after. Thank you.


More Answers (2)

Matt J on 26 January 2023
Edited: Matt J on 26 January 2023
There is virtually no benefit to the problem-based approach if you are going to use fminunc. The main advantage of the problem-based approach is in the formulation of constraints, and perhaps in grouping the unknowns into different vectors.
options = optimoptions('fminunc','Algorithm','quasi-newton');
options.Display = 'iter';
A = fminunc(@objective,30,options)
                                                     First-order
 Iteration  Func-count        f(x)       Step-size    optimality
     0           2        -4557.72                          71.5
     1           6        -4852.05       0.0498625          4.29
     2          10        -4852.16        0.215666        0.0015
     3          12        -4852.16               1             0

Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
A = 41.2725
function out=objective(x)
    fun1 = @(g1) 208.4*exp(-0.02704.*g1)-293.9*exp(-0.1286.*g1);
    fun2 = @(g2) 108.3*exp(-0.01137.*g2)-622.8*exp(-0.1578.*g2);
    % Negate so that minimising maximises the combined integral.
    out = -(integral(fun1,10,x) + integral(fun2,x,80));
end
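Outside MATLAB, the same unconstrained formulation can be sketched with SciPy (illustrative only; minimize_scalar plays the role of fminunc here, and scipy.integrate.quad the role of integral):

```python
from math import exp
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

fun1 = lambda g: 208.4 * exp(-0.02704 * g) - 293.9 * exp(-0.1286 * g)
fun2 = lambda g: 108.3 * exp(-0.01137 * g) - 622.8 * exp(-0.1578 * g)

def objective(a):
    # Negate so that a minimiser maximises the combined integral.
    return -(quad(fun1, 10, a)[0] + quad(fun2, a, 80)[0])

res = minimize_scalar(objective, bounds=(20, 80), method="bounded")
print(res.x)  # the break point, close to 41.2725
```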

John D'Errico on 26 January 2023
Edited: John D'Errico on 26 January 2023
I can't imagine why you want to do this. BUT, it is an interesting problem, so I'll take a look.
Assume that all functions are continuous and at least once differentiable. The theory would probably survive if they were not, but I'll assume it anyway, since discontinuities could cause issues with the root finder.
Next, consider the simple case of two pieces, thus two functions and one break point. You want to find the unknown parameter B, such that
int(f(x),[A,B]) + int(g(x),[B,C])
is maximized. The maximum MUST occur at some point where the two functions cross. Yes, there may be multiple crossings, but the maximum will lie at ONE of those crossings. For example, suppose we consider some location where f(x) > g(x), and assume the optimum value of the break point B lies there. If f(B) > g(B), then we can trivially increase the value of B to increase the combined integral, and we can keep increasing the break point until we reach exactly the location where the two functions are equal. If g is the larger of the two functions, then by the same logic we can decrease the break point until the two functions cross.
The point is, all we need to do is locate the points where f(x) and g(x) cross. The breakpoint must lie at ONE of the crossings.
If we have more than two functions, so now also h(x), then we also need to look for the crossings of the pair of functions g(x) and h(x). In some cases, if h(x) everywhere dominates g(x), so that h(x) > g(x) everywhere, then the problem may reduce to an integral where the length of the interval for g(x) is zero, so the upper and lower limits on the g integral may be the same. Effectively, this means we need to find the crossings of the pair of functions f(x) and h(x).
Anyway, all of this just means all we care about is how to locate the points where each of those functions cross, and then the solution will be a combination of those crossings. My point is, this is a purely combinatorial problem, and a fairly simple one at that. First locate all function crossings. Then test each combination, looking at where f and g cross first. Then look for the points where you transition to h, etc. Absolutely no optimization is necessary beyond the initial rootfinding, and that can be done using fzero. In fact, optimization is not appropriate for the breakpoints. Just find the breaks using fzero.
For example, consider these functions on the global interval [0,10].
f = @(x) 1.5*sin(x);
g = @(x) cos(x);
h = @(x) sin(sqrt(x)/2 + pi/3) - 0.5;
Again, all that matters is where the functions cross. So obtain a list of all such crossings, then just test them all. Even better, integrate each function between each of those transition points in advance; that just trades a little coding time for fewer CPU cycles.
Code to find all crossings of a pair of functions in 1-dimension is not that difficult to write. I've attached a simple code to this post that just uses fzero.
support = [0,10];
nsamples = 500;
Xfg = allcrossings(f,g,support,nsamples)
Xfg = 1×3
0.5880 3.7296 6.8712
Xgh = allcrossings(g,h,support,nsamples)
Xgh = 1×3
1.0473 5.0441 7.7042
Xfh = allcrossings(f,h,support,nsamples)
Xfh = 1×4
0.3195 2.8371 6.4404 9.4033
Now all you need to do is test those locations as potential break points. At this point, it is just a fairly simple search. I won't go any further, since at this point, you have already accepted an answer, even though I would suggest you may be taking the wrong path there. But the solution MUST lie at some set of those breakpoints. We know that in advance. Effectively, there are now just a few integrals we need to compute.
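The attached allcrossings is not reproduced here, but the idea behind it is easy to sketch: sample the difference of the two functions on a grid, look for sign changes, and polish each bracketed root. A hypothetical Python version (with brentq in place of fzero; the actual MATLAB attachment may differ):

```python
import numpy as np
from scipy.optimize import brentq

def all_crossings(f, g, support, nsamples):
    """Find points where f and g cross: scan f - g for sign changes
    on a grid, then refine each bracketed root with brentq."""
    d = lambda x: f(x) - g(x)
    xs = np.linspace(support[0], support[1], nsamples)
    ds = np.array([d(x) for x in xs])
    roots = []
    for i in range(len(xs) - 1):
        if ds[i] == 0.0:
            roots.append(xs[i])          # grid point is itself a crossing
        elif ds[i] * ds[i + 1] < 0:
            roots.append(brentq(d, xs[i], xs[i + 1]))
    return roots

f = lambda x: 1.5 * np.sin(x)
g = lambda x: np.cos(x)
print(all_crossings(f, g, [0, 10], 500))  # about [0.5880, 3.7296, 6.8712]
```

A scan with enough samples to separate adjacent roots is the usual caveat; 500 samples on [0,10] is ample for these smooth functions.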

