Minimising coefficients using fminsearch, subject to normalisation condition
I am currently trying to fit a calculated signal to a measured signal by minimising the coefficients of the following function,
f(x) = c1*f1(x) + c2*f2(x) + c3*f3(x),
subject to the conditions,
c1 + c2 + c3 = 1,
c3 = 1 - c1 - c2.
However, I am stuck on how to minimise this function subject to the normalisation condition and the requirement for c3 above; I am not sure how to do this with fminsearch. Perhaps I should do the minimisation over c1 and c2 only, and calculate the resulting value of c3 afterwards. The resulting fit of the calculated data to the observed data is not what I expect. Any help would be greatly appreciated!
I have the following code,
fi = zeros(326, 3);
fi(:,1) = pdW_Singlet; % some calculated function for c_i
fi(:,2) = pdW_Triplet;
fi(:,3) = pdW_Bound;
c = rand(2,1);
max_c = max(c);
min_c = min(c);
c(3) = 1 - c(1) - c(2);
c = c/ sum(c); % normalise to unity
fun = @(c)sseval_lineout(c, fi, observed_function); % minimise coefficients
options = optimset('Display','iter', 'PlotFcns',@optimplotfval);
best_c = fminsearch(fun, c, options)
where the function sseval_lineout is,
function [sse, f_sum] = sseval_lineout(c, fi, ground_truth)
% Weighted sum of the basis signals: f_sum = c(1)*fi(:,1) + ... + c(end)*fi(:,end)
f_sum = (fi * c(:))';
% Sum of squared errors against the measured signal, ignoring NaN samples
sse = sum((f_sum - ground_truth).^2, 'omitnan');
end
Answers (3)
Andreas Apostolatos
29 Mar 2021
Edited: Andreas Apostolatos
29 Mar 2021
Hello,
You have an optimization problem at hand whose objective function is subject to two equality constraints. However, 'fminsearch' can only solve unconstrained optimization problems; see the following documentation page,
where the preamble states "Find minimum of unconstrained multivariable function using derivative-free method".
Since your constraints are linear, I would recommend that you look at function 'fmincon', which accepts linear equality constraints of the form 'Aeq*x = beq'; see the following documentation page for more information.
The way that you have set up your optimization problem, namely,
c = rand(2,1);
max_c = max(c);
min_c = min(c);
c(3) = 1 - c(1) - c(2);
c = c/ sum(c); % normalise to unity
fun = @(c)sseval_lineout(c, fi, observed_function); % minimise coefficients
options = optimset('Display','iter', 'PlotFcns',@optimplotfval);
best_c = fminsearch(fun, c, options)
does not ensure that the constraints are satisfied. By calling 'fminsearch' in the following way,
best_c = fminsearch(fun, c, options)
you provide the minimizer with the initial guess 'c', which is computed as follows,
c = rand(2,1);
max_c = max(c);
min_c = min(c);
c(3) = 1 - c(1) - c(2);
c = c/ sum(c); % normalise to unity
However, the equality constraints are satisfied by construction only at this initial guess. The minimizer 'fminsearch' will not respect these constraints while it searches the design space for a minimum.
Please look at other optimization methods that allow for such constraints, such as 'fmincon' mentioned above.
I hope that this information helps you to proceed.
Kind Regards,
Andreas
Matt J
29 Mar 2021
Edited: Matt J
29 Mar 2021
You do not need iterative minimization to solve this problem, assuming your c(i) are not constrained to be positive (which seems safe to assume, seeing as you don't mention that in your description). The analytical solution is,
A = fi(:,1:2) - fi(:,3);        % columns: f1 - f3 and f2 - f3
b = ground_truth(:) - fi(:,3);  % measured signal minus f3
c = A\b;                        % least-squares solution for [c1; c2]
c(3) = 1 - sum(c);              % recover c3 from the constraint
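To see why this works: substituting c3 = 1 - c1 - c2 into the model gives f = fi(:,3) + c1*(fi(:,1) - fi(:,3)) + c2*(fi(:,2) - fi(:,3)), which is an ordinary unconstrained least-squares problem in [c1; c2]. A quick synthetic check (the basis signals below are hypothetical stand-ins, not the question's data):

```matlab
% Hypothetical stand-ins for the three basis signals (326 samples, as in the question)
x = linspace(0, 1, 326)';
fi = [sin(pi*x), cos(pi*x), x.^2];
c_true = [0.5; 0.3; 0.2];          % sums to 1 by construction
ground_truth = fi * c_true;        % noiseless "measured" signal

A = fi(:,1:2) - fi(:,3);           % columns: f1 - f3 and f2 - f3
b = ground_truth(:) - fi(:,3);     % measured signal minus f3
c = A\b;                           % least-squares solution for [c1; c2]
c(3) = 1 - sum(c);                 % recover c3 from the constraint
% c matches c_true up to round-off
```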
1 Comment
Matt J
29 Mar 2021
If the c(i) are bounded to [0,1] but you do not have the Optimization Toolbox, you can proceed very similarly with fminsearchbnd or fminsearchcon (available on the File Exchange). For example,
A = fi(:,1:2) - fi(:,3);
b = ground_truth(:) - fi(:,3);
% Bounded search for [c1; c2], starting from the unconstrained solution
c = fminsearchbnd(@(c) norm(A*c - b), A\b, [0;0], [1;1]);
c(3) = 1 - sum(c);
William Rose
29 Mar 2021
You are fitting c, which is called x in the help. You have a linear equality constraint: c(1)+c(2)+c(3)=1.
Define
Aeq=[1, 1, 1];
beq=1;
Are c1, c2, c3 each restricted to [0,1]? If not, then c=[-99, -1000, 1100] is acceptable, since it still sums to 1. If they are individually constrained to [0,1], then you also have lower and upper bound constraints, in which case you must define
lb=[0;0;0];
ub=[1;1;1];
You need an initial guess, so use
c0=[.33; .33; .34];
or whatever.
Then in your code, having done the setup above, you call fmincon:
c = fmincon(fun, c0, [], [], Aeq, beq, lb, ub);
In the above, replace lb,ub with [],[] if the c(i) are not individually constrained to [0,1].
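Putting the pieces together, a complete call might look as follows (this assumes fi, observed_function and sseval_lineout are defined as in the question, and that fmincon from the Optimization Toolbox is available):

```matlab
fun = @(c) sseval_lineout(c, fi, observed_function);  % objective from the question
Aeq = [1, 1, 1];  beq = 1;                            % enforce c1 + c2 + c3 = 1
lb = [0; 0; 0];   ub = [1; 1; 1];                     % use [],[] instead if unbounded
c0 = [1/3; 1/3; 1/3];                                 % feasible initial guess
c = fmincon(fun, c0, [], [], Aeq, beq, lb, ub);
```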
3 Comments
William Rose
30 Mar 2021
@Matt J is correct and I was wrong. The problem is linear in the coefficients, so it can be solved directly by matrix methods, with no initial guess needed - which is what lsqlin() does. fmincon() is iterative and less elegant. You can use Aeq and beq as I suggested when you call lsqlin(), in order to enforce the constraint c1+c2+c3=1.
Matt J
30 Mar 2021
No, lsqlin is an iterative solver, and I do agree that you need an iterative solution if you have bounds. The reason that lsqlin doesn't require an initial guess is because it knows that the minimization problem is convex, and so will always converge to a global minimum. fmincon on the other hand is designed to solve more general problems and cannot know whether the problem it is given is convex.
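For completeness, since the model is linear in c, the whole constrained fit can be handed to lsqlin in a single call (assuming fi and observed_function from the question, and the Optimization Toolbox):

```matlab
% Minimise ||fi*c - observed_function||^2 subject to sum(c) = 1 and 0 <= c(i) <= 1
Aeq = [1, 1, 1];  beq = 1;
lb = [0; 0; 0];   ub = [1; 1; 1];
c = lsqlin(fi, observed_function(:), [], [], Aeq, beq, lb, ub);
```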