Fit a Model to Complex-Valued Data
This example shows how to perform nonlinear fitting of complex-valued data. While most Optimization Toolbox™ solvers and algorithms operate only on real-valued data, least-squares solvers and fsolve can work on both real-valued and complex-valued data for unconstrained problems. The objective function must be analytic in the complex function sense.

Do not set the FunValCheck option to 'on' when using complex data; if you do, the solver produces an error. Do not use the 'interior-point' algorithm with lsqcurvefit or lsqnonlin; this algorithm is primarily for handling constraints, and has not been validated to work with complex data.
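For example, one way to be sure you avoid the 'interior-point' algorithm is to select an allowed algorithm explicitly when you create the options. This is only a sketch; the default algorithm used later in this example also handles the complex data.

opts = optimoptions(@lsqnonlin,'Algorithm','levenberg-marquardt'); % avoid 'interior-point'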
Data Model
The data model is a simple exponential:

y(x) = v1 + v2 exp(v3 x)

Here x is the input data, y is the response, and v = (v1, v2, v3) is a complex-valued vector of coefficients. The goal is to estimate v from x and noisy observations y. The data model is analytic, so you can use it in a complex solution.
Artificial Data with Noise
Generate artificial data for the model. Take the complex coefficient vector v as [2;3+4i;-.5+.4i]. Take the observations x as exponentially distributed. Add complex-valued noise to the responses y.
rng default % for reproducibility
N = 100; % number of observations
v0 = [2;3+4i;-.5+.4i]; % coefficient vector
xdata = -log(rand(N,1)); % exponentially distributed
noisedata = randn(N,1).*exp((1i*randn(N,1))); % complex noise
cplxydata = v0(1) + v0(2).*exp(v0(3)*xdata) + noisedata;
Fit the Model to Recover the Coefficient Vector
The difference between the response predicted by the data model and an observation (xdata for x and response cplxydata for y) is:
objfcn = @(v)v(1)+v(2)*exp(v(3)*xdata) - cplxydata;
Use either lsqnonlin or lsqcurvefit to fit the model to the data. This example first uses lsqnonlin.
opts = optimoptions(@lsqnonlin,'Display','off');
x0 = (1+1i)*[1;1;1]; % arbitrary initial guess
[vestimated,resnorm,residuals,exitflag,output] = lsqnonlin(objfcn,x0,[],[],opts);
vestimated,resnorm,exitflag,output.firstorderopt
vestimated =
   2.1582 + 0.1351i
   2.7399 + 3.8012i
  -0.5338 + 0.4660i

resnorm =
  100.9933

exitflag =
     3

ans =
    0.0018
lsqnonlin recovers the complex coefficient vector to about one significant digit. The norm of the residual is sizable, indicating that the noise keeps the model from fitting all the observations. The exit flag is 3, not the preferable 1, because the first-order optimality measure is about 1e-3, not below 1e-6.
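As a quick sanity check (not part of the original output), you can compare the estimate with the coefficient vector v0 that generated the data:

relerr = norm(vestimated - v0)/norm(v0) % relative error of the recovered coefficients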
Alternative: Use lsqcurvefit
To fit using lsqcurvefit, write the model to give just the responses, not the responses minus the response data.
objfcn = @(v,xdata)v(1)+v(2)*exp(v(3)*xdata);
Use lsqcurvefit options and syntax.
opts = optimoptions(@lsqcurvefit,opts); % reuse the options
[vestimated,resnorm] = lsqcurvefit(objfcn,x0,xdata,cplxydata,[],[],opts)
vestimated =
   2.1582 + 0.1351i
   2.7399 + 3.8012i
  -0.5338 + 0.4660i

resnorm =
  100.9933
The results match those from lsqnonlin, because the underlying algorithms are identical. Use whichever solver you find more convenient.
Alternative: Split Real and Imaginary Parts
To include bounds, or simply to stay completely within real values, you can split the real and complex parts of the coefficients into separate variables. For this problem, split the coefficients as follows:

y = v1 + i v2 + (v3 + i v4) exp((v5 + i v6) x)
Re(y) = v1 + exp(v5 x) (v3 cos(v6 x) - v4 sin(v6 x))
Im(y) = v2 + exp(v5 x) (v4 cos(v6 x) + v3 sin(v6 x))
Write the response function for lsqcurvefit.
function yout = cplxreal(v,xdata)
yout = zeros(length(xdata),2); % allocate yout
expcoef = exp(v(5)*xdata(:)); % magnitude
coscoef = cos(v(6)*xdata(:)); % real cosine term
sincoef = sin(v(6)*xdata(:)); % imaginary sin term
yout(:,1) = v(1) + expcoef.*(v(3)*coscoef - v(4)*sincoef);
yout(:,2) = v(2) + expcoef.*(v(4)*coscoef + v(3)*sincoef);
end
Save this code as the file cplxreal.m on your MATLAB® path.
Split the response data into its real and imaginary parts.
ydata2 = [real(cplxydata),imag(cplxydata)];
The coefficient vector v now has six dimensions. Initialize it as all ones, and solve the problem using lsqcurvefit.
x0 = ones(6,1);
[vestimated,resnorm,residuals,exitflag,output] = ...
lsqcurvefit(@cplxreal,x0,xdata,ydata2);
vestimated,resnorm,exitflag,output.firstorderopt
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to
its initial value is less than the value of the function tolerance.

vestimated =
    2.1582
    0.1351
    2.7399
    3.8012
   -0.5338
    0.4660

resnorm =
  100.9933

exitflag =
     3

ans =
    0.0018
Interpret the six-element vector vestimated as a three-element complex vector, and you see that the solution is virtually the same as the previous solutions.
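For example, one way to rebuild the three complex coefficients from the six real ones (a sketch that follows the ordering used in cplxreal.m) is:

% Pair the (real, imaginary) components in the order used by cplxreal.m
vcomplex = complex(vestimated([1 3 5]),vestimated([2 4 6]))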