How to use implicit model functions in Curve Fitting Toolbox

I am trying to fit some data to a custom implicit equation (a 2nd-degree polynomial in x and y; see below) and obtain the coefficients (a, b, and c) for that data.
But cftool does not have an implicit option in its custom equation tab.
Any help on the topic is appreciated. I understand that people are coding this in various ways, and I would really appreciate it if a GUI-based answer is also possible.

2 Comments

Matt J on 12 Nov 2020
You need some sort of constraint on a,b,c because otherwise a trivial solution is just a=b=c=0.
kdb acml on 12 Nov 2020
Yes, the constraints are that c is 1.0 and that a and b are non-zero.


Accepted Answer

Ameer Hamza on 11 Nov 2020
Edited: Ameer Hamza on 11 Nov 2020

1 vote

You can consider the equation as z = a*x^2 + b*y^2 + c*y, where z = 0. Then define x, y, and z in MATLAB like this
x; y; % your data points
z = zeros(size(x));
and then use cftool() with the x, y, and z variables. In the custom equation field, write
a*x^2 + b*y^2 + c*y
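For reference, the same workflow can be scripted with fittype/fit from the Curve Fitting Toolbox. This is a sketch only: the data vectors x and y are assumed to already exist, and fixing c = 1 as a 'problem' parameter (per the constraint stated in the comments) excludes the trivial solution a = b = c = 0 that Matt J warns about.

```matlab
% Programmatic version of the cftool fit sketched above.
% x and y are assumed to hold the data; c is held fixed at 1.
z  = zeros(size(x));                       % implicit equation: model = 0
ft = fittype('a*x^2 + b*y^2 + c*y', ...
             'independent', {'x', 'y'}, 'dependent', 'z', ...
             'problem', 'c');              % c is a fixed "problem" parameter
fittedmodel = fit([x(:), y(:)], z(:), ft, ...
                  'problem', 1, 'StartPoint', [1, 1]);
coeffvalues(fittedmodel)                   % returns the fitted [a, b]
```

The 'StartPoint' here is illustrative; as noted later in the thread, the result of this nonlinear fit depends on the initial condition.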

7 Comments

kdb acml on 11 Nov 2020
I tried that, but it just fits the plane z = 0 for all values of a, b, and c. I actually want to know what 2-D curve fits the data.
Please refer to the attached image for clarification.
Ameer Hamza on 11 Nov 2020
Yes, it fits a plane at z = 0. Your data is still confined to a 2-D plane. You can just ignore the plane and create an implicit curve. For example
x = linspace(-1, 1, 20);
y = x.^2;
z = zeros(size(x));
Use cftool() to estimate the model, export it to the workspace with the name 'fittedmodel', and then try the following
f = @(x,y) fittedmodel.a*x.^2 + fittedmodel.b*y.^2 + fittedmodel.c*y;
fimplicit(f)
kdb acml on 11 Nov 2020
But the constants I get (a, b, and c) are just the initial guess values, since those already fit it just fine, making the whole curve fitting redundant.
Ameer Hamza on 12 Nov 2020
No, they are not the same as the initial guess, at least not when I tried it with the sample data I shared. Can you share your data points?
kdb acml on 12 Nov 2020
Edited: kdb acml on 12 Nov 2020
Please see the attached file for x and y.
My problem is physics-based, so I know the coefficient for y needs to be 1.0, which will happen automatically since this is a homogeneous equation.
I'm doing a curve fit for the other coefficients. I also tried just using the quadratic root formula for y. That gives some results, but I wanted to see the changes when I use a different function for the curve fit.
Thank you for sticking with me despite my dumb mug :P
Ameer Hamza on 12 Nov 2020
If you load the data like this
data = load('new.txt');
x = data(:,1);
y = data(:,2);
z = zeros(size(x));
and use cftool to fit the model, then export the model to the base workspace and run the following
f = @(x,y) fittedmodel.a/fittedmodel.b*x.^2 + y.^2 + fittedmodel.c/fittedmodel.b*y;
fimplicit(f, [min(x), max(x) min(y) 0])
hold on
plot(x, y, '+')
You will get
The actual fit will depend on the initial condition.
kdb acml on 13 Nov 2020
Edited: kdb acml on 13 Nov 2020
Thank you.
It seems I was just confused by the plot in cftool. I am getting the coefficients now. Again, thanks a lot.
All three methods suggested in the comments work, with some variance in the results.


More Answers (2)

Matt J on 12 Nov 2020
Edited: Matt J on 13 Nov 2020

2 votes

An analytical solution is possible here:
[x,y]=deal(x(:),y(:));
A=[x.^2,y.^2,y];        % each row evaluates a*x^2 + b*y^2 + c*y at one point
s=std(A,1);
[~,~,V]=svd(A./s,0);    % scale the columns, then take the economy SVD
abc=V(:,end)./s(:);     % last singular vector minimizes norm(A*abc), undo scaling
abc=abc/abc(end);       % normalize so that c = 1
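The V(:,end) step works because, for a homogeneous system A*v ≈ 0, the unit vector minimizing norm(A*v) is the right singular vector belonging to the smallest singular value. A minimal sanity check on noise-free synthetic data (this example is illustrative, not from the answer):

```matlab
% Noise-free points on y = x.^2 satisfy -1*x.^2 + 0*y.^2 + 1*y = 0,
% so the recovered coefficients should be abc = [-1; 0; 1].
x = linspace(-1, 1, 50).';
y = x.^2;
A = [x.^2, y.^2, y];
[~, ~, V] = svd(A, 0);   % economy SVD; columns of V are right singular vectors
abc = V(:, end);         % unit vector minimizing norm(A*abc)
abc = abc/abc(end)       % rescale so the coefficient of y is 1
```

With exact data the smallest singular value is zero and the recovery is exact; with noise it becomes the least-squares direction, which is why the column scaling in the answer matters.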

6 Comments

Matt J on 12 Nov 2020
Edited: Matt J on 12 Nov 2020
For a GUI-based solution, you could also use fmincon in the Optimization App, but the ideal solution is the one above.
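For completeness, a command-line sketch of the fmincon route mentioned here (the objective, start point, and options are illustrative, not from the thread): with c fixed at 1, minimize the sum of squared implicit residuals over a and b.

```matlab
% Hypothetical sketch: fit a and b with c fixed at 1 by minimizing the
% sum of squared residuals of a*x.^2 + b*y.^2 + y over the data (x, y
% are assumed to hold the data; requires Optimization Toolbox).
obj  = @(p) sum((p(1)*x.^2 + p(2)*y.^2 + y).^2);
p0   = [1; 1];                              % illustrative start for [a; b]
opts = optimoptions('fmincon', 'Display', 'off');
p = fmincon(obj, p0, [], [], [], [], [], [], [], opts);
a = p(1); b = p(2); c = 1;
```

As with the cftool fit, the minimizer this converges to depends on the start point p0.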
Bruno Luong on 12 Nov 2020
This method does not seem to be robust when only part of the data is observed.
% This method does not seem to be robust when only part of the data is observed
a=0.2
b=0.2
c=1
s=0.01; % noise std
% simulate some (x,y) data
x=2*rand(1,1000)-1;
d=1-4*b.*a*x.^2;
y=(-1+sqrt(d))./(2*b);
% add noise
x = x + s*randn(size(x));
y = y + s*randn(size(y));
% Matt's svd method
[x,y]=deal(x(:),y(:));
A=[x.^2,y.^2,y];
[~,~,V]=svd(A-mean(A,1),0);
abc=V(:,end);
abc=abc/abc(end);
% Plot
a=abc(1)
b=abc(2)
c=abc(3);
xx = linspace(-1,1);
yy = linspace(-1,1).';
z = a*xx.^2+b*yy.^2+yy;
close all
figure
contour(xx,yy,z,[0 0],'r');
hold on
plot(x,y,'.')
axis equal
legend('fit','data')
kdb acml on 13 Nov 2020
I am still trying to understand how the eigenvalue problem gives me the coefficients. I'll try this as well. Thanks.
Matt J on 13 Nov 2020
Edited: Matt J on 13 Nov 2020
"This method does not seem to be robust when only part of the data is observed"
I think I've managed to improve it:
I think I've managed to improve it:
a=0.2;
b=0.2;
c=1;
s=0.01; % noise std
% simulate some (x,y) data
x=2*rand(1,1000)-1;
d=1-4*b.*a*x.^2;
y=(-1+sqrt(d))./(2*b);
% add noise
x = x + s*randn(size(x));
y = y + s*randn(size(y));
% Matt's svd method
sx=std(x); sy=std(y);
[x,y]=deal(x(:),y(:));
A=[x.^2,y.^2,y];
s=std(A,1);
[~,~,V]=svd(A./s,0);
abc=V(:,end)./s(:);
abc=abc/abc(end);
% Plot
a=abc(1)
a = 0.1860
b=abc(2)
b = 0.6876
c=abc(3),
c = 1
xx = linspace(-1,1);
yy = linspace(-1,1).';
z = a*xx.^2+b*yy.^2+yy;
close all
figure
plot(x,y,'k.')
hold on
contour(xx,yy,z,[0 0],'Color','c','LineWidth',2);
axis equal
legend('data','fit')
ylim([-0.3,0.3])
Bruno Luong on 13 Nov 2020
Edited: Bruno Luong on 13 Nov 2020
Better indeed. I would argue this is even "more" correct (increase the noise s to 0.03 and you'll see the effect; full code tlsqr.m attached).
[x,y]=deal(x(:),y(:));
A=[x.^2,y.^2,y];
C=cov(A);
S=sqrtm(C);
S=diag(diag(S));
% your method is equivalent to use
% S=diag(std(A)) % or
% S=diag(sqrt(diag(C)));
[~,~,V]=svd(A/S,0);
abc=S\V(:,end);
abc=abc/abc(end);
The method still bothers me a great deal, since the scaling affects both the signal spread and the noise spread, and the noise in A is no longer Gaussian since the transformation is not linear, etc.
kdb acml on 16 Nov 2020
Edited: kdb acml on 16 Nov 2020
Thanks!
The variance of the coefficients across all three methods is small for my data at the moment. So now I just need to remove some statistical errors before trying to find the actual differences between these methods.


Bruno Luong on 12 Nov 2020
Edited: Bruno Luong on 12 Nov 2020

1 vote

Why not just solve it using linear algebra? This straightforward method seems to do the job:
xy = load('new.txt');
x = xy(:,1);
y = xy(:,2);
P = -([x.^2,y.^2] \ y);   % least-squares solve of a*x^2 + b*y^2 = -y, with c fixed at 1
a = P(1);
b = P(2);
c = 1;
% Plot
xi = linspace(min(x),max(x));
yi = linspace(min(y),max(y)).';
z = reshape(a*xi.^2+b*yi.^2+c*yi,[length(yi) length(xi)]);
close all
figure
contour(xi,yi,z,[0 0],'r');
hold on
plot(x,y,'.')
legend('fit','data')

5 Comments

Matt J on 12 Nov 2020
Edited: Matt J on 12 Nov 2020
You have random noise in the coefficient matrix [x.^2,y.^2], which means the solution will not be unbiased. Some sort of total least squares solver should give better results.
Bruno Luong on 12 Nov 2020
Honestly, this is just for illustration. I have no idea of the noise characteristics of the experimental data provided by the OP, and furthermore the algebraic method always has a bias (I don't fit anything directly related to the x/y metric here). I'm not even sure x and y have the same unit (a necessary condition for total least squares). So until all those issues are sorted out, there's no need to bother.
kdb acml on 13 Nov 2020
Another way for me to try. Thanks!
As for your (Bruno Luong) sub-comment, I do not know what noise characteristics mean for a set of data. As for the units, both are non-dimensional quantities obtained directly from numerical experiments for stresses in a fluid.
@Matt J, I read up on some texts and it seems like this biased-solution issue might create problems; I'll look into that. As for my question, all three methods worked, giving results of the same order.
As a final icing on the gigantic cake of help you guys have been, could you please tell me which field of numerical analysis deals with the terms 'biased/unbiased methods' and 'noise characteristics of the data'?
Bruno Luong on 13 Nov 2020
Edited: Bruno Luong on 13 Nov 2020
Vast topic. All the regression literature deals somewhat with noise issues. For a start, you can look at the noise covariance matrix (normalized Gaussian noise) and regularization techniques: the art of finding the right balance between bias (systematic error) and statistical error, which depends on the inverse of the Hessian at the minimum; that is, the way you formulate the objective function to achieve the regression goal.
In practice the noise is sometimes not Gaussian, and it's a big mess even at the theoretical level.
If you really want to fight against bias, you first need to understand/characterize your measurement noise. If you start by saying "I do not know what noise characteristics mean", then you have a long, long way to go.
Total least squares tries to deal with problems where the error affects both the measurement query points and the values; that is what Matt drew inspiration from by using the SVD, but his nice idea (sorry to say) is flawed for various reasons.
EDIT: Matt's tlsq method is much better after data "normalization".
kdb acml on 16 Nov 2020
Thanks! I'll look these up in my free time!


Category
Find more on Get Started with Curve Fitting Toolbox in Help Center and File Exchange

Release
R2020b

Asked: 11 Nov 2020

Edited: 16 Nov 2020
