
Creating Multiple Variable Polynomial Function Using Datapoints

47 views (last 30 days)
Tamas Lanci on 27 Apr 2020
Edited: John D'Errico on 28 Apr 2020
Hi Everyone,
I have an X,Y,Z dataset. The value of Z depends on X and Y. I would like to create a multiple variable function in MATLAB, find its maximum Z value and its relevant X and Y coordinates.
As far as I know, the polyfit command cannot handle functions of multiple variables. Is there another command or method that can solve this problem?
Thank you for your help.
2 Comments
KSSV on 27 Apr 2020
Tamas Lanci on 28 Apr 2020
I have managed to fit a surface to the datapoints and find its local minimum point. However, this minimum point is not accurate enough. I generate each datapoint by running my code, which takes several seconds. In this photo you can see 441 datapoints; I used a for loop to create them. It takes about 15 minutes to create this rough point cloud. I need to increase the accuracy around the minimum point without increasing my computational time much. I was wondering if I could implement a gradient descent method to find the minimum point of this function with relatively high accuracy?
How could I do that?


1 Answer

John D'Errico on 28 Apr 2020
Edited: John D'Errico on 28 Apr 2020
You have not shown the actual data. A picture is worth a thousand words. But if you attach the data, perhaps now 2000 words. ;)
No. You cannot easily use an optimizer on a scattered set of points, especially if you have any noise in the data.
However, the Curve Fitting Toolbox can do a 2-d polynomial fit. As well, my polyfitn utility, found on the File Exchange, can do the same.
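As a minimal sketch of the polyfitn route (assuming xy is an n-by-2 matrix of (x,y) points and z is a column vector of values; these names are illustrative, and polyfitn must first be downloaded from the File Exchange):

```matlab
% Fit a full quadratic polynomial in two variables to scattered data,
% then evaluate the fitted model back at the data points.
p    = polyfitn(xy, z, 2);     % total degree 2 in x and y
zhat = polyvaln(p, xy);        % predicted z at each data point
```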
But really, it is probably not a good idea to fit the ENTIRE surface with a polynomial, since that just forces the polynomial to chase the shape of your function when well away from the area you really care about. You will then need to use too high an order polynomial to try to fit all the data. And the result will then be general crapola.
So I might choose only the n points that are closest to the lowest data point. Pick some value of n, perhaps 50 points, perhaps fewer. Then use a 2-d polynomial fit to the restricted data. Finally, find the minimum of the polynomial inside the restricted region. Even there, you will want to be careful about using too high an order polynomial.
Another option would be an inverse distance weighted fit. That is, for each point, compute the distance to the lowest data point. Then compute a weight function, perhaps something like:
W = 1./(D0 + D.^2)
Choose some value for D0 that will work for your problem. But this sort of thing will cause the most weight to be applied to the points that are closest to the lowest data point. Again, DO NOT USE A HIGH ORDER POLYNOMIAL FIT, AS THAT WILL PRODUCE COMPLETE CRAP AWAY FROM THE MIN.
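A minimal sketch of that weighting idea, using the Curve Fitting Toolbox. It assumes xy is an n-by-2 matrix of data points, z the corresponding values, and xy0 the lowest data point; the value of D0 here is illustrative only and must be tuned for the actual problem:

```matlab
% Inverse-distance weighted low-order polynomial fit around the minimum.
D  = sqrt(sum((xy - xy0).^2, 2));   % distance of each point to the low point
D0 = 0.1;                           % ASSUMED tuning constant, problem-dependent
W  = 1./(D0 + D.^2);                % most weight near the lowest data point
Smdl = fit(xy, z, 'poly22', 'Weights', W);   % keep the order LOW
```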
Again, without any real data, this is somewhat tough to give you a good example, but you should get the general idea.
Perhaps an example:
xy = rand(441,2)*6 - 3;
z = peaks(xy(:,1),xy(:,2));
plot3(xy(:,1),xy(:,2),z,'o');
grid on
I don't know the location of the min of the peaks function. It is easy enough to find though. For a decent starting estimate, just take the lowest point:
[zmin,ind] = min(z)
zmin =
-6.036
ind =
299
xy0 = xy(ind,:)
xy0 =
0.0028297 -1.7244
Now, throw that into fminsearch:
[xmin, zminval] = fminsearch(@(XY) peaks(XY(1),XY(2)),xy0)
xmin =
0.22824 -1.6255
zminval =
-6.5511
Next, compute the distance to xy0, for each of our data points.
D = sqrt(sum((xy - xy0).^2,2));
If we take the closest 10% of those points, then fitting a surface ONLY to them...
prctile(D,10)
ans =
1.0272
So the 10% point in terms of distance is around a distance of 1. There are 44 points in this subset. It should be sufficient to fit a polynomial model with 20 terms, though I would really not wish to go higher than that.
ind = D < prctile(D,10);
sum(ind)
ans =
44
Smdl = fit(xy(ind,:),z(ind),'poly44')
Linear model Poly44:
Smdl(x,y) = p00 + p10*x + p01*y + p20*x^2 + p11*x*y + p02*y^2 + p30*x^3
+ p21*x^2*y + p12*x*y^2 + p03*y^3 + p40*x^4 + p31*x^3*y
+ p22*x^2*y^2 + p13*x*y^3 + p04*y^4
Coefficients (with 95% confidence bounds):
p00 = -10.05 (-15.65, -4.451)
p10 = -5.737 (-11.08, -0.3966)
p01 = -55.16 (-69.61, -40.7)
p20 = -15.55 (-19.1, -12.01)
p11 = 4.65 (-5.58, 14.88)
p02 = -76.69 (-89.98, -63.39)
p30 = 9.382 (7.558, 11.21)
p21 = -27.35 (-31.72, -22.98)
p12 = 5.117 (-1.081, 11.32)
p03 = -36.3 (-41.48, -31.11)
p40 = -2.463 (-3.116, -1.809)
p31 = 4.064 (3.034, 5.095)
p22 = -8.305 (-9.573, -7.037)
p13 = 0.9328 (-0.2582, 2.124)
p04 = -5.551 (-6.279, -4.823)
plot(Smdl)
hold on
plot3(xy(ind,1),xy(ind,2),z(ind),'ro')
Within the support of the data, the fit is reasonable. As long as the optimizer starts out near the minimum, the solution it finds will be the one we want, or pretty close to that.
[xminpoly, zminpoly] = fminsearch(@(XY) Smdl(XY),xy0)
xminpoly =
0.2381 -1.6573
zminpoly =
-6.3755
This is actually quite close to what fminsearch gave us when applied directly to the peaks function. I'd claim success. With more data, I could then have chosen a tighter subset of points around the identified data min. That would have greatly increased the ability of the model to estimate the true min.
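That tightening step can be sketched as one more pass of the same recipe, reusing the variables from the example above (the 5% cutoff and the poly22 order here are illustrative choices, not from the original post):

```matlab
% Refinement pass: shrink the neighborhood around the polynomial's
% estimated minimum, refit a LOWER order model there, and re-optimize.
D2   = sqrt(sum((xy - xminpoly).^2, 2));  % distances to current estimate
ind2 = D2 < prctile(D2, 5);               % keep only the closest 5% of points
Smdl2 = fit(xy(ind2,:), z(ind2), 'poly22');  % 6 coefficients, ~22 points
[xmin2, zmin2] = fminsearch(@(XY) Smdl2(XY(1),XY(2)), xminpoly);
```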
There is another option: griddata. The polynomial fit idea above is the better one as long as there is noise in the data. If the data is essentially noise-free, however, griddata can fit a smooth interpolant directly through your points.
[xmingr, zmingr] = fminsearch(@(XY) griddata(xy(:,1),xy(:,2),z,XY(1),XY(2),'v4'),xy0)
xmingr =
0.19877 -1.6422
zmingr =
-6.4197
