Use known error bars in polyconf function

I would like to know whether it is possible to supply error bars (whose values I know from my experiment) to the polyconf function in order to re-estimate the confidence bounds.
Thanks in advance!

1 Comment

Andrew Newell on 11 Feb 2011
Do you mean that you are fitting the data using polyfit(x,y,n) and you have error estimates for y?


Accepted Answer

Andrew Newell on 11 Feb 2011

0 votes

Aaah, the old heteroscedasticity problem! @Jeremy, here is a sketch of what you need to do. First, you need to switch to a weighted least squares fit. If E is a vector of errors for your data, then your weights should be proportional to
w = E.^(-2);
There are a variety of tools that allow you to do weighted least squares fitting, depending on your problem and which toolboxes you have (see Weighted Curve Fitting Methods). Then you can call
[Y,DELTA] = polyconf(p,X,S)
where the structure S is described in the documentation for polyfit. This structure incorporates the information on your error bars.
EDIT: On closer examination, I see that robustfit does an iterative weighted least squares for problems where nothing is known about the weights.
If you don't have the Curve Fitting Toolbox or the Optimization Toolbox, you might have to use the do-it-yourself approach. This article on Least Squares by Cleve Moler describes how to do it.
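A do-it-yourself sketch of the steps above, assuming x, y, and E are column vectors and n is the polynomial degree (variable names are illustrative; lscov is in base MATLAB, polyconf in the Statistics Toolbox):

```matlab
% Sketch: weighted polynomial fit with known error bars E,
% then confidence bounds via polyconf.
w = E.^(-2);                     % inverse-variance weights
V = x(:).^(n:-1:0);              % Vandermonde matrix (implicit expansion, R2016b+)
p = lscov(V, y, w);              % weighted least-squares coefficients

% Build an S structure in the same form polyfit returns,
% so polyconf can use the weighted fit:
sw      = sqrt(w);
[~, R]  = qr(diag(sw)*V, 0);     % R factor of the weighted design matrix
S.R     = R;
S.df    = length(y) - (n+1);     % residual degrees of freedom
S.normr = norm(sw.*(y - V*p));   % norm of the weighted residuals

[Y, DELTA] = polyconf(p, x, S);  % fit and 95% confidence half-widths
```

Note that polyconf's default interval is for a new observation under a constant observation variance, so with strongly varying error bars the bounds near individual points are approximate; 'predopt','curve' gives bounds on the fitted curve itself.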

More Answers (2)

Jeremy on 16 Feb 2011

0 votes

Hi Andrew,
Thanks for your answer! It sounds like an easy way to solve my problem. However, two questions remain: 1) Where does the formula (w = E.^(-2);) come from? Is it empirical, or is there a theoretical explanation?
2) I have the Statistics Toolbox, and I don't see how to supply my own weights to robustfit (the recommended function for weighted LSF in this toolbox). It only uses automatically calculated weights, which do not correspond to the ones obtained experimentally. Do you have an idea how to solve this issue?
Thanks
Jérémy

2 Comments

Andrew Newell on 16 Feb 2011
You're welcome, Jérémy! In answer to question 1: there is a theoretical explanation. See chapter 7 of "Regression Analysis by Example" by Chatterjee et al.
Andrew Newell on 16 Feb 2011
As for question 2: I expanded my answer above.


Jeremy on 16 Feb 2011

0 votes

I forgot one last question: if the error is the same on all data points, then there will be no difference between the weighted and unweighted fits, so the experimental error bars won't be taken into account. Do you have any comment on that?
Thanks in advance

2 Comments

Andrew Newell on 16 Feb 2011
I think the least squares method assumes that scatter in the measurements about the best-fitting line is due to measurement error, so there is no need to incorporate the errors explicitly. If there is some variation that is not measurement error, then the linear model needs to be revised. But I'm not a statistician.
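A minimal sketch of this point, on made-up data: a constant weight only rescales the least-squares objective without moving its minimum, so uniform error bars reproduce the ordinary fit.

```matlab
% Illustrative: equal error bars leave the fitted coefficients unchanged.
x = (0:9)';
y = 2*x + 1 + 0.1*randn(10,1);
V = [x, ones(10,1)];               % design matrix for a straight line
p_ols = V \ y;                     % ordinary least squares
p_wls = lscov(V, y, 3*ones(10,1)); % any constant weight vector
% p_ols and p_wls agree to machine precision
```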
Jeremy on 16 Feb 2011
This explanation sounds good to me. I think I was trying to count the errors twice... Anyway, thank you for your fast and relevant answer !

