How to avoid error calculation going to infinity when dividing by near-zero values?
Hello,
I am calculating the relative error between two signals using the following equation:
relative_error(i) = abs((measured(i)-estimated(i))./measured(i))*100;
The problem is that when measured is zero or near zero, relative_error explodes.
Do you know a more appropriate way of calculating the error between two signals that avoids this problem?
Thanks,
Cerilet
Answers (1)
Honglei Chen
24 Oct 2017
There are several ways to handle this. For example, you can add one more line to set all those points to a predefined value:
relative_error = abs((measured - estimated)./measured)*100;
relative_error(measured == 0) = 1; % say the predefined value is 1
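Note that measured == 0 only catches exact zeros, while the question also mentions near-zero samples. A minimal sketch of masking with a tolerance instead (the threshold tol and the replacement value are assumptions you would tune to your data):
tol = 1e-6;                          % assumed tolerance for "near zero"
small = abs(measured) < tol;         % samples where the denominator is unreliable
relative_error(small) = 1;           % replace those samples with a predefined value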
Or just add a small epsilon to the denominator to avoid the division by zero:
epsilon = 1e-8;
relative_error = abs((measured - estimated)./(measured + epsilon))*100;
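If a strict per-sample relative error is not required, another common option is to normalize by the overall scale of the measured signal rather than by each sample, so the denominator cannot vanish at individual points. A minimal sketch, assuming normalization by the maximum absolute value of measured (the choice of scale is an assumption; RMS would also work):
scale = max(abs(measured));                               % single scalar denominator
normalized_error = abs(measured - estimated)/scale*100;   % error as a percentage of signal scale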
HTH