Machine epsilon

Richard on 18 Jan 2012
What does it mean for MATLAB to have a machine epsilon of 2.2204e-16? Does that mean we can only be sure of a number up to the 15th decimal place, or something else?
Thanks.

Accepted Answer

the cyclist on 18 Jan 2012
It roughly means that numbers are stored with about 15-16 digits of precision. If a number is approximately 1, then that means it can be stored with an error of around 10^(-16) or so. If the number is approximately 1000, then it is stored with an error around 10^(-13) or so.
This is a VERY rough, non-technical explanation.
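For a rough illustration in MATLAB (the values shown in the comments are approximate; the exact digits depend on your display format):
>> eps(1)           % about 2.2204e-16: spacing between 1 and the next larger double
>> eps(1000)        % about 1.1369e-13: the spacing near 1000 is far coarser
>> 1 + 1e-17 == 1   % true: a change this small near 1 is simply lost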
4 Comments
Walter Roberson on 18 Jan 2012
A "double precision number" is one in which approximately twice as much storage is allocated compared to a "single precision number".
In modern computer systems, "double precision" _usually_ means 64 bits total storage for the number, and _usually_ means IEEE 754 Floating Point standards are being followed. However, Embedded Systems may use a different storage representation and no particular standard.
"single precision" these days _usually_ means 32 bits of storage for the number.
Double Precision was standardized before Single Precision: companies invented their own floating point representations Back Then that were good enough to get through on their own systems; IEEE then came along later and created a well-considered double precision floating point standard that did not tread on anyone's toes, because no-one had a really usable double precision implementation.
If you are working with older computer systems, designed before 1985 or so, or are working with an embedded system, you might find that some weird and wonderful floating point or fixed-point system is implemented. Or if you are working with an IBM Z-series system, you might find that it also has a built-in _decimal_ floating point unit as well as a _binary_ floating point unit.
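As a quick illustration of the storage difference, MATLAB will report the sizes directly (the exact layout of the whos output varies by version):
>> x = 1; y = single(1);
>> whos x y     % x is 8 bytes (double), y is 4 bytes (single)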
Richard on 18 Jan 2012
Thanks everyone! :)


More Answers (1)

Walter Roberson on 18 Jan 2012
The value is the same as eps(1) which is described at http://www.mathworks.com/help/techdoc/ref/eps.html
4 Comments
Walter Roberson on 18 Jan 2012
"d = eps(X) is the positive distance from abs(X) to the next larger in magnitude floating point number of the same precision as X"
That says that d = eps(1) is the smallest positive value such that (1+d) is exactly representable and is different than 1.
In hex representation of the binary floating point numbers,
>> num2hex(1)
3ff0000000000000
>> num2hex(1+eps(1))
3ff0000000000001
1+eps(1) is the smallest representable number greater than 1, a single bit difference in the least significant (smallest change) bit.
The difference between 1 and 1+eps(1), which is to say eps(1), is the 2.22E-16 that you noted above.
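A quick sanity check at the prompt: eps(1) is exactly 2^(-52), one unit in the last place of the 52-bit stored fraction, and the subtraction below is exact because both 1 and 1+eps(1) are representable:
>> eps(1) == 2^-52       % true
>> (1 + eps(1)) - 1      % 2.2204e-16, exactly eps(1)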
This eps() value scales with the magnitude of its argument (in power-of-two steps), so eps(2) is 2*eps(1), eps(16) is 16*eps(1), and eps(1/16) is eps(1)/16.
It is not a number of decimal places; it is a _relative_ tolerance. eps(1/1024) is eps(1)/1024, so if you were working with base values in the approximate range of 0.001, you would not still be limited to 2E-16 accuracy; you would be limited to about (2E-16)/1000 accuracy. Likewise, if you are working with values in the range of 1000, you do not get 1000 + 2E-16 accuracy; you get about 1000 + 1000*2E-16 = 1000 + 2E-13 accuracy.
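Typing these at the prompt confirms the scaling:
>> eps(2)       % 2*eps(1)
>> eps(16)      % 16*eps(1)
>> eps(1/16)    % eps(1)/16
>> eps(1/1024)  % eps(1)/1024, about 2.2e-19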
Now, there are also all kinds of rounding effects and limits on the accuracy of special functions like sin() that can result in worse accuracy; eps() gives best-case accuracy.
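For instance, sin(pi) does not come out as exactly zero, partly because pi itself has already been rounded to the nearest double before sin() ever sees it (the exact digits can depend on the math library, but the result is on the order of eps):
>> sin(pi)      % about 1.2246e-16, not 0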
Richard on 18 Jan 2012
Thanks Walter! This is a GREAT explanation! :-)

