Why is machine epsilon defined this way?

In summary, the epsilon of a floating-point type is the decimal representation of the smallest ULP when the number stored in the variable is equal to one. It makes sense to define this value relative to 1.0, since 1.0 is stored in the IEEE format with an exponent of zero.
  • #1
Hernaner28
Hi. I'm studying numerical methods so I found this subforum the most correct for this question.
The machine epsilon for a computer is defined as the least number e such that 1 + e is different from 1.
I just wonder: why 1 + e, and not 2 + e, for instance?

Thanks!
 
  • #2
For each floating-point data type there are quantities that programmers call ULPs (units in the last place). A ULP is the ultimate limit of precision for a given floating-point implementation. As the number stored in the variable becomes larger in magnitude, the size of the ULP changes in terms of how it is represented in decimal format. ULPs are not fixed.

The EPSILON value is the decimal representation of a single ULP when the number stored in the floating-point variable is equal to one. Since a ULP's magnitude changes with the stored value, the decision was made to base the definition on a fixed reference, so it is defined at one (1.0). In a sense this choice is arbitrary.
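As a minimal sketch, the epsilon described above can be estimated directly: keep halving a candidate until adding it to 1.0 no longer changes the stored value. (The loop and function name here are illustrative, not from any library.)

```python
def estimate_epsilon():
    """Halve a candidate until 1.0 + candidate/2 rounds back to 1.0."""
    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps /= 2
    return eps

# For IEEE doubles this converges to 2**-52.
print(estimate_epsilon())  # 2.220446049250313e-16
```

On any IEEE-754 double implementation this matches the standard library's reported value (`sys.float_info.epsilon` in Python, `DBL_EPSILON` in C).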

Here is what you should know, presented in the long, somewhat rigorous way, courtesy of the ACM and David Goldberg.

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
 
  • #3
In most floating-point formats, 1.0 is stored as 1.0 × 2^0, 2.0 as 1.0 × 2^1, 4.0 as 1.0 × 2^2, and so on. (In the IEEE 32-bit and 64-bit formats, the leading 1 is not stored but assumed to be there.) So it makes the most sense to define epsilon relative to 1.0, since that is stored as a number multiplied by 2^0, which is the same as a number multiplied by 1.
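This can be seen directly with Python's `float.hex()`, which prints the stored significand and exponent: powers of two differ only in the exponent field, while the significand stays at 1.0.

```python
# float.hex() shows each double as 0x1.<52-bit fraction>p<exponent>.
for x in [1.0, 2.0, 4.0]:
    print(x, "->", x.hex())
# 1.0 -> 0x1.0000000000000p+0
# 2.0 -> 0x1.0000000000000p+1
# 4.0 -> 0x1.0000000000000p+2
```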
 
  • #4
Here's a diagram of the IEEE double precision floating point representation:

[Diagram: IEEE 754 double-precision floating-point format — 1 sign bit, 11 exponent bits, 52 fraction bits]


This format is specialized for expressing numbers in the form 1.<fractional_part> × 2^<exponent>. Note that the 1 that precedes the binary point is not stored. With an infinite amount of storage, every computable number could be expressed in this binary format. Computers don't have an infinite amount of storage, so floating-point representations are a poor man's alternative to the pure mathematical form.

Suppose the fractional part of some number is all zeros except for the very last bit, which is one. What's the difference between that number and the corresponding number in which the fractional part is all zeros? That's the ULP that Jim wrote about. Of course, if the exponent is very large the ULP is going to be very large also. It makes more sense to talk about the ULP when the exponent is zero than any other exponent, and that's how the machine epsilon is defined.
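The scaling of the ULP with the exponent can be demonstrated with `math.ulp` (available in Python 3.9+): the ULP at 1.0 is exactly the machine epsilon, and each doubling of the magnitude doubles the gap between adjacent representable doubles.

```python
import math
import sys

# The ULP at 1.0 (exponent zero) is the machine epsilon itself.
print(math.ulp(1.0) == sys.float_info.epsilon)      # True
# Doubling the exponent doubles the ULP.
print(math.ulp(2.0) == 2 * sys.float_info.epsilon)  # True
# At 2**52, the gap between adjacent doubles is a whole unit.
print(math.ulp(2.0 ** 52))                          # 1.0
```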
 
  • #5
Thank you all! I think I understand now
 
  • #6
D H, how do you express zero with the format you mentioned? Is that the normalized floating-point representation?
Is 1.0 stored or not?
 
  • #7
+0 is all-bits zero. -0 (the IEEE floating point standard has +0 and -0) is all bits zero except for the sign bit.

The value that is stored as the exponent in the IEEE format is the true exponent plus some bias, 1023 in the case of doubles. The value of the exponent for the IEEE double that represents 1.0 is 1023. The special cases (zero is one of them) have a stored exponent value that is either all bits zero or all bits one. Everything else is treated as a normalized number.

The other special cases:
  • Denormalized numbers. These are numbers where the implied number before the binary point is zero rather than one. The denormalized numbers have a stored exponent of zero. Thus zero is just a special case of this special case.
  • Infinity. Infinities are represented with a fractional part of all bits zero and a stored exponent of all bits one. There are only two possibilities here, the sign bit clear (positive infinity) or set (negative infinity).
  • Not-a-number. What's 0/0? It's not a number. NaNs have an exponent of all bits one and a fractional part that is not all bits zero. This means there are lots of representations of NaN available, but the only ones that are used in practice are a fractional part that is all bits one.
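The bit-level layout described above can be inspected with the `struct` module. This sketch (the helper name `fields` is my own) splits a double into its sign bit, biased exponent, and 52-bit fraction, and reproduces each of the cases discussed: the 1023 bias for 1.0, the two zeros, infinity, and NaN.

```python
import struct

def fields(x):
    """Return (sign, biased_exponent, fraction) of a 64-bit double."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # 11 exponent bits
    fraction = bits & ((1 << 52) - 1)        # 52 fraction bits
    return sign, exponent, fraction

print(fields(1.0))           # (0, 1023, 0) -- true exponent 0 plus bias 1023
print(fields(0.0))           # (0, 0, 0)    -- all bits zero
print(fields(-0.0))          # (1, 0, 0)    -- only the sign bit set
print(fields(float('inf')))  # (0, 2047, 0) -- exponent all ones, fraction zero
s, e, f = fields(float('nan'))
print(e == 2047 and f != 0)  # True -- NaN: exponent all ones, fraction nonzero
```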
 

FAQ: Why is machine epsilon defined this way?

Why is machine epsilon important in computational science?

Machine epsilon is important in computational science because it characterizes the precision of a computer's floating-point arithmetic. It is not the smallest number that can be stored (subnormal values are far smaller); rather, it sets the limit on the smallest relative difference between two numbers that a computer can distinguish, and thus bounds the accuracy of numerical calculations.

How is machine epsilon defined?

Machine epsilon is defined as the difference between 1 and the next largest number that can be represented by a computer's floating-point system. It is usually denoted by the Greek letter epsilon (ε) and varies in value depending on the computer's architecture and the data type used for storing numbers.
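This definition can be checked directly with `math.nextafter` (Python 3.9+): the gap between 1.0 and the next representable double is exactly the machine epsilon the standard library reports.

```python
import math
import sys

# The next representable double above 1.0, minus 1.0, is machine epsilon.
next_up = math.nextafter(1.0, 2.0)
print(next_up - 1.0 == sys.float_info.epsilon)  # True
```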

Why is machine epsilon different for different computer systems?

Machine epsilon is different for different computer systems because it depends on the precision and storage capacity of the system's floating-point representation. For example, IEEE 32-bit single precision has a machine epsilon of 2^-23 (about 1.19 × 10^-7), while 64-bit double precision has 2^-52 (about 2.22 × 10^-16).

What is the relationship between machine epsilon and round-off error?

Machine epsilon is directly related to round-off error, which is the difference between the exact mathematical result and the result obtained through numerical calculations on a computer. Each individual rounding introduces a relative error of at most half the machine epsilon, so the smaller the machine epsilon, the smaller the round-off error and the more accurate the calculations.
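A familiar illustration of this: 0.1 and 0.2 are not exactly representable in binary, so their sum differs from 0.3 by an error on the order of epsilon times the magnitude of the result.

```python
import sys

# 0.1 + 0.2 picks up round-off error and does not equal the stored 0.3 ...
a = 0.1 + 0.2
print(a == 0.3)                                # False
# ... but the discrepancy is bounded by about one ULP at this magnitude.
print(abs(a - 0.3) <= sys.float_info.epsilon)  # True
```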

Can machine epsilon be reduced or eliminated?

No, machine epsilon cannot be reduced or eliminated completely. It is a fundamental limitation of how computers represent and manipulate numbers. However, there are techniques such as error analysis and careful selection of algorithms that can minimize the impact of machine epsilon on numerical calculations.
