Why are these numbers not equal?

ghz · 1 year ago · 1549 views

Question

The following code is obviously wrong. What's the problem?

i <- 0.1
i <- i + 0.05
i
## [1] 0.15
if(i==0.15) cat("i equals 0.15") else cat("i does not equal 0.15")
## i does not equal 0.15

Answer

General (language-agnostic) reason

Not all numbers can be represented exactly in IEEE floating-point arithmetic (the standard that almost all computers use to store non-integer numbers and do arithmetic with them), so you will not always get the result you expect. In particular, some values that look like simple, finite decimals (such as 0.1 and 0.05) have no exact binary representation, so the result of arithmetic on them may not be identical to a direct representation of the "known" answer.
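
For example, asking R to print more digits than it shows by default makes the discrepancy visible (the exact digits below assume the usual IEEE doubles):

print(0.1 + 0.05, digits = 17)
#[1] 0.15000000000000002
print(0.15, digits = 17)
#[1] 0.14999999999999999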

This is a well-known limitation of computer arithmetic and is discussed in several places, for example in the R FAQ ("Why doesn't R think these numbers are equal?") and in David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic".

Comparing scalars

The standard solution to this in R is not to use ==, but rather the all.equal function. And since all.equal reports the details of any differences instead of returning FALSE, wrap it in isTRUE(): isTRUE(all.equal(...)).

if(isTRUE(all.equal(i,0.15))) cat("i equals 0.15") else cat("i does not equal 0.15")

yields

i equals 0.15

Some more examples of using all.equal instead of == (the last example shows that it still correctly detects genuine differences):

0.1+0.05==0.15
#[1] FALSE
isTRUE(all.equal(0.1+0.05, 0.15))
#[1] TRUE
1-0.1-0.1-0.1==0.7
#[1] FALSE
isTRUE(all.equal(1-0.1-0.1-0.1, 0.7))
#[1] TRUE
0.3/0.1 == 3
#[1] FALSE
isTRUE(all.equal(0.3/0.1, 3))
#[1] TRUE
0.1+0.1==0.15
#[1] FALSE
isTRUE(all.equal(0.1+0.1, 0.15))
#[1] FALSE

Some more detail, directly copied from an answer to a similar question:

The problem you have encountered is that floating point cannot represent decimal fractions exactly in most cases, which means you will frequently find that exact matches fail.

R lies slightly when it prints the result, making the two values look identical:

1.1-0.2
#[1] 0.9
0.9
#[1] 0.9

You can find out what it really thinks in decimal:

sprintf("%.54f",1.1-0.2)
#[1] "0.900000000000000133226762955018784850835800170898437500"
sprintf("%.54f",0.9)
#[1] "0.900000000000000022204460492503130808472633361816406250"

You can see these numbers are different, but the representation is a bit unwieldy. If we look at them in binary (well, hex, which is equivalent) we get a clearer picture:

sprintf("%a",0.9)
#[1] "0x1.ccccccccccccdp-1"
sprintf("%a",1.1-0.2)
#[1] "0x1.ccccccccccccep-1"
sprintf("%a",1.1-0.2-0.9)
#[1] "0x1p-53"

You can see that they differ by 2^-53, which matters because 2^-53 is the smallest representable difference between two numbers whose value is close to 1, as these values are.
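
As a quick sanity check (assuming IEEE doubles, where one unit in the last place at this magnitude is 2^-53), adding exactly that amount to 0.9 reproduces the other value:

(0.9 + 2^-53) == (1.1 - 0.2)
#[1] TRUE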

We can find out, for any given computer, what this smallest representable difference is by looking at R's .Machine variable:

?.Machine
#....
#double.eps     the smallest positive floating-point number x
#such that 1 + x != 1. It equals base^ulp.digits if either
#base is 2 or rounding is 0; otherwise, it is
#(base^ulp.digits) / 2. Normally 2.220446e-16.
#....
.Machine$double.eps
#[1] 2.220446e-16
sprintf("%a",.Machine$double.eps)
#[1] "0x1p-52"

You can use this fact to create a 'nearly equals' function which checks that the difference between two numbers is within a small tolerance derived from this smallest representable difference. In fact such a function already exists: all.equal.
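
Purely for illustration, a hand-rolled sketch of such a check could look like the following (the name near_equal and the simple absolute-difference test are mine; all.equal itself uses a mean relative difference):

near_equal <- function(x, y, tol = .Machine$double.eps^0.5) {
  # treat x and y as equal if they differ by less than tol
  abs(x - y) < tol
}
near_equal(0.1 + 0.05, 0.15)
#[1] TRUE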

?all.equal
#....
#all.equal(x,y) is a utility to compare R objects x and y testing ‘near equality’.
#....
#all.equal(target, current,
#      tolerance = .Machine$double.eps ^ 0.5,
#      scale = NULL, check.attributes = TRUE, ...)
#....

So the all.equal function is actually checking that the difference between the numbers is no larger than the square root of machine epsilon (the smallest relative difference between two adjacent mantissas); that square root, about 1.5e-8, is its default tolerance.
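
To see that tolerance at work (the printed value assumes the usual IEEE doubles), compare the default against an explicit tolerance of zero:

sqrt(.Machine$double.eps)
#[1] 1.490116e-08
isTRUE(all.equal(0.1 + 0.05, 0.15))                # default tolerance
#[1] TRUE
isTRUE(all.equal(0.1 + 0.05, 0.15, tolerance = 0)) # no tolerance at all
#[1] FALSE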

This algorithm goes a bit funny near extremely small numbers called denormals, but you don't need to worry about that.

Comparing vectors

The above discussion assumed a comparison of two single values. In R there are no scalars, only vectors, and implicit vectorization is a strength of the language. For comparing vectors element-wise, the previous principles hold, but the implementation is slightly different: == is vectorized (it does an element-wise comparison), while all.equal compares the whole vectors as a single entity.

Using the previous examples

a <- c(0.1+0.05, 1-0.1-0.1-0.1, 0.3/0.1, 0.1+0.1)
b <- c(0.15,     0.7,           3,       0.15)

== does not give the "expected" result and all.equal does not compare element-wise:

a==b
#[1] FALSE FALSE FALSE FALSE
all.equal(a,b)
#[1] "Mean relative difference: 0.01234568"
isTRUE(all.equal(a,b))
#[1] FALSE

Rather, a version which loops over the two vectors must be used

mapply(function(x, y) {isTRUE(all.equal(x, y))}, a, b)
#[1]  TRUE  TRUE  TRUE FALSE

If a functional version of this is desired, it can be written

elementwise.all.equal <- Vectorize(function(x, y) {isTRUE(all.equal(x, y))})

which can be called as just

elementwise.all.equal(a, b)
#[1]  TRUE  TRUE  TRUE FALSE

Alternatively, instead of wrapping all.equal in even more function calls, you can just replicate the relevant internals of all.equal.numeric and use implicit vectorization:

tolerance = .Machine$double.eps^0.5
# this is the default tolerance used in all.equal,
# but you can pick a different tolerance to match your needs

abs(a - b) < tolerance
#[1]  TRUE  TRUE  TRUE FALSE

This is the approach taken by dplyr::near, which documents itself as

This is a safe way of comparing if two vectors of floating point numbers are (pairwise) equal. This is safer than using ==, because it has a built in tolerance

dplyr::near(a, b)
#[1]  TRUE  TRUE  TRUE FALSE

Testing for occurrence of a value within a vector

The standard R function %in% can also suffer from the same issue if applied to floating point values. For example:

x = seq(0.85, 0.95, 0.01)
# [1] 0.85 0.86 0.87 0.88 0.89 0.90 0.91 0.92 0.93 0.94 0.95
0.92 %in% x
# [1] FALSE

We can define a new infix operator to allow for a tolerance in the comparison as follows:

`%.in%` = function(a, b, eps = sqrt(.Machine$double.eps)) {
  any(abs(b-a) <= eps)
}

0.92 %.in% x
# [1] TRUE

dplyr::near wrapped in any can also be used for the vectorized check

any(dplyr::near(0.92, x))
# [1] TRUE