So far I haven’t seen anyone answer this correctly.
If 0.1 + 0.1 == 0.2 is true, then what does 0.1 + 0.2 == 0.3 evaluate to? The answer surprises most people, because it's false. Try it right now: open your browser's JavaScript console and type in 0.1 + 0.2 == 0.3.
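The same surprise can be reproduced in Java, since it uses the same IEEE 754 doubles as JavaScript. Here is a minimal sketch (the class name is mine):

```java
// FloatSurprise.java -- hypothetical class name, just for illustration
public class FloatSurprise {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.1 == 0.2); // true
        System.out.println(0.1 + 0.2 == 0.3); // false
        // The sum is actually slightly above 0.3:
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
    }
}
```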
The representation of 0.1 in binary is 0.000(1100), where the parentheses mark the repeating block. This happens because dividing by 5 in binary yields a repeating fraction, much like dividing by 3 does in decimal. A floating-point format only has a fixed number of bits, and depending on how many of them are allocated to the exponent versus the significand, the repeating fraction gets cut off (and rounded) at some point. As we saw, this leads to inexact representations of decimal numbers.
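You can see the truncated repeating pattern by dumping the raw bit pattern of the double 0.1. In hex, each 9 is the repeating binary block 1001, and the final A shows where the fraction was cut off and rounded up:

```java
// Inspect the 64-bit IEEE 754 pattern that actually stores 0.1
public class DoubleBits {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.1);
        // The run of 9s is the repeating block; the trailing 'a'
        // is the rounding at the cutoff point.
        System.out.println(Long.toHexString(bits)); // 3fb999999999999a
    }
}
```

Because of that final round-up, the stored value is actually slightly larger than 0.1.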
So why are binary floating-point numbers used at all? The only reason is speed: computer hardware implements binary floating-point arithmetic directly, which makes calculations fast. For exact decimal fractions, languages like Java offer a specialized BigDecimal class instead.
I would argue that decimal fractions and arbitrary-precision arithmetic should be the default in a general-purpose programming language, and that binary floating-point numbers should be the ones requiring extra ceremony, not the other way around.
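With a decimal representation, the arithmetic from the opening example behaves the way most people expect. A short sketch using Java's BigDecimal:

```java
import java.math.BigDecimal;

public class DecimalSum {
    public static void main(String[] args) {
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum); // 0.3 -- exact, no binary rounding
        // compareTo ignores scale differences, unlike equals
        System.out.println(sum.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```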
Consider the following Java snippet:
BigDecimal x = new BigDecimal(0.1);
Do you see the mistake? We passed a double into BigDecimal's constructor, so the resulting BigDecimal doesn't hold 0.1 at all; it holds the exact value of the nearest binary double. Counter-intuitively, to get an exact 0.1 you need to pass a string to the constructor, like so:
BigDecimal x = new BigDecimal("0.1");
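Printing the two values side by side makes the trap obvious: the double constructor faithfully records the binary approximation, digits and all.

```java
import java.math.BigDecimal;

public class ConstructorTrap {
    public static void main(String[] args) {
        // The double constructor captures the binary value of 0.1 exactly:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // The string constructor gives the decimal value we meant:
        System.out.println(new BigDecimal("0.1")); // 0.1
    }
}
```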
The mere fact that there is no way to write an exact literal for 0.1 makes a strong case for scaled decimals as the default representation. Remember, decimal is the default base for integers because we write numbers in decimal. Why should fractions be any different?