Binary floating point numbers vs. Scaled Decimals

So far I haven’t seen anyone answer this correctly.

In JavaScript (or in C and Java, for that matter), 0.1+0.1 == 0.2 is true, so what does 0.1+0.2 == 0.3 evaluate to? The answer surprises most people, because it's false. Try it right now: go to your browser's address bar, type javascript:alert(0.1+0.2); and you'll get 0.30000000000000004, and that's not 0.3, is it?

This is because fractions in Java, C, and JavaScript are binary floating point fractions. When they are converted to and from decimal numbers (the numbers we type by hand), precision is lost. So 0.1+0.1 happens to come out right, but 0.1+0.2 carries a small error. Notice that using more precision wouldn't help: no matter how many zeroes appear before the 4, the == operator will still say the two numbers are not equal.
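If you want to verify this outside the browser, here is a minimal Java sketch of the same experiment (the class name is just for the example). Printing with extra digits shows that the error lives in the stored values themselves, not in how they are displayed:

public class FloatDemo {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);        // false
        System.out.println(sum);               // 0.30000000000000004
        // More digits don't fix it; they just show more of the error.
        System.out.printf("%.20f%n", sum);     // 0.30000000000000004441
        System.out.printf("%.20f%n", 0.3);     // 0.29999999999999998890
    }
}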

The representation of 0.1 in binary is 0.000(1100), where the parentheses mean the digits repeat forever. This is because dividing by 5 in binary yields a repeating fraction, much like dividing by 3 does in decimal. So depending on how many bits we allocate to the exponent versus the significand, the repeating fraction gets cut off at a different point. As we saw, this leads to inaccurate representations of decimal numbers.
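To see where the repetition gets cut off, here is a small sketch (again, the class name is just for the example) that prints the 64-bit IEEE 754 pattern of the double 0.1:

public class BitsOfTenth {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.1);
        // Pad to 64 bits so the leading sign bit is visible.
        String s = String.format("%64s", Long.toBinaryString(bits)).replace(' ', '0');
        System.out.println(s.substring(0, 1) + " "    // sign
                         + s.substring(1, 12) + " "   // exponent
                         + s.substring(12));          // significand
        // Prints:
        // 0 01111111011 1001100110011001100110011001100110011001100110011010
        // The repeating pattern is truncated to 52 bits and rounded up,
        // which is why the tail ends in ...1010 instead of continuing the cycle.
    }
}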

So why are binary floating point numbers used at all? The main reason is speed: specialized computer hardware works on binary floating point fractions and can compute with them very quickly. For decimal fractions, languages like Java provide a dedicated BigDecimal class.

I would argue that decimal fractions and arbitrary-precision arithmetic should be the default in a general-purpose programming language, and that binary floating point numbers should be the ones accessed in a more awkward way, not the other way around.

Consider the following Java snippet:

BigDecimal x = new BigDecimal(0.1);

Do you see the mistake? We passed a double into BigDecimal's constructor, so the value of the BigDecimal is not 0.1 at all; it is the exact value of the nearest double, rounding error included. Counter-intuitively, to get exactly 0.1 you need to pass a string into the constructor, like so:

BigDecimal x = new BigDecimal("0.1");

Just the fact that there is no way to write an exact literal for 0.1 makes a strong case for scaled decimals as the default representation. Remember, the decimal base is the default for integers because we write numbers in the decimal base. So why should that not be the case for fractions?


3 responses to "Binary floating point numbers vs. Scaled Decimals"

  1. So you say, "I would argue that decimal fractions and arbitrary-precision arithmetic should be the default in a general-purpose programming language." Would you care to post an example of how you would like to see it used?

    • For example, in C# there’s notation like so:
      0.1m + 0.2m which uses decimal fractions and will actually be equal to 0.3m

      I believe any kind of high-level language should default to decimal fractions and provide a way to access binary fractions, either with awkward literals like 0.1f or, as in Newspeak, with 0.1 asFloat.

      In Java, BigDecimal notation is extremely awkward because there is no operator overloading, so you can't use + - * / on BigDecimal values.
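
      For what it's worth, here is a rough sketch of how that awkwardness looks in practice (the numbers are arbitrary):

      import java.math.BigDecimal;
      import java.math.RoundingMode;

      public class AwkwardDemo {
          public static void main(String[] args) {
              // With doubles, (a + b) * c / d reads naturally, but it is binary floating point.
              double d = (0.1 + 0.2) * 3.0 / 4.0;

              // The same expression with BigDecimal, spelled out as method calls.
              BigDecimal result = new BigDecimal("0.1").add(new BigDecimal("0.2"))
                      .multiply(new BigDecimal("3"))
                      .divide(new BigDecimal("4"), 10, RoundingMode.HALF_UP);

              System.out.println(d);       // 0.22500000000000003
              System.out.println(result);  // 0.2250000000
          }
      }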
