[GHC] #9304: Floating point woes; Different behavior on Mac vs Linux

GHC ghc-devs at haskell.org
Sun Jul 13 17:03:17 UTC 2014


#9304: Floating point woes; Different behavior on Mac vs Linux
-------------------------------------+------------------------------------
        Reporter:  lerkok            |            Owner:
            Type:  bug               |           Status:  new
        Priority:  high              |        Milestone:
       Component:  Compiler          |          Version:  7.8.3
      Resolution:                    |         Keywords:  floating point
Operating System:  Unknown/Multiple  |     Architecture:  Unknown/Multiple
 Type of failure:  None/Unknown      |       Difficulty:  Unknown
       Test Case:                    |       Blocked By:  9276
        Blocking:                    |  Related Tickets:
-------------------------------------+------------------------------------

Comment (by lerkok):

 @ekmett: I do agree that expecting cross-platform consistency is a hard
 sell, but I think this particular case is a real bug on 32-bit Linux.

 @carter: I think you are spot on that the Linux 32-bit version is doing
 something funky, and producing garbage as a result.

 I've used an infinite-precision calculator to compute the result of the
 multiplication: the exact answer is -10.9999999999999806. On Mac and
 64-bit Linux this gets rounded to -10.99999999999998. I have not verified
 that bit by bit, but I am willing to believe it is the correct rounding
 under the default round-to-nearest-even mode; it looks reasonable without
 inspecting the low-order bits, and I have some indirect evidence as well:
 the failing test case came from an interaction with the Z3 SMT solver,
 which provided a model for a problem that then turned out to be false on
 32-bit Linux. (The same test passes on 64-bit Mac.)

 However, on 32-bit Linux I get -10.999999999999982 for the multiplication.

 That is clearly rounded incorrectly, even when the intermediate result is
 taken to be infinitely precise. This leads me to think that the 32-bit
 Linux implementation is buggy somehow.
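
 A plausible (though unconfirmed) culprit on 32-bit x86 is the x87 FPU,
 which evaluates in 80-bit extended precision: the product then gets
 rounded twice, once to extended precision and once more when stored as a
 64-bit Double, and such double rounding can differ from a single correct
 rounding in the last place. If that hypothesis is right, compiling with
 -msse2, which makes GHC use SSE2 instead of x87 for Double arithmetic on
 x86, should bring the 32-bit result in line with the 64-bit one. A small
 sketch, again with placeholder operands:

   -- Repro.hs: on 32-bit x86, compare
   --   ghc Repro.hs && ./Repro          (default i386 code gen, x87)
   --   ghc -msse2 Repro.hs && ./Repro   (SSE2 code gen, single rounding)
   a, b :: Double
   a = -3.3               -- placeholder operands, not the actual
   b = 3.333333333333327  -- values from the ticket
   {-# NOINLINE a #-}     -- keep GHC from constant-folding the product
   {-# NOINLINE b #-}

   main :: IO ()
   main = print (a * b)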

 I think Carter's earlier experiment suggests that 64-bit works just fine
 (on both Mac and Linux), and we have evidence that the result on 32-bit
 Linux is buggy. I have no way of getting to a 32-bit Mac with a modern
 GHC; but if someone can replicate the problem there, that would tell us
 whether this is a true 32-bit issue or whether Linux plays a role too.

 This may not be a huge deal, as 32-bit machines are becoming more and
 more obsolete, but we should not hide behind floating-point
 inconsistencies by saying "whatever happens happens." I think Haskell has
 to play a leading role here, and 'Double' should really mean an IEEE 754
 64-bit double-precision value regardless of the architecture; but that's
 a whole other can of worms, obviously.

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/9304#comment:9>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler

