[GHC] #9304: Floating point woes; Different behavior on Mac vs Linux

GHC ghc-devs at haskell.org
Sun Jul 13 09:45:33 UTC 2014


#9304: Floating point woes; Different behavior on Mac vs Linux
-------------------------------------+------------------------------------
        Reporter:  lerkok            |            Owner:
            Type:  bug               |           Status:  new
        Priority:  high              |        Milestone:
       Component:  Compiler          |          Version:  7.8.3
      Resolution:                    |         Keywords:  floating point
Operating System:  Unknown/Multiple  |     Architecture:  Unknown/Multiple
 Type of failure:  None/Unknown      |       Difficulty:  Unknown
       Test Case:                    |       Blocked By:  9276
        Blocking:                    |  Related Tickets:
-------------------------------------+------------------------------------

Comment (by ekmett):

 > I'm not 100% sure as to which one is actually correct; but the point is
 that these are IEEE floating point numbers running on the same
 architecture (Intel X86), and thus should decode in precisely the same
 way.

 I personally wouldn't expect that at all.

 I'd expect it all comes down to whether the optimizer decided to keep the
 intermediate result in a larger temporary because it ran it through the
 old x87 hardware, or whether it decided to go through SSE, etc. This does
 happen.

 It will give different answers, even on the same machine and OS, at
 different call sites, when the optimizer spits out different code for
 different inlinings, etc.

 Floating point answers are very fragile. A difference of only one ulp is
 actually pretty good. ;)
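
 To give a sense of how small that is, here is a minimal sketch using
 decodeFloat/encodeFloat (the expression 1.1 * 2.3 and the file name are
 made up for illustration, not taken from the original report):

    -- UlpSketch.hs: two Doubles exactly one ulp apart
    main :: IO ()
    main = do
      let x = 1.1 * 2.3 :: Double
          (m, e) = decodeFloat x       -- (mantissa, exponent)
          y = encodeFloat (m + 1) e    -- bump the mantissa: one ulp up
      print (decodeFloat x)            -- the mantissas differ by exactly 1 ...
      print (decodeFloat y)
      print (y - x)                    -- ... a tiny absolute difference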

 There is a relevant note in:
 http://www.haskell.org/ghc/docs/latest/html/users_guide/bugs.html#bugs-ghc

 > On 32-bit x86 platforms when using the native code generator, the
 -fexcess-precision option is always on. This means that floating-point
 calculations are non-deterministic, because depending on how the program
 is compiled (optimisation settings, for example), certain calculations
 might be done at 80-bit precision instead of the intended 32-bit or 64-bit
 precision. Floating-point results may differ when optimisation is turned
 on. In the worst case, referential transparency is violated, because for
 example let x = E1 in E2 can evaluate to a different value than E2[E1/x].

 > One workaround is to use the -msse2 option (see Section 4.16, “Platform-
 specific Flags”), which generates code to use the SSE2 instruction set
 instead of the x87 instruction set. SSE2 code uses the correct precision
 for all floating-point operations, and so gives deterministic results.
 However, note that this only works with processors that support SSE2
 (Intel Pentium 4 or AMD Athlon 64 and later), which is why the option is
 not enabled by default. The libraries that come with GHC are probably
 built without this option, unless you built GHC yourself.
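
 To make the referential-transparency remark above concrete, here is a
 hedged sketch (the names and constants are mine; on an SSE2 or x86_64
 build this should just print True, and the guide's point is only that
 under x87 excess precision it is allowed not to):

    -- Sketch of the guide's "let x = E1 in E2" vs. "E2[E1/x]" point.  The
    -- NOINLINE binding is stored as a 64-bit Double, while the inlined
    -- spelling may be evaluated in 80-bit x87 registers under
    -- -fexcess-precision, so equality here is not guaranteed.
    {-# NOINLINE rounded #-}
    rounded :: Double
    rounded = 0.1 * 0.1   -- the intermediate E1, forced through a Double

    main :: IO ()
    main = print (rounded + 1.0e-17 == 0.1 * 0.1 + 1.0e-17)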

 Also note that I'd suspect a profound lack of interaction between that
 flag and ghci's bytecode interpreter. The -msse2 thing is probably doing
 you no good in the REPL. Also, as noted above, the libraries that ship
 with GHC aren't built that way, so if base is doing your multiplication
 for you, you're probably just smacking into generated code that had
 -fexcess-precision turned on.
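
 If you do want to see what -msse2 buys you, check it with compiled code,
 e.g. ghc -O2 -msse2 Repro.hs (Repro.hs standing in for whatever
 standalone reproducer you have), rather than from inside ghci.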

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/9304#comment:8>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler


More information about the ghc-tickets mailing list