[Haskell-cafe] Possible floating point bug in GHC?
bugfact at gmail.com
Fri Apr 3 16:10:17 EDT 2009
I tried both precise and fast, but that did not help. Compiling for SSE2
fixed it, presumably because SSE2 does not use the x87 floating point stack.
I'm preparing a repro test case, but it is tricky since removing code tends
to change the optimizations and then the bug does not occur.
Does anybody know what the calling convention for floating point is for
cdecl on x86? The documentation says that the result is returned in st(0),
but it says nothing about the floating point tag bits. I assume that every
function expects the FP stack to be empty, except possibly for argument
values. But GHC calls the C function with some x87 registers still
reserved on the stack...
On Fri, Apr 3, 2009 at 9:54 PM, Zachary Turner <divisortheory at gmail.com> wrote:
> What floating point model is your DLL compiled with? There are a variety
> of options here with regards to optimizations, and I don't know about the
> specific assembly each option produces, but I know there are options like
> Strict, Fast, or Precise, and maybe each one makes different assumptions
> about the caller. That doesn't say anything about whose "fault" it is, but
> at least it might be helpful to know whether changing the floating point
> model makes the bug go away.
> On Fri, Apr 3, 2009 at 2:31 PM, Peter Verswyvelen <bugfact at gmail.com> wrote:
>> Well, this situation indeed cannot occur on PowerPC, since those CPUs
>> just have floating point registers, not the x86's weird hybrid that is
>> sometimes a stack and sometimes registers.
>> But in my case the bug is consistent, not intermittent.
>> So I'll try to reduce this to a small reproducible test case, maybe
>> including the assembly generated by the VC++ compiler.
>> On Fri, Apr 3, 2009 at 9:02 PM, Malcolm Wallace <
>> Malcolm.Wallace at cs.york.ac.uk> wrote:
>>> Interesting. This could be the cause of a weird floating point bug that
>>> has been showing up in the ghc testsuite recently, specifically affecting
>>> MacOS/Intel (but not MacOS/ppc).
>>> That test compares the results of the builtin floating point ops with the
>>> same ops imported via the FFI. They should not differ, but on Intel they
>>> sometimes do.
>>> On 3 Apr 2009, at 18:58, Peter Verswyvelen wrote:
>>> For days I've been fighting a weird bug.
>>>> My Haskell code calls into a C function residing in a DLL (I'm on
>>>> Windows; the DLL is generated using Visual Studio). This C function
>>>> computes a floating point expression. However, the floating point result
>>>> is incorrect.
>>>> I think I found the source of the problem: the C code expects all of the
>>>> Intel x86's floating point register tag bits to be set to 1 (empty), but
>>>> it seems the Haskell code does not preserve that.
>>>> The x86 has all kinds of floating point weirdness - it is both a stack
>>>> based and a register based system - so it is crucially important that
>>>> generated code plays nice. For example, when using MMX one must always
>>>> emit an EMMS instruction to clear these tag bits.
>>>> If I manually clear these tag bits, my code works fine.
>>>> Is this something other people have encountered as well? I'm trying to
>>>> make a very simple test case to reproduce the behavior...
>>>> I'm not sure if this is a Visual C compiler bug, a GHC bug, or something
>>>> I'm doing wrong...
>>>> Is it possible to annotate a foreign imported C function to tell the
>>>> Haskell code generator that the function uses floating point registers?
>>> Haskell-Cafe mailing list
>>> Haskell-Cafe at haskell.org