[GHC] #9125: int-to-float conversion broken on ARM - 7.8.1-rc2

GHC ghc-devs at haskell.org
Mon Jul 14 00:10:05 UTC 2014


#9125: int-to-float conversion broken on ARM - 7.8.1-rc2
------------------------------------------------+--------------------------
        Reporter:  Ansible                      |            Owner:
            Type:  bug                          |           Status:  new
        Priority:  normal                       |        Milestone:
       Component:  Compiler                     |          Version:  7.8.1
      Resolution:                               |         Keywords:
Operating System:  Linux                        |     Architecture:  arm
 Type of failure:  Incorrect result at runtime  |       Difficulty:  Unknown
       Test Case:                               |       Blocked By:
        Blocking:                               |  Related Tickets:
------------------------------------------------+--------------------------

Comment (by amurrayc):

 I have the same problem on iOS (simulator or device) with 7.8.2 or 7.8.3
 compiled from source tarballs.  I'm not sure if it should be a separate
 ticket.

 I don't think this is an `Integer` to `Float` problem per se.  If I run
 {{{
 print (F# 29.0#) -- to pick a value at random
 }}}
 I get
 {{{
 2109.0
 }}}
 as in the original ticket.  This should be using a direct `Float` literal,
 no?
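
 For completeness, a self-contained version of that one-liner (it needs
 the `MagicHash` extension and an import from `GHC.Exts`; this standalone
 wrapper is just one way to set it up):
 {{{
 {-# LANGUAGE MagicHash #-}
 module Main where

 import GHC.Exts ( Float(F#) )

 -- Build a Float directly from an unboxed literal, so no Integer-to-Float
 -- conversion is involved, then print it.
 main :: IO ()
 main = print (F# 29.0#)  -- expect 29.0; prints 2109.0 on affected targets
 }}}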

 A little digging showed that
 {{{
 print (floor (29.0 :: Float) :: Int)
 }}}
 shows
 {{{
 2109
 }}}
 with no optimisation, but
 {{{
 29
 }}}
 with `-O`.
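
 A standalone module for reproducing that comparison (the file name and
 `ghc` invocations below are just one possible setup):
 {{{
 module Main where

 -- Compile twice and compare the output (on the affected targets):
 --   ghc    -fforce-recomp Floor.hs && ./Floor   => 2109  (wrong)
 --   ghc -O -fforce-recomp Floor.hs && ./Floor   => 29    (correct)
 main :: IO ()
 main = print (floor (29.0 :: Float) :: Int)
 }}}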

 Another clue is that `floor :: Float -> Integer` doesn't give the correct
 result, even with `-O`.

 Optimised `floor :: Float -> Int` uses the primop `float2Int#`, while the
 unoptimised version and both versions of `floor :: Float -> Integer` use
 `decodeFloat_Int#`, as does `show :: Float -> String`; that last fact
 explains the original ticket.
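
 Those two code paths can also be exercised directly, bypassing `floor`
 altogether; a small sketch (assuming `MagicHash` and `UnboxedTuples`):
 {{{
 {-# LANGUAGE MagicHash, UnboxedTuples #-}
 module Main where

 import GHC.Exts ( Int(I#), float2Int#, decodeFloat_Int# )

 main :: IO ()
 main = do
     -- Path taken by optimised floor :: Float -> Int; expect 29.
     print (I# (float2Int# 29.0#))
     -- Path taken by unoptimised floor, both Integer versions, and show;
     -- expect (15204352,-19), i.e. (0xe80000,-19).
     case decodeFloat_Int# 29.0# of
         (# m, e #) -> print (I# m, I# e)
 }}}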

 Running
 {{{
 let (m,e) = decodeFloat (29.0 :: Float)
     mstr = printf "%#x" m
 putStrLn $ "(" ++ mstr ++ ", " ++ show e ++ ")"
 }}}
 gives
 {{{
 (0x41e80000, -19)
 }}}
 instead of the correct
 {{{
 (0xe80000, -19)
 }}}
 Notably, the erroneous `0x41e80000` is the full IEEE 754 bit pattern of
 `29.0 :: Float` rather than just its significand: the correct mantissa
 `0xe80000` is the stored 23-bit fraction `0x680000` with the implicit
 leading 1 made explicit.
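
 That claim is easy to check by peeking at the raw bits; a quick sketch
 (the `floatBits` helper is ad hoc, reinterpreting the four bytes of the
 `Float` through a scratch buffer):
 {{{
 module Main where

 import Foreign ( Word32, alloca, castPtr, peek, poke )
 import Text.Printf ( printf )

 -- Reinterpret the 4 bytes of a Float as a Word32 via a scratch buffer.
 floatBits :: Float -> IO Word32
 floatBits f = alloca $ \p -> do
     poke p f
     peek (castPtr p)

 main :: IO ()
 main = do
     bits <- floatBits 29.0
     printf "raw bits: %#x\n" bits  -- 0x41e80000 on any IEEE 754 platform
     -- decodeFloat should strip sign/exponent and expose the implicit
     -- leading 1, giving (0xe80000, -19).
     let (m, e) = decodeFloat (29.0 :: Float)
     printf "decoded : (%#x, %d)\n" m e
 }}}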

 This all suggests that the problem lies within `decodeFloat_Int#`.  I
 looked at `__decodeFloat_Int` in `rts/StgPrimFloat.c` and even inserted a
 couple of `assert`s at the end to check the values for 29.0; those
 asserts passed.  That led me to look at `stg_decodeFloatzuIntzh` in
 `rts/PrimOps.cmm`, but at that point I was getting a bit lost.
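
 One way to narrow it down further might be to call the C helper directly
 over the FFI and compare its answer with `decodeFloat`'s; a sketch, where
 the foreign import signature is my reading of `rts/StgPrimFloat.c` (the
 function takes two `I_` out-parameters and a `StgFloat`):
 {{{
 {-# LANGUAGE ForeignFunctionInterface #-}
 module Main where

 import Foreign ( Ptr, alloca, peek )
 import Text.Printf ( printf )

 -- The RTS helper behind the primop's Cmm wrapper stg_decodeFloatzuIntzh.
 foreign import ccall unsafe "__decodeFloat_Int"
     c_decodeFloat_Int :: Ptr Int -> Ptr Int -> Float -> IO ()

 main :: IO ()
 main =
     alloca $ \pm ->
     alloca $ \pe -> do
         c_decodeFloat_Int pm pe 29.0
         m <- peek pm
         e <- peek pe
         -- If this prints (0xe80000, -19) while decodeFloat is still
         -- wrong, the C code is fine and the Cmm/codegen path is suspect.
         printf "(%#x, %d)\n" m e
 }}}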

 Any ideas anyone?

 For the record, I'm running the above snippets in a simple Haskell
 `Main.hs` like this:
 {{{
 {-# LANGUAGE ForeignFunctionInterface #-}
 module Main where

 import Foreign
 import Text.Printf ( printf )

 foreign import ccall safe "c_main" c_main :: IO ()

 main = do
     let (m,e) = decodeFloat (29.0 :: Float)
         mstr = printf "%#x" m
     putStrLn $ "(" ++ mstr ++ ", " ++ show e ++ ")"
     c_main
 }}}
 with a skeleton Xcode 5.1.1 project and observing the results in the debug
 window.

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/9125#comment:1>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler

