[GHC] #9564: Floating point subnormals overrounded on output

GHC ghc-devs at haskell.org
Wed Sep 10 15:14:13 UTC 2014


#9564: Floating point subnormals overrounded on output
-------------------------------------+-------------------------------------
              Reporter:  jrp         |            Owner:  simonmar
                  Type:  bug         |           Status:  new
              Priority:  normal      |        Milestone:
             Component:  Runtime     |          Version:  7.8.3
  System                             |         Keywords:
            Resolution:              |     Architecture:  x86_64 (amd64)
      Operating System:  MacOS X     |       Difficulty:  Unknown
       Type of failure:  Incorrect   |       Blocked By:
  result at runtime                  |  Related Tickets:
             Test Case:              |
              Blocking:              |
Differential Revisions:              |
-------------------------------------+-------------------------------------

Comment (by carter):

 the standard defines the `Read`/`Show` instances for `Float`/`Double` as
 {{{

 instance  Show Float  where
     showsPrec p         = showFloat
 instance  Read Float  where
     readsPrec p         = readSigned readFloat
 instance  Show Double  where
     showsPrec p         = showFloat
 instance  Read Double  where
     readsPrec p         = readSigned readFloat

 }}}

 at the bottom of
 https://www.haskell.org/onlinereport/haskell2010/haskellch9.html#x16-1710009


 Then, per
 https://www.haskell.org/onlinereport/haskell2010/haskellch38.html#x46-31400038

 {{{

 showFloat :: RealFloat a => a -> ShowS
 Show a signed RealFloat value to full precision using standard decimal
 notation for arguments whose absolute value lies between 0.1 and
 9,999,999, and scientific notation otherwise.
 }}}

 then looking at the code in base-4.7.0.1:
 http://hackage.haskell.org/package/base-4.7.0.1/docs/src/GHC-Float.html#showFloat
 {{{

 -- | Show a signed 'RealFloat' value to full precision
 -- using standard decimal notation for arguments whose absolute value lies
 -- between @0.1@ and @9,999,999@, and scientific notation otherwise.
 showFloat :: (RealFloat a) => a -> ShowS
 showFloat x  =  showString (formatRealFloat FFGeneric Nothing x)
 }}}
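 As a quick sanity check (this snippet is mine, not from the ticket; the
 sample value is the smallest positive subnormal Double), the `Show`
 instance for `Double` is just `showFloat`, so the two should agree
 character for character:
 {{{

 import Numeric (showFloat)

 main :: IO ()
 main = do
   -- smallest positive subnormal Double (about 4.9e-324)
   let tiny = 5.0e-324 :: Double
   -- Show for Double delegates to showFloat, so these print the same string
   putStrLn (showFloat tiny "")
   print tiny

 }}}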


 I'll try to dig into this a teeny bit more, but I think this is fine as
 long as `(read . show) :: Float -> Float` acts as the identity function on
 all floating point values when we round-trip them (and as long as they get
 correctly parsed to that same internal value when read/shown between
 Haskell and another language).

 Phrased differently, if we don't roughly have
 `read_hs . show_clang == read_clang . show_hs == read_clang . show_clang
 == read_hs . show_hs == id`

 then yes, we have a problem, but we don't quite show that problem with
 these tests as above, right? We just demonstrate that the particular
 choice of default representation for show differs from C's, right? It's
 important to remember that parsing a floating point number itself will
 round to the nearest representable floating point value too.

 I'm trying to focus on some work this week, but if someone could test
 that these round-tripping identities work out OK, that'd be awesome (I may
 try to do it soon myself, but who knows how long that will take).
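
 A minimal Haskell-side check of those identities might look like the
 sketch below (mine, not from the ticket; it only covers the
 `read_hs . show_hs` leg, and the sample values are my own picks -- the
 cross-language legs would need a C harness on top):
 {{{

 -- Check that read . show is the identity on Double, including subnormals.
 roundTrips :: Double -> Bool
 roundTrips x = (read (show x) :: Double) == x

 main :: IO ()
 main = do
   let samples = [ 0.1
                 , pi
                 , 1.0e-300
                 , 2.2250738585072014e-308  -- smallest normal Double
                 , 5.0e-324                 -- smallest subnormal Double
                 ]
   mapM_ (\x -> putStrLn (show x ++ "  round-trips: " ++ show (roundTrips x)))
         samples

 }}}

 (NaN would need special handling, since `x == x` is False for NaN.)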

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/9564#comment:3>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler


More information about the ghc-tickets mailing list