Folding constants for floats

Kyle Van Berendonck kvanberendonck at gmail.com
Mon Jan 13 22:21:45 UTC 2014


Hi,

I'd like to work on the primitives first. They are relatively easy to
implement. Here's how I figure it:

The internal representation of floats in the Cmm is a Rational (a ratio
of Integers), so they have "infinite precision". I can implement all the
constant folding by just writing my own operations on these Rationals;
e.g., ** raises the numerator and denominator to the power and reconstructs
a new Rational, log takes the difference between the logs of the numerator
and denominator, etc. This is all very easy to fold.
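Roughly, I imagine something like this for ** (just a sketch of the idea --
the name powRat is made up, none of this is existing GHC code):

```haskell
import Data.Ratio (denominator, numerator, (%))

-- Hypothetical helper (not existing GHC code): exact integral power of
-- a Rational, folded by raising the numerator and denominator
-- separately and rebuilding the ratio.
powRat :: Rational -> Integer -> Rational
powRat r n
  | n >= 0    = (numerator r ^ n) % (denominator r ^ n)
  | otherwise = recip (powRat r (negate n))
```

Since % renormalises, the result stays in lowest terms, and no precision
is lost at any intermediate step.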

I can encode errors in the Rational where infinity is p :% 0 (with p > 0)
and NaN is 0 :% 0. Since the size of floating point constants is more of
an architecture-specific thing, and floats don't wrap around like integers
do, it would make more sense (in my opinion) to only reduce the value to
the architecture-specific precision (or clip it to a NaN or such) in the
**final** stage, as opposed to trying to emulate the behavior of a double
native to the architecture. That is a very hard thing to do, and it
introduces precision errors -- the real question is: do people want
precision errors when they write literals in code, or are they really
looking for the compiler to do a better job than them at keeping those
literals precise?
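To make that final-stage reduction concrete, here's a strawman sketch
(my own made-up types, nothing like this exists in GHC today) of folding
with exact values plus tagged error cases, and only lowering to the
target's precision at the very end:

```haskell
import Data.Ratio ((%))

-- Strawman representation: fold with exact Rationals, tagging the
-- error cases instead of smuggling them through as ratios with a
-- zero denominator.
data FoldedFloat
  = Exact Rational   -- ordinary folded value, full precision
  | Infinity         -- stands in for p :% 0, p > 0
  | NotANumber       -- stands in for 0 :% 0

-- Reducing to the architecture's precision happens exactly once,
-- in the final stage.
lowerToDouble :: FoldedFloat -> Double
lowerToDouble (Exact r)  = fromRational r  -- rounding happens here only
lowerToDouble Infinity   = 1 / 0
lowerToDouble NotANumber = 0 / 0
```

All the intermediate folding stays exact; the single call to fromRational
is the only place the target's precision enters the picture.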


On Tue, Jan 14, 2014 at 3:27 AM, Carter Schonwald <
carter.schonwald at gmail.com> wrote:

> Oh, I see the ticket.  Are you focusing on adding hex support to Double#
> and Float#?  That would be splendid.  We currently don't have a decent way
> of writing NaN and the infinities.
>
> On Monday, January 13, 2014, Carter Schonwald wrote:
>
>> This is actually a bit more subtle than you'd think.  Are those constants
>> precise and exact?  (There's certainly floating point code that exploits
>> the cancellations in the floating point model.)  There are many floating
>> point computations that can't be done with exact rational operations.
>> There are also certain aspects that are target-dependent, like operations
>> having 80-bit vs 64-bit precision (i.e. using the old Intel fp registers
>> vs SSE2 and newer).
>>
>> What's the ticket you're working on?
>>
>>
>> Please be very cautious with floating point: any changes to the meaning
>> that aren't communicated by the program's author could leave a Haskeller
>> numerical analyst scratching their head.  For example, when doing these
>> floating point computations, what rounding modes will you use?
>>
>> On Monday, January 13, 2014, Kyle Van Berendonck wrote:
>>
>>> Hi,
>>>
>>> I'm cutting my teeth on some constant folding for floats in the cmm.
>>>
>>> I have a question regarding the ticket I'm tackling:
>>>
>>> Should floats be folded with infinite precision (and later truncated to
>>> the platform float size) -- most useful/accurate, or folded with the
>>> platform precision, i.e. double, losing accuracy but keeping consistent
>>> behaviour with -O0 -- most "correct"?
>>>
>>> I would prefer the first case because it's *much* easier to implement
>>> than the second, and it'll probably rot less.
>>>
>>> Regards.
>>>
>>


More information about the ghc-devs mailing list