Reading floating point
Carter Schonwald
carter.schonwald at gmail.com
Tue Oct 11 14:41:48 UTC 2016
Could you elaborate or point me to where this philosophy is articulated in
commentary in base or in the language standards?
On Monday, October 10, 2016, David Feuer <david.feuer at gmail.com> wrote:
> It may currently be true for floats, but it's never been true in general,
> particularly with regard to records. Read is not actually designed to parse
> Haskell; it's for parsing "Haskell-like" things. Because it, unlike a true
> Haskell parser, is type-directed, there are somewhat different trade-offs.
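For a concrete record example (the type P below is made up, and the failing call reflects how I understand derived Read instances to behave: they only accept fields in declaration order, even though Haskell source allows any order):

    data P = P { px :: Int, py :: Int } deriving (Show, Read)

    -- Valid Haskell source may give the fields in any order:
    pSource :: P
    pSource = P { py = 2, px = 1 }

    -- The derived Read instance, as I understand it, only accepts the
    -- declaration order, so the second call fails with "no parse":
    pOk, pBad :: P
    pOk  = read "P {px = 1, py = 2}"
    pBad = read "P {py = 2, px = 1}"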
>
> On Oct 11, 2016 1:50 AM, "Carter Schonwald" <carter.schonwald at gmail.com> wrote:
>
>> How is that not a bug? We should be able to read back floats.
>>
>> On Monday, October 10, 2016, David Feuer <david.feuer at gmail.com> wrote:
>>
>>> It doesn't, and it never has.
>>>
>>> On Oct 10, 2016 6:08 PM, "Carter Schonwald" <carter.schonwald at gmail.com>
>>> wrote:
>>>
>>>> Read should accept exactly the valid source literals for a type.
>>>>
>>>> On Monday, October 10, 2016, David Feuer <david.feuer at gmail.com> wrote:
>>>>
>>>>> What does any of that have to do with the Read instances?
>>>>>
>>>>> On Oct 10, 2016 1:56 PM, "Carter Schonwald" <carter.schonwald at gmail.com> wrote:
>>>>>
>>>>>> The right solution is to fix things so we have a scientific-notation
>>>>>> literal representation available. Any other contortions run into
>>>>>> challenges in representability of things. That's of course ignoring
>>>>>> denormalized floats, infinities, negative zero, and perhaps NaNs.
>>>>>>
>>>>>> At the very least we need to efficiently and safely support
>>>>>> everything but NaN. And I have some ideas for that I hope to share soon.
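For reference, a small sketch of the special values in question (the shown forms are what I believe GHC's show gives for Double; treat them as an assumption rather than a spec):

    -- Infinities, NaN, and negative zero have no source literal of their
    -- own; denormals do (5e-324 converts to the smallest positive denormal),
    -- but they stress any conversion scheme.
    specials :: [(String, Double)]
    specials =
      [ ("positive infinity",    1 / 0)
      , ("NaN",                  0 / 0)
      , ("negative zero",        -0.0)
      , ("a denormalized value", 5e-324)
      ]

    main :: IO ()
    main = mapM_ (\(name, x) -> putStrLn (name ++ " shows as " ++ show x)) specials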
>>>>>>
>>>>>> On Monday, October 10, 2016, David Feuer <david.feuer at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I fully expect this to be somewhat tricky, yes. But some aspects of
>>>>>>> the current implementation strike me as pretty clearly non-optimal. What I
>>>>>>> meant about going through Rational is that given "625e-5", say, it
>>>>>>> calculates 625%100000, producing a fraction in lowest terms, before calling
>>>>>>> fromRational, which itself invokes fromRat'', a division function optimized
>>>>>>> for a special case that doesn't seem too relevant in this context. I could
>>>>>>> be mistaken, but I imagine even reducing to lowest terms is useless here.
>>>>>>> The separate treatment of the digits preceding and following the decimal
>>>>>>> point doesn't do anything obviously useful either. If we (effectively)
>>>>>>> normalize in decimal to an integral mantissa, for example, then we can
>>>>>>> convert the whole mantissa to an Integer at once; this will balance the
>>>>>>> merge tree better than converting the two pieces separately and combining.
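Roughly, the current path for "625e-5" looks like this when written out with Data.Ratio directly (a sketch, not the actual library code, which goes through the lexer and fromRat''):

    import Data.Ratio ((%), numerator, denominator)

    main :: IO ()
    main = do
      -- (%) divides out the GCD, so 625 % 100000 is stored as 1 % 160
      -- before fromRational ever sees it.
      let r = 625 % (100000 :: Integer)
      print (numerator r, denominator r)   -- (1,160)
      print (fromRational r :: Double)     -- 6.25e-3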
>>>>>>>
>>>>>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" <gale at sefer.org> wrote:
>>>>>>>
>>>>>>> The way I understood it, it's because the type of "floating point"
>>>>>>> literals is
>>>>>>>
>>>>>>> Fractional a => a
>>>>>>>
>>>>>>> so the literal parser has no choice but to go via Rational. Once you
>>>>>>> have that, you use the same parser for those Read instances to ensure
>>>>>>> that the result is identical to what you would get if you parse it as
>>>>>>> a literal in every case.
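In code, that consistency requirement amounts to something like the following sketch (the desugaring shown is my reading of the standard, with the literal standing for fromRational applied to its exact decimal value):

    import Data.Ratio ((%))

    -- A literal like 625e-5 at type Double is roughly sugar for
    -- fromRational (625 % 100000); read is expected to agree with it.
    literalDesugared :: Double
    literalDesugared = fromRational (625 % 100000)

    consistent :: Bool
    consistent = (read "625e-5" :: Double) == 625e-5
              && literalDesugared == 625e-5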
>>>>>>>
>>>>>>> You could replace the Read parsers for Float and Double with much
>>>>>>> more efficient ones. But you would need to provide some other
>>>>>>> guarantee of consistency with literals. That would be more difficult
>>>>>>> to achieve than one might think; floating point is deceptively
>>>>>>> tricky. There are already several good parsers in the libraries, but
>>>>>>> I believe all of them can produce different results from literals in
>>>>>>> some cases.
>>>>>>>
>>>>>>> Yitz
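If someone does write a faster parser, the consistency requirement could at least be spot-checked with a QuickCheck property along these lines (candidateReadDouble is a hypothetical stand-in, not an existing function):

    import Test.QuickCheck

    -- Hypothetical replacement parser under test; here it is just 'read',
    -- so the property is trivially satisfied.
    candidateReadDouble :: String -> Double
    candidateReadDouble = read

    -- Re-reading every shown Double (NaN aside, since NaN /= NaN) must give
    -- back exactly the value the current Read instance gives.
    prop_agreesWithRead :: Double -> Property
    prop_agreesWithRead x =
      not (isNaN x) ==>
        candidateReadDouble (show x) === (read (show x) :: Double)

    main :: IO ()
    main = quickCheck prop_agreesWithRead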
>>>>>>>
>>>>>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer <david.feuer at gmail.com>
>>>>>>> wrote:
>>>>>>> > The current Read instances for Float and Double look pretty iffy
>>>>>>> > from an efficiency standpoint. Going through Rational is
>>>>>>> > exceedingly weird: we have absolutely nothing to gain by dividing
>>>>>>> > out the GCD, as far as I can tell. Then, in doing so, we read the
>>>>>>> > digits of the integral part to form an Integer. This looks like a
>>>>>>> > detour, and it is particularly bad when the integral part has many
>>>>>>> > digits. Wouldn't it be better to normalize the decimal
>>>>>>> > representation first in some fashion (e.g., to 0.xxxxxxexxx) and go
>>>>>>> > from there? Probably less importantly, is there some way to avoid
>>>>>>> > converting the mantissa to an Integer at all? The low digits may
>>>>>>> > not end up making any difference whatsoever.
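A minimal sketch of that "normalize first" idea (not GHC's code): gather every mantissa digit into a single Integer and track a base-10 exponent, so the value is mantissa * 10^exponent and no GCD is ever taken:

    import Data.Char (isDigit, digitToInt)

    -- "12.5e-3" ~> Just (125, -4) and "625e-5" ~> Just (625, -5), i.e.
    -- value = mantissa * 10 ^^ exponent. Signs on the mantissa and error
    -- reporting are ignored to keep the sketch short.
    normalizeDecimal :: String -> Maybe (Integer, Int)
    normalizeDecimal s0
      | null intPart && null fracPart = Nothing
      | otherwise = do
          e <- expPart
          Just (mantissa, e - length fracPart)
      where
        (intPart, rest1)  = span isDigit s0
        (fracPart, rest2) = case rest1 of
                              '.' : r -> span isDigit r
                              _       -> ("", rest1)
        expPart = case rest2 of
                    c : r | c `elem` "eE" -> readExponent r
                    ""                    -> Just 0
                    _                     -> Nothing
        -- A naive left fold; the point is that all the digits are turned
        -- into one Integer, with no separate integral/fractional pieces.
        mantissa = foldl (\acc d -> acc * 10 + toInteger (digitToInt d)) 0
                         (intPart ++ fracPart)

    readExponent :: String -> Maybe Int
    readExponent ('-' : ds) = negate <$> digitsToInt ds
    readExponent ('+' : ds) = digitsToInt ds
    readExponent ds         = digitsToInt ds

    digitsToInt :: String -> Maybe Int
    digitsToInt ds
      | not (null ds), all isDigit ds =
          Just (foldl (\a d -> a * 10 + digitToInt d) 0 ds)
      | otherwise = Nothing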