Proposal: Add log1p and expm1 to GHC.Float.Floating
John Lato
jwlato at gmail.com
Thu Apr 24 05:35:21 UTC 2014
I'm not entirely sure how I feel about the default + MINIMAL proposal; I've
tried to take some time to think about it. There are still a few cracks
for the unwary from my POV, but they're pretty small. There won't be any
old code using log1p etc., and any new instances will be suitably warned. I
have no reservations about choosing this path.
I would feel better about selecting it as an implementation strategy if
there were a bit more feedback from others, especially as there was already
some support for the OP.
John
On Wed, Apr 23, 2014 at 10:06 PM, Edward Kmett <ekmett at gmail.com> wrote:
> On Thu, Apr 24, 2014 at 12:46 AM, Gershom Bazerman <gershomb at gmail.com> wrote:
>
>> Let me try to be a bit concrete here.
>>
>> Are there _any_ implementations of Floating outside of base that we know
>> of, which we are _concretely_ worried will not implement log1p and thus
>> cause algos to lose precision? If anybody knows of these implementations,
>> let them speak!
>>
>
> I would like to avoid shoving this up to RealFloat for the simple
> pragmatic reason that it takes us right back where we started in many
> ways: the naive version of the algorithm would have weaker constraints,
> and so the thing that shouldn't need to exist would get an artificial
> lease on life as a result.
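
For concreteness, the constraint issue looks roughly like the self-contained
sketch below. HasLog1p is an invented stand-in for "RealFloat, if log1p were
added there"; nothing in the sketch is actually in base, and the Double
instance body is a placeholder rather than the accurate primitive.

    -- A toy class standing in for RealFloat-with-log1p; names invented.
    class Floating a => HasLog1p a where
      log1p :: a -> a

    -- Placeholder instance so the sketch compiles; a real instance would
    -- call out to the accurate C log1p.
    instance HasLog1p Double where
      log1p x = log (1 + x)

    -- Code that wants the accurate primitive has to take the stronger
    -- constraint...
    logOnePlusAccurate :: HasLog1p a => a -> a
    logOnePlusAccurate = log1p

    -- ...while code that stays at plain Floating falls back to the naive,
    -- precision-losing expression, i.e. exactly the thing that shouldn't
    -- need to exist.
    logOnePlusNaive :: Floating a => a -> a
    logOnePlusNaive x = log (1 + x)
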
>
>
>> Furthermore, if we do know of them, then can't we submit patch requests
>> for them? Unless there are too many, of course, but I know of only one
>> type of "typical" implementation of Floating outside of base. That
>> implementation is constructive, arbitrary-precision reals, and in that
>> case the default implementation should be fine.
>>
>
> The major implementations beyond base are in linear, vector-space, ad,
> numbers' CReal, and debug-simplereflect.
>
> There may be a dozen other automatic differentiation implementations out
> there, e.g. fad, my old rad, Conal's, old versions of monoids, etc.
>
> That said, on this point John is right: it is an open universe, and there
> could be a lot of them out there.
>
> That _also_ said, if we went with something like the MINIMAL pragma with
> defaults approach that we were just discussing, those 'private application
> instances' are the things people build locally that *would* blast them
> with warnings.
>
> So that might suggest the concrete implementation strategy:
>
> Add the methods to Floating with defaults, but include them in MINIMAL as
> in my previous modified proposal, and also commit to going through Hackage
> looking for existing instances and reaching out to the authors with
> patches or requests to patch.
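
Concretely, the class end of that strategy would look something like the
sketch below (a sketch only: the defaults shown for log1p/expm1 are the
naive ones, and the exact contents of the MINIMAL set are still up for
discussion).

    {-# LANGUAGE NoImplicitPrelude #-}
    -- Sketch module: in reality this is the existing Floating class in
    -- GHC.Float, plus the two proposed methods and an extended MINIMAL.
    module FloatingSketch where

    import Prelude (Fractional, (+), (-), (*), (/))

    class Fractional a => Floating a where
      pi                  :: a
      exp, log, sqrt      :: a -> a
      (**), logBase       :: a -> a -> a
      sin, cos, tan       :: a -> a
      asin, acos, atan    :: a -> a
      sinh, cosh, tanh    :: a -> a
      asinh, acosh, atanh :: a -> a

      -- existing defaults, unchanged
      x ** y      = exp (log x * y)
      logBase x y = log y / log x
      sqrt x      = x ** 0.5
      tan x       = sin x / cos x
      tanh x      = sinh x / cosh x

      -- proposed additions, with naive defaults so existing instances
      -- keep compiling
      log1p, expm1 :: a -> a
      log1p x = log (1 + x)
      expm1 x = exp x - 1

      -- listing the new methods in MINIMAL means an instance that falls
      -- back on the naive defaults still compiles, but with a warning at
      -- the instance definition
      {-# MINIMAL pi, exp, log, sin, cos, asin, acos, atan,
                  sinh, cosh, asinh, acosh, atanh, log1p, expm1 #-}
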
>
> That pass over Hackage might spackle over John's objection that default +
> MINIMAL doesn't catch _everything_ for folks who install via a package
> manager, since the stuff that gets installed by such means starts out in
> the world of Hackage after all.
>
> ... and with that we could all move on to other things. ;)
>
> -Edward
>
>
>> (Outside of that, I know of perhaps two other implementations outside of
>> base, one by edwardk, and he as well as the other author are fine with
>> adding log1p).
>>
>> Also, in general, I don't care what happens to Floating, because it is a
>> silly class with a hodgepodge of methods anyway (plenty of which
>> potentially apply to things that aren't 'floating point' in any meaningful
>> sense), although RealFloat is even sillier. (By the way, did you know that
>> RealFloat has a defaulted "atan2" method? Whatever we do, it won't be
>> worse than that.)
>>
>> Anyway, +1 for the original proposal, and also +1 for adding this to
>> RealFloat instead if that's acceptable, because I'm sure everyone could
>> agree that class couldn't possibly get much worse, and there's precedent
>> there anyway.
>>
>> Also, I should add, as a rule, I think it is near-impossible to write
>> numerical code where you genuinely care both about performance and accuracy
>> in such a way as to be actually generic over the concrete representations
>> involved.
>>
>> Cheers,
>> Gershom
>>
>>
>> On 4/23/14, 7:57 PM, John Lato wrote:
>>
>> There's one part of this alternative proposal I don't understand:
>>
>> On Mon, Apr 21, 2014 at 5:04 AM, Edward Kmett <ekmett at gmail.com> wrote:
>>
>>>
>>> * If you can compile sans warnings you have nothing to fear. If you do
>>> get warnings, you can know precisely what types will have degraded back to
>>> the old precision at *compile* time, not runtime.
>>>
>>
>> I don't understand the mechanism by which this happens (maybe I'm
>> misunderstanding the MINIMAL pragma?). If a module has e.g.
>>
>> > import DodgyFloat (DodgyFloat) -- defined in a 3rd-party package,
>> >                                -- doesn't implement log1p etc.
>> >
>> > x = log1p 1e-10 :: DodgyFloat
>>
>> I don't understand why this would generate a warning (i.e. I don't
>> believe it will generate a warning). So the user is in the same situation
>> as with the original proposal.
>>
>> John L.
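
To spell out the mechanism as I understand it: under default + MINIMAL the
missing-methods warning is attached to the instance declaration, so it shows
up when the third-party package itself is compiled, not in a module that
merely calls log1p at that type. A sketch (every name below is invented for
illustration, following the DodgyFloat example above):

    module DodgyFloat (DodgyFloat(..)) where

    newtype DodgyFloat = DodgyFloat Double
      deriving (Eq, Ord, Show)

    lift1 :: (Double -> Double) -> DodgyFloat -> DodgyFloat
    lift1 f (DodgyFloat a) = DodgyFloat (f a)

    instance Num DodgyFloat where
      DodgyFloat a + DodgyFloat b = DodgyFloat (a + b)
      DodgyFloat a * DodgyFloat b = DodgyFloat (a * b)
      abs         = lift1 abs
      signum      = lift1 signum
      negate      = lift1 negate
      fromInteger = DodgyFloat . fromInteger

    instance Fractional DodgyFloat where
      DodgyFloat a / DodgyFloat b = DodgyFloat (a / b)
      fromRational                = DodgyFloat . fromRational

    -- If log1p/expm1 are listed in the MINIMAL pragma and this instance
    -- omits them, GHC reports "No explicit implementation for 'log1p' ..."
    -- right here, when this package is compiled.  Downstream code such as
    -- 'log1p 1e-10 :: DodgyFloat' compiles silently and quietly gets the
    -- naive default.
    instance Floating DodgyFloat where
      pi    = DodgyFloat pi
      exp   = lift1 exp
      log   = lift1 log
      sin   = lift1 sin
      cos   = lift1 cos
      asin  = lift1 asin
      acos  = lift1 acos
      atan  = lift1 atan
      sinh  = lift1 sinh
      cosh  = lift1 cosh
      asinh = lift1 asinh
      acosh = lift1 acosh
      atanh = lift1 atanh

So the warning protects whoever compiles the instance, not whoever merely
installs the package and uses it, which is why the use site above stays
silent either way.
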
>>
>>
>>> On Mon, Apr 21, 2014 at 5:24 AM, Aleksey Khudyakov <
>>> alexey.skladnoy at gmail.com> wrote:
>>>
>>>> On 21 April 2014 09:38, John Lato <jwlato at gmail.com> wrote:
>>>> > I was just wondering, why not simply use numerically robust
>>>> > algorithms as defaults for these functions? No crashes, no errors,
>>>> > no loss of precision, everything would just work. They aren't
>>>> > particularly complicated, so the performance should even be
>>>> > reasonable.
>>>> >
>>>> I think it's the best option. log1p and expm1 come with guarantees
>>>> about precision. A log(1+p) default makes it impossible to depend on
>>>> such guarantees. They will silently give the wrong answer.
>>>>
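
For reference, the robust versions being asked about are presumably along
the lines of the standard corrected formulations sketched below (names
invented; not benchmarked). Note the Eq constraint: plain Floating does not
supply (==), which is one wrinkle in using these verbatim as default method
bodies.

    -- Corrected formulations in the style of the well-known log1p/expm1
    -- tricks: 'u' captures the rounding committed by 1 + x (resp. exp x),
    -- and the final expression compensates for it, so tiny arguments keep
    -- their precision.

    robustLog1p :: (Eq a, Floating a) => a -> a
    robustLog1p x
      | u == 1    = x                    -- 1 + x rounded to 1: answer is ~x
      | otherwise = log u * x / (u - 1)  -- compensate for the error in u
      where u = 1 + x

    robustExpm1 :: (Eq a, Floating a) => a -> a
    robustExpm1 x
      | u == 1        = x                -- exp x rounded to 1: answer is ~x
      | u - 1 == (-1) = -1               -- exp x underflowed to 0
      | otherwise     = (u - 1) * x / log u
      where u = exp x
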
>>>
>>>
>>
>>
>