Proposal: Add log1p and expm1 to GHC.Float.Floating
Carter Schonwald
carter.schonwald at gmail.com
Thu Apr 24 15:16:20 UTC 2014
Agreed. (I tried thinking this through myself; Floating doesn't provide
enough structure to give you any notion of precision.)
On Thu, Apr 24, 2014 at 10:37 AM, Edward Kmett <ekmett at gmail.com> wrote:
> The idea of a slower, more careful default doesn't work. Without ratcheting
> up the requirement to RealFloat you don't have any notion of precision to
> play with, and moving these methods there rules out many important cases
> like Complex, Quaternion, etc.
>
>
> On Thu, Apr 24, 2014 at 10:17 AM, Casey McCann <cam at uptoisomorphism.net> wrote:
>
>> I expect the largest audience involved here is the group that doesn't
>> know or care about these functions, but definitely wants their code to
>> keep working.
>>
>> As such, I'm opposed to anything that would break code that doesn't
>> need the benefits of these functions. Two specific scenarios come to
>> mind:
>>
>> - Floating instances written for DSLs or AST-like types (the only
>> common example of Floating instances not already mentioned) needing
>> implementations of functions their authors may never have heard of in
>> order to compile without warnings. This could be mitigated by good
>> documentation and by providing "defaultFoo" functions suitable for
>> instances that don't or can't do anything useful with these functions
>> anyway (a sketch of such a helper follows this list).
>>
>> - Programmers who don't actually need the extra precision using these
>> functions anyway due to having a vague sense that they're "better".
>> Yes, in my experience this sort of thing is a common mentality among
>> programmers. Silently introducing runtime exceptions in this scenario
>> seems completely unacceptable to my mind and I'm strongly opposed to
>> any proposal involving that.
>>
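>> (To make the "defaultFoo" idea concrete, here is a purely illustrative
>> sketch; the names defaultLog1p and defaultExpm1 are hypothetical, nothing
>> in the proposal fixes them:
>>
>>     -- Fallbacks for instances that cannot do anything smarter,
>>     -- e.g. symbolic or AST-style types: just the naive definitions.
>>     defaultLog1p :: Floating a => a -> a
>>     defaultLog1p x = log (1 + x)
>>
>>     defaultExpm1 :: Floating a => a -> a
>>     defaultExpm1 x = exp x - 1
>>
>> An instance for, say, a symbolic Expr type would then only need to add
>> "log1p = defaultLog1p" and "expm1 = defaultExpm1" to compile cleanly.)
>>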
>> As far as I can see, together these rule out any proposal that would
>> directly add functions to an existing class unless default
>> implementations no worse than the status quo (in some sense) are
>> provided.
>>
>> For default implementations, I would prefer the idea suggested earlier
>> of a (probably slower) algorithm that does preserve precision rather
>> than a naive version, if that's feasible. This lives up to the claimed
>> benefit of higher precision, and as a general rule I feel that any
>> pitfalls left for the unwary should at worst provide the correct
>> result more slowly.
>>
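>> (For reference, the sort of algorithm I mean is the well-known
>> Kahan/Goldberg trick; a sketch only, and note that the comparison needs an
>> Eq constraint on top of Floating, so it could not literally be the default
>> for every instance:
>>
>>     -- Evaluates log (1 + x) as x * (log u / (u - 1)) with u = 1 + x;
>>     -- the factor log u / (u - 1) is insensitive to the rounding of u,
>>     -- so a small x keeps its significant digits.
>>     log1pDefault :: (Floating a, Eq a) => a -> a
>>     log1pDefault x
>>         | u == 1    = x                    -- 1 + x rounded to exactly 1
>>         | otherwise = log u * x / (u - 1)
>>       where u = 1 + x
>>
>> It is slower than log . (1 +), but it does not silently throw away the
>> low-order digits of a small argument.)
>>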
>> Also, of all the people who might be impacted by this, I suspect that
>> the group who really do need both speed and precision for number
>> crunching are the most likely to know what they're doing and be aware
>> of potential pitfalls.
>>
>> - C.
>>
>>
>>
>> On Wed, Apr 23, 2014 at 11:55 PM, Edward Kmett <ekmett at gmail.com> wrote:
>> > Let's try taking a step back here.
>> >
>> > There are clearly two different audiences in mind here, and the
>> > parameters of the debate are too narrow for us to achieve consensus.
>> >
>> > Maybe we can try changing the problem a bit and see if we can get there
>> > by another avenue.
>> >
>> > Your audience wants a claim that these functions do everything in
>> > their power to preserve accuracy.
>> >
>> > My audience wants to be able to opportunistically grab accuracy without
>> > leaking it into the type and destroying the usability of their libraries
>> > for the broadest set of users.
>> >
>> > In essence, your audience is the one seeking an extra guarantee/law.
>> >
>> > Extra guarantees are the sort of thing one often denotes through a
>> > class.
>> >
>> > However, putting them in a separate class destroys the utility of this
>> > proposal for me.
>> >
>> > As a straw-man / olive-branch / half-baked idea:
>> >
>> > Could we get you what you want by simply making an extra class to
>> > indicate the claim that the guarantee holds, and get what I want by
>> > placing these methods in the existing Floating with the defaults?
>> >
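>> > (Concretely, a sketch of that shape; "PreciseFloating" here is just a
>> > placeholder name, nothing is fixed:
>> >
>> >     -- Added to the existing Floating class, with naive defaults and
>> >     -- no precision claim:
>> >     --     log1p, expm1 :: a -> a
>> >     --     log1p x = log (1 + x)
>> >     --     expm1 x = exp x - 1
>> >
>> >     -- Plus an empty marker class whose instances promise that
>> >     -- log1p/expm1 actually deliver the extra precision:
>> >     class Floating a => PreciseFloating a
>> >
>> >     instance PreciseFloating Double
>> >     instance PreciseFloating Float
>> >
>> > Code that merely wants to opportunistically call log1p keeps a plain
>> > Floating constraint; code that needs the guarantee asks for
>> > PreciseFloating.)
>> >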
>> > I don't particularly like that solution, but I'm just trying to broaden
>> > the scope of the debate, at least expose where the fault lines lie in the
>> > design space, and find a way for us to stop speaking past each other.
>> >
>> > -Edward
>> >
>> >
>> > On Wed, Apr 23, 2014 at 11:38 PM, Edward Kmett <ekmett at gmail.com> wrote:
>> >>
>> >> On Wed, Apr 23, 2014 at 8:16 PM, John Lato <jwlato at gmail.com> wrote:
>> >>>
>> >>> Ah. Indeed, that was not what I thought you meant. But the user may
>> >>> not be compiling DodgyFloat; it may be provided via apt/rpm or similar.
>> >>
>> >>
>> >> That is a fair point.
>> >>
>> >>>
>> >>> Could you clarify one other thing? Do you think that \x -> log (1+x)
>> >>> behaves the same as log1p?
>> >>
>> >>
>> >> I believe that \x -> log (1 + x) is a passable approximation of log1p
>> >> in the absence of a better alternative. I'd rather have users who
>> >> refactor their code to take advantage of the extra capability we are
>> >> exposing get something no worse than what they get today, than wind up
>> >> in a situation where the types say they should be able to call it but
>> >> they get unexpected bottoms they can't protect against, with no way to
>> >> detect at compile time whether the new functionality is safe to use.
>> >>
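>> >> (To make "passable" concrete, a quick Double check; log1p here is the
>> >> proposed method, not something in today's Prelude:
>> >>
>> >>     log (1 + 1e-16) :: Double   -- 1 + 1e-16 rounds to 1, so this is 0.0
>> >>     log1p 1e-16     :: Double   -- a real log1p gives roughly 1.0e-16
>> >>
>> >> That is a total loss of significance for tiny arguments, but it is also
>> >> exactly what every caller who writes log (1 + x) by hand gets today.)
>> >>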
>> >> At this point we're just going around in circles.
>> >>
>> >> Under your version of things we put them into a class in a way that
>> >> everyone has to pay for, and nobody, myself included, gets to have
>> >> enough faith that it won't crash when invoked to actually call it.
>> >>
>> >> -Edward
>> >>
>> >>>
>> >>> On Wed, Apr 23, 2014 at 5:08 PM, Edward Kmett <ekmett at gmail.com> wrote:
>> >>>>
>> >>>> I think you may have interpreted me as saying something I didn't try
>> >>>> to say.
>> >>>>
>> >>>> To clarify, what I was indicating was that during the compilation of
>> >>>> your 'DodgyFloat'-supplying package, a bunch of warnings about
>> >>>> unimplemented methods would scroll by.
>> >>>>
>> >>>> -Edward
>> >>>>
>> >>>>
>> >>>> On Wed, Apr 23, 2014 at 8:06 PM, Edward Kmett <ekmett at gmail.com> wrote:
>> >>>>>
>> >>>>> This does work.
>> >>>>>
>> >>>>> MINIMAL is checked based on the definitions supplied locally in the
>> >>>>> instance, not based on the total definitions that contribute to the
>> >>>>> instance.
>> >>>>>
>> >>>>> Otherwise we couldn't have the very poster-child example of this from
>> >>>>> the documentation for MINIMAL:
>> >>>>>
>> >>>>>     class Eq a where
>> >>>>>         (==) :: a -> a -> Bool
>> >>>>>         (/=) :: a -> a -> Bool
>> >>>>>         x == y = not (x /= y)
>> >>>>>         x /= y = not (x == y)
>> >>>>>         {-# MINIMAL (==) | (/=) #-}
>> >>>>>
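>> >>>>> The Floating case would work the same way. Roughly (a sketch, not the
>> >>>>> literal patch):
>> >>>>>
>> >>>>>     class Fractional a => Floating a where
>> >>>>>         -- ... all the existing methods and defaults, unchanged ...
>> >>>>>         log1p, expm1 :: a -> a
>> >>>>>         log1p x = log (1 + x)
>> >>>>>         expm1 x = exp x - 1
>> >>>>>         {-# MINIMAL pi, exp, log, sin, cos, asin, acos, atan,
>> >>>>>                     sinh, cosh, asinh, acosh, atanh,
>> >>>>>                     log1p, expm1 #-}
>> >>>>>
>> >>>>> An instance that omits log1p/expm1 still compiles, because the
>> >>>>> defaults kick in, but GHC warns that the instance does not satisfy
>> >>>>> MINIMAL, and that warning is the compile-time signal I keep referring
>> >>>>> to.
>> >>>>>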
>> >>>>>
>> >>>>>
>> >>>>> On Wed, Apr 23, 2014 at 7:57 PM, John Lato <jwlato at gmail.com> wrote:
>> >>>>>>
>> >>>>>> There's one part of this alternative proposal I don't understand:
>> >>>>>>
>> >>>>>> On Mon, Apr 21, 2014 at 5:04 AM, Edward Kmett <ekmett at gmail.com>
>> >>>>>> wrote:
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> * If you can compile sans warnings you have nothing to fear. If
>> >>>>>>> you do get warnings, you can know precisely what types will have
>> >>>>>>> degraded back to the old precision at compile time, not runtime.
>> >>>>>>
>> >>>>>>
>> >>>>>> I don't understand the mechanism by which this happens (maybe I'm
>> >>>>>> misunderstanding the MINIMAL pragma?). If a module has e.g.
>> >>>>>>
>> >>>>>> > import DodgyFloat (DodgyFloat) -- defined in a 3rd-party package,
>> >>>>>> > doesn't implement log1p etc.
>> >>>>>> >
>> >>>>>> > x = log1p 1e-10 :: DodgyFloat
>> >>>>>>
>> >>>>>> I don't understand why this would generate a warning (i.e. I don't
>> >>>>>> believe it will generate a warning). So the user is in the same
>> >>>>>> situation as with the original proposal.
>> >>>>>>
>> >>>>>> John L.
>> >>>>>>
>> >>>>>>>
>> >>>>>>> On Mon, Apr 21, 2014 at 5:24 AM, Aleksey Khudyakov
>> >>>>>>> <alexey.skladnoy at gmail.com> wrote:
>> >>>>>>>>
>> >>>>>>>> On 21 April 2014 09:38, John Lato <jwlato at gmail.com> wrote:
>> >>>>>>>> > I was just wondering, why not simply use numerically robust
>> >>>>>>>> > algorithms as defaults for these functions? No crashes, no
>> >>>>>>>> > errors, no loss of precision, everything would just work. They
>> >>>>>>>> > aren't particularly complicated, so the performance should even
>> >>>>>>>> > be reasonable.
>> >>>>>>>> >
>> >>>>>>>> I think that's the best option. log1p and expm1 come with
>> >>>>>>>> guarantees about precision. A log(1+p) default makes it impossible
>> >>>>>>>> to depend on such guarantees: it will silently give wrong answers.
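>> >>>>>>>>
>> >>>>>>>> For example, a robust expm1 default would be the usual Kahan-style
>> >>>>>>>> one (sketch only; note it needs an Eq comparison on top of
>> >>>>>>>> Floating):
>> >>>>>>>>
>> >>>>>>>>     expm1Default :: (Floating a, Eq a) => a -> a
>> >>>>>>>>     expm1Default x
>> >>>>>>>>         | u == 1      = x      -- exp x rounded to exactly 1
>> >>>>>>>>         | u - 1 == -1 = -1     -- exp x underflowed to 0
>> >>>>>>>>         | otherwise   = (u - 1) * x / log u
>> >>>>>>>>       where u = exp x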
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>
>> >>
>> >
>> >
>>
>
>
>