Proposal: Add log1p and expm1 to GHC.Float.Floating
Gershom Bazerman
gershomb at gmail.com
Thu Apr 24 04:46:18 UTC 2014
Let me try to be a bit concrete here.
Are there _any_ implementations of Floating outside of base that we know
of which we are _concretely_ worried will not implement log1p, and thus
cause algorithms to lose precision? If anybody knows of such
implementations, let them speak!
Furthermore, if we do know of them, then can't we submit patch requests
for them? Unless there are too many, of course, but I know of only one
type of "typical" implementation of Floating outside of base:
constructive, arbitrary-precision reals, and in that case the default
implementation should be fine.
(Outside of that, I know of perhaps two other implementations outside of
base, one by edwardk, and both he and the other author are fine with
adding log1p.)
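For concreteness, the shape of what is being proposed is roughly the
following (a trimmed-down sketch, not the actual GHC.Float source; only a
couple of the existing methods are shown, and an arbitrary-precision
instance would simply inherit the defaults):

    module FloatingSketch where

    import Prelude hiding (Floating (..))

    -- Cut-down stand-in for GHC.Float's Floating class, just to show
    -- the shape of the proposal; the real class has many more methods.
    class Fractional a => Floating a where
      exp, log :: a -> a

      -- Proposed additions, defaulted so existing instances keep
      -- compiling unchanged.
      log1p :: a -> a
      log1p x = log (1 + x)   -- naive default: inaccurate for x near 0

      expm1 :: a -> a
      expm1 x = exp x - 1     -- naive default: inaccurate for x near 0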
Also, in general, I don't care what happens to Floating, because it is a
silly class with a hodgepodge of methods anyway (plenty of which
potentially apply to things that aren't 'floating point' in any
meaningful sense), although RealFloat is even sillier. (By the way, did
you know that RealFloat has a defaulted "atan2" method? Whatever we do,
it won't be worse than that.)
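For reference, atan2 is declared in RealFloat with a default along these
lines (a simplified paraphrase; GHC.Float's actual default has further
cases for signed zeros and NaNs):

    module RealFloatSketch where

    import Prelude hiding (RealFloat (..))

    -- Cut-down stand-in for RealFloat, only to show that the class
    -- already ships a defaulted method.
    class (RealFrac a, Floating a) => RealFloat a where
      atan2 :: a -> a -> a
      atan2 y x
        | x > 0           = atan (y / x)
        | x < 0 && y >= 0 = atan (y / x) + pi
        | x < 0           = atan (y / x) - pi
        | y > 0           = pi / 2
        | y < 0           = -pi / 2
        | otherwise       = 0   -- the real default also handles -0.0 and NaN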
Anyway, +1 for the original proposal, and also +1 for adding this to
RealFloat instead if that's acceptable, because I'm sure everyone could
agree that class couldn't possibly get much worse, and there's precedent
there anyway.
Also, I should add, as a rule, I think it is near-impossible to write
numerical code where you genuinely care both about performance and
accuracy in such a way as to be actually generic over the concrete
representations involved.
Cheers,
Gershom
On 4/23/14, 7:57 PM, John Lato wrote:
> There's one part of this alternative proposal I don't understand:
>
> On Mon, Apr 21, 2014 at 5:04 AM, Edward Kmett <ekmett at gmail.com
> <mailto:ekmett at gmail.com>> wrote:
>
>
> * If you can compile sans warnings you have nothing to fear. If
> you do get warnings, you can know precisely what types will have
> degraded back to the old precision at *compile* time, not runtime.
>
>
> I don't understand the mechanism by which this happens (maybe I'm
> misunderstanding the MINIMAL pragma?). If a module has e.g.
>
> > import DodgyFloat (DodgyFloat)  -- defined in a 3rd-party package,
> >                                 -- doesn't implement log1p etc.
> >
> > x = log1p 1e-10 :: DodgyFloat
>
> I don't understand why this would generate a warning (i.e. I don't
> believe it will generate a warning). So the user is in the same
> situation as with the original proposal.
>
> John L.
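As far as I can tell, the mechanism in question works roughly as in the
sketch below: log1p would be listed in the class's MINIMAL set, so GHC's
missing-methods warning is reported where the instance is defined, that
is, when the third-party package itself is compiled, and not in a module
that merely calls log1p. The DodgyFloat pieces here are stand-ins for
John's example.

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    module MinimalSketch where

    import Prelude hiding (Floating (..))
    import qualified Prelude

    -- Cut-down sketch of the alternative proposal: log1p is part of
    -- the MINIMAL set, so leaving it out of an instance is a warning.
    class Fractional a => Floating a where
      log   :: a -> a
      log1p :: a -> a
      log1p x = log (1 + x)        -- naive fallback, loses precision
      {-# MINIMAL log, log1p #-}

    -- Stand-in for the hypothetical third-party type.
    newtype DodgyFloat = DodgyFloat Double
      deriving (Eq, Num, Fractional)

    -- GHC's missing-methods warning is emitted here, at the instance
    -- definition, because log1p is not implemented.
    instance Floating DodgyFloat where
      log (DodgyFloat d) = DodgyFloat (Prelude.log d)

    -- A use site like John's: no diagnostic is attached here, so if
    -- the instance lived in another package, this module would
    -- compile silently.
    x :: DodgyFloat
    x = log1p 1e-10

So if the package defining the instance ignores (or never sees) that
warning, a downstream user gets no indication at all, which seems to be
the gap John is pointing at.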
>
>
> On Mon, Apr 21, 2014 at 5:24 AM, Aleksey Khudyakov
> <alexey.skladnoy at gmail.com <mailto:alexey.skladnoy at gmail.com>> wrote:
>
> On 21 April 2014 09:38, John Lato <jwlato at gmail.com
> <mailto:jwlato at gmail.com>> wrote:
> > I was just wondering, why not simply use numerically robust
> > algorithms as defaults for these functions? No crashes, no errors,
> > no loss of precision, everything would just work. They aren't
> > particularly complicated, so the performance should even be
> > reasonable.
> >
> I think it's the best option. log1p and expm1 come with guarantees
> about precision. A log(1+p) default makes it impossible to depend on
> such guarantees: it would silently give wrong answers.
>
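A robust default of the sort John and Aleksey are describing could look
something like this sketch (the names are placeholders; it uses the
standard Kahan-style compensation trick, and the Eq constraint is only
there for the equality tests):

    module RobustDefaults where

    -- Compensated fallbacks: compute the rounded intermediate (1 + x,
    -- or exp x) and reuse it to cancel most of its own rounding error.
    -- Accurate under round-to-nearest arithmetic.
    log1pDefault :: (Floating a, Eq a) => a -> a
    log1pDefault x
      | u == 1    = x                      -- 1 + x rounded to 1: log1p x ~ x
      | otherwise = log u * (x / (u - 1))
      where u = 1 + x

    expm1Default :: (Floating a, Eq a) => a -> a
    expm1Default x
      | u == 1    = x                      -- exp x rounded to 1: expm1 x ~ x
      | v == -1   = -1                     -- exp x underflowed towards 0
      | otherwise = v * (x / log u)
      where u = exp x
            v = u - 1

These stay accurate for ordinary IEEE-style types, though they lean on
rounding behaviour that a more exotic Floating instance is not
guaranteed to provide.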