Proposal: Add "fma" to the RealFloat class

Takenobu Tani takenobu.hs at gmail.com
Tue May 5 13:06:27 UTC 2015


Hi,

Related information.

Intel's FMA information (hardware dependent) is here:

  Chapter 11, Intel 64 and IA-32 Architectures Optimization Reference Manual:

http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf


Of course, this information depends on the particular processor, and its
abstraction level is very low.

PS
I like Haskell's abstract naming conventions more than "fma" :-)

Regards,
Takenobu



2015-05-05 11:54 GMT+09:00 Carter Schonwald <carter.schonwald at gmail.com>:

> Pardon the wall of text, everyone, but I really want some FMA tooling :)
>
> I am going to spend some time later this week and next adding FMA primops
> to GHC and playing around with different ways to add it to Num (which seems
> pretty straightforward, though I think we'd all agree it shouldn't be
> exported by Prelude). And then, depending on how Yitzchak's re-proposal of
> that goes (or some iteration thereof), we can get something useful/usable
> into 7.12.
>
> I have code (i.e. *dot products*!) that would benefit from a faster direct
> FMA for *exact numbers* and a higher-precision FMA for *approximate
> numbers* (*i.e. floating point*), and where I can't sanely use FMA if it
> lives anywhere but Num unless I rub Typeable everywhere and do runtime type
> checks for the applicable floating-point types, which rather destroys the
> parametricity that makes engineering nice things possible.
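>
> For concreteness, a sketch of the kind of generic code I mean; the class
> here is a hypothetical stand-in for an fma method living directly in Num:
>
>   import Data.List (foldl')
>
>   -- Hypothetical stand-in for fma in Num; the default is the exact
>   -- (semiring) reading, and instances are free to fuse.
>   class Num a => HasFMA a where
>     fma :: a -> a -> a -> a
>     fma a b c = a * b + c
>
>   instance HasFMA Double   -- uses the default today; a fused primop later
>
>   -- One generic dot product then serves exact and approximate types alike:
>   dotProduct :: HasFMA a => [a] -> [a] -> a
>   dotProduct xs ys = foldl' (\acc (x, y) -> fma x y acc) 0 (zip xs ys)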
>
> @levent: ghc doesn't do any optimization for floating point arithmetic
> (aside from 1-2 very simple things that are possibly questionable), and
> until ghc has support for precisely emulating high-precision floating
> point computation in a portable way, it probably won't do any interesting
> floating point optimization. Mandating that fma a b c === a*b+c for
> inexact number datatypes doesn't quite make sense to me. Relatedly, it's a
> GOOD thing ghc is conservative about optimizing floating point, because it
> makes doing correct stability analyses tractable! I look forward to the
> day that GHC gets a bit more sophisticated about optimizing floating point
> computation, but that day is still a ways off.
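>
> To make the semantic difference concrete: here is a reference model of fma
> for Double that rounds exactly once, going through Rational (slow, but it
> exhibits the intended IEEE754 semantics, assuming fromRational rounds to
> nearest):
>
>   -- Reference model: compute a*b + c exactly, then round once.
>   fmaRef :: Double -> Double -> Double -> Double
>   fmaRef a b c = fromRational (toRational a * toRational b + toRational c)
>
>   -- With a = b = 1 + 2^-30, the exact product is 1 + 2^-29 + 2^-60,
>   -- which rounds to 1 + 2^-29 as a Double, so:
>   --   a*b - a*b                 ==> 0.0
>   --   fmaRef a b (negate (a*b)) ==> 2^-60, the bits that a*b rounded away
>
> So mandating fma a b c === a*b+c would outlaw exactly the behavior that
> makes fma valuable for floating point.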
>
> Relatedly: FMA for Float and Double is not generally going to be faster
> than the individual primitive operations, merely more accurate when used
> carefully.
>
> Point being, *I'm +1 on adding some manner of FMA operations to Num* (the
> only sane place to put it where I can actually use it in a general-use
> library), and I don't really care if we name it fusedMultiplyAdd,
> multiplyAndAdd, accursedFusionOfSemiRingOperations, or fma. I'd favor
> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to
> experts yet easy to google for the curious.
>
> To repeat: I'm going to do some legwork so that the Double and Float prims
> are portably exposed by ghc-prim (I've spoken with several GHC devs about
> that; they agree on its value, and that's a decision outside the scope of
> the libraries list's purview), and I do hope we can come to a consensus
> about putting it in Num so that expert library authors can upgrade the
> guarantees that they provide end users without imposing any breaking
> changes on end users.
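>
> For what it's worth, once such a primop exists the boxed wrapper is tiny.
> A sketch, assuming a primop named fmaddDouble# computing x*y + z (the name
> is illustrative here, not a commitment):
>
>   {-# LANGUAGE MagicHash #-}
>   import GHC.Exts (Double (D#), fmaddDouble#)
>
>   -- Box the assumed hardware-backed primop into an ordinary function.
>   fmaDouble :: Double -> Double -> Double -> Double
>   fmaDouble (D# x) (D# y) (D# z) = D# (fmaddDouble# x y z)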
>
> A number of folks have brought up "but Num is broken" as a counter
> argument to adding FMA support to Num. I emphatically agree Num is broken
> :), BUT! I also believe that fixing up the Num prelude carries the burden
> of providing a whole-cloth alternative design that can win broad
> consensus/adoption, and that will only happen by dint of actual
> experimentation and usage.
>
> Point being, adding FMA doesn't further entrench current Num any more than
> it already is; it just provides expert library authors with a transparent
> way of improving the experience of their users, with a free upgrade in
> answer accuracy if used carefully. Additionally, when Num's "semiring-ish
> equational laws" are framed with respect to approximate forwards/backwards
> stability, there is a perfectly reasonable law for FMA. I am happy to spend
> some time trying to write that up more precisely IFF that will tilt those
> in opposition to being in favor.
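>
> Roughly, the law I have in mind, stated as a QuickCheck-style property
> against an exact model: any candidate fma for Double must compute a*b + c
> with a single rounding (this sketch assumes fromRational rounds to nearest
> and finite inputs):
>
>   prop_fmaSingleRounding :: (Double -> Double -> Double -> Double)
>                          -> Double -> Double -> Double -> Bool
>   prop_fmaSingleRounding candidate a b c =
>     candidate a b c
>       == fromRational (toRational a * toRational b + toRational c)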
>
> I don't need FMA to be exposed by *prelude/base*, merely by *GHC.Num* as a
> method therein for Num. If that constitutes a different and *more
> palatable proposal* than what people have articulated so far (by
> discouraging casual use by dint of hiding), then I am happy to kick off a
> new thread with that concrete design choice.
>
> If there's a counter-argument that's a bit more substantive than "Num is
> for exact arithmetic" or "Num is wrong" that will sway me to the other
> side, I'm all ears, but I'm skeptical there is one.
>
> I emphatically support those who are displeased with Num prototyping
> alternative designs in userland. I do think it'd be great to figure out a
> new Num prelude we can migrate Haskell / GHC to over the next 2-5 years,
> but again, any such proposal really needs to be realized whole cloth before
> it makes its way to being a libraries list proposal.
>
>
> Again, pardon the wall of text; I just really want to have nice things :)
> -Carter
>
>
> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok <erkokl at gmail.com> wrote:
>
>> I think `mulAdd a b c` should be implemented as `a*b+c` even for
>> Double/Float. It should only be an "optimization" (as in modular
>> arithmetic), not a semantics-changing operation; that is what justifies
>> the optimization.
>>
>> "fma" should be the "more-precise" version available for Float/Double. I
>> don't think it makes sense to have "fma" for other types. That's why I'm
>> advocating "mulAdd" to be part of "Num" for optimization purposes; and
>> "fma" reserved for true IEEE754 types and semantics.
>>
>> I understand that Edward doesn't like this because it requires a
>> different class; but really, that's the price to pay if we claim Haskell
>> has proper support for IEEE754 semantics (which I think it should). The
>> operation is simply different. It also should account for rounding modes
>> properly.
>>
>> I think we can pull this off just fine; and Haskell can really lead the
>> pack here. The situation with floats is even worse in other languages. This
>> is our chance to make a proper implementation, and we have the right tools
>> to do so.
>>
>> -Levent.
>>
>> On Mon, May 4, 2015 at 10:58 AM, Artyom <yom at artyom.me> wrote:
>>
>>>  On 05/04/2015 08:49 PM, Levent Erkok wrote:
>>>
>>> Artyom: That's precisely the point. The true IEEE754 variants, where
>>> precision does matter, should be part of a different class. What Edward
>>> and Yitz want is an "optimized" multiply-add with the same semantics,
>>> just faster.
>>>
>>> No, it looks to me like Edward wants a more precise operation in
>>> Num:
>>>
>>> I'd have to make a second copy of the function to even try to see the
>>> precision win.
>>>
>>> Unless I'm wrong, you can't have the following things simultaneously:
>>>
>>>    1. the compiler is free to substitute *a+b*c* with *mulAdd a b c*
>>>    2. *mulAdd a b c* is implemented as *fma* for Doubles (and is more
>>>    precise)
>>>    3. Num operations for Double (addition and multiplication) always
>>>    conform to IEEE754
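>>>
>>> A concrete witness (sketch): take a = b = 1 + 2^-30 and c = -(a*b).
>>>
>>>      a, b, c :: Double
>>>      a = 1 + 2^^(-30 :: Int)
>>>      b = a
>>>      c = negate (a * b)
>>>      -- under #3:         a*b + c      ==> 0.0
>>>      -- under #1 and #2:  mulAdd a b c ==> 2^-60, which is not 0.0
>>>
>>> So the substitution in #1 changes observable results.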
>>>
>>>  The true IEEE754 variants where precision does matter should be part
>>> of a different class.
>>>
>>> So, does that mean you're fine with giving up point #3, because people
>>> who need it could use a separate class for IEEE754 floats?
>>>
>>>
>>

