[Haskell-beginners] truncate results depend on strict/lazy

Oscar Benjamin oscar.j.benjamin at gmail.com
Wed Sep 11 12:41:00 CEST 2013


On 11 September 2013 00:19, Brandon Allbery <allbery.b at gmail.com> wrote:
> On Tue, Sep 10, 2013 at 6:14 PM, Oscar Benjamin <oscar.j.benjamin at gmail.com>
> wrote:
>>
>> On 10 September 2013 22:49, Brandon Allbery <allbery.b at gmail.com> wrote:
>> > On Tue, Sep 10, 2013 at 5:11 PM, Oscar Benjamin
>> > <oscar.j.benjamin at gmail.com>
>> > wrote:
>> >>
>> >> What do you mean when you say that floating point can't be captured in
>> >> a simple functional description?
>> >
>> > *You* try describing the truncation behavior of Intel FPUs (they use 80
>> > bits
>> > internally but only store 64, for (double)). "Leaving aside" isn't an
>> > option; it's visible in the languages that use them.
>>
>> However, for the same CPU and the same pair of inputs, floatadd(A, B)
>> returns the same result, right? The result may differ from one CPU to
>
> In isolation, probably. When combined with other operations, it depends on
> optimization and the other operations.

Well, that depends on what you mean. floatadd(A, floatadd(B, C)) is
also a well-defined function; it just happens not to be equivalent to
floatadd(floatadd(A, B), C). The same is true of most functions. If
the compiler tries to optimise by assuming that e.g. (a+b)+c is
equivalent to a+(b+c) for floats, then that is not really an
optimisation but a semantic change. If those kinds of optimisations
are happening beneath your feet then it becomes pretty much
impossible to reason about the accuracy of higher-level code.
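
For example, in GHCi (the particular constants are just illustrative;
any values where the rounding error lands differently under the two
groupings would do):

    Prelude> (0.1 + 0.2) + 0.3 :: Double
    0.6000000000000001
    Prelude> 0.1 + (0.2 + 0.3) :: Double
    0.6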

>> through computation). What is it about functional programming
>> languages that makes this difficult, as you implied earlier?
>
> Only the expectation differs; programmers in e.g. C generally ignore such
> things, although there are obscure compiler options that try to control what
> happens. And C doesn't promise much about the behavior anyway. In pure
> functional programming, people get used to things behaving in nice
> theoretically characterized ways... and then they run into the bit size
> limit on Int or the somewhat erratic behavior of Float and Double and
> suddenly the nice abstractions fall apart.
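
That Int failure mode is easy to see; for instance, in GHCi on a
64-bit machine, GHC's fixed-width Int silently wraps on overflow:

    Prelude> maxBound :: Int
    9223372036854775807
    Prelude> (maxBound :: Int) + 1
    -9223372036854775808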

It's true that straightforward C code is very much subject to
problems with compiler optimisation and that most C programmers don't
care. However, as of C99 the C standard is integrated with IEEE 754,
meaning that if you do care about these things it is possible to
control them (without resorting to Fortran!). Does Haskell have
language/compiler features that can protect FP operations from unsafe
optimisation (or from any optimisation)?
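
To make the question concrete: the kind of guarantee I have in mind
is that definitions like the following (a minimal sketch; I am not
claiming anything about what GHC actually does) are never rewritten
into one another by the optimiser, because they denote different
functions:

    sumLR, sumRL :: Double -> Double -> Double -> Double
    sumLR a b c = (a + b) + c
    sumRL a b c = a + (b + c)

    -- e.g. sumLR 1e16 (-1e16) 1 == 1.0, but sumRL 1e16 (-1e16) 1 == 0.0,
    -- so reassociating one into the other silently changes results.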


Oscar


