apfelmus at quantentunnel.de
Sun Oct 11 05:35:55 EDT 2009
Iavor Diatchki wrote:
> well, I think that the fact that we seem to have a program context
> that can distinguish "f1" from "f2" is worth discussing because I
> would have thought that in a pure language they are interchangeable.
> The question is, does the context in Oleg's example really distinguish
> between "f1" and "f2"? You seem to be saying that this is not the
> case: in both cases you end up with the same non-deterministic
> program that reads two numbers from the standard input and subtracts
> them but you can't assume anything about the order in which the
> numbers are extracted from the input---it is merely an artifact of the
> GHC implementation that with "f1" the subtraction always happens the
> one way, and with "f2" it happens the other way.
> I can (sort of) buy this argument, after all, it is quite similar to
> what happens with asynchronous exceptions (f1 (error "1") (error "2")
> vs f2 (error "1") (error "2")). Still, the whole thing does not
> "smell right": there is some impurity going on here, and trying to
> offload the problem onto the IO monad only makes reasoning about IO
> computations even harder (and it is pretty hard to start with). So,
> discussion and alternative solutions should be strongly encouraged, I
> think.
To put it in different words, here is an elaboration of what exactly the
non-determinism argument is:
Consider programs foo1 and foo2 defined as

    import Control.Exception

    foo :: (a -> b -> c) -> IO String
    foo f = Control.Exception.catch
        (evaluate (f (error "1") (error "2")) >> return "3")
        (\(ErrorCall s) -> return s)

    foo1 = foo f1  where  f1 x y = x `seq` y `seq` ()
    foo2 = foo f2  where  f2 x y = y `seq` x `seq` ()
Knowing how exceptions and seq behave in GHC, it is straightforward to
prove that

    foo1 = return "1"
    foo2 = return "2"
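To make the example above self-contained, here is a complete program
(my own wrapper around the definitions above). Note that which line it
prints for each call is exactly the implementation-dependent behavior
under discussion; the semantics only guarantees that each result is one
of "1", "2" or "3":

```haskell
-- Self-contained version of the foo example. Which error is caught
-- depends on the evaluation order GHC happens to pick for `seq`;
-- the language semantics does not fix it.
import Control.Exception (catch, evaluate, ErrorCall(..))

foo :: (a -> b -> c) -> IO String
foo f = catch
    (evaluate (f (error "1") (error "2")) >> return "3")
    (\(ErrorCall s) -> return s)

main :: IO ()
main = do
    r1 <- foo (\x y -> x `seq` y `seq` ())   -- f1
    r2 <- foo (\x y -> y `seq` x `seq` ())   -- f2
    putStrLn r1
    putStrLn r2
```

With current GHC this typically prints "1" then "2", but nothing in the
Report entitles us to rely on that.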
which clearly violates referential transparency. This is bad, so the
idea is to disallow the proof.
In particular, the idea is that referential transparency can be restored
if we only allow proofs that work for all evaluation orders, which is
equivalent to introducing non-determinism. In other words, we are only
allowed to prove

    foo1 = return "1"  or  return "2"
    foo2 = return "1"  or  return "2"
Moreover, we can push the non-determinism into the IO type and pretend
that pure functions A -> B are semantically lifted to Nondet A ->
Nondet B with some kind of fmap.
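As a toy model of this lifting (the names Nondet and nondet are my own
invention, not an actual library), represent a non-deterministic value
by the set of results it may produce under some evaluation order. Then
f1 and f2 denote the very same value:

```haskell
import Data.List (nub, sort)

-- A toy denotation: the set of results a computation may produce
-- under some evaluation order.
newtype Nondet a = Nondet [a] deriving (Eq, Show)

-- Normalise to a set, so the order in which outcomes are listed
-- (i.e. the order of the seqs in f1 versus f2) is irrelevant.
nondet :: Ord a => [a] -> Nondet a
nondet = Nondet . nub . sort

-- Either seq may be forced first, so either error may surface:
f1Result, f2Result :: Nondet String
f1Result = nondet ["1", "2"]   -- x `seq` y `seq` ()
f2Result = nondet ["2", "1"]   -- y `seq` x `seq` ()

main :: IO ()
main = print (f1Result == f2Result)
```

The two denotations coincide, which is the formal sense in which f1 and
f2 remain interchangeable.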
The same goes for hGetContents: if you use it twice on the same
handle, you're only allowed to prove non-deterministic behavior, which
is not very useful if you want a deterministic program. But you are
allowed to prove deterministic results if you use it with appropriate
care, i.e. at most once per handle.
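For instance, a sketch of the well-behaved case (the file name and
payload here are illustrative, chosen by me): the handle is consumed by
hGetContents exactly once, so the semantics does let us pin the result
down:

```haskell
import System.IO

main :: IO ()
main = do
    -- Write a known payload to a scratch file (name is illustrative).
    writeFile "hgetcontents-demo.txt" "hello"
    h <- openFile "hgetcontents-demo.txt" ReadMode
    s <- hGetContents h   -- the only use of this handle
    putStrLn s            -- forces the lazy contents before hClose
    hClose h
```

Using the handle again after the hGetContents call is what puts us back
in the non-deterministic situation described above.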
In other words, the language semantics guarantees less than GHC actually
does. In particular, the semantics only allows reasoning that is
independent of the evaluation order and this means to treat IO as
non-deterministic in certain cases.
More information about the Haskell-prime mailing list