newbie conceptual question [from haskell list]

D. Tweed tweed@compsci.bristol.ac.uk
Wed, 25 Jul 2001 17:55:03 +0100 (BST)


I'd like to respectfully disagree with some of this :-)

On Wed, 25 Jul 2001, Frank Atanassow wrote:

> These things are nice, but the more I learn about functional languages, the
> more I feel that they are only icing on the cake. The most important thing
> about functional languages is that we know _what they are_, and that they
> can be characterized in a regular fashion which is amenable to analysis; in
> other words, we know how to reason about them, and what sorts of properties
> functional programs satisfy.

Umm, I wouldn't agree with that exactly. If you ignore stupidities like
specifying the mod operator % to be implementation defined, I would claim
that C has, or can be given, semantics which are virtually as simple as
Haskell's. Equally, I'd argue that we know as much about how to reason
about imperative languages as we do about functional ones. The point is
that those semantics don't split the problem of reasoning about what's
going on into nice independent subproblems the way the semantics of
Haskell do. I'd also agree we don't know how to perform _effective_
reasoning on them, probably because that's pretty much impossible without
restricting the set of allowable programs.
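
To make the mod example concrete, here's a minimal Haskell sketch of the
sort of fully-specified behaviour I mean (C89 leaves the rounding of / on
negative operands, and hence the sign of %, to the implementation,
whereas the Haskell Report pins down both division pairs):

    -- Both pairs are fixed by the Report, even for negative operands,
    -- so every implementation prints the same thing:
    main :: IO ()
    main = do
      print ((-7) `div` 2, (-7) `mod` 2)    -- (-4,1): rounds towards -infinity
      print ((-7) `quot` 2, (-7) `rem` 2)   -- (-3,-1): rounds towards zero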

> Saying that a language "satisfies nice properties" sounds abstract and
> useless to some people who just want to get things done, but I really
> believe that it helps not only researchers who know know how to read and use
> formal semantics, but also everyday programmers, because they absorb some of
> these reasoning methods and then start using them intuitively and
> unconsciously. For example, every Haskell programmer has some grasp of
> referential transparency, and uses it implicitly every time he thinks about
> how to structure or rearrange his programs.

Again, as a C++ programmer I have some grasp of which program
rearrangements are valid (e.g., I can't move an assignment involving an
expression in another variable, say v, from before an assignment to v to
after it), and I use it implicitly every time I think about structuring
or rearranging my programs. The problem is that this doesn't really help,
because determining whether there's an assignment to variable v between
where an assignment is now and where I want to move it to is `decidedly
non-trivial'.
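
To show the contrast I mean, here's a tiny Haskell sketch (the names are
made up, purely for illustration): since there are no assignments, the
two bindings below can be swapped or inlined at will, and both I and the
compiler get to rely on that without any global analysis.

    -- Minimal illustration: no assignment means these bindings can be
    -- reordered or inlined freely without changing the result.
    area :: Double -> Double -> Double
    area w h =
      let scaled = w * 2.54      -- neither binding can interfere with
          border = h + 1.0       -- the other, so either order is valid
      in  scaled * border

    main :: IO ()
    main = print (area 3 4)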

What I'm basically trying to say is that I don't think it's `simplicity of
semantics' that's the FP advantage but the _effectiveness in the world of
people writing programs_ of those semantics. (Indeed I vaguely remember
reading that John McCarthy didn't develop LISP as a computational
realization of the existing theory of lambda calculus but rather because
he thought programming with only functions would be an effective way of
doing things. The `lambda' was an accident; a friend gave him a book on
Church's theory late in the development and he admitted he didn't really
understand it but thought the name lambda for the function definition
construct was `cool'.) And that advantage comes partly by restricting
yourself to programs that allow this kind of reasoning; this is no
different from the fact that I won't drive after having drunk any alcohol
even though I might well, in actuality, still be safe, because it's easier
to follow that rule than to figure out whether I'm genuinely safe.

> Of course, this argument holds for any "simple" language, not just
> functional ones, where by "simple" I mean "easy to reason about". For

As I say, `easy to _effectively_ reason about' =/= simple semantics.
I'm making this point because the natural rejoinder from a C programmer
told that C (as an imperative language) has more complex semantics than
Haskell is to say `no it doesn't', which I agree with; the point is that
Haskell has a semantics which makes it easier to reason about
effectively. (I'm glossing over the distinction between the various
kinds of semantics for a language, such as denotational and operational,
here.)

> example, there are concurrent calculi of processes, object calculi and
> first-order algebraic languages which also share this property, and I prefer
> any of them to warty conventional languages. (And this is why I also like
> SML: even though it's not referentially transparent like Haskell, I know
> what sorts of properties it satisfies.)

Now this to me is a really interesting statement. IIRC from many years
ago, there's not necessarily an indication in the type system when a
function involves references. Do you actually reason (albeit
subconsciously) using a full semantics which includes possible effects
due to aliasing of references, or changes to their contents somewhere
deep in the functions, or do you use a rule of `well, this is 99% likely
to be referentially transparent so I'll assume it is, but I'll keep in
the back of my mind the possibility this assumption may be wrong'?
(That's what I'd end up doing, I think.)
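
For comparison, here's a minimal Haskell sketch (assuming GHC's
Data.IORef; the function names are my own invention): when you do want a
mutable reference in Haskell, that fact is forced into the type, which is
exactly the indication I remember SML not giving you.

    import Data.IORef

    -- The use of a mutable reference is visible in the type: bump
    -- cannot masquerade as a pure Int -> Int.
    bump :: IORef Int -> IO Int
    bump r = do
      modifyIORef r (+ 1)
      readIORef r

    main :: IO ()
    main = do
      r <- newIORef 0
      x <- bump r
      y <- bump r
      print (x, y)   -- (1,2): two identical-looking calls, not interchangeable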

> Haskell (and ML) really is different. First, it's not
> implementation-defined.

Umm, again I'd say this is debatable: the volume of e-mail to/from Simon
PJ about the Haskell 98 report, even though there are various Haskell 98
interpreters/compilers out there, suggests this isn't true. Likewise, C
and C++ have specifications which are met to a reasonable degree by many
compilers out there, which prompts the question: when was the last time a
bug in your code in an imperative language was due to an
implementation-defined part of the language? In 5 years of intensive C++
programming I'd guess I've made maybe 25 bugs due to this, but the number
of bugs I've fixed in my code must be in the tens of thousands.

FWIW, my take on the single biggest reason why I like, and am more
productive in, situations where I can use a functional language rather
than an imperative one is essentially being shielded from temptation:
because in a typical imperative language you tend to have to implement a
case of `map f xs', for example, as a separate routine that knows what
the types of f and xs are (and maybe even what f is), and where you have
to explicitly manipulate things via temporary variables and array
indices, there's an almost irresistible urge (often even subconscious) to
specialise and `optimise' things in ways which are in reality very error
prone and difficult to understand. In contrast, not being able to do
these sorts of nasty things in Haskell leads me to do things the
next-simplest way, which is generally using higher order functions and
the like (where things by definition can't go wrong in many of the ways
they can in C).
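
A concrete, if contrived, Haskell sketch of what I mean (names made up
for the example): the generic version needs no indices or temporaries,
while the hand-specialised version I'd be tempted to write in C++ only
adds moving parts that can go wrong.

    -- The way Haskell nudges you: generic, no indices, no temporaries.
    doubleAll :: [Int] -> [Int]
    doubleAll = map (* 2)

    -- The hand-specialised version, written out only to show the extra
    -- machinery the imperative habit drags in:
    doubleAll' :: [Int] -> [Int]
    doubleAll' []     = []
    doubleAll' (x:xs) = x * 2 : doubleAll' xs

    main :: IO ()
    main = print (doubleAll [1,2,3] == doubleAll' [1,2,3])   -- True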

___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: tweed@cs.bris.ac.uk      |   you have, half your time is spent
work tel: (0117) 954-5250       |   waiting for compilations to finish.