stephen.tetley at gmail.com
Thu Dec 10 16:42:39 EST 2009
C'mon Andrew - how about some facts, references?
2009/12/10 Andrew Coppin <andrewcoppin at btinternet.com>:
> 1. Code optimisation becomes radically easier. The compiler can make very
> drastic alterations to your program, and not chance its meaning. (For that
> matter, the programmer can more easily chop the code around too...)
Which code optimizations?
From a different point of view, whole program compilation gives plenty
of opportunity for re-ordering transformations / optimization - Stalin
(now Stalinvlad) and MLton often generated the fastest code for their
respective (strict, impure) languages Scheme and Standard ML.
Feel free to check the last page of the report here before replying
with the Shootout (GHC still does pretty well, though, easily beating
Gambit and Bigloo):
> 2. Purity leads more or less directly to laziness, which has several
Other way round, no?
> 2a. Unnecessary work can potentially be avoided. (E.g., instead of a function
> for getting the first solution to an equation and a separate function to
> generate *all* the solutions, you can just write the latter and laziness
> gives you the former by magic.)
Didn't someone quote Heinrich Apfelmus on this list in the last week or so:
"Well, it's highly unlikely that algorithms get faster by introducing
laziness. I mean, lazy evaluation means to evaluate only those things
that are really needed and any good algorithm will be formulated in a
way such that the unnecessary things have already been stripped off."
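That said, Andrew's 2a point is easy to demonstrate in a few lines; a minimal sketch (the predicate and the name 'solutions' are made up for illustration):

```haskell
-- One generator of *all* solutions over an infinite search space;
-- laziness gives "first solution" for free, evaluating only as far
-- as needed.
solutions :: [Int]
solutions = filter (\n -> n * n > 50) [1 ..]

firstSolution :: Int
firstSolution = head solutions

main :: IO ()
main = print firstSolution   -- prints 8, without touching the rest
```

Apfelmus's point still stands, of course: in a strict language you would simply write the early-exit search directly.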
> 2b. You can define brand new flow control constructs *inside* the language
> itself. (E.g., in Java, a "for" loop is a built-in language construct. In
> Haskell, "for" is a function in Control.Monad. Just a plain ordinary
> function that anybody could write.)
Psst, heard about Scheme & call/cc?
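Granted, Scheme got there first with call/cc, but Andrew's 2b claim really is just library code in Haskell; a minimal sketch ('myFor' is a hypothetical hand-rolled variant, not a library function):

```haskell
import Control.Monad (forM_)

-- A "for loop" is an ordinary function anyone could write:
myFor :: Monad m => [a] -> (a -> m ()) -> m ()
myFor xs body = mapM_ body xs

main :: IO ()
main = do
  forM_ [1 .. 3 :: Int] print   -- the Control.Monad version
  myFor [4 .. 6 :: Int] print   -- our hand-rolled version
```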
> 2c. The algorithms for generating data, selecting data and processing data
> can be separated. (Normally you'd have to hard-wire the stopping condition
> into the function that generates the data, but with lazy "infinite" data
> structures, you can separate it out.)
Granted. But some people have gone quite some way in the strict world, e.g.:
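For completeness, here is what Andrew's 2c separation looks like; a minimal sketch with made-up stages:

```haskell
-- Generation, selection and processing as three independent stages;
-- the stopping condition lives in the selector, not the generator.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1              -- generate (infinite)

smallOnes :: [Integer]
smallOnes = takeWhile (< 100) powersOfTwo  -- select (stopping condition)

main :: IO ()
main = print (sum smallOnes)               -- process; prints 127
```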
> 2d. Parallel computation. This turns out to be more tricky than you'd think,
> but it's leaps and bounds easier than in most imperative languages.
Plenty of lazy and strict, pure and impure languages in this survey:
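To illustrate the pure-data side of 2d without reaching for the parallel package, a base-only sketch using forkIO (the names and the workload are made up):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Shared, immutable data; nothing can mutate it, so no locking is
-- needed on the data itself.
squares :: [Integer]
squares = map (^ 2) [1 .. 1000]

main :: IO ()
main = do
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (putMVar done1 $! sum squares)      -- one thread sums
  _ <- forkIO (putMVar done2 $! maximum squares)  -- another scans
  s <- takeMVar done1
  m <- takeMVar done2
  print (s, m)
```

(Whether the threads actually run in parallel depends on the runtime; the point is only that sharing pure data across threads is safe by construction.)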
> 3. It's much harder to accidentally screw things up by modifying a piece of
> data from one part of the program which another part is still actually
> using. (This is somewhat similar to how garbage collection makes it harder
> to free data that's still in use.)
In a pure language I'd like to think it's impossible...
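Indeed; a minimal sketch of why: "modifying" shared pure data just builds a new value, leaving the old one intact (the names are made up):

```haskell
-- ys shares xs's cells, yet xs can never be disturbed by it.
xs :: [Int]
xs = [1, 2, 3]

ys :: [Int]
ys = 0 : xs   -- "adds" an element; xs itself is untouched

main :: IO ()
main = do
  print xs    -- still [1,2,3]
  print ys    -- [0,1,2,3]
```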