[Haskell-cafe] Why?

Edward Kmett ekmett at gmail.com
Mon Dec 14 09:43:44 EST 2009


On Thu, Dec 10, 2009 at 4:42 PM, Stephen Tetley <stephen.tetley at gmail.com> wrote:

> C'mon Andrew - how about some facts, references?
>
> 2009/12/10 Andrew Coppin <andrewcoppin at btinternet.com>:
>
>
> > 1. Code optimisation becomes radically easier. The compiler can make
> > very drastic alterations to your program, and not change its meaning.
> > (For that matter, the programmer can more easily chop the code around
> > too...)
>
> Which code optimizations?
>

Anything permitted by free theorems, for one. That gives you a whole lot of
code-motion possibilities. Purity also gives the compiler the freedom to
increase sharing. It lets GHC rely on an 'almost correct' way to evaluate
thunks on multiple cores, with blackholing that has a vanishingly small but
still existent race condition, which is FAR faster than the scheme it would
have to use if it were more concerned about accidentally duplicating
effects.
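
For a concrete taste of the kind of rewrite purity licenses, consider
map/map fusion, which is a free-theorem consequence. Here's a sketch of it
as a GHC rewrite rule (a version of this rule actually ships with base;
this module exists purely for illustration):

module MapFusion where

-- Purity makes this rewrite unconditionally valid: f and g cannot
-- perform effects, so collapsing the two traversals into one cannot
-- change the program's meaning. GHC applies rules like this during
-- simplification.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}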


> From a different point of view, whole program compilation gives plenty
> of opportunity for re-ordering transformations / optimization - Stalin
> (now Stalinvlad) and MLton often generated the fastest code for their
> respective (strict, impure) languages Scheme and Standard ML.
>
> Feel free to check the last page of the report here before replying
> with the Shootout (GHC still does pretty well though, easily beating
> Gambit and Bigloo):
> http://docs.lib.purdue.edu/ecetr/367/
>
> > 2. Purity leads more or less directly to laziness, which has several
> > benefits:
>
> Other way round, no?
>

Laziness pushes you in the direction of purity, if only because it provides
you with the freedom you need to recover usable efficiency, but I would
argue that the converse is also true to some extent.

When you are designing a pure language, you can't do as much in a strict
setting as you can in a lazy setting. Laziness allows you to tie the knot
without mutation -- or rather using only the mutation intrinsic to
thunk-evaluation in call-by-need.
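
A minimal sketch of what I mean, assuming nothing beyond the Prelude:

-- 'fibs' is defined in terms of itself; this terminates only because
-- call-by-need evaluates each cell at most once and then shares it.
-- A pure strict language has no way to write this definition directly.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)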

A pure strict language is also often faced with some kind of fun/val
distinction, leading to ML-like syntactic warts, and eta-reduction isn't
always sound there, so EDSLs wind up carrying extra ()'s or needing
pointful plumbing a la F#'s |>.


> > 2a. Unnecessary work can potentially be avoided. (E.g., instead of a
> > function for getting the first solution to an equation and a separate
> > function to generate *all* the solutions, you can just write the latter
> > and laziness gives you the former by magic.)
> >
>
> Didn't someone quote Heinrich Apfelmus in this list in the last week or so:
>
> http://www.haskell.org/pipermail/haskell-cafe/2009-November/069756.html
>
> "Well, it's highly unlikely that algorithms get faster by introducing
> laziness. I mean, lazy evaluation means to evaluate only those things
> that are really needed and any good algorithm will be formulated in a
> way such that the unnecessary things have already been stripped off."
>

Continuing that quotation seems to say quite the opposite.

> I mean, lazy evaluation means to evaluate only those things that are
> really needed and any good algorithm will be formulated in a way such
> that the unnecessary things have already been stripped off. But laziness
> allows to simplify and compose algorithms.

Not only that, but the surrounding example is the classic 'take 5 . qsort',
which DOES change its asymptotics in the face of laziness. When you are
writing the algorithm from scratch, I agree that you probably derive
nothing from laziness. The nice thing about laziness is that it lets you
avoid this tedium and makes it easier to grab a widget from the standard
library and just have it suit your purposes.
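
To make that concrete, here is the textbook (not in-place) qsort; composing
it with take never forces the recursive calls on the unchosen portions, so
selecting a few smallest elements does roughly linear expected work instead
of a full O(n log n) sort:

qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]
            ++ [p]
            ++ qsort [x | x <- xs, x >= p]

-- Only as much sorting as 'take' demands actually happens:
firstFive :: Ord a => [a] -> [a]
firstFive = take 5 . qsort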

Another non-intuitive, asymptotics-changing difference is that laziness
provides a side-effect-free way to describe memoization and dynamic
programming over certain domains.

http://apfelmus.nfshost.com/quicksearch.html
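
A sketch of the memoization trick, hand-rolled here for illustration:

import Data.Array

-- The array's cells are thunks that refer back to the array itself,
-- so each subproblem is computed at most once -- dynamic programming
-- with no mutable state in sight.
fibMemo :: Int -> Integer
fibMemo n = table ! n
  where
    table = listArray (0, n) (map go [0 .. n])
    go 0 = 0
    go 1 = 1
    go i = table ! (i - 1) + table ! (i - 2)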



> > 2b. You can define brand new flow-control constructs *inside* the
> > language itself. (E.g., in Java, a "for" loop is a built-in language
> > construct. In Haskell, "for" is a function in Control.Monad. Just a
> > plain ordinary function that anybody could write.)
>
> Psst, heard about Scheme & call/cc?
>

To be fair, most Scheme control structures are implemented using macros,
and call-by-macro-expansion is a form of call-by-name:

http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_macro_expansion
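
And in Haskell you don't even need the macros: a control construct is just
an ordinary function. A sketch (this whileM is made up on the spot, not a
standard export, though variants of it live in libraries like monad-loops):

import Control.Monad (when)
import Data.IORef

-- A user-defined while loop: nothing but a plain recursive function.
whileM :: Monad m => m Bool -> m () -> m ()
whileM cond body = do
  b <- cond
  when b (body >> whileM cond body)

-- Count down from 3, using the home-made loop.
main :: IO ()
main = do
  r <- newIORef (3 :: Int)
  whileM (fmap (> 0) (readIORef r)) $ do
    v <- readIORef r
    print v
    writeIORef r (v - 1)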


> > 2c. The algorithms for generating data, selecting data and processing
> > data can be separated. (Normally you'd have to hard-wire the stopping
> > condition into the function that generates the data, but with lazy
> > "infinite" data structures, you can separate it out.)
>
> Granted. But some people have gone quite some way in the strict world,
> e.g.:
>
> http://users.info.unicaen.fr/~karczma/arpap/FDPE05/f20-karczmarczuk.pdf
>

Given. But then, people have written some amazing things in brainfuck too.
That doesn't mean that I want to subject myself to working in such a
merely sufficient computing environment :)
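
Point 2c above, for what it's worth, falls out in a couple of lines once
the generator is allowed to be infinite; the stopping condition lives
entirely in the consumer. A Prelude-only sketch:

-- Generation, selection, and processing as independent pieces.
-- The generator has no stopping condition baked in.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1

-- 'dropWhile' and 'head' supply the stopping condition from outside.
firstAboveThousand :: Integer
firstAboveThousand = head (dropWhile (<= 1000) powersOfTwo)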




-Edward Kmett