[haskell-cafe] Monad and kinds
derek.a.elkins at gmail.com
Sat Sep 6 00:32:41 EDT 2008
On Fri, 2008-09-05 at 20:11 -0700, Tim Chevalier wrote:
> On Fri, Sep 5, 2008 at 7:23 PM, Derek Elkins <derek.a.elkins at gmail.com> wrote:
> > This attitude is wrong. Many potentially significant performance
> > problems can easily be spotted and solved during construction without
> > affecting the readability of the code, problems that would be much
> > harder to diagnose at the end by running a profiler. This is
> > especially crucial for library code. The users of the library may be
> > the ones who find the easily resolved space leak your profiling
> > didn't reveal, and now they can't do anything about it until a new
> > version is released, e.g. Data.Map.insertWith. A performance problem
> > that renders your code
> > unusable is a bug and catching it early or not making it in the first
> > place is much better than catching it late.
> Library writers don't write applications that use their code as part
> of the testing process?
They don't use their libraries in every conceivable way.
> > A (highly related) analogy would be telling Scheme programmers (or
> > Haskell programmers for that matter) not to write tail recursive code
> > until a profiler tells them to, and transforming to a tail recursive
> > style is much more intrusive than adding a strictness annotation.
> Tail recursion isn't always a win in Haskell.
I'm very well aware of that. Given the extremely close relationship
between when to use tail recursion and when to use laziness, your
original statement almost obligates you to also believe that "masters"
don't write tail recursive code until after profiling.
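To make the connection concrete (these definitions are mine, purely for illustration, not code from the thread): both of the folds below are tail recursive, but only the strict one avoids the classic accumulator space leak.

```haskell
import Data.List (foldl')

-- Both are tail recursive; only the strict version avoids building a
-- chain of (+) thunks in the accumulator on large inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0     -- tail recursive, yet leaks space

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0  -- same shape, accumulator forced each step

main :: IO ()
main = print (sumLazy [1 .. 100], sumStrict [1 .. 100])
```

Same answer either way; the difference only shows up in heap behavior, which is exactly the kind of thing you can see at construction time without a profiler.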
> I'm not much in touch
> with the Scheme world, so can't speak to that.
Stick in any strict functional language or any other strict language
(preferably higher order, but that isn't really important) that also
supports tail call optimization.
> > Highly competent Haskell programmers add strictness annotations
> > relatively systematically. The details of mixing eager and lazy code
> > are one of the significant contributions that lazy functional
> > languages have made to the pragmatics of programming. At another
> > extreme, things like Chris Okasaki's data structures rely on a
> > specific balance of eagerness and laziness.
> > Also, it is easier (as in not impossible) to turn a
> > strict-in-the-elements data structure into a lazy one than the other
> > way around. Eager by default and lazy by default both have (actually
> > dual) pitfalls that are best solved by a laziness or strictness
> > annotation respectively. There is no need to walk into those pitfalls
> > with your eyes wide open.
> That may be, but are the holes that one falls into due to unexpected
> laziness and the ones that one falls into due to unexpected strictness
> equally numerous? Are they equally deep?
As I said, this particular issue is a duality. For at least a certain
class of problems, the ones most relevant here, the holes are equally
deep and equally numerous. What is correct in one language is a problem
in the other.
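A small sketch of that duality (again, my own illustrative definitions): the left fold below leaks space under default laziness and is fixed with a strictness annotation, while the infinite-list idiom relies on default laziness and, in a strict language, would diverge unless the stream were explicitly delayed.

```haskell
-- Pitfall of default laziness, fixed with a strictness annotation:
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go acc []       = acc
    go acc (x : xs) = acc `seq` go (acc + x) xs  -- seq forces acc

-- Idiom that depends on default laziness; the dual fix in a strict
-- language would be an explicit delay/force around the stream:
firstSquares :: Int -> [Int]
firstSquares n = take n (map (^ 2) [1 ..])

main :: IO ()
main = print (sumAcc [1 .. 100], firstSquares 4)
```

In each case the default does the wrong thing and a one-token annotation (seq here, delay there) does the right one.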
For examples and more discussion see:
These last questions, though, are both flawed and irrelevant. They are
flawed because the issue isn't which is the default: having both
available -at all- makes you vulnerable to the pitfalls of both. Here's
a good example:
compose = foldr (.) id
or should it be
compose = foldl' (.) id
maybe something else?
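Spelled out with concrete definitions (the names compose1/compose2 are mine, just to compare the two spellings): both compute the same function on a finite list; they differ in when the spine of the list is forced.

```haskell
import Data.List (foldl')

compose1 :: [a -> a] -> a -> a
compose1 = foldr (.) id   -- builds f1 . (f2 . (... . id)) lazily

compose2 :: [a -> a] -> a -> a
compose2 = foldl' (.) id  -- forces the whole list spine up front

main :: IO ()
main = print (compose1 [(+ 1), (* 2)] 3, compose2 [(+ 1), (* 2)] 3)
```

Which evaluation pattern you want depends on how the composed function will be used, which is exactly why neither can be declared correct in isolation.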
The fact that Haskell has both strict and non-strict functions makes
neither of these implementations correct. The questions are irrelevant
because all that needs to be shown is that there exists -any- problem
that can easily be identified and fixed with a strictness annotation.
The Haskell implementation of factorial in the above has such a bug; it
can be fixed with the addition of two characters, $!; I don't need a
profiler to tell me this; and for every standard numeric type there will
be no change in semantics (where "semantics" here does not include
"performance").
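The factorial in question isn't quoted above, but a typical accumulator version (this one is my reconstruction, not the code from earlier in the thread) shows the two-character fix:

```haskell
-- Reconstruction of an accumulator-style factorial. Without $! the
-- accumulator piles up (acc * k) thunks; forcing it with $! removes
-- the leak without changing the result for standard numeric types.
factorial :: Integer -> Integer
factorial n = go n 1
  where
    go 0 acc = acc
    go k acc = go (k - 1) $! (acc * k)  -- the two-character fix: $!

main :: IO ()
main = print (factorial 10)
```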