[Haskell-cafe] Re: Why can't Haskell be faster?

Sebastian Sylvan sebastian.sylvan at gmail.com
Thu Nov 1 14:58:45 EDT 2007


On 01/11/2007, Tim Newsham <newsham at lava.net> wrote:
> > Unfortunately, they replaced line counts with bytes of gzip'ed code --
> > while the former certainly has its problems, I simply cannot imagine
> > what relevance the latter has (beyond hiding extreme amounts of
> > repetitive boilerplate in certain languages).
>
> Sounds pretty fair to me.  Programming is a job of compressing a solution
> set.  Excessive boilerplate might mean that you have to type a lot, but
> doesn't necessarily mean that you have to think a lot.  I think the
> previous line count was skewed in favor of very terse languages like
> haskell, especially languages that let you put many ideas onto a single
> line.  At the very least there should be a constant factor applied when
> comparing haskell line counts to python line counts, for example.
> (python has very strict rules about putting multiple things on the same
> line).
>
> Obviously no simple measure is going to satisfy everyone, but I think the
> gzip measure is more even handed across a range of languages.  It probably
> more closely approximates the amount of mental effort, and hence time, it
> requires to construct a program (i.e. I can whip out a lot of lines of code
> in python very quickly, but it takes a lot more of them to do the same
> work as a single, dense, line of haskell code).
>
> > When we compete against Python and its ilk, we do so for programmer
> > productivity first, and performance second.  LOC was a nice measure,
> > and encouraged terser and more idiomatic programs than the current
> > crop of performance-tweaked low-level stuff.
>
> The haskell entries to the shootout are very obviously written for speed
> and not elegance.  If you want to do better on the LoC measure, you can
> definitely do so (at the expense of speed).
>


Personally I think syntactic noise is highly distracting, and semantic
noise is even worse! Gzip'd files don't show you that one language
requires 90% book-keeping for 10% algorithm while the other lets you
get on with the job; compression may make it look as if both
languages are roughly equally good at letting the programmer focus on
the important bits.
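
For concreteness, the measure under discussion boils down to something
like this (a minimal sketch, assuming the zlib package from Hackage;
gzippedSize is just a name I've picked):

import Data.Int (Int64)
import qualified Data.ByteString.Lazy as BL
import qualified Codec.Compression.GZip as GZip

-- Size of a source file after gzip compression -- the shootout's
-- replacement for a plain line count.
gzippedSize :: FilePath -> IO Int64
gzippedSize path = do
    src <- BL.readFile path
    return (BL.length (GZip.compress src))

Repetitive boilerplate compresses extremely well, so it all but
disappears from this number -- which is exactly the problem.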

I'm not sure what metric to use, but actively disguising noisy
languages with compression certainly doesn't seem anywhere close to
ideal. Token count would be good, but then we'd need a lexer for
each language, which is quite a bit of work...
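
To illustrate, a crude approximation (naiveTokens is a made-up name,
and a real comparison would of course need a proper lexer per
language):

import Data.Char (isAlphaNum, isSpace)

-- Very naive tokeniser: a run of identifier characters is one token,
-- and any other non-space character is a token by itself.
naiveTokens :: String -> [String]
naiveTokens [] = []
naiveTokens s@(c:cs)
  | isSpace c    = naiveTokens cs
  | isAlphaNum c = let (tok, rest) = span isAlphaNum s
                   in tok : naiveTokens rest
  | otherwise    = [c] : naiveTokens cs

tokenCount :: String -> Int
tokenCount = length . naiveTokens

Even something this rough wouldn't reward cramming many ideas onto one
line, and it wouldn't reward compressible boilerplate either.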


-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

