[Haskell-cafe] Re: Why can't Haskell be faster?
Sterling Clover
s.clover at gmail.com
Fri Nov 2 23:47:47 EDT 2007
As I understand it, the question is what you want to measure. Gzipped size
is actually a pretty good metric, precisely because it removes boilerplate,
reducing programs to something approximating their essential complexity. So
a higher gzipped size means, at some level, a more complicated algorithm
(or, in lower-level languages, complexity that hasn't been lifted into the
compiler).

LOC per language, as I understand it, has been somewhat called into question
as a measure of productivity, but there's still a correlation between
programmer output and LOC across languages, even if it isn't as strong as
once thought -- on the other hand, bugs per LOC seems to have been fairly
strongly debunked as something constant across languages.

If you want a measure of the language as a language, I guess LOC/gzipped is
a good ratio for how much "noise" it introduces -- but if you want to
measure just pure speed across similar algorithmic implementations, which,
as I understand it, is what the shootout is all about, then gzipped size
actually tends to make some sense.
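
For what it's worth, here's a minimal sketch of computing that LOC/gzipped
ratio for a single source file, assuming the zlib package's
Codec.Compression.GZip (the shootout's exact measurement may well differ):

import qualified Data.ByteString.Lazy as BL
import qualified Data.ByteString.Lazy.Char8 as BLC
import Codec.Compression.GZip (compress)
import System.Environment (getArgs)

main :: IO ()
main = do
  [path] <- getArgs
  src <- BL.readFile path
  let loc = length (BLC.lines src)      -- raw line count
      gz  = BL.length (compress src)    -- gzipped size in bytes
  putStrLn $ path ++ ": " ++ show loc ++ " LOC, "
          ++ show gz ++ " gzipped bytes, LOC/gz = "
          ++ show (fromIntegral loc / fromIntegral gz :: Double)

(Compile with ghc --make and point it at a benchmark entry; it just gives a
rough sense of how much "noise" is left after compression.)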
--S