[Haskell-cafe] Re: A suggestion for the next high profile
ajb at spamcop.net
Sun Dec 17 20:12:01 EST 2006
Quoting Neil Mitchell <ndmitchell at gmail.com>:
> I believe that compilers can get a lot cleverer - my hope is that one
> day the natural Haskell definition will outperform a C definition.
First off, let's get something straight: Everyone's metric for "performance"
is different. When someone makes a claim about the "performance" of some
programming language (let alone a particular program built with a particular
implementation), always ask how "performance" is measured.
In my ad-hoc testing, Haskell has already outperformed most other languages
for some of my projects using a particular metric that I care about at the
time. For example, for certain types of problem, Haskell minimises the
amount of time between the point where I start typing and the point where
I have the answer.
Similarly, I think that a decent merge sort in Haskell is likely
to outperform most C qsort() implementations TODAY, under reasonable
conditions (a reasonably large data set, for example), because C's qsort()
interface simply cannot support good specialisation.
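To make the specialisation point concrete, here is a minimal merge sort sketch (not from the original post). Because the comparison is an ordinary Haskell function rather than C's void*-based function pointer, the compiler is free to inline and specialise it at each call site:

```haskell
-- Minimal top-down merge sort, parameterised on a comparison.
-- Unlike C's qsort(), the compiler can specialise this for a
-- concrete element type and comparison at each use site.
mergeSort :: (a -> a -> Ordering) -> [a] -> [a]
mergeSort _   []  = []
mergeSort _   [x] = [x]
mergeSort cmp xs  = merge (mergeSort cmp left) (mergeSort cmp right)
  where
    (left, right) = splitAt (length xs `div` 2) xs
    merge [] ys = ys
    merge xs' [] = xs'
    merge (x:xs') (y:ys)
      | cmp x y == GT = y : merge (x:xs') ys   -- y comes first
      | otherwise     = x : merge xs' ys       -- stable: x first on ties

main :: IO ()
main = print (mergeSort compare [3, 1, 4, 1, 5, 9, 2, 6 :: Int])
```

With qsort(), the comparator is an opaque function pointer called on void* arguments, so the C compiler generally cannot inline it; here, GHC sees the comparison directly.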
This last point is important, because no matter what language you're
writing in, and no matter what your metric for "performance" is, the best
thing you can do for the performance of your code is to write your APIs
carefully. If you do that, then when you find a performance problem, you
can swap out the offending code, swap in a replacement, and everything
should still work.
The known performance problems in Haskell with I/O, and binary I/O in
particular, stem from this issue, too. The insistence on defining
String as [Char] means that your data is inefficiently represented even
before it hits hPutStr.
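A small sketch of the representation cost (my illustration, not from the original post): String is [Char], a lazy linked list costing several machine words per character, which hPutStr must traverse cell by cell; a strict ByteString (from the bytestring package that ships with GHC) packs the same data into one contiguous buffer that can be written in large chunks.

```haskell
import qualified Data.ByteString.Char8 as B
import System.IO

main :: IO ()
main = do
  let s = replicate 10000 'x'   -- String: one list cell per Char
      b = B.pack s              -- ByteString: one packed byte buffer
  -- /dev/null here just to have a sink; adjust on non-Unix systems.
  withFile "/dev/null" WriteMode $ \h -> do
    hPutStr h s                 -- walks the list, character by character
    B.hPutStr h b               -- hands the whole buffer over at once
  print (length s, B.length b)
```

The API point is the same as with qsort(): once hPutStr's interface is committed to [Char], no amount of compiler cleverness can recover the packed representation.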
This is why I think it's a mistake to pin the blame on the performance of
the compiled code. The biggest performance issues in a language tend to
come from the part of the language in which you define the APIs.