[Haskell-cafe] Can Haskell outperform C++?

Richard O'Keefe ok at cs.otago.ac.nz
Wed May 23 04:59:28 CEST 2012

On 23/05/2012, at 4:54 AM, Isaac Gouy wrote:
>> There may be very little one can do about the I/O part.
> Maybe you could say how the Java I/O is being done.

>>     For 50,000 nodes and 8,385,254 edges,
>>     Java (first version) ran out of memory after 89.54 seconds (default heap)
>>     Java (2nd version)   13.31 seconds   -- avoid Integer boxing!
>>     Java (3rd version)   15.07 seconds
>>     Smalltalk            14.52 seconds
>>     C                     2.36 seconds
> fwiw my expectation is that Java would not be that much slower than C, and would be considerably faster than Smalltalk.

My own experience is that Java is anywhere between 2 times slower and 150 times
slower than C.  This is not for number crunching; one _would_ expect Java to be
about the same as C for that because it is working at the same level as C.  But
string processing and text I/O using the java.io.* classes aren't brilliant.

   Reader r = new BufferedReader(new InputStreamReader(System.in));
This "layers of wrappers" approach to text I/O adds overheads of its own, but
less than I'd thought.  Using System.in directly takes the time down from
15.07 seconds to 11.11 seconds.  The code used Character.isWhitespace(c) to
test for a white space character; replacing that by c <= ' ' brought the time
down further to 10.87 seconds.

With both of these changes, we are moving away from recommended good practice;
the faster code is the kind of code people are not supposed to write any more.
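The two set-ups being compared look roughly like this; a sketch with made-up
names, not the original benchmark code:

```java
import java.io.*;

public class WhitespaceTest {
    // Recommended practice: the full Unicode-aware test.
    static boolean isSpaceSlow(int c) {
        return Character.isWhitespace(c);
    }

    // The faster shortcut: treat everything <= ' ' as white space.
    // Fine for plain ASCII input, wrong for Unicode white space
    // such as U+2003 (EM SPACE).
    static boolean isSpaceFast(int c) {
        return c <= ' ';
    }

    // Count words one character at a time, as the benchmark does.
    static int countWords(Reader r) throws IOException {
        int words = 0, c = r.read();
        while (c != -1) {
            while (c != -1 && isSpaceFast(c)) c = r.read();  // skip white space
            if (c == -1) break;
            words++;
            while (c != -1 && !isSpaceFast(c)) c = r.read(); // skip the word
        }
        return words;
    }

    public static void main(String[] args) throws IOException {
        // The "layers of wrappers" set-up; calling System.in.read()
        // directly instead is what took 15.07 s down to 11.11 s.
        Reader r = new BufferedReader(new InputStreamReader(System.in));
        System.out.println(countWords(r));
    }
}
```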

Characters are read one at a time using r.read().  There are no fast "skip
characters in this set" or "take characters in this set and return them as
a string" methods in the Reader or InputStream interfaces, and StreamTokenizer
reads characters one at a time using .read() as well, so it would be slower.

As for Smalltalk, you must be thinking of free Smalltalk systems that lacked a
JIT.  Commercial Smalltalks have excellent JITs (many HotSpot ideas were copied
from them) and there is now a pretty good one for the free systems Squeak and
Pharo.  These particular measurements were made using my own Smalltalk
compiler, which is an oddity amongst Smalltalks: a whole-program compiler
that compiles via C.  Yes, most of the good ideas came from INRIA, although
ST/X does something not entirely dissimilar.

> fwiw my expectation is that should be possible to make the Java program considerably faster.

I have used profiling to find where the time was going.
Approximately 70% is spent in the one method that reads a "word":
 - a loop skipping white space characters
 - a loop reading other characters and adding them to a StringBuilder
 - [looking the string up in a HashMap and creating and entering a
    new Node if necessary -- this time is not part of that 70%].
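In outline, the hot method is something like the following sketch; Node and
the method names here are hypothetical stand-ins for whatever the benchmark
actually uses:

```java
import java.io.IOException;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;

public class WordReader {
    static final class Node {
        final String name;
        Node(String name) { this.name = name; }
    }

    static final Map<String, Node> nodes = new HashMap<>();

    // Where ~70% of the time goes: skip white space, then collect
    // a word one r.read() call at a time into a StringBuilder.
    static String readWord(Reader r) throws IOException {
        int c = r.read();
        while (c != -1 && c <= ' ') c = r.read();   // skip white space
        if (c == -1) return null;                   // end of input
        StringBuilder sb = new StringBuilder();
        while (c != -1 && c > ' ') {                // take the word
            sb.append((char) c);
            c = r.read();
        }
        return sb.toString();
    }

    // The HashMap lookup (not counted in the 70%): one Node per distinct word.
    static Node nodeFor(String word) {
        return nodes.computeIfAbsent(word, Node::new);
    }
}
```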

Any reasonable person would expect file reading to be buffered and
for the read-one-byte method to usually just fiddle a few integers
and fetch an element of an array.  In fact the method is native, so
every character requires switching from the Java universe to the C
one and back.  (The Smalltalk equivalent is pretty close to fgetc().)

The only way to make this Java program 'considerably faster' is to
not read characters (or bytes) in the natural Java way.  (The way,
in fact, that java.io.StreamTokenizer does.)
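A sketch of that un-natural way: pull bytes into your own buffer in bulk, so
the native read is paid once per block rather than once per character.

```java
import java.io.IOException;
import java.io.InputStream;

public class BlockReader {
    private final InputStream in;
    private final byte[] buf = new byte[1 << 16];  // 64 KiB blocks
    private int pos = 0, len = 0;

    BlockReader(InputStream in) { this.in = in; }

    // One native read per block; each call here is just integer
    // fiddling and an array fetch, all inside the Java universe.
    int next() throws IOException {
        if (pos >= len) {
            len = in.read(buf, 0, buf.length);
            pos = 0;
            if (len <= 0) return -1;  // end of input
        }
        return buf[pos++] & 0xFF;
    }
}
```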

And it's not INTERESTING, and it's not about LANGUAGES.
There is NOTHING about the Java language that makes code like this
necessarily slow.  It's the LIBRARY.  The java.io library was
designed for flexibility, not speed.  That's why there is a java.nio.

Haskell is in pretty much the same boat.  I've always liked the
I/O model in the Haskell report.  I've expected simplicity from it,
not speed, and that's what I get.  But as all the new approaches
to I/O (like iteratees, which make my head hurt) make perfectly
clear, the limitations of the IO module are not limitations of
Haskell, or of JHC, but of a particular library.

And that's the point I was making with this example.  Why does
Smalltalk come out in the middle of the Java results?  A balance
between a language penalty (tagged integer arithmetic is a lot
slower than native integer arithmetic) and a library bonus (a
leaner meaner I/O design where there are wrappers if you want
them but you very seldom need them).  It's the great advantage
of using libraries rather than syntax: libraries can be changed.
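That language penalty can be illustrated with a sketch of the usual tagging
scheme -- a generic illustration, not any particular VM's code: small
integers carry a one-bit tag, and every arithmetic operation pays for
checking and adjusting it.

```java
public class TaggedInt {
    // Encode a small integer n as (n << 1) | 1; a low bit of 0 would
    // mean "pointer to a heap object" in a real Smalltalk VM.
    static long tag(long n)   { return (n << 1) | 1; }
    static long untag(long t) { return t >> 1; }

    // What "a + b" costs under tagging: check both tags, then adjust.
    // A real VM would also check for overflow into a big integer.
    static long add(long a, long b) {
        if ((a & b & 1) == 0)
            throw new IllegalArgumentException("not a tagged small integer");
        return a + b - 1;   // (2x+1) + (2y+1) - 1 == 2(x+y) + 1
    }
}
```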
>> All I was saying is that surely the only *point* of
>> talking about the performance of *languages* is to get some idea of
>> how well programs are likely to do, and not any old specially crafted
>> code, but the kind of code that is likely to be written for purposes
>> other than benchmarking.  So the more you bash on a particular example
>> to get the time down, the less representative it is of the kind of code
>> people are likely to write *for purposes other than benchmarking*.
> Certainly less representative of the kind of code students

Leave the students out of it.  It's less representative of the kind of
code I see written by Sun or in Apache stuff.
> The "kind of code people are likely to write" is sometimes best described as bad.

At last, something we can agree about!
>> *That* function, no.  *Some* hash function as a "primitive" (meaning
>> optimised C), well, I don't have every Smalltalk, but the ones I do
>> have, I've checked, and yes they do.
> Maybe I misunderstood the thrust of your original comment - Were you trying to make a point about writing in C and calling that code from Smalltalk as a user written primitive; or were you trying to make a point about the importance of using a better hash function?

Neither.  I am making the point that many benchmarks benchmark library
code rather than languages or compilers per se, and that the concept of
"same algorithm" is as a result very fuzzy.
>>> Have you researched what code people are actually likely to write, or are
>>> you just speculating? ;-)
>> I'm in my 6th decade.  I've seen a lot of code in a lot of languages.
> So just speculating.

How is "I have seen a lot of code" construed as "just speculating"?
I know what code people are likely to write because I have seen the
code that many people DID write.  I know what code people are likely
to write in Java because I have seen more Java (written by people
who really should know what they are doing) than I ever wanted to see
AND because I know how they are TOLD to write.
>> The subject line has the obvious and boring answer "yes, of course".
> I agree, because of the implicit qualification - "if we write the C++ badly enough" :-)

Otherwise construed, "the way C++ is usually written".
C++ *can* be extremely fast.  One of my colleagues is very good at it.
But one of his key rules is "never use the standard template library,
including C++ strings."

>> The point here of course is that there are data structures and algorithms
>> that are easy to express in some languages but hard in others, so that
>> in code written *for purposes other than benchmarking*, they just would
>> not be _used_ in one of those languages.
> Let's say that's true - until we show how much better programs using other data structures and algorithms perform those specific tasks, it's just an excuse. 
> (And, depending on how hard it is to express those different data structures and algorithms in other languages, it might not even be a good excuse.)

My evidence may be anecdotal, but it is better than arguing with
NO evidence.

The example you are commenting on here is one where I reported an
emulated Prolog program running faster than native Fortran 77.
Recall that Fortran 77
 - did not include recursion
 - did not include pointers
 - did not include user defined data types
The kind of dynamic tree construction and walking that is so
trivial and every day in Haskell was extremely difficult to
express in Fortran 77.
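For contrast, here is the kind of structure meant: a few lines in Java (or
one line in Haskell), but needing a user-defined record type, pointers, and
recursion, all three missing from Fortran 77.

```java
public class Tree {
    // A user-defined recursive type: Fortran 77 had no records
    // and no pointers with which to declare it.
    final int value;
    final Tree left, right;   // null marks an empty subtree

    Tree(int value, Tree left, Tree right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }

    // Walking the tree by recursion, which Fortran 77 also lacked.
    static int sum(Tree t) {
        return t == null ? 0 : t.value + sum(t.left) + sum(t.right);
    }
}
```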

Not necessarily impossible.  A parser for English was once written
in Fortran, and I think that was before Fortran 77 even.  But hard.

A programming language that supports concurrency, or backtracking,
or constraint programming, or type level arithmetic, or dependent
types, or ... will make some things thinkable and straightforward
to express that would be neither in a language without such features.
