[Haskell-cafe] Hugs/nhc getting progressively slower

Donald Bruce Stewart dons at cse.unsw.edu.au
Tue May 1 21:39:07 EDT 2007


ndmitchell:
> Hi,
> 
> I like to develop on Hugs, because it's a nice platform to work with,
> and provides WinHugs, auto-reloading, sub-second compilation, etc.
> Unfortunately some of the newer libraries (ByteString/Binary in
> particular) have been optimised to within an inch of their lives on
> GHC, at the cost of being really really slow on Hugs.
> 
> Take the example of Yhc Core files, which are stored in binary.
> Writing them with a very basic hPutChar sequence is far faster (at
> least 10x) than all the fancy ByteString/Binary trickery.
> 
> Take the example of nobench: Malcolm told me he reimplemented
> ByteString in terms of [Char] and gained a massive performance
> increase under nhc (6000x springs to mind, but that seems way too
> high to be true).
> 
> Could we have a collective think, and decide whether we wish to
> kill off all compilers that don't start with a G, or whether
> people could at least do minimal benchmarking on Hugs? I'm not
> quite sure what the solution is, but it probably needs some discussion.
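Neil's hPutChar approach can be sketched roughly as follows. This is a minimal toy (an invented byte layout, not Yhc's actual Core format): it serialises a payload one byte at a time with plain hPutChar, with none of Binary's buffering machinery, which is the style that outruns the optimised libraries under Hugs.

```haskell
import System.IO
import Data.Char (chr, ord)

-- Toy serialiser in the style described above: one hPutChar per byte.
-- (Hypothetical format; real Yhc Core files are not laid out this way.)
writeBytes :: Handle -> [Int] -> IO ()
writeBytes h = mapM_ (hPutChar h . chr)

main :: IO ()
main = do
  let payload = map ord "Core"               -- stand-in for real Core data
  withFile "test.bin" WriteMode (\h -> writeBytes h payload)
  s <- readFile "test.bin"
  print (map ord s == payload)               -- check the round trip
```

Under a compiler the Binary path wins; under an interpreter this straight-line IO avoids executing the library's many small inlined combinators call by call.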

I'm not sure how we can optimise for both interpreters and compilers.

The Binary and ByteString code pays close attention to the hardware:
cache misses, branch prediction. And there's no option to abandon
high-performance compiled Haskell just to help out the interpreters.

Interestingly, the techniques we use for, say, Data.ByteString also
produce very good results under MLton. So it really is a matter of
optimising for compiled native code versus bytecode interpreters.
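The tension can be seen in miniature with the kind of idiom these libraries rely on. The sketch below (a strict-accumulator worker with an INLINE pragma; a toy stand-in, not ByteString's actual code) is exactly what GHC compiles to a tight loop over unboxed values, while a bytecode interpreter must still execute each closure application one at a time:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Worker/wrapper style with a strict accumulator: GHC turns this into
-- a tight loop; an interpreter pays per-call overhead on every step.
sumLoop :: [Int] -> Int
sumLoop = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs
{-# INLINE sumLoop #-}

main :: IO ()
main = print (sumLoop [1..100])   -- 5050
```

The INLINE pragma (and, in the real libraries, rewrite rules for fusion) only pays off when an optimising compiler is actually running those transformations; Hugs and nhc simply interpret the un-fused definitions.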

I don't know if there's anything that can be done here.

-- Don

