Analyzing Efficiency

Simon Marlow
Wed, 14 Aug 2002 10:25:27 +0100

> I've come up with three different approaches to solve the same
> problem in Haskell. I would like to compare the three in terms of
> reductions, memory usage, and overall big O complexity.
> What's the quickest way to gather these stats? I usually use the GHC
> compiler, but also have Hugs installed. The big O complexity probably
> has to be done by hand, but maybe there's a tool out there to do it
> automagically.

Apart from the "normal" ways (profiling, Unix 'time', GHC's +RTS
-sstderr), here's another one I've been using recently: cachegrind.
It's the wonderful cache profiling extension by Nick Nethercote that
comes with Julian Seward's Valgrind.  The great thing is that you don't
even need to recompile the program - you just do 'cachegrind <program>',
and it runs (very slowly) and outputs reliable cache statistics
including how many instructions were executed.  Get it from

Oh, it only works on Linux BTW.
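For reference, the usual incantations look something like this (a sketch only: `MyProg.hs` is a stand-in for your program, and the exact flags depend on your GHC and Valgrind versions):

```shell
# Compile with profiling support for the "normal" profiling route:
ghc -O -prof -auto-all -o MyProg MyProg.hs

# Time/allocation profile; writes a MyProg.prof report:
./MyProg +RTS -p

# GC and allocation summary on stderr (works on any GHC-compiled
# binary, no profiling libraries required):
./MyProg +RTS -sstderr

# Plain wall-clock timing via the Unix shell:
time ./MyProg

# Cache statistics and instruction counts, no recompilation needed:
cachegrind ./MyProg
```

The cachegrind run is by far the slowest, but the instruction counts it reports are deterministic, which makes them handy for comparing the three versions against each other.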