Inferring from context declarations

George Russell ger@tzi.de
Wed, 21 Feb 2001 17:56:01 +0100


Hmm, this throwaway comment is getting interesting.  But please cc any replies to
me as I don't normally subscribe to haskell-cafe . . .

"D. Tweed" wrote:
[snip]
> Some of the
> time this is what's wanted, but sometimes it imposes annoying compilation
> issues (the source code of the polymorphic function has to be available
> every time you want to use the function on a new class, even if it's not
> time critical, which isn't the case for Haskell). 
You don't need the original source code, only some pickled form of it,
like the one GHC already writes to .hi files when you ask it to inline
functions.
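
To make that concrete, here is a minimal sketch (the module and function names
are invented for the example) of the mechanism: with an INLINE pragma, GHC
records the function's unfolding in the interface file, so a client module can
have calls specialised at a new type without ever seeing the .hs source.

    module SumSq (sumSq) where

    -- The INLINE pragma makes GHC write this function's unfolding into
    -- SumSq.hi; an importing module can then have calls to it inlined
    -- and specialised at whatever numeric type it uses, with no need
    -- for SumSq.hs itself.
    {-# INLINE sumSq #-}
    sumSq :: Num a => [a] -> a
    sumSq = foldr (\x acc -> x * x + acc) 0

A client writing sumSq [1.0, 2.0] :: Double then gets code specialised to
Double, with no dependence on the library's source.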
> I also often
> write/generate very large polymorphic functions that in an ideal world
> (where compilers can do _serious, serious_ magic) I'd prefer to work
> using something similar to a dictionary passing implementation.
Why then?  If it's memory size, consider that the really important thing
is not how much you need in virtual memory, but how much you need in the
various caches.  Inlining will only use more cache if you are using two
different applications of the same large polymorphic function at approximately
the same time.  That is certainly possible, and as with any such change you will
be able to construct examples where inlining polymorphism results in slower
execution, but after my experience with MLj I find it hard to believe that it is
not a good idea in general.
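
For anyone who hasn't seen the two strategies side by side, here is a rough,
hand-written sketch of what they amount to at the source level (the NumDict
record is invented for illustration; GHC derives the real dictionaries from the
class declaration):

    -- Stand-in for the dictionary GHC passes for a Num constraint.
    data NumDict a = NumDict
      { dPlus  :: a -> a -> a
      , dTimes :: a -> a -> a
      , dZero  :: a
      }

    -- Dictionary-passing style: one shared body for all types, at the
    -- cost of an indirect call through the record for every operation.
    sumSqDict :: NumDict a -> [a] -> a
    sumSqDict d = foldr (\x acc -> dPlus d (dTimes d x x) acc) (dZero d)

    -- Inlined/specialised style: a separate copy per type, with the
    -- operations compiled down to ordinary Double arithmetic.
    sumSqDouble :: [Double] -> Double
    sumSqDouble = foldr (\x acc -> x * x + acc) 0

The dictionary version is smaller in total, but every arithmetic operation goes
through a pointer; the specialised version duplicates code per type but leaves
the processor nothing to chase.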
> I'd argue
> that keeping flexibility about polymorphic function implementation (which
> assumes some default but can be overridden by the programmer) in Haskell
> compilers is a Good Thing.
I'm certainly not in favour of decreeing that Haskell compilers MUST inline
polymorphism.
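
GHC already offers roughly this kind of per-function control; a sketch, assuming
the standard INLINE/NOINLINE/SPECIALIZE pragmas (bigPoly is just a placeholder
for a large polymorphic function):

    bigPoly :: (Ord a, Num a) => [a] -> [a] -> [a]
    bigPoly xs ys = filter (> 0) (zipWith (+) xs ys)

    -- Keep the single, shared dictionary-passing copy by default ...
    {-# NOINLINE bigPoly #-}
    -- ... but also generate a fast monomorphic copy at the one type
    -- where speed actually matters.
    {-# SPECIALIZE bigPoly :: [Int] -> [Int] -> [Int] #-}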
> 
> Given that, unless computing hardware really revolutionises, the
> `speed/memory' profile of today's desktop PC is going to recur in wearable
> computers/PDAs/etc., I believe that in 20 years' time we'll still be figuring
> out the same trade-offs, and so need to keep flexibility.
Extrapolating from the last few decades, I predict that
(1) memory will get much, much bigger;
(2) CPUs will get faster;
(3) memory access times will improve, but the ratio of memory access time to CPU cycle time
    will continue to increase.
The consequence of the last point is that parallelism and pipelining are going to become
more and more important.  Already the amount of logic required by a Pentium to try to
execute several operations at once is simply incredible, but it only works if you have
comparatively long stretches of code where the processor can guess what is going to happen.
You are basically stuffed if every three instructions the code executes a jump to a location
the processor can't foresee.  Thus, if you compile Haskell the way it is compiled
today, the processor will spend about 10% of its time actually computing and the
other 90% waiting on memory.  If Haskell compilers are to take much advantage of
processor speeds, I don't see
any solution but to inline more and more.
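
As a rough illustration of the kind of unpredictable jump described above (the
names are invented for the example):

    -- Dictionary-passing code makes "unknown" calls like this one: f only
    -- arrives at run time, so each use ends in a jump to a location the
    -- processor can't foresee.
    applyUnknown :: (Int -> Int) -> [Int] -> [Int]
    applyUnknown f = map f

    -- After inlining/specialisation the target is fixed at compile time
    -- (here, plain arithmetic), so the pipeline can keep running ahead.
    applyKnown :: [Int] -> [Int]
    applyKnown = map (* 2)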