[Haskell-cafe] Haskell vs. Erlang: The scheduler
joelr1 at gmail.com
Tue Jan 3 10:13:17 EST 2006
On Jan 3, 2006, at 2:30 PM, Simon Marlow wrote:
> The default context switch interval in GHC is 0.02 seconds,
> measured in CPU time by default. GHC's scheduler is strictly
> round-robin, so with 100 threads in the system it can be 2
> seconds between a thread being descheduled and scheduled again.
> I measured the time taken to unpickle those large 50k packets as
> 0.3 seconds on my amd64 box (program compiled *without*
> optimisation), so the thread can get descheduled twice while
> unpickling a large packet, giving a >4s delay with 100 threads.
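One mitigation, assuming the unpickling work can be broken into chunks, is to sprinkle `yield` calls through the long-running computation so other threads get a turn before a full time slice elapses. A sketch (`sumChunks` and `step` are illustrative names, not anything from the post):

```haskell
import Control.Concurrent (yield)
import Control.Monad (foldM, when)

-- Fold over a packet's chunks, calling 'yield' every 'step' chunks so a
-- long-running computation cooperates with GHC's round-robin scheduler
-- instead of holding the CPU for its whole time slice.
sumChunks :: Int -> [Int] -> IO Int
sumChunks step xs = foldM go 0 (zip [1 ..] xs)
  where
    go acc (i, x) = do
      when (i `mod` step == 0) yield   -- voluntarily deschedule
      let acc' = acc + x
      acc' `seq` return acc'           -- keep the accumulator strict
```

The trade-off is the same one the post complains about: the programmer, not the runtime, has to decide where the yield points go.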
Is it impractical, then, to implement this type of app in Haskell?
Based on the nature of Haskell scheduling I would be inclined to say
yes. I'm including information on the Erlang scheduler below.
I think it's possible to emulate the workings of the Erlang scheduler
in Haskell by using delimited continuations, à la the Zipper File
Server/OS. A single delimited continuation (a "request" in Zipper FS
parlance?) would be the scheduling unit, and a programmer could then
tune the "scheduler" to their heart's content.
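A minimal sketch of that idea, with a hand-rolled resumption type standing in for delimited continuations (all names here are illustrative, not the Zipper FS's actual API): each task either finishes or hands back a continuation, and the "scheduler" decides which resumption to run next.

```haskell
import Data.IORef

-- A task either finishes or yields the rest of itself as a continuation.
data Task = Done | Step (IO Task)

-- Run tasks round-robin until all are done; a tuned scheduler could
-- instead pick the next resumption by priority, reduction count, etc.
schedule :: [Task] -> IO ()
schedule []            = return ()
schedule (Done   : ts) = schedule ts
schedule (Step k : ts) = do t <- k
                            schedule (ts ++ [t])

-- Example task: record its name n times in a shared log, yielding back
-- to the scheduler after every step.
counter :: IORef [String] -> String -> Int -> Task
counter _   _    0 = Done
counter out name n = Step $ do
  modifyIORef out (++ [name])
  return (counter out name (n - 1))
```

Running `schedule` over two `counter` tasks interleaves their steps, which is exactly the round-robin behaviour a custom scheduler could then deviate from.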
Apart from putting a lot of burden on the programmer, this becomes
quite troublesome when multiple sockets or file descriptors are
involved. There's no easy way to plug into the select facility of
the Haskell runtime to receive notifications when input is available.
You will notice the Zipper FS spending quite a few lines of code
rolling its own select facility.
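For contrast, GHC's runtime does multiplex blocking I/O transparently when each unit of work is its own forkIO thread: a blocked read parks only that thread in the runtime's select machinery, it just isn't exposed as a hook a single-threaded continuation scheduler can drive. A sketch, using `createPipe` from System.Process (`echoOnce` is an illustrative name):

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import System.IO (hFlush, hGetLine, hPutStrLn)
import System.Process (createPipe)

-- A blocked 'hGetLine' in a forked thread parks only that thread in the
-- runtime's I/O manager; the rest of the program keeps running.
echoOnce :: IO String
echoOnce = do
  (readEnd, writeEnd) <- createPipe
  result <- newEmptyMVar
  _ <- forkIO $ hGetLine readEnd >>= putMVar result  -- blocks harmlessly
  hPutStrLn writeEnd "hello"
  hFlush writeEnd
  takeMVar result
```

This is why thread-per-connection is the idiomatic GHC shape, whereas the Zipper FS, driving everything from one continuation, has to re-implement the select loop itself.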
The Erlang scheduler is based on a reduction count, where one
reduction is roughly equivalent to a function call. See
http://www.erlang.org/ml-archive/erlang-questions/200104/msg00072.html
for more detail.
There's also this helpful bit of information:
erlang:bump_reductions(Reductions) -> void()
Types Reductions = int()
This implementation-dependent function increments the reduction
counter for the calling process. In the Beam emulator, the
reduction counter is normally incremented by one for each
function and BIF call, and a context switch is forced when
the counter reaches 1000.
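Reduction counting of this sort could be emulated on the Haskell side, if the program is willing to thread a counter through its hot paths. A sketch (illustrative names; GHC has no built-in reduction counter, so the thread must bump and yield by hand):

```haskell
import Control.Concurrent (yield)
import Control.Monad (when)
import Data.IORef

-- Erlang-style reduction counting, emulated: each unit of work bumps a
-- counter, and the thread voluntarily yields once it has spent a budget
-- of 1000 "reductions", mirroring the Beam emulator's behaviour.
newtype Reds = Reds (IORef Int)

newReds :: IO Reds
newReds = fmap Reds (newIORef 0)

bumpReductions :: Reds -> Int -> IO ()
bumpReductions (Reds r) n = do
  c <- atomicModifyIORef' r (\old -> (old + n, old + n))
  when (c >= 1000) $ do
    writeIORef r 0   -- budget spent: reset and give up the CPU
    yield
```

Unlike Erlang's per-process counter maintained by the VM, this only works where the programmer remembers to call `bumpReductions`.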
Regarding the issue of why a logger process in Erlang does not get
overwhelmed, this is the reply I got from Raimo Niskanen (Erlang team):
There is a small fix in the scheduler for the standard
producer/consumer problem: A process that sends to a
receiver having a large receive queue gets punished
with a large reduction (number of function calls)
count for the send operation, and will therefore
get smaller scheduling slots.
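The fix Raimo describes can be sketched in Haskell with an explicit cost model (the queue representation and penalty numbers below are illustrative, not BEAM's actual ones): sending to a mailbox with a long queue charges the sender extra reductions, so a producer that outruns its consumer exhausts its budget sooner and yields more often.

```haskell
import Control.Concurrent (yield)
import Control.Monad (when)
import Data.IORef

-- Sending charges the sender a base cost of 1 reduction, plus a penalty
-- that grows with the receiver's queue length; at 1000 reductions the
-- sender yields, giving the consumer a chance to drain its mailbox.
sendTo :: IORef Int   -- sender's reduction counter
       -> IORef [a]   -- receiver's mailbox
       -> a
       -> IO ()
sendTo senderReds mailbox x = do
  len <- atomicModifyIORef' mailbox (\q -> (x : q, length q))
  let cost = 1 + len `div` 10          -- penalty grows with queue length
  c <- atomicModifyIORef' senderReds (\old -> (old + cost, old + cost))
  when (c >= 1000) $ do
    writeIORef senderReds 0
    yield
```

(The O(n) `length` on every send is fine for a sketch; a real mailbox would track its size.) The effect is the same back-pressure Raimo describes: the longer the receive queue, the smaller the sender's effective scheduling slots.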