Concurrency (was: Re: [GUI] Re: events & callbacks)
Wed, 12 Mar 2003 23:58:25 +0100
Thanks for your concrete proposal!
> 1) Callbacks will be executed synchronously. No other events will be handled until a callback returns.
> *) This maps directly to the execution model of all backend toolkits that I know
> *) You can easily get concurrency by calling forkIO, but going the other way is difficult.
Good plan. Just to make things clearer: this means that all callbacks are executed sequentially.
> 2) Calls to the CGA can be made at any time, from any thread. The implementation is responsible for assuring that the serialization requirements of the backend toolkit are met.
Unfortunately, it is not clear what kind of thread you mean: an OS thread or a Haskell thread?
I therefore propose another rule:
2d) All Haskell code is run in a single OS thread (the "GUI thread"). Calls to the CGA
can be made at any time from any Haskell thread (all of which run within the GUI thread).
- We can run many Haskell threads within one OS thread. Furthermore, the interaction
  between Haskell threads is well understood and lightweight.
- The serialization requirements of the backend toolkit are automatically met, since
  all these Haskell threads run in the same OS thread.
- All current Haskell systems can easily support this model -- in contrast, to my
  knowledge no Haskell system (except the LVM in Helium :-) supports multiple OS threads.
- We can implement all advanced event models on top of this model using concurrency.
- We most surely do *not* want foreign C calls to run in a different OS thread,
  and we don't want the "threadsafe" keyword here. (I am opposed to such an extension --
  complexity without reason; run your OS threads from C yourself!)
About implementing concurrency:
As you explained, a callback can use forkIO to achieve, for example, concurrent
callbacks. However, a more "real world" reason for having concurrency is that
a callback may need to do a lot of processing. As callbacks are run
sequentially, the entire application will not react to events while the
callback does its work. Therefore, a callback that needs to do a lot of processing
should spawn a Haskell thread to do the processing (a worker thread) and
return as soon as possible in order to stay reactive.
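The worker-thread pattern could be sketched like this. Note that "setLabel" and "onButtonClick" are made-up names standing in for a real CGA call and a real toolkit callback; the MVar in main is only there so the demonstration can wait for the worker before exiting:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- "setLabel" stands in for a real CGA call that displays the result.
setLabel :: String -> IO ()
setLabel = putStrLn

-- The callback forks a worker thread and returns at once,
-- so the event loop stays free to handle further events.
onButtonClick :: MVar () -> IO ()
onButtonClick done = do
  _ <- forkIO $ do
         let result = sum [1 .. 100000 :: Integer]  -- the heavy work
         result `seq` setLabel (show result)
         putMVar done ()
  return ()  -- return immediately: the application stays reactive

main :: IO ()
main = do
  done <- newEmptyMVar
  onButtonClick done   -- in real code the toolkit would invoke this
  takeMVar done        -- demo only: wait for the worker's output
```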
Now, the big issue here is how to keep those Haskell threads running!
At the primitive level, there is an event loop that waits for events to happen,
pops them off the queue, calls a Haskell callback, waits for its result, and
loops again. When a callback that has just forked another Haskell thread returns,
the event loop will make an OS call to wait for the next event. Since we are running
in a single OS thread, this "disables" the Haskell scheduler, and the worker thread
will not run at all until another event happens!
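The primitive event loop could be sketched as follows -- "waitForEvent" and "dispatch" are made-up names for the backend's blocking OS call and the callback lookup, so this is illustration only, not real CGA code:

```haskell
-- Sketch of the primitive event loop described above.
eventLoop :: IO ()
eventLoop = do
  ev <- waitForEvent  -- blocking OS call: the Haskell scheduler stops here,
                      -- so forked worker threads make no progress
  dispatch ev         -- run the Haskell callback, wait for it to return
  eventLoop
```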
The question is how to solve this. Alastair Reid used the "yield" primitive to wait until
all Haskell threads were done (or had called yield themselves), but that would not solve our
particular problem, as we want to stay reactive *and* have a lot of processing to do.
Now, I think that we should use "idle time" events to solve this. It fits nicely in
our simple concurrency model and also keeps the whole GUI desktop reactive (do we remember
the old WinHugs bug? :-). Basically, when an "idle" event happens, we call a callback
that just sleeps for, say, 0.1 seconds, giving the Haskell scheduler time to run other
Haskell threads. Of course, it would be good to have some hooks to determine whether
there are any Haskell threads waiting to be scheduled, so we can avoid calling that handler
when it is not necessary. We don't necessarily have to use real "idle" events: if there are Haskell
threads waiting to be scheduled, we can also peek at the event queue, and when no new events
have arrived, call the idle callback. When all Haskell threads are done, we wait again.
Note that this schedule also gives more priority to new events, keeping the whole application
reactive.
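A minimal sketch of such an idle handler, assuming the toolkit lets us register a callback for idle events (that wiring is hypothetical here). The small main demonstrates the effect: a forked worker thread only makes progress while the GUI thread is inside the sleeping handler:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Data.IORef

-- Proposed idle handler: sleep briefly so the Haskell scheduler
-- can run any worker threads waiting to be scheduled.
onIdle :: IO ()
onIdle = threadDelay 100000  -- 0.1 seconds

-- Demonstration: the worker runs while main sleeps in onIdle,
-- just as it would while the GUI thread handles an idle event.
main :: IO ()
main = do
  ref <- newIORef False
  _ <- forkIO (writeIORef ref True)  -- a worker thread waiting to run
  onIdle                             -- idle handler yields time to it
  readIORef ref >>= print
```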