[HOpenGL] HOpenGL and --enable-threaded-rts
Thu, 20 Jun 2002 11:50:49 +0200
Simon Peyton-Jones wrote:
> GHC's approach to threading
> The current GHC model has the basic assumption that OS threads
> are inter-changeable.
As others have already mentioned, this assumption doesn't hold once
you leave the Happy World of Haskell(tm): locks/mutexes/... and
thread-local variables can be hidden under the hood of many libraries
out there, for which there will *never* be a Haskell replacement. I
consider the ability to write bindings to those libraries extremely
important; otherwise the usefulness of Haskell is rather limited.
Remember that Haskell has been described as a "good glue" for sticking
together other building blocks (i.e. foreign APIs)...
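To make the thread-local problem concrete (a hedged sketch, not actual
HOpenGL code): GLX keeps the "current context" in per-OS-thread
storage, so even a trivial binding like the one below gives a
different answer depending on which OS thread the RTS happens to run
it on. glXGetCurrentContext is a real GLX entry point; the Haskell
import itself is only illustrative.

  {-# LANGUAGE ForeignFunctionInterface #-}
  import Foreign.Ptr (Ptr)

  -- GLX stores the current rendering context in thread-local storage,
  -- so the result depends on *which* OS thread executes this call.
  foreign import ccall unsafe "glXGetCurrentContext"
    glXGetCurrentContext :: IO (Ptr ())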
> [...] There is a good reason for this: the current OS thread may block
> in some I/O call (getChar, say), and we don't want that to block
> all Haskell threads.
Hmmm, I don't consider this a good reason. IMHO there are basically
two ways of doing threading in an RTS:
* The "Green Threads" approach, where no OS threading is used, but
everything is done by hand, 'select'-ing carefully internally, etc.
If the user itself calls out to C land and gets blocked, the whole
RTS is blocked. Not nice, but the price to pay for getting flyweight
* A 1-1 mapping from the threads of the language in question to OS
  threads. If there is a danger of blocking, the user simply forks off
  a new thread to do the work, and everything is fine (see the sketch
  after this list). The Java world lives quite happily with this.
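Here is a minimal sketch of the second model from the user's point of
view. I'm assuming a forkOS-style primitive that maps the new Haskell
thread onto its own OS thread; GHC's Control.Concurrent.forkOS (with
the threaded RTS, i.e. built with -threaded) behaves like this.

  import Control.Concurrent (forkOS)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  main :: IO ()
  main = do
    done <- newEmptyMVar
    _ <- forkOS $ do    -- this Haskell thread gets its own OS thread
           c <- getChar -- may block in the OS, but only this thread
           putMVar done c
    putStrLn "still responsive while stdin blocks..."
    c <- takeMVar done
    putStrLn ("got: " ++ [c])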
Anything in between is doomed to fail in the general case, I fear,
for the reasons given in this (mail- :-) thread. It is OK if no
external code is ever called, but in that case you could simply use
"Green Threads" in the first place.
Hacking GLUT to work with GHC's current approach makes no sense; it
would only solve a single instance of a general problem. Furthermore,
it would be extremely system-dependent and would cost a *vast* amount of
performance: Switching OpenGL contexts can require a round-trip to a
remote server, can trigger swapping a few MB of textures into your
favourite graphics card, etc. OK, these are worst-case scenarios, but
doing even the best case just for drawing a few vertices would probably
be ridiculously slow.
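To see why, here is roughly what "making it work" would have to look
like under interchangeable OS threads (a hypothetical sketch:
glXMakeCurrent is a real GLX call, but the Haskell shim and the
wrapper withBoundContext are made up for illustration). Every
rendering action would have to re-bind the context first, because the
previous GL call may have run on a different OS thread:

  {-# LANGUAGE ForeignFunctionInterface #-}
  import Foreign.Ptr (Ptr)
  import Foreign.C.Types (CInt, CULong)

  -- Illustrative binding; real code would use proper Display /
  -- GLXDrawable / GLXContext types.
  foreign import ccall unsafe "glXMakeCurrent"
    glXMakeCurrent :: Ptr () -> CULong -> Ptr () -> IO CInt

  -- Re-binding the context around *every* GL action: glXMakeCurrent
  -- may mean a round-trip to the X server plus re-validating a pile
  -- of GPU state, which is exactly the cost described above.
  withBoundContext :: Ptr () -> CULong -> Ptr () -> IO a -> IO a
  withBoundContext dpy drawable ctx act = do
    _ok <- glXMakeCurrent dpy drawable ctx
    act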