[HOpenGL] HOpenGL and --enable-threaded-rts
Wolfgang Thaller
wolfgang.thaller@gmx.net
Thu, 20 Jun 2002 00:54:25 +0200
Simon Peyton-Jones wrote:
> That might be ok provided there was a single context "in play".
> In effect, your proposal amounts to keeping a process-global context,
> and zapping it into the Current Haskell Worker Thread whenever
> it grabs a capability, correct?
Correct.
A little more work (Haskell-thread-local storage and one or two more RTS
callbacks) would be required to use OpenGL from multiple Haskell threads
at the same time.
However, my actual proposal has nothing to do with OpenGL as such. Rather,
I'd say "extend the RTS to allow library bindings to do this or similar
things". I'm not proposing to put anything OpenGL-specific into the RTS.
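
Just to make the shape of that concrete, here is a rough sketch of what
the Haskell side of such a binding-neutral hook could look like. All the
names are invented; registerWorkerThreadInitHook is exactly the RTS entry
point that doesn't exist yet, and the registered action would be run by an
OS worker thread right after it grabs a capability, before it starts
running Haskell code:

    import Foreign (FunPtr)

    -- Hypothetical RTS extension: the registered action is run by an OS
    -- worker thread immediately after it acquires the capability.
    foreign import ccall "registerWorkerThreadInitHook"
        registerWorkerThreadInitHook :: FunPtr (IO ()) -> IO ()

    -- Standard FFI "wrapper" stub for turning a Haskell action into a
    -- C function pointer that the RTS could call back.
    foreign import ccall "wrapper"
        mkHook :: IO () -> IO (FunPtr (IO ()))

    -- A library binding (HOpenGL or anything else) installs its
    -- per-OS-thread setup code like this:
    installWorkerThreadInitHook :: IO () -> IO ()
    installWorkerThreadInitHook act =
        mkHook act >>= registerWorkerThreadInitHook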
> But if there's a single global context, why does it need to be
> thread-local?
? I don't follow you. OpenGL can deal with multiple contexts, and it
keeps the current context in thread-local state, so it can even use
several contexts at the same time, one per OS thread. We can't prevent
OpenGL from using thread-local state. The "first version" of the proposed
solution would only allow a single global context at a time for HOpenGL,
and copy that to the thread-local state of every OS thread that executes
Haskell code.
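
On top of a hook like the one sketched above, that single-global-context
version would be little more than this (again only a sketch: GLContext and
makeContextCurrent are stand-ins for the binding's real context type and
its wglMakeCurrent / glXMakeCurrent / aglSetCurrentContext wrapper):

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- Stand-ins for the binding's context type and its "make current"
    -- call (really a platform call behind the FFI).
    data GLContext = GLContext
    makeContextCurrent :: Maybe GLContext -> IO ()
    makeContextCurrent _ = return ()

    -- The one process-global context that HOpenGL knows about.
    {-# NOINLINE currentHOpenGLContext #-}
    currentHOpenGLContext :: IORef (Maybe GLContext)
    currentHOpenGLContext = unsafePerformIO (newIORef Nothing)

    -- Called by the program when it creates or selects a context.
    setHOpenGLContext :: Maybe GLContext -> IO ()
    setHOpenGLContext c =
        writeIORef currentHOpenGLContext c >> makeContextCurrent c

    -- Run once at initialisation, using installWorkerThreadInitHook from
    -- the sketch above: whenever a worker thread grabs the capability,
    -- copy the global context into that OS thread's OpenGL state.
    initHOpenGLHook :: IO ()
    initHOpenGLHook =
        installWorkerThreadInitHook
            (readIORef currentHOpenGLContext >>= makeContextCurrent)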
If we need to use OpenGL (or similar APIs) from several Haskell threads
simultaneously, we will need a place to store a Haskell-thread-local
context (ideally, this would be a general mechanism for
Haskell-thread-local storage) and an additional callback from schedule().
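
The storage half can at least be approximated in user land today by keying
a table on ThreadId, roughly as below. But that only gives us a place to
put the context, not the schedule() callback that would actually switch it
when a different Haskell thread starts running:

    import Control.Concurrent (ThreadId, myThreadId)
    import Control.Concurrent.MVar

    -- A small "Haskell-thread-local variable": a table keyed by ThreadId.
    newtype ThreadLocal a = ThreadLocal (MVar [(ThreadId, a)])

    newThreadLocal :: IO (ThreadLocal a)
    newThreadLocal = fmap ThreadLocal (newMVar [])

    setThreadLocal :: ThreadLocal a -> a -> IO ()
    setThreadLocal (ThreadLocal m) x = do
        tid <- myThreadId
        modifyMVar_ m (return . ((tid, x) :) . filter ((/= tid) . fst))

    -- Nothing means the current thread never set a value.
    getThreadLocal :: ThreadLocal a -> IO (Maybe a)
    getThreadLocal (ThreadLocal m) = do
        tid <- myThreadId
        fmap (lookup tid) (readMVar m)

    -- Caveat: entries are never removed when a thread dies, and lookups
    -- are linear; a real mechanism in the RTS would have neither problem.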
> Another possibility that Simon and I have discussed is to provide a
> sort of forkIO that says "create a Haskell thread permanently bound
> to an OS thread". Much more expensive than normal forkIO. More like
> having a permanent secretary at your beck and call, rather than the
> services of a typist from the typing pool.
So C calls made from such a Haskell thread would run inside its OS
thread, callbacks would arrive in that OS thread, and the RTS would
continue to run while that OS thread is blocked?
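
The nearest thing we can write today is a "secretary" done by hand: one
forkIO'd thread that other threads send their OpenGL actions to over a
channel, roughly as below. Of course, without RTS support that thread
still isn't pinned to a single OS thread, which is exactly the missing
piece:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan
    import Control.Concurrent.MVar
    import Control.Monad (join)

    -- One thread that performs actions on behalf of all the others.
    newtype Secretary = Secretary (Chan (IO ()))

    -- With the proposed primitive, this forkIO would become the
    -- "permanently bound to one OS thread" variant.
    newSecretary :: IO Secretary
    newSecretary = do
        ch <- newChan
        let loop = join (readChan ch) >> loop
        _ <- forkIO loop
        return (Secretary ch)

    -- Run an action in the secretary thread and wait for its result.
    -- (Exception handling is omitted to keep the sketch short.)
    onSecretary :: Secretary -> IO a -> IO a
    onSecretary (Secretary ch) act = do
        result <- newEmptyMVar
        writeChan ch (act >>= putMVar result)
        takeMVar result

A program would then funnel all its OpenGL calls through onSecretary.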
How much overhead would that create? I wouldn't like much additional
overhead for my OpenGL programs :-(. Would this require an OS mutex lock
for every heap allocation, or is there a better way?
Cheers,
Wolfgang