[Haskell-cafe] Consecutive FFI calls

Takenobu Tani takenobu.hs at gmail.com
Sat May 30 03:10:57 UTC 2015

Hi David,

I'm not 100% sure, especially about the semantics, and I'm still studying
this myself. I don't have an answer, but I'll describe the related matters
in order to organize my thoughts.

First, some definitions:
  "memory barrier" ... an ordering-control mechanism between memory accesses.
  "bound thread"   ... an association mechanism between FFI calls and a
specific OS thread.

  "memory barrier"  ... depends on the CPU hardware architecture (x86, ARM, ...).
  "OS-level thread" ... depends on the OS (Linux, Windows, ...).
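As a small illustration of the bound/unbound distinction (my own sketch, not
from the original thread): in GHC, a thread created by forkOS is bound to an
OS thread, while a forkIO thread is not. Compile with ghc -threaded:

```haskell
import Control.Concurrent
    (forkIO, forkOS, isCurrentThreadBound, newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  v1 <- newEmptyMVar
  v2 <- newEmptyMVar
  -- forkIO creates an unbound (green) thread; forkOS creates a bound one.
  _ <- forkIO (isCurrentThreadBound >>= putMVar v1)
  _ <- forkOS (isCurrentThreadBound >>= putMVar v2)
  b1 <- takeMVar v1
  b2 <- takeMVar v2
  print (b1, b2)  -- (False,True) when compiled with -threaded
```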

There are four cases for an FFI call [1]:
  (1) safe FFI call   on an unbound thread (forkIO)
  (2) unsafe FFI call on an unbound thread (forkIO)
  (3) safe FFI call   on a bound thread (main, forkOS)
  (4) unsafe FFI call on a bound thread (main, forkOS)
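For concreteness, the four cases could be spelled out like this (my own
sketch, using the C library's cos as a stand-in for an arbitrary foreign
function; compile with ghc -threaded):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Control.Concurrent (forkIO, forkOS, newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)
import Foreign.C.Types (CDouble)

-- The safe/unsafe distinction is made at the import declaration.
foreign import ccall safe   "math.h cos" cosSafe   :: CDouble -> CDouble
foreign import ccall unsafe "math.h cos" cosUnsafe :: CDouble -> CDouble

-- Run an action on a freshly forked thread and wait for its result.
onThread :: (IO () -> IO tid) -> IO a -> IO a
onThread fork act = do
  v <- newEmptyMVar
  _ <- fork (act >>= putMVar v)
  takeMVar v

main :: IO ()
main = do
  r1 <- onThread forkIO (evaluate (cosSafe   0))  -- (1) safe,   unbound
  r2 <- onThread forkIO (evaluate (cosUnsafe 0))  -- (2) unsafe, unbound
  r3 <- onThread forkOS (evaluate (cosSafe   0))  -- (3) safe,   bound
  r4 <- onThread forkOS (evaluate (cosUnsafe 0))  -- (4) unsafe, bound
  print (r1 + r2 + r3 + r4)
```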

I think (2) and (4) may provide no memory-ordering guarantee, because the
calls might be inlined and optimized.

If (1) and (3) always go through the pthread API (or a memory-barrier API)
for the thread/HEC context switch, then they are guaranteed.
But I don't think that would cover the full set of cases.

I feel that ordering issues are very difficult.
I think ordering issues can be solved safely with explicit notation,
like an explicit memory barrier, STM, ...
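For example (my own sketch, not from the thread), an explicit
synchronization point, here STM, though an MVar would serve equally well,
establishes the ordering by construction, independent of which OS thread
runs each side:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
    (atomically, check, newTVarIO, readTVar, writeTVar)

main :: IO ()
main = do
  done <- newTVarIO False
  box  <- newTVarIO (0 :: Int)
  _ <- forkIO $ atomically $ do
         writeTVar box 42     -- "result" of some earlier work
         writeTVar done True  -- publish it
  -- Blocking on the flag is the explicit synchronization point: once the
  -- writer's transaction commits, the read of box is ordered after the
  -- write, whatever OS threads are involved.
  v <- atomically $ do
         ok <- readTVar done
         check ok             -- retry until the writer has committed
         readTVar box
  print v
```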

If I have misunderstood, please teach me :-)



2015-05-29 1:24 GMT+09:00 David Turner <dct25-561bs at mythic-beasts.com>:

> Hi,
> If I make a sequence of FFI calls (on a single Haskell thread) but
> which end up being called from different OS threads, is there any kind
> of ordering guarantee given? More specifically, is there a full memory
> barrier at the point where a Haskell thread migrates to a new OS
> thread?
> Many thanks,
> David
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe