<div dir="ltr"><div class="gmail_quote"><div dir="ltr">Am Do., 29. Nov. 2018 um 21:54 Uhr schrieb Ian Denhardt <<a href="mailto:ian@zenhack.net">ian@zenhack.net</a>>:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">[...] I've been able to reproduce your results, and if I change the last line<br>
to:<br>
>
>     forever $ do
>         yield
>         atomically $ writeTVar x True
>
> ..it always prints -- so the culprit is definitely a failure to yield.

But even that is not enough from a specification POV: after the yield, the
same thread might be scheduled immediately again, and again, ... Or do we
have some specification of the scheduler? I don't think so, but perhaps I'm
wrong in this respect. If we have one, it has to state explicitly that
scheduling is fair, in the sense that every runnable thread actually runs
after a finite amount of time; otherwise you are in undefined land again...

The question of where scheduling can actually happen is a totally different
issue, and I don't know of a specification here, either. In GHC this seems
to be tied to allocations, which is a bit brittle and unintuitive (see the
sketch below). Guaranteeing that you hit a scheduling point after a finite
amount of time is easy in principle, e.g. by checking on every backwards
branch and on every function entry, but this has an associated cost, so we
have a tradeoff here.

In general, I wouldn't worry too much about the semantics of unsynchronized
threads; if you rely on them somehow, you will sooner or later enter a world
of pain (a blocking alternative is sketched below). Add e.g. thread
priorities to the mix, and you will suffer even more, experiencing wonderful
things like priority inversion etc. :-P
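
To make the allocation point concrete, here is a small sketch of my own (not
code from this thread): with optimizations, a loop like the one below
typically compiles to non-allocating code, and since GHC preempts a thread
only at heap-check points, i.e. when it allocates, such a loop can hold on
to its capability indefinitely. The -fno-omit-yields flag exists precisely
to insert yield points into non-allocating code, at some cost in code size.

    {-# LANGUAGE BangPatterns #-}

    -- Hypothetical example, not from the program under discussion: with -O
    -- this typically becomes a tight, non-allocating loop. GHC preempts a
    -- thread only at heap checks, so nothing else gets to run on this
    -- capability while it spins, unless the module is built with
    -- -fno-omit-yields to force yield points into non-allocating code.
    spinUntil :: Int -> Int -> Int
    spinUntil !limit !n
        | n >= limit = n
        | otherwise  = spinUntil limit (n + 1)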
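
And as a concrete illustration of the last point, here is a minimal sketch
(names and structure are mine, not the original program) of the synchronized
alternative: instead of polling the flag and hoping the scheduler is fair,
the waiting thread blocks inside STM; check calls retry while the flag is
False, so the RTS parks the thread and wakes it exactly when the TVar is
written.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Concurrent.STM
        (TVar, atomically, check, newTVarIO, readTVar, writeTVar)

    -- Block until the flag becomes True; no busy-waiting, no yield needed.
    waitForFlag :: TVar Bool -> IO ()
    waitForFlag flag = atomically (readTVar flag >>= check)

    main :: IO ()
    main = do
        flag <- newTVarIO False
        done <- newEmptyMVar
        _ <- forkIO $ do
            waitForFlag flag
            putStrLn "saw True"   -- does not depend on scheduler fairness
            putMVar done ()
        atomically (writeTVar flag True)
        takeMVar done             -- wait for the forked thread to finish

Whether the writing thread from the quoted snippet ever yields then no
longer matters for correctness.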