Temporarily pinning a thread to a capability
fryguybob at gmail.com
Wed Oct 28 20:49:54 UTC 2015
I figured as much :D
On Wed, Oct 28, 2015 at 4:43 PM, Edward Kmett <ekmett at gmail.com> wrote:
> If the number of capabilities is increased or decreased while everything I
> have here is running, I'm going to have to blow up the world anyway.
> Basically I'll need to rely on an invariant that setNumCapabilities is
> called before you spin up these Par-like computations.
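The invariant above can be made concrete with a small sketch: take the capability count once, up front, and size all per-capability state from that snapshot. This is a hedged illustration, not GHC API; newPerCapArray is a hypothetical helper.

```haskell
import Control.Concurrent (getNumCapabilities)
import Data.Array.IO (IOArray, newArray)

-- Snapshot the capability count exactly once, before any Par-like
-- computations start. The invariant discussed above: if
-- setNumCapabilities is called after this, the array bound goes
-- stale (the "blow up the world" case).
newPerCapArray :: a -> IO (Int, IOArray Int a)
newPerCapArray initial = do
  n   <- getNumCapabilities        -- snapshot taken once
  arr <- newArray (0, n - 1) initial
  return (n, arr)
```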
> On Wed, Oct 28, 2015 at 4:28 PM, Ryan Yates <fryguybob at gmail.com> wrote:
>> A thread with TSO_LOCKED can still be migrated if the number of capabilities
>> changes.
>> On Tue, Oct 27, 2015 at 11:35 PM, Edward Kmett <ekmett at gmail.com> wrote:
>>> Would anything go wrong with a thread id if I pinned it to a capability
>>> after the fact?
>>> I could in theory do so just by setting
>>> tso->flags |= TSO_LOCKED
>>> and then disabling this later by restoring the TSO flags.
>>> I can't think of anything but I figured folks here might be able to
>>> think of invariants I don't know about.
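There is no public API for setting TSO_LOCKED after the fact, but GHC's existing creation-time pinning shows the same machinery at work: forkOn creates a thread with TSO_LOCKED already set, and threadCapability reports both the capability and whether the thread is locked to it. A minimal sketch (pinnedDemo is a hypothetical name):

```haskell
import Control.Concurrent

-- Fork a thread pinned to capability 0 and inspect its pinning.
-- threadCapability returns (capability, locked); for a forkOn
-- thread the locked flag corresponds to TSO_LOCKED being set.
pinnedDemo :: IO (Int, Bool)
pinnedDemo = do
  done <- newEmptyMVar
  tid  <- forkOn 0 (takeMVar done)   -- pinned for its whole lifetime
  caps <- threadCapability tid
  putMVar done ()                    -- let the pinned thread finish
  return caps
```

What the thread proposes is the missing piece: toggling that flag on an already-running thread, rather than committing at creation time.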
>>> Usage scenario:
>>> I have a number of things where I can't afford a map from a ThreadId# or
>>> even its internal id to a per-thread value for bounded wait-free data
>>> structures.
>>> On the other hand, I can afford one entry per capability and to make a
>>> handful of primitives that can't be preempted, letting me use normal
>>> writes, not even a CAS, to update the capability-local variable in a
>>> primitive (indexing into an array of size based on the number of
>>> capabilities). This lets me bound the number of "helpers" to consider by
>>> the capability count rather than by the potentially much larger and more
>>> variable number of live threads.
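The per-capability scheme above might be sketched like this in ordinary Haskell. bumpSlot is a hypothetical helper: in the real scheme the read-modify-write would run inside a non-preemptible primitive, so a plain write (no CAS) is safe; here we only show the capability-indexed access.

```haskell
import Control.Concurrent (myThreadId, threadCapability)
import Data.Array.IO (IOArray, readArray, writeArray)

-- Update the slot belonging to the current capability with plain
-- reads and writes. Safe only if the thread cannot migrate between
-- the read and the write -- the property the non-preemptible
-- primitive (or temporary pinning) is meant to guarantee.
bumpSlot :: IOArray Int Int -> IO Int
bumpSlot arr = do
  tid      <- myThreadId
  (cap, _) <- threadCapability tid   -- which slot is "ours"
  v        <- readArray arr cap
  writeArray arr cap (v + 1)         -- plain write, no CAS
  return cap
```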
>>> However, I may need to access this stuff in "pure" code that wasn't
>>> written with my needs in mind, so I need to at least temporarily pin the
>>> current thread to a fixed capability for the duration when that happens.
>>> This isn't perfect: it won't react nicely to a growing number of
>>> capabilities in the future, but it does handle a lot of things I can't do
>>> at all today without downgrading to lock-free and starving a lot of
>>> computations, so I'm hoping the answer is "it all works". =)