Async exceptions and delimited continuations

Alexis King lexi.lambda at gmail.com
Wed Jul 1 19:12:58 UTC 2020


Hi all,

As some of you are likely aware, I have an open GHC proposal[1] to add
native support for delimited continuations to the RTS. I also have an
incomplete implementation,[2] and the only major remaining obstacle
concerns async exceptions. The issue is subtle, so I’ve tried to
provide all the necessary context in this email. If you’re already
familiar with the ideas at play, you can skip the context about how
delimited continuations work.

For those unfamiliar, delimited continuations allow capturing slices
of the call stack and restoring them later. For example, the program

    do y <- prompt $ do x <- control0 $ \k -> k (pure 10)
                        pure (x + 5)
       print y

will print 15. To understand what’s happening operationally, we can
imagine an abstract call stack made up of continuation frames:

    ┌──────────┐
    │  ● + 5   │    redex: control0 $ \k -> k (pure 10)
    ├──────────┤
    │ prompt ● │
    ├──────────┤
    │ print ●  │
    ├──────────┤
    │   ...    │
    ├──────────┤

Here, each ● represents the “hole” where the evaluated result of the
redex will be returned. `control0` moves all the frames between the
top of the stack and the first `prompt` into the heap and returns a
reference to them, so after a single reduction step, we have

    ┌──────────┐
    │ print ●  │    redex: k1 (pure 10)
    ├──────────┤    heap:  k1 = ┌──────────┐
    │   ...    │                │  ● + 5   │
    ├──────────┤                └──────────┘

When a continuation is applied, its stored stack frames are copied
back onto the top of the current stack, and the argument becomes the
new redex:

    ┌──────────┐
    │  ● + 5   │    redex: pure 10
    ├──────────┤
    │ print ●  │
    ├──────────┤
    │   ...    │
    ├──────────┤

Now it should hopefully be clear how we end up printing 15.
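The reduction steps above can be made concrete with a tiny, pure model of single-prompt delimited control written in continuation-passing style. To be clear, the `DC` type and these `prompt`/`control0` definitions are illustrative inventions for this sketch only — the proposal implements prompt# and control0# as RTS primops that slice the real call stack — but the toy reproduces the trace:

```haskell
-- A toy, pure model of single-prompt delimited control in
-- continuation-passing style. DC, prompt, and control0 here are
-- illustrative only; the real proposal adds prompt# and control0#
-- as RTS primops operating on the actual call stack.
newtype DC r a = DC { runDC :: (a -> r) -> r }

instance Functor (DC r) where
  fmap f (DC m) = DC $ \k -> m (k . f)

instance Applicative (DC r) where
  pure x = DC ($ x)
  DC mf <*> DC mx = DC $ \k -> mf (\f -> mx (k . f))

instance Monad (DC r) where
  DC m >>= f = DC $ \k -> m (\x -> runDC (f x) k)

-- prompt delimits the stack: its body runs against the identity
-- continuation, so a control0 inside can capture at most this much.
prompt :: DC a a -> DC r a
prompt (DC m) = pure (m id)

-- control0 captures the frames up to the prompt (here, the current
-- continuation k), packages them as a function on computations, and
-- aborts to the prompt with its callback's result.
control0 :: ((DC r a -> DC r r) -> DC r r) -> DC r a
control0 f = DC $ \k ->
  runDC (f (\m -> DC $ \k' -> k' (runDC m k))) id

-- The example from the start of this email:
example :: Int
example = runDC (prompt (do x <- control0 $ \k -> k (pure 10)
                            pure (x + 5)))
                id
```

In this model, applying the captured k runs its argument and feeds the result into the saved frames, mirroring the redex `pure 10` flowing into the `● + 5` frame, and `example` evaluates to 15.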

With that context, consider the following expression:

    prompt $ mask_ $ do x <- control0 $ \k -> k (pure 10)
                        f x

The call stack at the point of control0 looks very similar in this
program, but now we have a use of `mask_` in there as well:

    ┌──────────┐
    │   f ●    │    redex: control0 $ \k -> k (pure 10)
    ├──────────┤    exns:  masked
    │ mask_ ●  │
    ├──────────┤
    │ prompt ● │
    ├──────────┤
    │   ...    │
    ├──────────┤

When capturing the continuation, we’ll unwind the stack the same way
we did before. Because we’re popping mask_ off the stack, we’ll unmask
async exceptions:

    ┌──────────┐    redex: k1 (pure 10)
    │   ...    │    exns:  not masked
    ├──────────┤    heap:  k1 = ┌──────────┐
                                │   f ●    │
                                ├──────────┤
                                │ mask_ ●  │
                                └──────────┘

Now when we apply `k1`, we’ll copy the `mask_` frame back onto the
stack, and we must re-mask async exceptions. Otherwise, exceptions
will not be masked during the call to `f`, which would be wrong.

Why is this a problem? The RTS applies an optimization: if you call
mask_ (actually maskAsyncExceptions#) while exceptions are already
masked, it doesn’t push a new stack frame at all. So, for example, if
you write

    mask_ $ mask_ $ foo bar

you’ll end up with only one mask_ frame on the call stack, not two.
This tiny optimization actually allows not one but two savings:

    1. The immediate benefit is that we save a stack frame.

    2. The hidden benefit is that we never need to push the old
       exception masking state onto the stack.

       If we had multiple mask_ frames on the stack simultaneously, we
       wouldn’t know what to do when returning: should we unmask them,
       or should they stay masked? We’d need to push that information
       onto the stack in the same way we must push a return address
       when calling a function.

       By skipping these redundant stack frames, we can always be
       certain the right thing to do on return is to unmask async
       exceptions. No need to store anything else.

(This explanation is a slight simplification, especially once
maskUninterruptible comes into play, but it’s close enough.)
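From the Haskell level this optimization is unobservable, which is what makes it sound in the absence of continuations: nested mask_ calls report the same masking state as a single one, and returning from the outermost call unmasks. A small check against the real Control.Exception API (assuming it runs on a thread that starts out Unmasked, as the main thread does by default):

```haskell
import Control.Exception (MaskingState (..), getMaskingState, mask_)

-- Nested mask_ calls are indistinguishable from a single one, so the
-- RTS can share one stack frame between them.
nestedState :: IO MaskingState
nestedState = mask_ (mask_ getMaskingState)

-- Returning from the outermost mask_ unmasks again, so "unmask on
-- return" is always the right interpretation of that single frame.
outerState :: IO MaskingState
outerState = do
  _ <- mask_ (mask_ (pure ()))
  getMaskingState
```

Here `nestedState` yields MaskedInterruptible and `outerState` yields Unmasked, with no trace of how many mask_ frames were actually pushed.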

Now you may see the looming problem: this strategy completely breaks
down in the presence of delimited continuations. With delimited
continuations, we might have a program like this:

    mask_ $ prompt $ mask_ $ ...

If we capture a continuation up to this prompt, we expect the inner
mask_ frame to be captured along with it. But that won’t happen if we
never pushed a mask_ frame at all due to the aforementioned
optimization.

So now I can finally state my question: what is the right solution for
this? I see three obvious possible ways forward:

    1. Keep track of whether or not we’re inside a prompt and skip the
       optimization in that case.

    2. Keep some bookkeeping that tracks the modifications to the
       async exception masking state since the most recently pushed
       prompt.

    3. Just don’t bother with the optimization at all.

Option 3 certainly seems the most appealing from a simplicity point of
view, and I suspect the optimization doesn’t matter much in practice.
Why? Because the real `mask` implementation from Control.Exception
already avoids re-masking exceptions if they’re masked! (And that’s
okay, because prompt# and control0# are not intended to be used
directly in IO, so code that uses them can provide its own version of
`mask`.) However, it is admittedly still possible for the restore
action that `mask` passes to its argument to create redundant calls,
since the check is performed only in `mask` itself, not in restore.
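To illustrate the check: `mask` reads the current masking state on entry, and when exceptions are already masked it neither re-masks nor hands its argument a restore action that would unmask — restore puts back the state `mask` itself observed. This much is observable with the real API:

```haskell
import Control.Exception (MaskingState (..), getMaskingState, mask, mask_)

-- mask consults getMaskingState up front: under an enclosing mask_,
-- the restore action it passes keeps exceptions masked rather than
-- unmasking them, because it restores the already-masked state that
-- mask saw when it was entered.
restoredState :: IO MaskingState
restoredState = mask_ (mask (\restore -> restore getMaskingState))
```

Here `restoredState` yields MaskedInterruptible: the inner `mask` never tried to mask again, so there was nothing for restore to undo.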

Is eliminating this optimization an acceptable compromise? Or is there
reason to believe this is important for performance of real programs?

Thanks,
Alexis

[1]: https://github.com/ghc-proposals/ghc-proposals/pull/313
[2]: https://gitlab.haskell.org/lexi.lambda/ghc/-/commits/first-class-continuations


More information about the ghc-devs mailing list