[Haskell-cafe] Surprising interaction between @-patterns and nested patterns
Albert Y. C. Lai
trebla at vex.net
Mon Jan 2 23:49:07 UTC 2023
I don't have an opinion; or maybe I do: my hunch is that any alternative
would be differently but equally tricky.
But the x0 example is equivalent to simply:
let x0 = (undefined, ()) in x0 `seq` ()
which may be surprisingly unsurprising, because, as far as I can see, it has
almost nothing to do with the x1 example.
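Here seq only evaluates its first argument to weak head normal form, and a
pair constructor is already in WHNF, so the undefined component is never
inspected; a quick GHCi check of my own:

(undefined, ()) `seq` ()
()
snd (undefined, ()) `seq` ()
()

The extra pattern binding changes nothing, because it binds no variables
that anything could ever demand.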
In order to make "pattern = x0" actually relevant, you may want:
let { x0 = (undefined, ()) ; ((_, _), y) = x0 } in y `seq` ()
-> undefined
The semantic explanation involves the same considerations as those of
the x1 example. And you can hold the same opinion, for or against.
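Concretely, the Report turns the binding ((_, _), y) = x0 into (in effect) a
selector for y, so demanding y demands the whole pattern match. A
hand-desugared sketch of that (my own code, not the Report's literal
translation; Control.Exception is used only so the program terminates
cleanly):

import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  let x0 = (undefined :: ((), ()), ())
      -- selector corresponding to the binding ((_, _), y) = x0
      y = case x0 of ((_, _), y') -> y'
  -- Forcing y forces the match, which forces the first component of x0
  -- to WHNF in order to match (_, _), and that component is undefined.
  r <- try (evaluate (y `seq` ())) :: IO (Either SomeException ())
  print r  -- expect a Left wrapping the Prelude.undefined error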
On 2023-01-02 18:00, Tom Ellis wrote:
> Consider the following:
>
> let x0 = (undefined, ()) ; ((_, _), _) = x0 in x0 `seq` ()
> ()
>
> but then combining the patterns into an @-pattern, which really feels
> like it shouldn't change semantics:
>
> let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()
> -> undefined
>
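To reproduce this as a standalone program rather than at a GHCi prompt, here
is a small sketch of my own (the name x1Example is mine; the exception is
caught only so the program can report it and exit cleanly):

import Control.Exception (SomeException, evaluate, try)

-- Tom's x1 example, verbatim apart from a type annotation on undefined.
x1Example :: ()
x1Example = let x1@((_, _), _) = (undefined :: ((), ()), ()) in x1 `seq` ()

main :: IO ()
main = print =<< (try (evaluate x1Example) :: IO (Either SomeException ()))
-- prints a Left wrapping the Prelude.undefined error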
> Does this strike anyone else as odd? The reason for this behaviour is
> that
>
> let x@p = rhs in body
>
> is translated to
>
> case rhs of ~(x@p) -> body
>
> according to section 3.12 of The Report[1] which is then translated to
>
> (\x -> body) (case rhs of x@p -> x)
>
> if p binds no variables, according to section 3.17.3 of The Report[2],
> and then to
>
> (\x -> body) (case rhs of p -> (\x -> x) rhs)
>
> which is equivalent to
>
> (\x -> body) (case rhs of p -> rhs)
>
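Instantiating that final form with x = x1, p = ((_, _), _),
rhs = (undefined, ()) and body = x1 `seq` (), a quick GHCi-style check of my
own gives:

(\x1 -> x1 `seq` ()) (case (undefined, ()) of ((_, _), _) -> (undefined, ()))
-> undefined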
> Putting it all together
>
> let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()
>
> desugars as
>
> (\x1 -> x1 `seq` ())
> (let v = (undefined, ()) in case v of ((_, _), _) -> v)
>
> which evaluates as
>
> let v = (undefined, ())
> in case v of ((_, _), _) -> ()
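The divergence lives entirely in that final match; on its own (again
GHCi-style, my session):

case (undefined, ()) of ((_, _), _) -> ()
-> undefined

since matching (_, _) against the first component forces it to WHNF.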
>
>
> This seems very odd to me. Why should combining two non-diverging
> patterns into an @-pattern cause divergence? Specifically, the
> translation from
>
> let x@p = rhs in body
>
> to
>
> case rhs of ~(x@p) -> body
>
> seems highly dubious. It comes from the more general rule translating
>
> let p = rhs in body
>
> to
>
> case rhs of ~p -> body
>
> but it seems to interact badly with @-patterns. It seems like the
> rule for @-patterns should be something like
>
> let x@p = rhs in body
>
> translates to
>
> case rhs of ~(x@(~p)) -> body
>
> Then the original expression containing x1 would not diverge.
>
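For what it's worth, writing the proposed right-hand side out by hand for
the x1 instance does give the non-diverging answer (my transcription;
standard Haskell allows an as-pattern whose inner pattern is lazy):

case (undefined, ()) of ~(x1@(~((_, _), _))) -> x1 `seq` ()
()

Both the outer and the inner pattern are irrefutable, so x1 is simply bound
to the pair, and seq only needs its WHNF.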
> Does anyone else have a view on this matter?
>
> Tom
>
>
> [1] https://www.haskell.org/onlinereport/exps.html#sect3.12
>
> [2] https://www.haskell.org/onlinereport/exps.html#sect3.17.3