Bang patterns

Ben Millwood haskell at benmachine.co.uk
Sun Feb 3 23:34:04 CET 2013


On Fri, Feb 01, 2013 at 05:10:42PM +0000, Ian Lynagh wrote:
>
>The first is suggested by "A bang only really has an effect if it
>precedes a variable or wild-card pattern" on
>http://hackage.haskell.org/trac/haskell-prime/wiki/BangPatterns
>
>We could therefore alter the lexical syntax to make strict things into
>lexemes, for example
>    reservedid -> ...
>                | _
>                | !_
>    strictvarid -> ! varid
>etc. This would mean that "f !x" is 2 lexemes, and "f ! x" 3 lexemes,
>with the former defining the function 'f' and the latter defining the
>operator '!'.
>
>This has 3 downsides:
>
>* It would require also accepting the more radical proposal of making
>  let strict, as it would no longer be possible to write
>    let ![x,y] = undefined in ()

We really can't make let strict, in my view: its laziness is sort of 
fundamental. I don't see why the given example necessitates it, though: 
just use case-of in that scenario. In fact, I've always been a bit 
uncomfortable with bang patterns in let expressions. I feel I should 
be able to delete an unused let-binding without affecting my program at 
all, and bang patterns in let make that no longer true.
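To make that concrete, here's a minimal sketch (GHC with BangPatterns; the function names are mine) showing that a banged but otherwise unused let-binding changes the meaning of the expression:

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

import Control.Exception (SomeException, evaluate, try)

-- Without a bang, the unused binding is inert: deleting it
-- would change nothing.
lazyUnused :: ()
lazyUnused = let x = undefined in ()

-- With a bang, the "unused" binding forces undefined before the
-- body, so the whole expression is bottom -- deleting the binding
-- would change the program.
strictUnused :: ()
strictUnused = let !x = undefined in ()

main :: IO ()
main = do
  ok  <- try (evaluate lazyUnused)   :: IO (Either SomeException ())
  bad <- try (evaluate strictUnused) :: IO (Either SomeException ())
  putStrLn (either (const "threw") (const "fine") ok)   -- fine
  putStrLn (either (const "threw") (const "fine") bad)  -- threw
```

The case-of alternative mentioned above gets the same strictness without touching let: `case undefined of [x, y] -> ()` is already strict in the scrutinee, because case matching forces it.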

>* It would mean that "f !x" and "f !(x)" are different. Probably not a
>  big issue in practice.

Yeah, I'm not upset about this. We'd be thinking of the ! as a decorator 
in the same way that, say, infix-backticks are: we don't expect `(foo)` 
to work.

>* It may interact badly with other future extensions. For example,
>    {-# LANGUAGE ViewPatterns #-}
>    f !(view -> x) = ()
>  should arguably be strict in x.
>  (you might also argue that it should define the operator '!'.
>  Currently, in ghc, it defines an 'f' that is lazy in x, which IMO is a
>  bug).

Hmm. Not quite strict in x, I'd say. The right way to make that strict 
in x is:

      f (view -> !x) = ()

What you probably want there is to evaluate the argument passed to the 
view /before/ matching on the result. But I imagine that in most cases 
the view function will be strict, so the difference will be immaterial.
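The distinction shows up when the view function is lazy. A sketch (constView is a made-up view function of mine): `(view -> !x)` forces the view's *result*, not the argument the view was applied to.

```haskell
{-# LANGUAGE BangPatterns, ViewPatterns #-}
module Main where

import Control.Exception (SomeException, evaluate, try)

-- A deliberately lazy view function that ignores its argument.
constView :: Int -> Int
constView _ = 0

-- The bang forces the result of the view (here 0), never the
-- argument itself, so matching on undefined succeeds.
g :: Int -> ()
g (constView -> !x) = ()

main :: IO ()
main = do
  r <- try (evaluate (g undefined)) :: IO (Either SomeException ())
  putStrLn (either (const "threw") (const "fine") r)  -- fine
```

A hypothetical `!(constView -> x)` that was strict in the argument would instead diverge on `undefined` here, which is exactly the difference being discussed.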

I agree that GHC's current behaviour looks like a bug.

>The second is to parse '!' differently depending on whether or not it is
>followed by a space. In the absence of a decision to require infix
>operators to be surrounded by spaces, I think this is a bad idea: Tricky
>to specify, and to understand.

Hmm. It's a shame, because in real code operator definitions are almost 
invariably surrounded by spaces, even when uses of the operator aren't. 
But I agree in general.

>The third is to parse '!' in patterns in the same way that '~' is parsed
>in patterns, except that (!) would be accepted as binding the operator
>'!'. This means that "f ! x" defines f.

This is roughly how it's done at present, right? It's annoyingly 
inconsistent, but fairly low-impact.

>So my proposal would be to go with option 3. What do you think? And did
>I miss any better options?

You missed the option of going the way of ~ and making ! an illegal name 
for an operator. Obvious drawbacks, probably not a good idea, but it 
would be the most consistent solution, so I wouldn't dismiss it 
immediately.

(If we do come up with a way that doesn't involve making ! illegal, 
maybe we should consider allowing ~ as an operator as well!)

There's another alternative entirely, that I haven't really thought 
about: introduce bang patterns on types instead of on variables. I 
realise this is less flexible, but! it covers many common cases, it 
avoids the infix confusion altogether, it echoes the existing usage for 
strict datatypes, and it makes the strictness of a function 
(potentially) part of its type signature, which would be handy in 
documentation. I realise this is a bit late in the game to be including 
this option, but if it doesn't get thought about now, it never will.
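For reference, the existing strict-datatype usage being echoed needs no extension at all: a bang on a constructor field makes that field strict, forced as soon as the constructed value is evaluated to WHNF. (The type-signature form floated above, something like `f :: !Int -> ()`, is not valid syntax today; this only shows the precedent.)

```haskell
module Main where

import Control.Exception (SomeException, evaluate, try)

-- Standard Haskell: the first field is strict, the second lazy.
data Pair = Pair !Int Int

main :: IO ()
main = do
  bad <- try (evaluate (Pair undefined 0)) :: IO (Either SomeException Pair)
  ok  <- try (evaluate (Pair 0 undefined)) :: IO (Either SomeException Pair)
  putStrLn (either (const "threw") (const "fine") bad)  -- strict field forced
  putStrLn (either (const "threw") (const "fine") ok)   -- lazy field untouched
```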

Anyway, in light of my above comments, I think I like the first option 
the best (so bang patterns only apply to variables, let doesn't become 
strict).

regards,
Ben



More information about the Haskell-prime mailing list