[Haskell-beginners] tokenizing a string and parsing the string
Christian Maeder
Christian.Maeder at dfki.de
Wed Oct 12 11:24:03 CEST 2011
Despite the term "scannerless" parsing, you'll typically still have
"lexical rules" for the tokens (identifiers, numbers, separators, etc.)
alongside the normal parser/grammar rules.
I recommend using Parsec as the scanner, too (and avoiding a separate
tokenizer). I don't think speed matters that much.
The point is that after every token, the whitespace and comments up to
the start of the next token must be consumed from the input. (I call
this "skipping"; Daan Leijen has a "lexeme" parser for this in his
Parsec.Token module.)
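As a minimal sketch of this style using Text.Parsec.Token (the language
definition and parser names below are just examples): each token parser
consumes the whitespace and comments that follow it, so leading
whitespace only has to be skipped once at the top level.

    import Text.Parsec
    import Text.Parsec.String (Parser)
    import qualified Text.Parsec.Token as P
    import Text.Parsec.Language (emptyDef)

    -- build the lexeme parsers from a (here: empty) language definition
    lexer :: P.TokenParser ()
    lexer = P.makeTokenParser emptyDef

    -- each of these skips trailing whitespace/comments itself
    identifier :: Parser String
    identifier = P.identifier lexer

    integer :: Parser Integer
    integer = P.integer lexer

    -- top level: skip leading whitespace once, then require eof
    program :: Parser [String]
    program = P.whiteSpace lexer *> many identifier <* eof

For example, parse program "" "  foo bar baz  " yields
Right ["foo","bar","baz"].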
HTH Christian
On 12.10.2011 10:39, Erik de Castro Lopo wrote:
> Stephen Tetley wrote:
>
>> In combinator parsing with, say, Parsec, you don't tokenize the input
>> before parsing - this is an instance of so-called "scannerless" parsing
>> (a slight exaggeration for the sake of simplicity).
>>
>> If you need to tokenize and then parse, that is the model followed by
>> Alex and Happy.
>
> It is actually possible to use Alex to split the input into tokens and
> then use Parsec to parse the stream of tokens. Token parsers tend
> to run a bit faster than Char parsers.
>
> Erik
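For reference, a rough sketch of what parsing a pre-built token stream
with Parsec looks like, using Parsec's tokenPrim primitive (the Tok type
and its constructors below are hypothetical stand-ins for whatever an
Alex lexer would actually produce):

    import Text.Parsec
    import Text.Parsec.Pos (incSourceColumn)

    -- hypothetical token type; a real Alex lexer defines its own
    data Tok = TIdent String | TNumber Integer | TPlus
      deriving (Eq, Show)

    type TokParser a = Parsec [Tok] () a

    -- match a single token via tokenPrim
    satisfyTok :: (Tok -> Maybe a) -> TokParser a
    satisfyTok test = tokenPrim show nextPos test
      where nextPos pos _ _ = incSourceColumn pos 1

    ident :: TokParser String
    ident = satisfyTok (\t -> case t of TIdent s -> Just s
                                        _        -> Nothing)

Then parse ident "" [TIdent "x"] yields Right "x". For accurate error
messages, the real token type would carry the source position recorded
by Alex and nextPos would return it instead of just bumping the column.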