[Haskell-cafe] Parsing workflow
ozgurakgun at gmail.com
Sun Oct 31 11:50:59 EDT 2010
I don't know if you've already used it, but Parsec includes a lexer of sorts
through the Text.Parsec.Token module (makeTokenParser).
You can start by having a look at the documentation for that module.
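A minimal sketch of what that looks like (the `ints` example grammar is made up for illustration): `makeTokenParser` takes a language definition and returns a record of token-level parsers that consume trailing whitespace for you, which is exactly what avoids the "unexpected \"\n\"" errors from hand-rolled lexing.

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)
import qualified Text.Parsec.Token as Tok
import Text.Parsec.Language (emptyDef)

-- Build a token parser from the empty language definition.
lexer :: Tok.TokenParser ()
lexer = Tok.makeTokenParser emptyDef

-- Each of these is a "lexeme" parser: it eats trailing whitespace,
-- including newlines, so the grammar never has to mention them.
integer :: Parser Integer
integer = Tok.integer lexer

symbol :: String -> Parser String
symbol = Tok.symbol lexer

-- Parse a comma-separated list of integers, e.g. "1, 2,\n 3".
ints :: Parser [Integer]
ints = Tok.whiteSpace lexer *> integer `sepBy` symbol ","

main :: IO ()
main = print (parse ints "" "1, 2,\n 3")  -- prints: Right [1,2,3]
```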
On 31 October 2010 15:11, Nils Schweinsberg <ml at n-sch.de> wrote:
> I'm having a really hard time writing a correct parser for a small
> language I've developed. I have been trying to write a parser using Parsec,
> but I always get a lot of error messages like "unexpected "\n", expected ...,
> new-line or ..." when trying to run the parser. Then I read about the Happy
> parser generator and really liked its separation of lexing the text into
> tokens and parsing the actual logic behind those tokens. Since I couldn't get
> familiar with the lexer generator Alex, I gave up on the Alex/Happy approach
> and went back to Parsec. But with that lexer->parser idea on my mind, my
> parser currently looks a lot like a lexer. So I came up with the idea of
> combining Parsec and Happy: generate a list of tokens for my text via Parsec
> and analyse it with Happy.
> My questions would be:
> - Is this a valid approach?
> - What is your workflow on parsing complex data structures?
> - What about performance? Since my project is going to be an interpreted
> language, parsing performance might be interesting as well. I've read that
> Happy is in general faster than Parsec, but what if I combine both of them
> as described above? I guess that parsing a flat list of tokens without any
> nested parser structures would be pretty fast?
> - Do you have any other ideas on how to improve my parser?
> - What are your general thoughts on happy vs. parsec?
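The two-phase idea above is certainly valid, and it can even be sketched entirely in Parsec, since Parsec can parse a list of tokens directly (Happy would slot into the same place, consuming the same token list). The token type `Tok` and the tiny expression grammar here are hypothetical, just to show the shape of the approach:

```haskell
import Text.Parsec

-- A made-up token type for illustration.
data Tok = TNum Integer | TPlus deriving (Eq, Show)

-- Phase 1: lex a String into a token list with Parsec.
lexTok :: Parsec String () [Tok]
lexTok = spaces *> many (tok <* spaces)
  where
    tok = TNum . read <$> many1 digit
      <|> TPlus <$ char '+'

-- Phase 2: parse the token list. tokenPrim lifts a predicate on
-- tokens into a parser over [Tok] (position tracking omitted here).
satisfyTok :: (Tok -> Maybe a) -> Parsec [Tok] () a
satisfyTok f = tokenPrim show (\pos _ _ -> pos) f

num :: Parsec [Tok] () Integer
num = satisfyTok (\t -> case t of TNum n -> Just n; _ -> Nothing)

plus :: Parsec [Tok] () ()
plus = satisfyTok (\t -> case t of TPlus -> Just (); _ -> Nothing)

-- Left-associative sum: 1 + 2 + 3.
expr :: Parsec [Tok] () Integer
expr = foldl (+) <$> num <*> many (plus *> num)

main :: IO ()
main = case parse lexTok "" "1 + 2 + 3" of
  Left err -> print err
  Right ts -> print (parse expr "" ts)  -- prints: Right 6
```

Because phase 2 only walks a flat `[Tok]`, its cost is dominated by phase 1's character-level work, so the combined pipeline should not be meaningfully slower than a single-pass parser.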
> Thanks for any replies,
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org