[Haskell-cafe] Tokenizing and Parsec
uhollerbach at gmail.com
Mon Jan 11 22:20:51 EST 2010
Hi, Günther, you could write functions that pattern-match on various
sequences of tokens in a list; for an example of that, have a look at
the file Evaluator.hs in my scheme interpreter haskeem. Or you could
build up more complex data structures entirely within Parsec, and for
that I would point you at the file Parser.hs in my accounting program
umm. Both are on hackage. Undoubtedly there are many more and probably
better examples, but I think these are at least a start...
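To give a flavor of the first approach, here is a minimal sketch of
pattern-matching on a token list, using your Token type from below; the
Expr type and the ZE/OPS "grammar" are invented purely for illustration,
not anything from haskeem:

```haskell
-- Sketch: consume a token list by direct pattern matching.
-- Token is the type from Günther's message; Expr and the rules
-- below (ZE OPS ZE makes a Pair, a lone ZE stands alone) are
-- made-up examples.
data Token = ZE String
           | OPS
           | OPSShort String
           | OPSLong String
           | Other String
           | ZECd String
  deriving (Show, Eq)

data Expr = Pair String String
          | Lone String
  deriving (Show, Eq)

parseToks :: [Token] -> [Expr]
parseToks (ZE a : OPS : ZE b : rest) = Pair a b : parseToks rest
parseToks (ZE a : rest)              = Lone a   : parseToks rest
parseToks (_ : rest)                 = parseToks rest   -- skip anything else
parseToks []                         = []
```

For example, `parseToks [ZE "x", OPS, ZE "y", Other "!", ZE "z"]`
gives `[Pair "x" "y", Lone "z"]`. The clauses are tried top to bottom,
so the longer ZE-OPS-ZE match wins over the lone-ZE match.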
On 1/11/10, Günther Schmidt <gue.schmidt at web.de> wrote:
> Hi all,
> I've used Parsec to "tokenize" data from a text file. It was actually
> quite easy; everything is correctly identified.
> So now I have a list/stream of self-defined "Tokens", and now I'm stuck.
> Because now I need to write my own parsec-token-parsers to parse this
> token stream in a context-sensitive way.
> Uhm, how do I do that then?
> a Token is something like:
> data Token = ZE String
> | OPS
> | OPSShort String
> | OPSLong String
> | Other String
> | ZECd String
> deriving Show
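For the second approach, the usual trick for running Parsec over a
custom token type is `tokenPrim`: you give it a show function, a
position updater, and a function that returns `Just` for the tokens you
accept. A minimal sketch over the Token type above; the `satisfyTok`
helper and the `zePair` example grammar are my own illustration, not
anything from your data:

```haskell
import Text.Parsec

data Token = ZE String
           | OPS
           | OPSShort String
           | OPSLong String
           | Other String
           | ZECd String
  deriving (Show, Eq)

type TokParser a = Parsec [Token] () a

-- Accept a single token for which f returns Just; tokenPrim needs a
-- way to show the token in error messages and to advance the source
-- position (here we just bump the column).
satisfyTok :: (Token -> Maybe a) -> TokParser a
satisfyTok f = tokenPrim show (\pos _ _ -> incSourceColumn pos 1) f

zeTok :: TokParser String
zeTok = satisfyTok (\t -> case t of ZE s -> Just s; _ -> Nothing)

opsTok :: TokParser ()
opsTok = satisfyTok (\t -> case t of OPS -> Just (); _ -> Nothing)

-- Example grammar: a ZE, then an OPS, then another ZE.
zePair :: TokParser (String, String)
zePair = do
  a <- zeTok
  opsTok
  b <- zeTok
  return (a, b)

main :: IO ()
main = print (parse zePair "tokens" [ZE "x", OPS, ZE "y"])
-- prints: Right ("x","y")
```

Once you have primitives like `zeTok`, all the ordinary Parsec
combinators (`many`, `<|>`, `try`, and friends) work on the token
stream exactly as they do on characters.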
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org