[Haskell-cafe] Tokenizing and Parsec
Khudyakov Alexey
alexey.skladnoy at gmail.com
Tue Jan 12 13:33:31 EST 2010
On 12 January 2010 03:35:10, Günther Schmidt wrote:
> Hi all,
>
> I've used Parsec to "tokenize" data from a text file. It was actually
> quite easy; everything is correctly identified.
>
> So now I have a list/stream of self-defined "Tokens", and I'm stuck,
> because now I need to write my own parsec-token-parsers to parse this
> token stream in a context-sensitive way.
>
> Uhm, how do I do that then?
>
That's pretty easy, actually. You can use the function `token' to define your
own primitive parsers. It's defined in Text.Parsec.Prim, if I remember
correctly. You may also want to attach source-position information to your
lexemes.
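For reference, the type of `token' in parsec 3 is roughly:

> token :: Stream s Identity t
>       => (t -> String)     -- pretty-print a token for error messages
>       -> (t -> SourcePos)  -- extract the token's source position
>       -> (t -> Maybe a)    -- Just = match (with result), Nothing = no match
>       -> Parsec s u a

Here is some code to illustrate the usage: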
>
> import Text.Parsec
> import Text.Parsec.Pos (SourcePos)
>
> -- | Language lexeme
> data LexemData = Ident String
>                | Number Double
>                | StringLit String
>                | None
>                | EOL
>                deriving (Show,Eq)
>
> -- | Lexeme tagged with its position in the source
> data Lexem = Lexem { lexemPos  :: SourcePos
>                    , lexemData :: LexemData
>                    }
>              deriving Show
>
> type ParserLex = Parsec [Lexem] ()
>
> -- Primitive parser which accepts a single Number token: `token'
> -- takes a pretty-printer for error messages, a position accessor,
> -- and a matching function returning Maybe.
> num :: ParserLex Double
> num = token (show . lexemData) lexemPos (comp . lexemData)
>   where
>     comp (Number x) = Just x
>     comp _          = Nothing
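And a minimal sketch of running such a token parser over an already-lexed
stream. The hand-built token list and the `initialPos' dummy position are
just for illustration; in practice the stream would come from your
tokenizing pass:

> import Text.Parsec.Pos (initialPos)
>
> -- Run the `num' parser over a hand-built token stream. `runParser'
> -- works for any Stream instance, including a plain list of tokens.
> testNum :: Either ParseError Double
> testNum = runParser num () "<tokens>" [Lexem pos (Number 3.14)]
>   where pos = initialPos "<tokens>"

Evaluating testNum should give Right 3.14.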