List:       haskell-cafe
Subject:    Re: [Haskell-cafe] Tokenizing and Parsec
From:       Khudyakov Alexey <alexey.skladnoy () gmail ! com>
Date:       2010-01-12 18:33:31
Message-ID: 201001122133.31582.alexey.skladnoy () gmail ! com

In a message of 12 January 2010 03:35:10, Günther Schmidt wrote:
> Hi all,
> 
> I've used Parsec to "tokenize" data from a text file. It was actually
> quite easy, everything is correctly identified.
> 
> So now I have a list/stream of self defined "Tokens" and now I'm stuck.
> Because now I need to write my own parsec-token-parsers to parse this
> token stream in a context-sensitive way.
> 
> Uhm, how do I that then?
> 
That's pretty easy, actually. You can use the function `token' to define your 
own primitive parsers. It's defined in Text.Parsec.Prim, if I remember 
correctly.

You may also want to attach source-position information to your lexemes. 
Here is some code to illustrate the usage:

> 
> import Text.Parsec
> import Text.Parsec.Pos (SourcePos)
> 
> -- | Language lexem
> data LexemData = Ident String
>                | Number Double
>                | StringLit String
>                | None
>                | EOL
>                  deriving (Show,Eq)
> 
> data Lexem = Lexem { lexemPos  :: SourcePos
>                    , lexemData :: LexemData
>                    }
>              deriving Show
> 
> type ParserLex = Parsec [Lexem] ()
> 
> num :: ParserLex Double
> num = token (show . lexemData) lexemPos (comp . lexemData)
>     where
>       comp (Number x) = Just x
>       comp _          = Nothing
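To show how this fits together, here is a minimal self-contained sketch that
runs `num' over a hand-built lexeme stream. The `main', the sample positions
built with `newPos', and the module wrapper are my illustrative assumptions,
not part of the original code; in practice the stream would come from your
first tokenizing pass:

```haskell
module Main where

import Text.Parsec
import Text.Parsec.Pos (SourcePos, newPos)

-- Same definitions as above
data LexemData = Ident String
               | Number Double
               | StringLit String
               | None
               | EOL
                 deriving (Show, Eq)

data Lexem = Lexem { lexemPos  :: SourcePos
                   , lexemData :: LexemData
                   }
             deriving Show

type ParserLex = Parsec [Lexem] ()

num :: ParserLex Double
num = token (show . lexemData) lexemPos (comp . lexemData)
    where
      comp (Number x) = Just x
      comp _          = Nothing

main :: IO ()
main = do
  -- Build a tiny token stream by hand; positions are made up for the demo.
  let pos c = newPos "input" 1 c
      toks  = [Lexem (pos 1) (Number 3.14), Lexem (pos 6) EOL]
  -- `parse' works on any Stream instance, so [Lexem] is fine as input.
  print (parse num "input" toks)
```

The key point is that `parse' is not tied to `String': any list of tokens is
already an instance of `Stream', so the second-stage parser composes with
`<|>', `many', and the rest of the combinators just like a character-level
one.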
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
