Directory: /proc/thread-self/root/proc/self/root/lib64/python2.7/lib2to3/pgen2/
Current file: //proc/thread-self/root/proc/self/root/lib64/python2.7/lib2to3/pgen2/conv.pyc
# Reconstructed source for the compiled conv.pyc dump above. The
# recoverable content is the module docstring, the imports, the
# Converter class docstring, and the start of its run() method; the
# dump is truncated partway through run()'s docstring. Method and
# import names not literally visible in the dump follow the matching
# stdlib file, lib2to3/pgen2/conv.py, from Python 2.7.

"""Convert graminit.[ch] spit out by pgen to Python code.

Pgen is the Python parser generator.  It is useful to quickly create a
parser from a grammar file in Python's grammar notation.  But I don't
want my parsers to be written in C (yet), so I'm translating the
parsing tables to Python data structures and writing a Python parse
engine.

Note that the token numbers are constants determined by the standard
Python tokenizer.  The standard token module defines these numbers and
their names (the names are not used much).  The token numbers are
hardcoded into the Python tokenizer and into pgen.  A Python
implementation of the Python tokenizer is also available, in the
standard tokenize module.

On the other hand, symbol numbers (representing the grammar's
non-terminals) are assigned by pgen based on the actual grammar input.

Note: this module is pretty much obsolete; the pgen module generates
equivalent grammar tables directly from the Grammar.txt input file
without having to invoke the Python pgen C program.

"""

import re

from pgen2 import grammar, token


class Converter(grammar.Grammar):
    """Grammar subclass that reads classic pgen output files.

    The run() method reads the tables as produced by the pgen parser
    generator, typically contained in two C files, graminit.h and
    graminit.c.  The other methods are for internal use only.

    See the base class for more documentation.

    """

    def run(self, graminit_h, graminit_c):
        # (run()'s docstring is truncated in the dump.)
        self.parse_graminit_h(graminit_h)
        self.parse_graminit_c(graminit_c)
        self.finish_off()

# (Remainder of the file -- parse_graminit_h(), parse_graminit_c(),
# and finish_off() -- is not present in the dump.)
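The docstring's distinction between fixed token numbers and grammar-assigned symbol numbers can be illustrated with the standard `token` module, which it references. This is a small sketch using the Python 3 stdlib `token` module (the mechanism it shows -- terminals below `NT_OFFSET`, non-terminals above it -- is the same numbering convention pgen relies on; it does not exercise `Converter` itself):

```python
import token

# Terminal token numbers are fixed constants defined by the token
# module; tok_name maps a number back to its name.
print(token.tok_name[token.NAME])    # "NAME"
print(token.tok_name[token.NUMBER])  # "NUMBER"

# Non-terminal (symbol) numbers are assigned starting above NT_OFFSET,
# so terminals and grammar symbols can never collide.
print(token.ISTERMINAL(token.NAME))           # True
print(token.ISNONTERMINAL(token.NAME))        # False
print(token.ISNONTERMINAL(token.NT_OFFSET + 1))  # True
```

Because the terminal numbers are hardcoded constants while symbol numbers depend on the grammar input, a tool like `Converter` only needs to translate the symbol tables; the terminal side is already shared with the tokenizer.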