Directory : /proc/thread-self/root/lib64/python3.6/lib2to3/pgen2/__pycache__/
Current File : //proc/thread-self/root/lib64/python3.6/lib2to3/pgen2/__pycache__/conv.cpython-36.pyc
"""Convert graminit.[ch] spit out by pgen to Python code.

Pgen is the Python parser generator.  It is useful to quickly create a
parser from a grammar file in Python's grammar notation.  But I don't
want my parsers to be written in C (yet), so I'm translating the
parsing tables to Python data structures and writing a Python parse
engine.

Note that the token numbers are constants determined by the standard
Python tokenizer.  The standard token module defines these numbers and
their names (the names are not used much).  The token numbers are
hardcoded into the Python tokenizer and into pgen.  A Python
implementation of the Python tokenizer is also available, in the
standard tokenize module.

On the other hand, symbol numbers (representing the grammar's
non-terminals) are assigned by pgen based on the actual grammar input.

Note: this module is pretty much obsolete; the pgen module generates
equivalent grammar tables directly from the Grammar.txt input file
without having to invoke the Python pgen C program.

"""

# Python imports
import re

# Local imports
from pgen2 import grammar, token


class Converter(grammar.Grammar):
    """Grammar subclass that reads classic pgen output files.

    The run() method reads the tables as produced by the pgen parser
    generator, typically contained in two C files, graminit.h and
    graminit.c.  The other methods are for internal use only.

    See the base class for more documentation.

    """

    def run(self, graminit_h, graminit_c):
        """Load the grammar tables from the text files written by pgen."""
        self.parse_graminit_h(graminit_h)
        self.parse_graminit_c(graminit_c)
        self.finish_off()

    def parse_graminit_h(self, filename):
        """Parse the .h file written by pgen.  (Internal)"""
        try:
            f = open(filename)
        except OSError as err:
            print("Can't open %s: %s" % (filename, err))
            return False
        # Two-way mapping between the grammar's non-terminal names and
        # the numbers pgen assigned to them.
        self.symbol2number = {}
        self.number2symbol = {}
        lineno = 0
        for line in f:
            lineno += 1
            mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
            if not mo and line.strip():
                print("%s(%s): can't parse %s" % (filename, lineno,
                                                  line.strip()))
            else:
                symbol, number = mo.groups()
                number = int(number)
                assert symbol not in self.symbol2number
                assert number not in self.number2symbol
                self.symbol2number[symbol] = number
                self.number2symbol[number] = symbol
        return True
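For illustration, below is a minimal standalone sketch (not part of the module above) of the kind of #define lines that parse_graminit_h() consumes and the two lookup tables it builds from them. The sample symbol names and numbers are plausible examples modeled on Python's grammar, not values taken from the dump above.

    # Illustrative sketch only: mimics what Converter.parse_graminit_h() does
    # with a few example "#define" lines from a hypothetical graminit.h.
    import re

    sample_graminit_h = """\
    #define single_input 256
    #define file_input 257
    #define eval_input 258
    """

    symbol2number = {}
    number2symbol = {}
    for line in sample_graminit_h.splitlines():
        mo = re.match(r"^\s*#define\s+(\w+)\s+(\d+)\s*$", line)
        if mo:
            symbol, number = mo.groups()
            symbol2number[symbol] = int(number)
            number2symbol[int(number)] = symbol

    print(symbol2number)
    # {'single_input': 256, 'file_input': 257, 'eval_input': 258}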