Directory : /opt/cloudlinux/venv/lib/python3.11/site-packages/coverage/__pycache__/
Current File : //opt/cloudlinux/venv/lib/python3.11/site-packages/coverage/__pycache__/phystokens.cpython-311.pyc

"""Better tokenizing for coverage.py."""

from __future__ import annotations

import ast
import io
import keyword
import re
import sys
import token
import tokenize

from typing import Iterable, List, Optional, Set, Tuple

from coverage import env
from coverage.types import TLineNo, TSourceTokenLines

TokenInfos = Iterable[tokenize.TokenInfo]


def _phys_tokens(toks: TokenInfos) -> TokenInfos:
    """Return all physical tokens, even line continuations.

    tokenize.generate_tokens() doesn't return a token for the backslash that
    continues lines.  This wrapper provides those tokens so that we can
    re-create a faithful representation of the original source.

    Returns the same values as generate_tokens()

    """
    last_line: Optional[str] = None
    last_lineno = -1
    last_ttext = ""
    for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks:
        if last_lineno != elineno:
            if last_line and last_line.endswith("\\\n"):
                # We are at the start of a new line, and the previous line
                # ended with a backslash.  We probably have to inject a
                # backslash token, unless it is already part of a string token.
                inject_backslash = True
                if last_ttext.endswith("\\"):
                    inject_backslash = False
                elif ttype == token.STRING:
                    if "\n" in ttext and ttext.split("\n", 1)[0][-1] == "\\":
                        # A multi-line string whose first line ends with a
                        # backslash, so we don't need to inject another.
                        inject_backslash = False
                if inject_backslash:
                    # Figure out what column the backslash is in.
                    ccol = len(last_line.split("\n")[-2]) - 1
                    # Yield the token, with a fake token type.
                    yield tokenize.TokenInfo(
                        99999, "\\\n",
                        (slineno, ccol), (slineno, ccol + 2),
                        last_line,
                    )
            last_line = ltext
        if ttype not in (tokenize.NEWLINE, tokenize.NL):
            last_ttext = ttext
        yield tokenize.TokenInfo(ttype, ttext, (slineno, scol), (elineno, ecol), ltext)
        last_lineno = elineno
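
# Illustrative sketch (not part of the original module): for a line that ends
# with a backslash continuation, tokenize.generate_tokens() yields no token for
# the backslash itself, while _phys_tokens() injects a synthetic one:
#
#     toks = tokenize.generate_tokens(io.StringIO("a = 1 + \\\n    2\n").readline)
#     for tok in _phys_tokens(toks):
#         print(tok.type, repr(tok.string))
#
# The injected token shows up with the fake type 99999 and the text "\\\n".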


class MatchCaseFinder(ast.NodeVisitor):
    """Helper for finding match/case lines."""
    def __init__(self, source: str) -> None:
        # The set of line numbers that start match or case clauses.
        self.match_case_lines: Set[TLineNo] = set()
        self.visit(ast.parse(source))

    if sys.version_info >= (3, 10):
        def visit_Match(self, node: ast.Match) -> None:
            """Invoked by ast.NodeVisitor.visit"""
            self.match_case_lines.add(node.lineno)
            for case in node.cases:
                self.match_case_lines.add(case.pattern.lineno)
            self.generic_visit(node)


def source_token_lines(source: str) -> TSourceTokenLines:
    """Generate a series of lines, one for each line in `source`.

    Each line is a list of pairs, each pair is a token::

        [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]

    Each pair has a token class, and the token text.

    If you concatenate all the token texts, and then join them with newlines,
    you should have your original `source` back, with two differences:
    trailing white space is not preserved, and a final line with no newline
    is indistinguishable from a final line with a newline.

    """
    ws_tokens = {token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL}
    line: List[Tuple[str, str]] = []
    col = 0

    source = source.expandtabs(8).replace("\r\n", "\n")
    tokgen = generate_tokens(source)

    if env.PYBEHAVIOR.soft_keywords:
        match_case_lines = MatchCaseFinder(source).match_case_lines

    for ttype, ttext, (sline, scol), (_, ecol), _ in _phys_tokens(tokgen):
        mark_start = True
        for part in re.split("(\n)", ttext):
            if part == "\n":
                yield line
                line = []
                col = 0
                mark_end = False
            elif part == "":
                mark_end = False
            elif ttype in ws_tokens:
                mark_end = False
            else:
                if mark_start and scol > col:
                    line.append(("ws", " " * (scol - col)))
                    mark_start = False
                tok_class = tokenize.tok_name.get(ttype, "xx").lower()[:3]
                if ttype == token.NAME:
                    if keyword.iskeyword(ttext):
                        # Hard keywords are always keywords.
                        tok_class = "key"
                    elif sys.version_info >= (3, 10):
                        if env.PYBEHAVIOR.soft_keywords and keyword.issoftkeyword(ttext):
                            # Soft keywords only count as keywords when they
                            # start a match/case line.
                            if len(line) == 0:
                                is_start_of_line = True
                            elif (len(line) == 1) and line[0][0] == "ws":
                                is_start_of_line = True
                            else:
                                is_start_of_line = False
                            if is_start_of_line and sline in match_case_lines:
                                tok_class = "key"
                line.append((tok_class, part))
                mark_end = True
            scol = 0
        if mark_end:
            col = ecol

    if line:
        yield line
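
# A minimal usage sketch (illustrative, not from the original source): each
# yielded line is a list of (class, text) pairs, as described in the docstring
# above.  For example:
#
#     for line in source_token_lines("def hello():\n    return 1\n"):
#         print(line)
#
# would print something like:
#
#     [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ('op', ')'), ('op', ':')]
#     [('ws', '    '), ('key', 'return'), ('ws', ' '), ('num', '1')]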


class CachedTokenizer:
    """A one-element cache around tokenize.generate_tokens.

    When reporting, coverage.py tokenizes files twice, once to find the
    structure of the file, and once to syntax-color it.  Tokenizing is
    expensive, and easily cached.

    This is a one-element cache so that our twice-in-a-row tokenizing doesn't
    actually tokenize twice.

    """
    def __init__(self) -> None:
        self.last_text: Optional[str] = None
        self.last_tokens: List[tokenize.TokenInfo] = []

    def generate_tokens(self, text: str) -> TokenInfos:
        """A stand-in for `tokenize.generate_tokens`."""
        if text != self.last_text:
            self.last_text = text
            readline = io.StringIO(text).readline
            try:
                self.last_tokens = list(tokenize.generate_tokens(readline))
            except:
                # Keep the cache coherent if tokenizing fails.
                self.last_text = None
                raise
        return self.last_tokens
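
# Cache behavior sketch (illustrative): the cache compares the source text, so
# tokenizing the same text twice in a row returns the very same list object:
#
#     toks1 = generate_tokens("x = 1\n")
#     toks2 = generate_tokens("x = 1\n")
#     assert toks1 is toks2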


# Create our generate_tokens() function as a one-element cached wrapper around
# tokenize.generate_tokens.
generate_tokens = CachedTokenizer().generate_tokens


def source_encoding(source: bytes) -> str:
    """Determine the encoding for `source`, according to PEP 263.

    `source` is a byte string: the text of the program.

    Returns a string, the name of the encoding.

    """
    readline = iter(source.splitlines(True)).__next__
    return tokenize.detect_encoding(readline)[0]
