tokenizer

class ErnieMTokenizer(vocab_file, sentencepiece_model_file, do_lower_case=False, encoding='utf8', unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]

Bases: PretrainedTokenizer

Constructs an ERNIE-M tokenizer. It uses the SentencePiece tool to split words into sub-words.

Parameters:
  • vocab_file (str) -- The file path of the vocabulary.

  • sentencepiece_model_file (str) -- The file path of the sentencepiece model.

  • do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. Defaults to False.

  • unk_token (str, optional) -- A special token representing the unknown (out-of-vocabulary) token. An out-of-vocabulary token is set to unk_token in order to be converted to an ID. Defaults to "[UNK]".

  • sep_token (str, optional) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".

  • pad_token (str, optional) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".

  • cls_token (str, optional) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".

  • mask_token (str, optional) -- A special token representing a masked token. This is the token used in the masked language modeling task, for which the model tries to predict the original unmasked token. Defaults to "[MASK]".
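A minimal usage sketch (the checkpoint name "ernie-m-base" is an assumption; any pretrained name registered for ErnieMTokenizer works the same way):

    from paddlenlp.transformers import ErnieMTokenizer

    # Download/load the vocabulary and sentencepiece model files for the
    # checkpoint ("ernie-m-base" is assumed to be an available name).
    tokenizer = ErnieMTokenizer.from_pretrained("ernie-m-base")

    # Split raw text into sub-word tokens with sentencepiece.
    tokens = tokenizer.tokenize("Welcome to use PaddlePaddle and PaddleNLP!")

    # Calling the tokenizer directly returns model-ready inputs,
    # with special tokens already added.
    encoded = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    print(encoded["input_ids"])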

get_offset_mapping(text)[source]

Returns the map from tokens to the start and end character indices of each token. Modified from bojone/bert4keras.

Parameters:
  • text (str) -- Input text.

  • split_tokens (Optional[List[str]]) -- The tokens that have already been split, which can speed up the operation.

Returns:

The offset map of the input text.

Return type:

list
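For example, the offsets can be used to map each token back to its character span in the original text (a sketch reusing the tokenizer instance from the class example above; it assumes the returned offsets align one-to-one with the output of tokenize):

    text = "ERNIE-M supports many languages."
    tokens = tokenizer.tokenize(text)
    offsets = tokenizer.get_offset_mapping(text)

    # Each (start, end) pair indexes into the original text.
    for token, (start, end) in zip(tokens, offsets):
        print(token, repr(text[start:end]))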

property vocab_size

Returns the size of the vocabulary.

Returns:

The size of the vocabulary.

Return type:

int

get_vocab()[source]

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]
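The equivalence noted above can be checked directly (a sketch reusing the tokenizer instance from the class example):

    vocab = tokenizer.get_vocab()

    # For an in-vocabulary token, both lookups give the same id.
    token = "[CLS]"
    assert vocab[token] == tokenizer.convert_tokens_to_ids(token)

    # vocab_size reports the size of the same mapping.
    print(len(vocab), tokenizer.vocab_size)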

clean_text(text)[source]

Performs invalid character removal and whitespace cleanup on text.

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (strings for sub-words) into a single string.

convert_ids_to_string(ids)[source]

Converts a sequence of ids (integers) into a single string.
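A round-trip sketch covering both methods (reusing the tokenizer instance from the class example):

    tokens = tokenizer.tokenize("unbelievable")
    ids = tokenizer.convert_tokens_to_ids(tokens)

    # Both calls reassemble the sub-words into plain text.
    print(tokenizer.convert_tokens_to_string(tokens))
    print(tokenizer.convert_ids_to_string(ids))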

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

An ERNIE-M sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] [SEP] B [SEP]

Parameters:
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input_ids with the appropriate special tokens.

Return type:

List[int]
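A sketch of both formats (reusing the tokenizer instance from the class example; note the doubled [SEP] between the two sequences in the pair case):

    ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
    ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Fine, thanks."))

    single = tokenizer.build_inputs_with_special_tokens(ids_a)
    pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

    # single: [CLS] A [SEP]
    # pair:   [CLS] A [SEP] [SEP] B [SEP]
    print(tokenizer.convert_ids_to_string(pair))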

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

An ERNIE-M offset_mapping has the following format:

  • single sequence: (0,0) X (0,0)

  • pair of sequences: (0,0) A (0,0) (0,0) B (0,0)

Parameters:
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.

Returns:

List of wordpiece offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]
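Combined with get_offset_mapping, this produces offsets aligned with the ids returned by build_inputs_with_special_tokens (a sketch reusing the tokenizer instance from the class example; the (0, 0) entries mark the special-token positions):

    offsets = tokenizer.get_offset_mapping("How are you?")
    with_special = tokenizer.build_offset_mapping_with_special_tokens(offsets)

    # For a single sequence this is [(0, 0)] + offsets + [(0, 0)].
    print(with_special)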

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters:
  • token_ids_0 (List[int]) -- List of ids of the first sequence.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

  • already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns:

The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type:

List[int]
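A sketch using the ids_a/ids_b lists from the build_inputs_with_special_tokens example above:

    # Mask computed for raw id lists (no special tokens added yet):
    # the 1s line up with the [CLS]/[SEP] positions of the built input.
    mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
    print(mask)

    # If the ids already contain special tokens, say so explicitly.
    built = tokenizer.build_inputs_with_special_tokens(ids_a)
    mask = tokenizer.get_special_tokens_mask(built, already_has_special_tokens=True)
    print(mask)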

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: List[int] | None = None) → List[int][source]

Creates the token type IDs corresponding to the sequences passed. [What are token type IDs?](../glossary#token-type-ids)

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • token_ids_0 (List[int]) -- The first tokenized sequence.

  • token_ids_1 (List[int], optional) -- The second tokenized sequence. Defaults to None.

Returns:

The token type ids.

Return type:

List[int]
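A sketch of the call, using the ids_a/ids_b lists from the earlier example; that the result has one entry per position of the built (special-token-augmented) input is an assumption about the default implementation:

    token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
    print(len(token_type_ids))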

is_ch_char(char)[source]

Checks whether char is a Chinese character.

is_alpha(char)[source]

Checks whether char is an alphabetic character.

is_punct(char)[source]

Checks whether char is a punctuation character.

is_whitespace(char)[source]

Checks whether char is a whitespace character.
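A quick sketch of the character-class helpers (reusing the tokenizer instance from the class example; the expected True results are assumptions based on the method names):

    print(tokenizer.is_ch_char("中"))     # Chinese character, expected True
    print(tokenizer.is_alpha("a"))        # alphabetic character, expected True
    print(tokenizer.is_punct(","))        # punctuation, expected True
    print(tokenizer.is_whitespace(" "))   # whitespace, expected True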