tokenizer_utils

class PretrainedTokenizer(*args, **kwargs)[source]

Bases: object

The base class for all pretrained tokenizers. It mainly provides common methods for loading (constructing and initializing) and saving pretrained tokenizers. Loading and saving also rely on the following class attributes, which should be overridden by derived classes accordingly:

  • tokenizer_config_file (str): Represents the file name of tokenizer configuration for configuration saving and loading in local file system. The value is tokenizer_config.json.

  • resource_files_names (dict): Represents resources to specific file names mapping for resource saving and loading in local file system. The keys of dict representing resource items should be argument names in tokenizer's __init__ method, and the values are file names for saving and loading corresponding resources. The mostly used resources here are vocabulary file and sentence-piece model file.

  • pretrained_init_configuration (dict): Provides the tokenizer configurations of built-in pretrained tokenizers (contrasts to tokenizers in local file system). It has pretrained tokenizer names as keys (the same as pretrained model names, such as bert-base-uncased), and the values are dict preserving corresponding configuration for tokenizer initialization.

  • pretrained_resource_files_map (dict): Provides resource URLs of built-in pretrained tokenizers (contrasts to tokenizers in local file system). It has the same keys as resource_files_names, and the values are also dict mapping specific pretrained tokenizer names (such as bert-base-uncased) to corresponding resource URLs.

Moreover, methods common to tokenizers for tokenization, token/id conversion and encoding as model inputs are also provided here.

Besides, the metaclass InitTrackerMeta is used to create PretrainedTokenizer, by which subclasses can automatically track the arguments used for initialization and expose the special tokens passed at initialization as attributes.

__call__(text, text_pair=None, max_seq_len: Optional[int] = None, stride=0, is_split_into_words=False, pad_to_max_seq_len=False, truncation_strategy='longest_first', return_position_ids=False, return_token_type_ids=True, return_attention_mask=False, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False)[source]

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports sequence or sequence pair as input, and batch input is allowed. self.encode() or self.batch_encode() is called for single or batch input, depending on the input format and the is_split_into_words argument.

Parameters
  • text (str, List[str] or List[List[str]]) -- The sequence or batch of sequences to be processed. One sequence is a string or a list of strings depending on whether it has been pretokenized. If each sequence is provided as a list of strings (pretokenized), you must set is_split_into_words as True to disambiguate with a batch of sequences.

  • text_pair (str, List[str] or List[List[str]], optional) -- Same as the text argument, but it represents the second sequence of the sequence pair.

  • max_seq_len (int, optional) -- If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those overflowing tokens will be added to the returned dictionary when return_overflowing_tokens is True. Defaults to None.

  • stride (int, optional) -- Only available for batch input of sequence pair and mainly for question answering usage. When for QA, text represents questions and text_pair represents contexts. If stride is set to a positive number, the context will be split into multiple spans where stride defines the number of (tokenized) tokens to skip from the start of one span to get the next span, thus will produce a bigger batch than inputs to include all spans. Moreover, 'overflow_to_sample' and 'offset_mapping' preserving the original example and position information will be added to the returned dictionary. Defaults to 0.

  • pad_to_max_seq_len (bool, optional) -- If set to True, the returned sequences would be padded up to max_seq_len specified length according to padding side (self.padding_side) and padding token id. Defaults to False.

  • truncation_strategy (str, optional) --

    String selected in the following options:

    • 'longest_first' (default): Iteratively reduce the input sequences until the total length is under max_seq_len, removing a token at a time from the longest sequence (when there is a pair of input sequences).

    • 'only_first': Only truncate the first sequence.

    • 'only_second': Only truncate the second sequence.

    • 'do_not_truncate': Do not truncate (raise an error if the input sequence is longer than max_seq_len).

    Defaults to 'longest_first'.

  • return_position_ids (bool, optional) -- Whether to include tokens position ids in the returned dictionary. Defaults to False.

  • return_token_type_ids (bool, optional) -- Whether to include token type ids in the returned dictionary. Defaults to True.

  • return_attention_mask (bool, optional) -- Whether to include the attention mask in the returned dictionary. Defaults to False.

  • return_length (bool, optional) -- Whether to include the length of each encoded inputs in the returned dictionary. Defaults to False.

  • return_overflowing_tokens (bool, optional) -- Whether to include overflowing token information in the returned dictionary. Defaults to False.

  • return_special_tokens_mask (bool, optional) -- Whether to include special tokens mask information in the returned dictionary. Defaults to False.

Returns

The dict has the following optional items:

  • input_ids (list[int]): List of token ids to be fed to a model.

  • position_ids (list[int], optional): List of token position ids to be fed to a model. Included when return_position_ids is True

  • token_type_ids (list[int], optional): List of token type ids to be fed to a model. Included when return_token_type_ids is True.

  • attention_mask (list[int], optional): List of integers valued 0 or 1, where 0 specifies paddings and should not be attended to by the model. Included when return_attention_mask is True.

  • seq_len (int, optional): The input_ids length. Included when return_length is True.

  • overflowing_tokens (list[int], optional): List of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.

  • num_truncated_tokens (int, optional): The number of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.

  • special_tokens_mask (list[int], optional): List of integers valued 0 or 1, with 0 specifying special added tokens and 1 specifying sequence tokens. Included when return_special_tokens_mask is True.

  • offset_mapping (list[int], optional): List of pairs preserving the index of the start and end characters in the original input for each token. For a special token, the index pair is (0, 0). Included when a positive stride is used.

  • overflow_to_sample (int, optional): Index of the example from which this feature is generated. Included when a positive stride is used.

Return type

dict or list[dict] (for batch input)
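
For illustration, a minimal sketch of calling the tokenizer directly, assuming a BERT tokenizer loaded as in the from_pretrained example below (the exact ids depend on the vocabulary):

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Single sequence -> dict (dispatches to self.encode())
single = tokenizer('He was a puppeteer')

# Sequence pair -> dict
pair = tokenizer('What is PaddleNLP?', text_pair='PaddleNLP is an NLP library.')

# Batch of sequences -> list of dicts (dispatches to self.batch_encode())
batch = tokenizer(['He was a puppeteer', 'Welcome to use PaddleNLP!'])

print(single['input_ids'], single['token_type_ids'])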

property all_special_tokens

All the special tokens ('<unk>', '<cls>'...) corresponding to special token arguments in __init__ (arguments ending with '_token').

Type

list

property all_special_ids

All the token ids corresponding to all the special tokens.

Type

list
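
A small illustrative sketch of the two properties above, assuming a BERT tokenizer (the exact tokens and ids depend on the loaded vocabulary):

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.all_special_tokens)  # e.g. ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
print(tokenizer.all_special_ids)     # the vocabulary ids of the tokens above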

convert_tokens_to_ids(tokens)[source]

Converts a sequence of tokens into ids using the vocab attribute (an instance of Vocab). Override it if needed.

Parameters

tokens (list[str]) -- A sequence of tokens to be converted into ids.

Returns

Converted id list.

Return type

list
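
A minimal sketch, assuming a BERT tokenizer, of chaining tokenize and convert_tokens_to_ids:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = tokenizer.tokenize('He was a puppeteer')  # e.g. ['he', 'was', 'a', 'puppet', '##eer']
ids = tokenizer.convert_tokens_to_ids(tokens)      # one vocabulary id per token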

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (list of strings) to a single string by using ' '.join(tokens).

Parameters

tokens (list[str]) -- A sequence of tokens.

Returns

Converted string.

Return type

str

convert_ids_to_tokens(ids, skip_special_tokens=False)[source]

Converts a token id or a sequence of token ids (integer) to a token or a sequence of tokens (str) by using the vocab attribute (an instance of Vocab).

Parameters
  • ids (int or list[int]) -- A token id or a sequence of token ids.

  • skip_special_tokens (bool, optional) -- Whether to skip and not decode special tokens when converting. Defaults to False.

Returns

Converted token or token sequence.

Return type

str or list[str]
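
The reverse direction, sketched with a BERT tokenizer:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
tokens = tokenizer.convert_ids_to_tokens(ids)
text = tokenizer.convert_tokens_to_string(tokens)  # tokens joined by single spaces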

classmethod from_pretrained(pretrained_model_name_or_path, *args, **kwargs)[source]

Creates an instance of PretrainedTokenizer. Related resources are loaded by specifying the name of a built-in pretrained model, the name of a community-contributed pretrained model, or a local file directory path.

Parameters
  • pretrained_model_name_or_path (str) --

    Name of pretrained model or dir path to load from. The string can be:

    • Name of a built-in pretrained model.

    • Name of a community-contributed pretrained model.

    • Local directory path which contains tokenizer related resources and tokenizer config file ("tokenizer_config.json").

  • *args (tuple) -- Positional arguments for the tokenizer's __init__ method. If provided, use these as positional argument values for tokenizer initialization.

  • **kwargs (dict) -- Keyword arguments for the tokenizer's __init__ method. If provided, use these to update pre-defined keyword argument values for tokenizer initialization.

Returns

An instance of PretrainedTokenizer.

Return type

PretrainedTokenizer

Examples

from paddlenlp.transformers import BertTokenizer

# Name of built-in pretrained model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Name of community-contributed pretrained model
tokenizer = BertTokenizer.from_pretrained('yingyibiao/bert-base-uncased-sst-2-finetuned')

# Load from local directory path
tokenizer = BertTokenizer.from_pretrained('./my_bert/')

save_pretrained(save_directory)[source]

Saves the tokenizer configuration and related resources to files under save_directory. The tokenizer configuration is saved into the file indicated by tokenizer_config_file (i.e. tokenizer_config.json), and resources are saved into the files indicated by resource_files_names via self.save_resources(save_directory).

The save_directory can be used in from_pretrained as argument value of pretrained_model_name_or_path to re-load the tokenizer.

Parameters

save_directory (str) -- Directory to save files into.

Examples

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.save_pretrained('trained_model')
# reload from save_directory
tokenizer = BertTokenizer.from_pretrained('trained_model')

save_resources(save_directory)[source]

Saves tokenizer-related resources to the files indicated by resource_files_names under save_directory by copying them directly. Override it if necessary.

Parameters

save_directory (str) -- Directory to save files into.

static load_vocabulary(filepath, unk_token=None, pad_token=None, bos_token=None, eos_token=None, **kwargs)[source]

Instantiates an instance of Vocab from a file, keeping all tokens, by using Vocab.from_dict. The file contains one token per line, and the line number is the index of the corresponding token.

Parameters
  • filepath (str) -- Path of the file used to construct the vocabulary.

  • unk_token (str) -- The special token for unknown tokens. Can be None if not needed. Defaults to None.

  • pad_token (str) -- The special token for padding. Can be None if not needed. Defaults to None.

  • bos_token (str) -- The special token for the beginning of a sequence. Can be None if not needed. Defaults to None.

  • eos_token (str) -- The special token for the end of a sequence. Can be None if not needed. Defaults to None.

  • **kwargs (dict) -- keyword arguments for Vocab.from_dict.

Returns

An instance of Vocab.

Return type

Vocab

static save_vocabulary(filepath, vocab)[source]

Saves all tokens to a vocabulary file. The file contains one token per line, and the line number is the index of the corresponding token.

Parameters
  • filepath (str) -- File path to be saved to.

  • vocab (Vocab|dict) -- The Vocab or dict instance to be saved.
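
A combined sketch of the two static methods above; vocab.txt and vocab_copy.txt are placeholder file names, with one token per line in vocab.txt:

from paddlenlp.transformers import PretrainedTokenizer

# Build a Vocab from a one-token-per-line file (placeholder path).
vocab = PretrainedTokenizer.load_vocabulary('vocab.txt', unk_token='[UNK]', pad_token='[PAD]')

# Write the vocabulary back out, one token per line, in index order (placeholder path).
PretrainedTokenizer.save_vocabulary('vocab_copy.txt', vocab)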

truncate_sequences(ids, pair_ids=None, num_tokens_to_remove=0, truncation_strategy='longest_first', stride=0)[source]

Truncates a sequence pair in place to the maximum length.

Parameters
  • ids -- list of tokenized input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • pair_ids -- Optional second list of input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • num_tokens_to_remove (int, optional, defaults to 0) -- number of tokens to remove using the truncation strategy

  • truncation_strategy --

    String selected in the following options:

    • 'longest_first' (default): Iteratively reduce the input sequences until the total length is under max_seq_len, removing a token at a time from the longest sequence (when there is a pair of input sequences). Overflowing tokens only contain overflow from the first sequence.

    • 'only_first': Only truncate the first sequence. Raise an error if the first sequence is shorter than or equal to num_tokens_to_remove.

    • 'only_second': Only truncate the second sequence.

    • 'do_not_truncate': Do not truncate (raise an error if the input sequence is longer than max_seq_len).

  • stride (int, optional, defaults to 0) -- If set to a number along with max_seq_len, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
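
A sketch of direct usage with a BERT tokenizer; it assumes the method returns the truncated ids, the truncated pair_ids and the overflowing tokens, mirroring the commonly used Transformers-style API:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('a fairly long first sequence'))
pair_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('a second sequence'))

# Remove two tokens, taking them from the longer sequence first.
ids, pair_ids, overflowing = tokenizer.truncate_sequences(
    ids, pair_ids=pair_ids, num_tokens_to_remove=2, truncation_strategy='longest_first')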

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs.

Returns

List of input ids with the appropriate special tokens.

Return type

List[int]
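
For a BERT-style tokenizer the concatenation is typically [CLS] A [SEP] for a single sequence and [CLS] A [SEP] B [SEP] for a pair; a sketch:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('first sequence'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('second sequence'))

single = tokenizer.build_inputs_with_special_tokens(ids_a)       # [CLS] A [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] A [SEP] B [SEP]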

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs.

Returns

List of char offsets with the appropriate offsets of special tokens.

Return type

List[tuple]
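
A sketch combining this method with get_offset_mapping (documented below), again assuming a BERT-style tokenizer:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
offsets_a = tokenizer.get_offset_mapping('first sequence')
offsets_b = tokenizer.get_offset_mapping('second sequence')

# Special tokens such as [CLS] and [SEP] get the placeholder offset (0, 0).
pair_offsets = tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)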

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters
  • token_ids_0 (List[int]) -- List of ids of the first sequence.

  • token_ids_1 (List[int], optional) -- List of ids of the second sequence.

  • already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns

The list of integers in the range [0, 1]:

1 for a special token, 0 for a sequence token.

Return type

List[int]
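
A sketch with a BERT tokenizer; for ids without special tokens, the mask marks where [CLS] and [SEP] would be added:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('first sequence'))

# e.g. [1, 0, 0, 1]: 1 for the added [CLS]/[SEP] positions, 0 for sequence tokens.
mask = tokenizer.get_special_tokens_mask(ids)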

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

Should be overridden in a subclass if the model has a special way of building those.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs.

Returns

List of token type ids according to the given sequence(s).

Return type

List[int]
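
A sketch with a BERT tokenizer, where segment A (including [CLS] and the first [SEP]) gets 0 and segment B (including the final [SEP]) gets 1:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('first sequence'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('second sequence'))

token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# e.g. [0, 0, 0, 0, 1, 1, 1] for two 2-token sequences plus [CLS]/[SEP]/[SEP]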

num_special_tokens_to_add(pair=False)[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Parameters

pair (bool, optional) -- Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence. Defaults to False.

Returns

Number of special tokens added to sequences.

Return type

int
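
A small sketch with a BERT tokenizer, which typically adds [CLS] and [SEP] tokens:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.num_special_tokens_to_add(pair=False))  # e.g. 2: [CLS] ... [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))   # e.g. 3: [CLS] ... [SEP] ... [SEP]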

encode(text, text_pair=None, max_seq_len=512, pad_to_max_seq_len=False, truncation_strategy='longest_first', return_position_ids=False, return_token_type_ids=True, return_attention_mask=False, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False)[source]

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports sequence or sequence pair as input, and batch input is not allowed.

Parameters
  • text (str, List[str] or List[int]) -- The sequence to be processed. One sequence is a string, a list of strings, or a list of integers depending on whether it has been pretokenized and converted to ids.

  • text_pair (str, List[str] or List[List[str]]) -- Same as the text argument, but it represents the second sequence of the sequence pair.

  • max_seq_len (int, optional) -- If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those overflowing tokens will be added to the returned dictionary when return_overflowing_tokens is True. Defaults to 512.

  • stride (int, optional) -- Only available for batch input of sequence pair and mainly for question answering usage. When for QA, text represents questions and text_pair represents contexts. If stride is set to a positive number, the context will be split into multiple spans where stride defines the number of (tokenized) tokens to skip from the start of one span to get the next span, thus will produce a bigger batch than inputs to include all spans. Moreover, 'overflow_to_sample' and 'offset_mapping' preserving the original example and position information will be added to the returned dictionary. Defaults to 0.

  • pad_to_max_seq_len (bool, optional) -- If set to True, the returned sequences would be padded up to max_seq_len specified length according to padding side (self.padding_side) and padding token id. Defaults to False.

  • truncation_strategy (str, optional) --

    String selected in the following options:

    • 'longest_first' (default): Iteratively reduce the input sequences until the total length is under max_seq_len, removing a token at a time from the longest sequence (when there is a pair of input sequences).

    • 'only_first': Only truncate the first sequence.

    • 'only_second': Only truncate the second sequence.

    • 'do_not_truncate': Do not truncate (raise an error if the input sequence is longer than max_seq_len).

    Defaults to 'longest_first'.

  • return_position_ids (bool, optional) -- Whether to include tokens position ids in the returned dictionary. Defaults to False.

  • return_token_type_ids (bool, optional) -- Whether to include token type ids in the returned dictionary. Defaults to True.

  • return_attention_mask (bool, optional) -- Whether to include the attention mask in the returned dictionary. Defaults to False.

  • return_length (bool, optional) -- Whether to include the length of each encoded inputs in the returned dictionary. Defaults to False.

  • return_overflowing_tokens (bool, optional) -- Whether to include overflowing token information in the returned dictionary. Defaults to False.

  • return_special_tokens_mask (bool, optional) -- Whether to include special tokens mask information in the returned dictionary. Defaults to False.

Returns

The dict has the following optional items:

  • input_ids (list[int]): List of token ids to be fed to a model.

  • position_ids (list[int], optional): List of token position ids to be fed to a model. Included when return_position_ids is True

  • token_type_ids (list[int], optional): List of token type ids to be fed to a model. Included when return_token_type_ids is True.

  • attention_mask (list[int], optional): List of integers valued 0 or 1, where 0 specifies paddings and should not be attended to by the model. Included when return_attention_mask is True.

  • seq_len (int, optional): The input_ids length. Included when return_length is True.

  • overflowing_tokens (list[int], optional): List of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.

  • num_truncated_tokens (int, optional): The number of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.

  • special_tokens_mask (list[int], optional): List of integers valued 0 or 1, with 0 specifying special added tokens and 1 specifying sequence tokens. Included when return_special_tokens_mask is True.

Return type

dict
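
A minimal sketch of encode for a sequence pair, assuming a BERT tokenizer:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
encoded = tokenizer.encode(
    'What is PaddleNLP?',
    text_pair='PaddleNLP is an NLP library.',
    max_seq_len=16,
    pad_to_max_seq_len=True,
    return_attention_mask=True,
    return_length=True)

# Requested keys: input_ids, token_type_ids (default), attention_mask, seq_len
print(sorted(encoded.keys()))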

batch_encode(batch_text_or_text_pairs, max_seq_len=512, pad_to_max_seq_len=False, stride=0, is_split_into_words=False, truncation_strategy='longest_first', return_position_ids=False, return_token_type_ids=True, return_attention_mask=False, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False)[source]

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports batch inputs of sequence or sequence pair.

Parameters
  • batch_text_or_text_pairs (list) -- The element of list can be sequence or sequence pair, and the sequence is a string or a list of strings depending on whether it has been pretokenized. If each sequence is provided as a list of strings (pretokenized), you must set is_split_into_words as True to disambiguate with a sequence pair.

  • max_seq_len (int, optional) -- If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those overflowing tokens will be added to the returned dictionary when return_overflowing_tokens is True. Defaults to 512.

  • stride (int, optional) -- Only available for batch input of sequence pair and mainly for question answering usage. When for QA, text represents questions and text_pair represents contexts. If stride is set to a positive number, the context will be split into multiple spans where stride defines the number of (tokenized) tokens to skip from the start of one span to get the next span, thus will produce a bigger batch than inputs to include all spans. Moreover, 'overflow_to_sample' and 'offset_mapping' preserving the original example and position information will be added to the returned dictionary. Defaults to 0.

  • pad_to_max_seq_len (bool, optional) -- If set to True, the returned sequences would be padded up to max_seq_len specified length according to padding side (self.padding_side) and padding token id. Defaults to False.

  • truncation_strategy (str, optional) --

    String selected in the following options:

    • 'longest_first' (default): Iteratively reduce the input sequences until the total length is under max_seq_len, removing a token at a time from the longest sequence (when there is a pair of input sequences).

    • 'only_first': Only truncate the first sequence.

    • 'only_second': Only truncate the second sequence.

    • 'do_not_truncate': Do not truncate (raise an error if the input sequence is longer than max_seq_len).

    Defaults to 'longest_first'.

  • return_position_ids (bool, optional) -- Whether to include tokens position ids in the returned dictionary. Defaults to False.

  • return_token_type_ids (bool, optional) -- Whether to include token type ids in the returned dictionary. Defaults to True.

  • return_attention_mask (bool, optional) -- Whether to include the attention mask in the returned dictionary. Defaults to False.

  • return_length (bool, optional) -- Whether to include the length of each encoded inputs in the returned dictionary. Defaults to False.

  • return_overflowing_tokens (bool, optional) -- Whether to include overflowing token information in the returned dictionary. Defaults to False.

  • return_special_tokens_mask (bool, optional) -- Whether to include special tokens mask information in the returned dictionary. Defaults to False.

Returns

The dict has the following optional items:

  • input_ids (list[int]): List of token ids to be fed to a model.

  • position_ids (list[int], optional): List of token position ids to be fed to a model. Included when return_position_ids is True

  • token_type_ids (list[int], optional): List of token type ids to be fed to a model. Included when return_token_type_ids is True.

  • attention_mask (list[int], optional): List of integers valued 0 or 1, where 0 specifies paddings and should not be attended to by the model. Included when return_attention_mask is True.

  • seq_len (int, optional): The input_ids length. Included when return_length is True.

  • overflowing_tokens (list[int], optional): List of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.

  • num_truncated_tokens (int, optional): The number of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.

  • special_tokens_mask (list[int], optional): List of integers valued 0 or 1, with 0 specifying special added tokens and 1 specifying sequence tokens. Included when return_special_tokens_mask is True.

  • offset_mapping (list[int], optional): List of pairs preserving the index of the start and end characters in the original input for each token. For a special token, the index pair is (0, 0). Included when a positive stride is used.

  • overflow_to_sample (int, optional): Index of the example from which this feature is generated. Included when a positive stride is used.

Return type

list[dict]
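
A question-answering style sketch, assuming each batch element is given as a (question, context) pair; with a positive stride, long contexts are split into overlapping spans and each returned dict also carries 'offset_mapping' and 'overflow_to_sample':

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

questions = ['Where is PaddleNLP developed?']
contexts = ['PaddleNLP is developed as part of the PaddlePaddle deep learning platform.']

features = tokenizer.batch_encode(
    list(zip(questions, contexts)),
    max_seq_len=32,
    stride=16,
    return_attention_mask=True)

# One dict per span; len(features) may exceed len(questions) when contexts overflow.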

get_offset_mapping(text)[source]

Returns the offset mapping of tokens: for each token, the start and end character indices in the original text. Modified from https://github.com/bojone/bert4keras/blob/master/bert4keras/tokenizers.py#L372

Parameters

text (str) -- Input text.

Returns

The offset mapping of the input text.

Return type

list
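
A small sketch with a BERT tokenizer:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
offsets = tokenizer.get_offset_mapping('He was a puppeteer')
# One (start_char, end_char) pair per token, indexing into the original string.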

class BPETokenizer(vocab_file, encoder_json_path='./configs/encoder.json', vocab_bpe_path='./configs/vocab.bpe', unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]')[source]

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer

The base class for all BPE tokenizers. It mainly provides common tokenization methods for BPE-style tokenizers.

Parameters
  • vocab_file (str) -- File path of the vocabulary.

  • encoder_json_path (str, optional) -- File path of the mapping between vocab tokens and ids (encoder JSON). Defaults to "./configs/encoder.json".

  • vocab_bpe_path (str, optional) -- File path of the BPE merge rules text. Defaults to "./configs/vocab.bpe".

  • unk_token (str, optional) -- The special token for unknown words. Defaults to "[UNK]".

  • sep_token (str, optional) -- The special token for separator token. Defaults to "[SEP]".

  • pad_token (str, optional) -- The special token for padding. Defaults to "[PAD]".

  • cls_token (str, optional) -- The special token for cls. Defaults to "[CLS]".

  • mask_token (str, optional) -- The special token for mask. Defaults to "[MASK]".
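
A minimal instantiation sketch; all file paths are placeholders that must point to real vocabulary, encoder and merge files:

from paddlenlp.transformers.tokenizer_utils import BPETokenizer

tokenizer = BPETokenizer(
    vocab_file='vocab.txt',                      # placeholder vocabulary path
    encoder_json_path='./configs/encoder.json',  # token/id mapping
    vocab_bpe_path='./configs/vocab.bpe')        # BPE merge rules
tokens = tokenizer.tokenize('Welcome to use PaddleNLP!')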

tokenize_chinese_chars(text)[source]

Adds whitespace around any CJK character.

is_chinese_char(cp)[source]

Checks whether cp is the codepoint of a CJK character.