tokenizer
- class ErnieTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)
Constructs an ERNIE tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then a WordPiece tokenizer to split the text into subwords.
This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to the superclass.
- Parameters:
vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.
do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. Defaults to True.
unk_token (str, optional) -- A special token representing an unknown (out-of-vocabulary) token. An unknown token is set to unk_token so that it can be converted to an ID. Defaults to "[UNK]".
sep_token (str, optional) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".
pad_token (str, optional) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".
cls_token (str, optional) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".
mask_token (str, optional) -- A special token representing a masked token. This is the token used in the masked language modeling task, in which the model tries to predict the original unmasked token. Defaults to "[MASK]".
Examples
    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    encoded_inputs = tokenizer('He was a puppeteer')
    # encoded_inputs:
    # {'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
    #  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
- property vocab_size
Return the size of vocabulary.
- Returns:
The size of vocabulary.
- Return type:
int
- extend_chinese_char()
For a char-level model such as ERNIE, "##"-prefixed Chinese character tokens are added to the vocabulary to carry segment information.
- get_vocab()
Returns the vocabulary as a dictionary of token to index.
tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.
- Returns:
The vocabulary.
- Return type:
Dict[str, int]
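A minimal sketch of the documented equivalence; the checkpoint name is taken from the examples above, and the concrete token and ID depend on the loaded vocabulary:

    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    vocab = tokenizer.get_vocab()
    token = tokenizer.tokenize('He was a puppeteer')[0]
    # get_vocab()[token] matches convert_tokens_to_ids(token) for in-vocab tokens
    assert vocab[token] == tokenizer.convert_tokens_to_ids(token)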
- convert_tokens_to_string(tokens)
Converts a sequence of tokens (list of strings) to a single string. Since WordPiece introduces "##" to join subwords, the "##" prefixes are also removed when converting.
- Parameters:
tokens (List[str]) -- A list of strings representing tokens to be converted.
- Returns:
Converted string from tokens.
- Return type:
str
Examples
    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    tokens = tokenizer.tokenize('He was a puppeteer')
    strings = tokenizer.convert_tokens_to_string(tokens)
    # he was a puppeteer
- num_special_tokens_to_add(pair=False)
Returns the number of tokens added when encoding a sequence with special tokens.
Note
This encodes the inputs and counts the added tokens, and is therefore not efficient. Do not put this inside your training loop.
- Parameters:
pair (bool, optional) -- Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.
- Returns:
Number of tokens added to sequences.
- Return type:
int
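A minimal sketch; the counts follow from the sequence formats documented under build_inputs_with_special_tokens below:

    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    tokenizer.num_special_tokens_to_add(pair=False)  # 2: [CLS] and [SEP]
    tokenizer.num_special_tokens_to_add(pair=True)   # 3: [CLS] and two [SEP]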
- build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)
Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.
An ERNIE sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
- Parameters:
token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
- Returns:
List of input_ids with the appropriate special tokens.
- Return type:
List[int]
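A minimal sketch of both formats; the sample sentences are hypothetical, and the concrete IDs depend on the loaded vocabulary, so the results are shown structurally in the comments:

    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
    ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He lived in Paris'))
    single = tokenizer.build_inputs_with_special_tokens(ids_a)
    # [cls_id] + ids_a + [sep_id]                     -> [CLS] X [SEP]
    pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
    # [cls_id] + ids_a + [sep_id] + ids_b + [sep_id]  -> [CLS] A [SEP] B [SEP]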
- build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)
Builds an offset map from a pair of offset maps by concatenating and adding the offsets of special tokens.
An ERNIE offset_mapping has the following format:
single sequence: (0,0) X (0,0)
pair of sequences: (0,0) A (0,0) B (0,0)
- Parameters:
offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.
offset_mapping_1 (List[tuple], optional) -- Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.
- Returns:
A list of wordpiece offsets with the appropriate offsets of special tokens.
- Return type:
List[tuple]
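A minimal sketch with hypothetical character offsets for a three-token sequence; the special tokens contribute (0, 0) entries at the positions shown:

    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    offsets = [(0, 2), (3, 6), (7, 16)]  # hypothetical char offsets for three tokens
    tokenizer.build_offset_mapping_with_special_tokens(offsets)
    # [(0, 0)] + offsets + [(0, 0)]      -> (0,0) X (0,0)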
- create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)
Creates a mask from the two sequences passed, to be used in a sequence-pair classification task.
An ERNIE sequence pair mask has the following format:

    0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
    | first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).
- Parameters:
token_ids_0 (List[int]) -- A list of inputs_ids for the first sequence.
token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
- Returns:
List of token_type_ids according to the given sequence(s).
- Return type:
List[int]
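A minimal sketch of the mask layout; the sample sentences are hypothetical, and the segment lengths in the comments follow from the [CLS] A [SEP] B [SEP] format above:

    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
    ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He lived in Paris'))
    tokenizer.create_token_type_ids_from_sequences(ids_a)
    # [0] * (len(ids_a) + 2)                           -> first portion only
    tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
    # [0] * (len(ids_a) + 2) + [1] * (len(ids_b) + 1)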
- get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)
Retrieves a mask of special tokens for a token list. This method is called when adding special tokens using the tokenizer's encode methods.
- Parameters:
token_ids_0 (List[int]) -- List of ids of the first sequence.
token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.
- Returns:
The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
- Return type:
List[int]
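A minimal sketch for a single sequence without pre-added special tokens; the 1s mark where [CLS] and [SEP] sit in the documented [CLS] X [SEP] format:

    from paddlenlp.transformers import ErnieTokenizer
    tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
    ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
    tokenizer.get_special_tokens_mask(ids)
    # [1] + [0] * len(ids) + [1]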
- class ErnieTinyTokenizer(vocab_file, sentencepiece_model_file, word_dict, do_lower_case=True, encoding='utf8', unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)
Constructs an ErnieTiny tokenizer. It uses dict.wordseg.pickle to cut the text into words, and the sentencepiece tools to cut the words into sub-words.
- Parameters:
vocab_file (str) -- The file path of the vocabulary.
sentencepiece_model_file (str) -- The file path of the sentencepiece model.
word_dict (str) -- The file path of the word vocabulary, which is used to do Chinese word segmentation.
do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. Defaults to True.
unk_token (str, optional) -- A special token representing an unknown (out-of-vocabulary) token. An unknown token is set to unk_token so that it can be converted to an ID. Defaults to "[UNK]".
sep_token (str, optional) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".
pad_token (str, optional) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".
cls_token (str, optional) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".
mask_token (str, optional) -- A special token representing a masked token. This is the token used in the masked language modeling task, in which the model tries to predict the original unmasked token. Defaults to "[MASK]".
Examples
    from paddlenlp.transformers import ErnieTinyTokenizer
    tokenizer = ErnieTinyTokenizer.from_pretrained('ernie-tiny')
    inputs = tokenizer('He was a puppeteer')
    # inputs:
    # {'input_ids': [3, 941, 977, 16690, 269, 11346, 11364, 1337, 13742, 1684, 5],
    #  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
- property vocab_size
Return the size of vocabulary.
- Returns:
The size of vocabulary.
- Return type:
int
- convert_tokens_to_string(tokens)
Converts a sequence of tokens (list of strings) to a single string. Since WordPiece introduces "##" to join subwords, the "##" prefixes are also removed when converting.
- Parameters:
tokens (List[str]) -- A list of strings representing tokens to be converted.
- Returns:
Converted string from tokens.
- Return type:
str
Examples
    from paddlenlp.transformers import ErnieTinyTokenizer
    tokenizer = ErnieTinyTokenizer.from_pretrained('ernie-tiny')
    tokens = tokenizer.tokenize('He was a puppeteer')
    # ['▁h', '▁e', '▁was', '▁a', '▁pu', 'pp', 'e', '▁te', 'er']
    strings = tokenizer.convert_tokens_to_string(tokens)
- save_resources(save_directory)
Saves tokenizer-related resources to files under save_directory.
- Parameters:
save_directory (str) -- Directory to save files into.
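A minimal sketch; the directory name is arbitrary, and it is created here in case save_resources expects it to exist:

    import os
    from paddlenlp.transformers import ErnieTinyTokenizer
    tokenizer = ErnieTinyTokenizer.from_pretrained('ernie-tiny')
    os.makedirs('./ernie_tiny_tokenizer', exist_ok=True)
    tokenizer.save_resources('./ernie_tiny_tokenizer')
    # writes the vocabulary and related resource files into the directory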
- num_special_tokens_to_add(pair=False)
Returns the number of tokens added when encoding a sequence with special tokens.
Note
This encodes the inputs and counts the added tokens, and is therefore not efficient. Do not put this inside your training loop.
- Parameters:
pair (bool, optional) -- Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.
- Returns:
Number of tokens added to sequences.
- Return type:
int
- build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)
Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.
An ERNIE sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
- Parameters:
token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
- Returns:
List of input_ids with the appropriate special tokens.
- Return type:
List[int]
- build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)
Builds an offset map from a pair of offset maps by concatenating and adding the offsets of special tokens.
An ERNIE offset_mapping has the following format:
single sequence: (0,0) X (0,0)
pair of sequences: (0,0) A (0,0) B (0,0)
- Parameters:
offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.
offset_mapping_1 (List[tuple], optional) -- Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.
- Returns:
A list of wordpiece offsets with the appropriate offsets of special tokens.
- Return type:
List[tuple]
- create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)
Creates a mask from the two sequences passed, to be used in a sequence-pair classification task.
An ERNIE sequence pair mask has the following format:

    0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
    | first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).
- Parameters:
token_ids_0 (List[int]) -- A list of inputs_ids for the first sequence.
token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
- Returns:
List of token_type_ids according to the given sequence(s).
- Return type:
List[int]
- get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)
Retrieves a mask of special tokens for a token list. This method is called when adding special tokens using the tokenizer's encode methods.
- Parameters:
token_ids_0 (List[int]) -- List of ids of the first sequence.
token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.
- Returns:
The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
- Return type:
List[int]