tokenizer

class SkepTokenizer(vocab_file, bpe_vocab_file=None, bpe_json_file=None, do_lower_case=True, use_bpe_encoder=False, need_token_type_id=True, add_two_sep_token_inter=False, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer

Constructs a SKEP tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then follows a WordPiece tokenizer to split words into subwords.

This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to the superclass.

Parameters
vocab_file (str) – The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.
bpe_vocab_file (str, optional) – The vocabulary file path of a BpeTokenizer. Defaults to None.
bpe_json_file (str, optional) – The json file path of a BpeTokenizer. Defaults to None.
do_lower_case (bool, optional) – Whether to lowercase the input when tokenizing. Defaults to True.
use_bpe_encoder (bool, optional) – Whether or not to use the BPE encoder. Defaults to False.
need_token_type_id (bool, optional) – Whether or not to use token type ids. Defaults to True.
add_two_sep_token_inter (bool, optional) – Whether or not to add two sep_tokens between a pair of sequences. Defaults to False.
unk_token (str, optional) – The special token for unknown words. Defaults to "[UNK]".
sep_token (str, optional) – The special token for separation. Defaults to "[SEP]".
pad_token (str, optional) – The special token for padding. Defaults to "[PAD]".
cls_token (str, optional) – The special token for sequence classification. Defaults to "[CLS]".
mask_token (str, optional) – The special token for masking. Defaults to "[MASK]".
Examples

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
encoded_inputs = tokenizer('He was a puppeteer')
# encoded_inputs:
# {
#     'input_ids': [101, 2002, 2001, 1037, 13997, 11510, 102],
#     'token_type_ids': [0, 0, 0, 0, 0, 0, 0]
# }
property vocab_size

Returns the size of the vocabulary.

Returns
the size of the vocabulary.

Return type
int
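A minimal usage sketch (assuming the skep_ernie_2.0_large_en checkpoint used elsewhere on this page; the printed value depends on the checkpoint's vocabulary file):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
# vocab_size counts the entries loaded from the WordPiece vocabulary file.
print(tokenizer.vocab_size)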
num_special_tokens_to_add(pair=False)

Returns the number of tokens added when encoding a sequence with special tokens.

Parameters
pair (bool, optional) – If True, returns the number of tokens added to a sequence pair; if False, returns the number added to a single sequence. Defaults to False.

Returns
Number of special tokens added to sequences.

Return type
int
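A sketch of both cases (assuming skep_ernie_2.0_large_en, whose templates are [CLS] X [SEP] for a single sequence and [CLS] A [SEP] B [SEP] for a pair, as described under build_inputs_with_special_tokens below):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
print(tokenizer.num_special_tokens_to_add(pair=False))  # 2: [CLS] and [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))   # 3: [CLS] and two [SEP]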
build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters
offset_mapping_0 (List[tuple]) – List of char offsets to which the special tokens will be added.
offset_mapping_1 (List[tuple], optional) – Optional second list of char offsets for offset mapping pairs. Defaults to None.

Returns
List of char offsets with the appropriate offsets of special tokens.

Return type
List[tuple]
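A minimal sketch (assuming skep_ernie_2.0_large_en; giving special tokens the empty offset (0, 0) is the usual convention, so the exact entries in the comments are an assumption):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
# Character offsets for the two tokens of "He was".
offsets = [(0, 2), (3, 6)]
mapping = tokenizer.build_offset_mapping_with_special_tokens(offsets)
# Expected shape: an offset for [CLS], the sequence offsets, then [SEP],
# e.g. [(0, 0), (0, 2), (3, 6), (0, 0)]
print(mapping)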
build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A skep_ernie_1.0_large_ch/skep_ernie_2.0_large_en sequence has the following format:

single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]

A skep_roberta_large_en sequence has the following format:

single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] [SEP] B [SEP]

Parameters
token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns
List of input ids with the appropriate special tokens.

Return type
List[int]
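A sketch of both calls (assuming skep_ernie_2.0_large_en; per the class example above, [CLS] and [SEP] map to ids 101 and 102 for this checkpoint):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('a puppeteer'))
# Single sequence: [CLS] A [SEP] -> ids start with 101 and end with 102.
print(tokenizer.build_inputs_with_special_tokens(ids_a))
# Pair of sequences: [CLS] A [SEP] B [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))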
create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Creates a token type mask from the two sequences passed, to be used in a sequence-pair classification task.

A skep_ernie_1.0_large_ch/skep_ernie_2.0_large_en sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence      | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Note: token type ids are not needed for the skep_roberta_large_en model.

Parameters
token_ids_0 (List[int]) – List of IDs.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns
List of token type ids according to the given sequence(s).

Return type
List[int]
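A sketch (assuming skep_ernie_2.0_large_en; per the format above, the 0s cover [CLS] A [SEP] and the 1s cover B [SEP]):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('a puppeteer'))
# Pair: 0s for the first segment (including [CLS] and its [SEP]), 1s for the second.
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))
# Single sequence: all 0s.
print(tokenizer.create_token_type_ids_from_sequences(ids_a))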
save_resources(save_directory)

Saves tokenizer-related resources to files under save_directory.

Parameters
save_directory (str) – Directory to save files into.
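A minimal sketch (the directory name is hypothetical):

import os

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
save_dir = './my_skep_tokenizer'  # hypothetical target directory
os.makedirs(save_dir, exist_ok=True)
tokenizer.save_resources(save_dir)  # writes the vocabulary file(s) into save_dir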
convert_tokens_to_string(tokens: List[str])

Converts a sequence of tokens (a list of strings) into a single string.

Parameters
tokens (List[str]) – A list of strings representing tokens to be converted.

Returns
Converted string from tokens.

Return type
str

Examples

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
tokens = tokenizer.tokenize('欢迎使用百度飞桨')
# ['欢迎', '使用', '百度', '飞', '桨']
strings = tokenizer.convert_tokens_to_string(tokens)
# '欢迎 使用 百度 飞 桨'
get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int]

Retrieves a special-tokens mask from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model method.

Parameters
token_ids_0 (List[int]) – List of IDs.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type
List[int]
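A sketch of the default case (assuming skep_ernie_2.0_large_en; for a single sequence the mask marks the positions where [CLS] and [SEP] would be inserted):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was'))
# With already_has_special_tokens=False, special-token positions are 1 and
# sequence-token positions are 0, e.g. [1, 0, 0, 1] for [CLS] he was [SEP].
print(tokenizer.get_special_tokens_mask(ids))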