tokenizer

class SkepTokenizer(vocab_file, bpe_vocab_file=None, bpe_json_file=None, do_lower_case=True, use_bpe_encoder=False, need_token_type_id=True, add_two_sep_token_inter=False, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]')[source]

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer

Constructs a Skep tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then uses a WordPiece tokenizer to split words into subword tokens.

This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to the superclass.

Parameters
  • vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. Defaults to True.

  • bpe_vocab_file (str, optional) -- The vocabulary file path of a BpeTokenizer. Defaults to None.

  • bpe_json_file (str, optional) -- The JSON file path of a BpeTokenizer. Defaults to None.

  • use_bpe_encoder (bool, optional) -- Whether or not to use the BPE encoder. Defaults to False.

  • need_token_type_id (bool, optional) -- Whether or not to produce token type ids. Defaults to True.

  • add_two_sep_token_inter (bool, optional) -- Whether or not to add two sep_token between a pair of sequences (as in the skep_roberta_large_en format shown below). Defaults to False.

  • unk_token (str, optional) -- The special token for unknown words. Defaults to "[UNK]".

  • sep_token (str, optional) -- The special separator token. Defaults to "[SEP]".

  • pad_token (str, optional) -- The special token for padding. Defaults to "[PAD]".

  • cls_token (str, optional) -- The special classification token. Defaults to "[CLS]".

  • mask_token (str, optional) -- The special token for masking. Defaults to "[MASK]".

Examples

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
encoded_inputs = tokenizer('He was a puppeteer')
# encoded_inputs:
# {
#    'input_ids': [101, 2002, 2001, 1037, 13997, 11510, 102],
#    'token_type_ids': [0, 0, 0, 0, 0, 0, 0]
# }
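
A pair-input sketch (assumption: the __call__ method inherited from PretrainedTokenizer accepts a text_pair argument; 'He made puppets' is an arbitrary second sentence):

pair_inputs = tokenizer('He was a puppeteer', text_pair='He made puppets')
# pair_inputs['token_type_ids'] marks the second sequence with 1s, following the
# pattern documented under create_token_type_ids_from_sequences below
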
property vocab_size

Returns the size of the vocabulary.

Returns

the size of the vocabulary.

Return type

int
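
A minimal usage sketch (the exact size depends on the loaded vocabulary file):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
print(tokenizer.vocab_size)  # the number of tokens in the WordPiece vocabulary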

tokenize(text)[source]

Converts a string to a list of tokens.

Parameters

text (str) -- The text to be tokenized.

Returns

A list of strings representing the converted tokens.

Return type

List[str]

Examples

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
tokens = tokenizer.tokenize('He was a puppeteer')
# tokens:
# ['he', 'was', 'a', 'puppet', '##eer']
num_special_tokens_to_add(pair=False)[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Parameters

pair (bool, optional) -- If True, returns the number of tokens added to a sequence pair; if False, the number added to a single sequence. Defaults to False.

Returns

The number of special tokens added to the sequence(s).

Return type

int
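
An illustrative sketch (the counts assume the [CLS]/[SEP] formats described under build_inputs_with_special_tokens below):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
print(tokenizer.num_special_tokens_to_add())           # 2: one [CLS] and one [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))  # 3: [CLS] plus two [SEP]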

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A skep_ernie_1.0_large_ch/skep_ernie_2.0_large_en sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

A skep_roberta_large_en sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] [SEP] B [SEP]

Parameters
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of input_ids with the appropriate special tokens.

Return type

List[int]
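
A minimal sketch for the single-sequence case (the resulting ids match the encoding example at the top of this page):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
input_ids = tokenizer.build_inputs_with_special_tokens(ids)
# input_ids: cls_token id + ids + sep_token id,
# i.e. [101, 2002, 2001, 1037, 13997, 11510, 102]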

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

A skep_ernie_1.0_large_ch/skep_ernie_2.0_large_en sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Note: token type ids are not needed for the skep_roberta_large_en model.

Parameters
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of token_type_ids according to the given sequence(s).

Return type

List[int]
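
A minimal sketch for the skep_ernie formats (segment 0 covers [CLS] A [SEP], segment 1 covers B [SEP]; 'He made puppets' is an arbitrary second sentence):

from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids_0 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
ids_1 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He made puppets'))
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_0, ids_1)
# len(ids_0) + 2 zeros for [CLS] A [SEP], then len(ids_1) + 1 ones for B [SEP]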

save_resources(save_directory)[source]

Saves tokenizer-related resources to files under save_directory.

Parameters

save_directory (str) -- Directory to save files into.
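
A minimal usage sketch ('./skep_tokenizer' is a hypothetical path; it is created first since the directory is assumed to need to exist):

import os
from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
os.makedirs('./skep_tokenizer', exist_ok=True)  # hypothetical output directory
tokenizer.save_resources('./skep_tokenizer')
# writes the tokenizer's vocabulary file(s) into './skep_tokenizer'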