tokenizer#

class SkepTokenizer(vocab_file, bpe_vocab_file=None, bpe_json_file=None, do_lower_case=True, use_bpe_encoder=False, need_token_type_id=True, add_two_sep_token_inter=False, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]#

Bases: PretrainedTokenizer

Constructs a SKEP tokenizer. It uses a basic tokenizer for punctuation splitting, lower casing and so on, and then a WordPiece tokenizer to split words into subword tokens.

This tokenizer inherits from PretrainedTokenizer which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

Parameters:
  • vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.

  • bpe_vocab_file (str, optional) -- The vocabulary file path of a BpeTokenizer. Defaults to None.

  • bpe_json_file (str, optional) -- The json file path of a BpeTokenizer. Defaults to None.

  • use_bpe_encoder (bool, optional) -- Whether or not to use BPE Encoder. Defaults to False.

  • need_token_type_id (bool, optional) -- Whether or not to use token type id. Defaults to True.

  • add_two_sep_token_inter (bool, optional) -- Whether or not to insert two sep_token tokens between a pair of sequences. Defaults to False.

  • unk_token (str, optional) -- The special token for unknown words. Defaults to "[UNK]".

  • sep_token (str, optional) -- The separator token. Defaults to "[SEP]".

  • pad_token (str, optional) -- The special token for padding. Defaults to "[PAD]".

  • cls_token (str, optional) -- The classifier token, used as the first token of a sequence. Defaults to "[CLS]".

  • mask_token (str, optional) -- The special token for mask. Defaults to "[MASK]".

Examples

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
encoded_inputs = tokenizer('He was a puppeteer')
# encoded_inputs:
# {
#    'input_ids': [101, 2002, 2001, 1037, 13997, 11510, 102],
#    'token_type_ids': [0, 0, 0, 0, 0, 0, 0]
# }

property vocab_size#

Returns the size of the vocabulary.

Returns:

The size of the vocabulary.

Return type:

int
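
A minimal usage sketch; the printed number depends on the loaded pretrained vocabulary and is shown here only as an illustration:

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
print(tokenizer.vocab_size)
# size of the WordPiece vocabulary loaded from vocab_file (illustrative)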

num_special_tokens_to_add(pair=False)[source]#

Returns the number of added tokens when encoding a sequence with special tokens.

Parameters:

pair (bool, optional) -- If set to True, returns the number of tokens added for a sequence pair; if set to False, returns the number added for a single sequence. Defaults to False.

Returns:

Number of special tokens added to sequences.

Return type:

int
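
A short usage sketch; the counts follow the [CLS]/[SEP] layouts listed under build_inputs_with_special_tokens below (two extra tokens for a single sequence, three for a pair in the skep_ernie_* format):

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
print(tokenizer.num_special_tokens_to_add(pair=False))  # 2: [CLS] X [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))   # 3: [CLS] A [SEP] B [SEP]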

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs.

Returns:

List of char offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]
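
A minimal sketch, assuming the char offsets come from an earlier tokenization step and that special tokens are mapped to the conventional (0, 0) placeholder offset (an assumption, not guaranteed by this page):

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
offsets = [(0, 2), (3, 6)]  # illustrative char spans for two tokens of the first sequence
print(tokenizer.build_offset_mapping_with_special_tokens(offsets))
# offsets for [CLS], the original spans, then [SEP], e.g. [(0, 0), (0, 2), (3, 6), (0, 0)]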

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]#

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A skep_ernie_1.0_large_ch/skep_ernie_2.0_large_en sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

A skep_roberta_large_en sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] [SEP] B [SEP]

Parameters:
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input IDs with the appropriate special tokens.

Return type:

List[int]
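
A brief sketch, reusing the sentence from the class example above; the second sentence is assumed and added only for illustration:

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He made puppets'))
print(tokenizer.build_inputs_with_special_tokens(ids_a))         # [CLS] A [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))  # [CLS] A [SEP] B [SEP]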

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

A skep_ernie_1.0_large_ch/skep_ernie_2.0_large_en sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Note: token type ids are not needed for the skep_roberta_large_en model.

Parameters:
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of token_type_ids according to the given sequence(s).

Return type:

List[int]
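
A minimal sketch of the mask produced for the pair case described above (the sentences are illustrative):

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He made puppets'))
print(tokenizer.create_token_type_ids_from_sequences(ids_a))         # only 0s (single sequence)
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))  # 0s for the first segment, 1s for the second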

save_resources(save_directory)[source]#

Save tokenizer related resources to files under save_directory.

Parameters:

save_directory (str) -- Directory to save files into.
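
A minimal sketch; the target directory name is an illustrative choice:

import os
from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
os.makedirs('./skep_tokenizer', exist_ok=True)
tokenizer.save_resources('./skep_tokenizer')  # writes the tokenizer's vocabulary resource files into the directory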

convert_tokens_to_string(tokens: List[str])[source]#

Converts a sequence of tokens (a list of strings) into a single string.

Parameters:

tokens (list) -- A list of strings representing the tokens to be converted.

Returns:

Converted string from tokens.

Return type:

str

Examples

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
tokens = tokenizer.tokenize('欢迎使用百度飞桨')
#['欢迎', '使用', '百度', '飞', '桨']
strings = tokenizer.convert_tokens_to_string(tokens)
#'欢迎 使用 百度 飞 桨'

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: List[int] | None = None, already_has_special_tokens: bool = False) → List[int][source]#

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

Parameters:
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs.

  • already_has_special_tokens (bool, optional, defaults to False) -- Whether or not the token list is already formatted with special tokens for the model.

Returns:

1 for a special token, 0 for a sequence token.

Return type:

A list of integers in the range [0, 1]
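
A short sketch: with already_has_special_tokens left as False, the mask describes the sequence as it would look after special tokens are added (the printed values are illustrative):

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
print(tokenizer.get_special_tokens_mask(ids))
# e.g. [1, 0, 0, 0, 0, 0, 1] for a [CLS] X [SEP] layout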

get_vocab() → Dict[str, int][source]#

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]
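
A minimal sketch of the equivalence noted above:

from paddlenlp.transformers import SkepTokenizer
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
vocab = tokenizer.get_vocab()
assert vocab['[CLS]'] == tokenizer.convert_tokens_to_ids('[CLS]')  # same id from either lookup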