tokenizer

class RoFormerTokenizer(vocab_file, do_lower_case=True, use_jieba=False, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]')[source]

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer

Constructs a RoFormer tokenizer. It uses a basic tokenizer for punctuation splitting, lower casing, jieba pre-tokenization, and so on, followed by a WordPiece tokenizer that splits tokens into subwords.

This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to the superclass.

Parameters
  • vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. For the RoFormer pretrained models, this is set to False for cased models and True otherwise. Defaults to True.

  • use_jieba (bool, optional) -- Whether or not to pre-tokenize the text with jieba. Defaults to False.

  • unk_token (str, optional) -- A special token representing an unknown (out-of-vocabulary) token. Tokens not found in the vocabulary are replaced with this token so that they can be converted to an ID. Defaults to "[UNK]".

  • sep_token (str, optional) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".

  • pad_token (str, optional) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".

  • cls_token (str, optional) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".

  • mask_token (str, optional) -- A special token representing a masked token. This is the token used in the masked language modeling task, for which the model tries to predict the original unmasked token. Defaults to "[MASK]".

Examples

from paddlenlp.transformers import RoFormerTokenizer
tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')

inputs = tokenizer('欢迎使用百度飞桨')
'''
{'input_ids': [101, 22355, 8994, 25854, 5438, 2473, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0]}
'''
property vocab_size

Returns the size of the vocabulary.

Returns

The size of the vocabulary.

Return type

int
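
A one-line usage sketch:

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
tokenizer.vocab_size
# an int: the number of entries in the wrapped vocabulary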

tokenize(text)[source]

Converts a string to a list of tokens.

Parameters

text (str) -- The text to be tokenized.

Returns

A list of strings representing the converted tokens.

Return type

List[str]

Examples

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
tokens = tokenizer.tokenize('欢迎使用百度飞桨')
#['欢迎', '使用', '百度', '飞', '桨']
convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (a list of strings) into a single string.

Parameters

tokens (list) -- A list of strings representing the tokens to be converted.

Returns

The string converted from the tokens.

Return type

str

Examples

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
tokens = tokenizer.tokenize('欢迎使用百度飞桨')
#['欢迎', '使用', '百度', '飞', '桨']
strings = tokenizer.convert_tokens_to_string(tokens)
#'欢迎 使用 百度 飞 桨'
num_special_tokens_to_add(pair=False)[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Parameters

pair (bool) -- Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.

Returns

The number of special tokens added to the sequence(s).

Return type

int
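
A short sketch of how the counts follow from the RoFormer sequence formats ([CLS] X [SEP] and [CLS] A [SEP] B [SEP]) described under build_inputs_with_special_tokens below:

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
tokenizer.num_special_tokens_to_add()
# 2, for [CLS] X [SEP]
tokenizer.num_special_tokens_to_add(pair=True)
# 3, for [CLS] A [SEP] B [SEP]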

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A RoFormer sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

Parameters
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of input_ids with the appropriate special tokens.

Return type

List[int]
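
A minimal sketch of both formats; tokenize and convert_tokens_to_ids are documented/inherited tokenizer methods, and the comments describe the output structure rather than exact IDs:

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('欢迎使用百度飞桨'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('机器学习'))

tokenizer.build_inputs_with_special_tokens(ids_a)
# [cls_id] + ids_a + [sep_id], i.e. [CLS] X [SEP]
tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
# [cls_id] + ids_a + [sep_id] + ids_b + [sep_id], i.e. [CLS] A [SEP] B [SEP]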

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

A RoFormer offset_mapping has the following format:

  • single sequence: (0,0) X (0,0)

  • pair of sequences: (0,0) A (0,0) B (0,0)

Parameters
  • offset_mapping_0 (List[tuple]) -- List of wordpiece offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.

Returns

List of wordpiece offsets with the appropriate offsets of special tokens.

Return type

List[tuple]
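
A minimal sketch with illustrative offsets; the (0,0) entries mark special tokens as in the format above:

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
offsets_a = [(0, 2), (2, 4)]   # illustrative wordpiece offsets for sequence A
offsets_b = [(0, 2)]           # illustrative wordpiece offsets for sequence B

tokenizer.build_offset_mapping_with_special_tokens(offsets_a)
# [(0, 0), (0, 2), (2, 4), (0, 0)]
tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)
# [(0, 0), (0, 2), (2, 4), (0, 0), (0, 2), (0, 0)]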

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Creates a mask from the two sequences passed, to be used in a sequence-pair classification task.

A RoFormer sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) -- A list of input_ids for the first sequence.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of token_type_ids according to the given sequence(s).

Return type

List[int]
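
A minimal sketch with illustrative IDs; the lengths follow the mask format above:

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
ids_a = [5, 6, 7]   # illustrative input_ids for the first sequence
ids_b = [8, 9]      # illustrative input_ids for the second sequence

tokenizer.create_token_type_ids_from_sequences(ids_a)
# [0, 0, 0, 0, 0]            -> covers [CLS] A [SEP]
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# [0, 0, 0, 0, 0, 1, 1, 1]   -> 0s cover [CLS] A [SEP], 1s cover B [SEP]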

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's encode methods.

Parameters
  • token_ids_0 (List[int]) -- A list of input_ids for the first sequence.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

  • already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns

A list of integers, each being 0 or 1: 1 for a special token, 0 for a sequence token.

Return type

List[int]
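
A minimal sketch with illustrative IDs (assuming 5, 6 and 7 are not special token IDs), following the [CLS] X [SEP] format:

from paddlenlp.transformers import RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
ids_a = [5, 6, 7]   # illustrative input_ids without special tokens

tokenizer.get_special_tokens_mask(ids_a)
# [1, 0, 0, 0, 1]   -> 1s mark the [CLS] and [SEP] positions

with_special = tokenizer.build_inputs_with_special_tokens(ids_a)
tokenizer.get_special_tokens_mask(with_special, already_has_special_tokens=True)
# [1, 0, 0, 0, 1]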

class JiebaBasicTokenizer(vocab, do_lower_case=True)[source]

Bases: paddlenlp.transformers.bert.tokenizer.BasicTokenizer

Runs basic tokenization with jieba (punctuation splitting, lower casing, jieba pre-tokenization, etc.).

Parameters
  • vocab (paddlenlp.data.Vocab) -- An instance of paddlenlp.data.Vocab.

  • do_lower_case (bool) -- Whether to lowercase the text and strip accents. For the RoFormer pretrained models, this is set to False for cased models and True otherwise. Defaults to True.
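
A minimal sketch, assuming JiebaBasicTokenizer is importable from paddlenlp.transformers and that the RoFormer tokenizer exposes its vocabulary as tokenizer.vocab; the output mirrors the tokenize example above:

from paddlenlp.transformers import RoFormerTokenizer, JiebaBasicTokenizer

tokenizer = RoFormerTokenizer.from_pretrained('roformer-chinese-base')
# `tokenizer.vocab` as the wrapped paddlenlp.data.Vocab is an assumption
basic_tokenizer = JiebaBasicTokenizer(vocab=tokenizer.vocab, do_lower_case=True)

basic_tokenizer.tokenize('欢迎使用百度飞桨')
# ['欢迎', '使用', '百度', '飞', '桨']   (illustrative; matches the tokenize example above)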