tokenizer

class ErnieGramTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)

Bases: ErnieTokenizer

Constructs an ERNIE-Gram tokenizer. It first uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then applies a WordPiece tokenizer to split the resulting tokens into subwords.

This tokenizer inherits from ErnieTokenizer. For more information regarding those methods, please refer to this superclass.
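
A minimal sketch of the two-stage pipeline described above, reusing the 'ernie-gram-zh' checkpoint from the Examples below (the exact subword splits depend on that checkpoint's vocabulary):

from paddlenlp.transformers import ErnieGramTokenizer
tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
# The basic tokenizer first splits on whitespace and punctuation (and lower cases
# when do_lower_case=True); the WordPiece tokenizer then breaks each token into subwords.
tokens = tokenizer.tokenize('He was a puppeteer')
# tokens is a list of subword strings; pieces that do not start a word carry the '##' prefix.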

Parameters:
  • vocab_file (str) – The vocabulary file path (ends with ‘.txt’) required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool, optional) – Whether or not to lowercase the input when tokenizing. Defaults to True.

  • unk_token (str, optional) – A special token representing the unknown (out-of-vocabulary) token. Tokens that are not in the vocabulary are replaced with this token so that they can still be converted to an ID. Defaults to “[UNK]”.

  • sep_token (str, optional) – A special token separating two different sentences in the same input. Defaults to “[SEP]”.

  • pad_token (str, optional) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “[PAD]”.

  • cls_token (str, optional) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to “[CLS]”.

  • mask_token (str, optional) – A special token representing a masked token. This is the token used in the masked language modeling task, which the model tries to predict in place of the original unmasked token. Defaults to “[MASK]”.
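
The special tokens above are attached to the instantiated tokenizer; a short sketch of inspecting them, assuming the usual *_token / *_token_id convenience attributes inherited from the base tokenizer:

from paddlenlp.transformers import ErnieGramTokenizer
tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token, tokenizer.mask_token)
# [CLS] [SEP] [PAD] [MASK]
print(tokenizer.cls_token_id, tokenizer.sep_token_id)
# 1 2  (the IDs that bracket the encoded sequence in the Examples below)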

Examples

from paddlenlp.transformers import ErnieGramTokenizer
tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
encoded_inputs = tokenizer('He was a puppeteer')
# encoded_inputs:
# {
#   'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
#   'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# }
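
Sentence pairs go through the same call; a hedged sketch (the second sentence is an illustrative input, not part of the original example, and the exact IDs depend on the checkpoint's vocabulary):

pair_inputs = tokenizer('He was a puppeteer', text_pair='He made puppets')
# pair_inputs['input_ids'] starts with the [CLS] ID and each sentence ends with a [SEP] ID;
# pair_inputs['token_type_ids'] marks the first sentence (and its [SEP]) with 0 and the second with 1.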