tokenizer
Tokenization classes for MegatronBert.
- class MegatronBertTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)
Bases: BertTokenizer
Constructs a MegatronBert tokenizer. It uses a basic tokenizer to split on punctuation, lowercase the input, and so on, then applies a WordPiece tokenizer to split words into subwords.
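A minimal sketch of the two stages, reusing the 'MegatronBert-uncased' weights from the example at the end of this page:

from paddlenlp.transformers import MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('MegatronBert-uncased')

# The basic tokenizer lowercases and splits on whitespace and punctuation;
# WordPiece then breaks out-of-vocabulary words into '##'-prefixed subwords.
print(tokenizer.tokenize('He was a puppeteer'))
# ['he', 'was', 'a', 'puppet', '##eer']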
- Parameters:
vocab_file (str) – The vocabulary file path (ends with ‘.txt’) required to instantiate a WordpieceTokenizer.
do_lower_case (bool) – Whether or not to lowercase the input when tokenizing. Defaults to `True`.
unk_token (str) – A special token representing an unknown (out-of-vocabulary) token. Tokens not in the vocabulary are set to unk_token so that they can be converted to an ID. Defaults to “[UNK]”.
sep_token (str) – A special token separating two different sentences in the same input. Defaults to “[SEP]”.
pad_token (str) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “[PAD]”.
cls_token (str) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens, as illustrated in the sketch after this list. Defaults to “[CLS]”.
mask_token (str) – A special token representing a masked token. This is the token used in the masked language modeling task, in which the model tries to predict the original unmasked token. Defaults to “[MASK]”.
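A small sketch of where cls_token and sep_token end up in an encoded single sequence (behavior inherited from BertTokenizer; the checkpoint name is taken from the example below):

from paddlenlp.transformers import MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('MegatronBert-uncased')

# [CLS] is prepended and [SEP] appended when a single sequence is encoded.
ids = tokenizer('He was a puppeteer')['input_ids']
print(tokenizer.convert_ids_to_tokens(ids))
# ['[CLS]', 'he', 'was', 'a', 'puppet', '##eer', '[SEP]']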
Examples
from paddlenlp.transformers import MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('MegatronBert-uncased')
inputs = tokenizer('He was a puppeteer')
print(inputs)
'''
{'input_ids': [101, 2002, 2001, 1037, 13997, 11510, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0]}
'''
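As a follow-up sketch, encoding a sentence pair shows sep_token separating the two segments and token_type_ids marking which segment each token belongs to. The second sentence here is hypothetical, and the exact ids depend on the vocabulary:

from paddlenlp.transformers import MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('MegatronBert-uncased')

# A pair is encoded as: [CLS] sentence_a [SEP] sentence_b [SEP];
# token_type_ids are 0 up to and including the first [SEP], then 1.
inputs = tokenizer('He was a puppeteer', 'He carved the puppets himself')
print(tokenizer.convert_ids_to_tokens(inputs['input_ids']))
print(inputs['token_type_ids'])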