tokenizer¶
class RobertaTokenizer(*args, **kwargs)[source]¶
Bases: object
RobertaTokenizer is a generic tokenizer class that will be instantiated as either RobertaChineseTokenizer or RobertaBPETokenizer when created with the RobertaTokenizer.from_pretrained() class method.
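For example, the concrete class is chosen from the checkpoint. A minimal sketch, using the two checkpoints that appear in the examples below:
from paddlenlp.transformers import RobertaTokenizer

# 'roberta-wwm-ext' is a Chinese WordPiece checkpoint, so a
# RobertaChineseTokenizer is returned.
zh_tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
# 'roberta-base' is an English byte-level BPE checkpoint, so a
# RobertaBPETokenizer is returned.
en_tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
print(type(zh_tokenizer).__name__)  # RobertaChineseTokenizer
print(type(en_tokenizer).__name__)  # RobertaBPETokenizer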
class RobertaChineseTokenizer(vocab_file, do_lower_case=True, do_basic_tokenize=True, never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', tokenize_chinese_chars=True, strip_accents=None, **kwargs)[source]¶
Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer
Constructs a RoBERTa tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then a WordPiece tokenizer to split tokens into subwords.
This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass.
- Parameters
vocab_file (str) – The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.
do_lower_case (bool) – Whether or not to lowercase the input when tokenizing. Defaults to True.
unk_token (str) – A special token representing an unknown (out-of-vocabulary) token. Unknown tokens are set to unk_token in order to be converted to an ID. Defaults to "[UNK]".
sep_token (str) – A special token separating two different sentences in the same input. Defaults to "[SEP]".
pad_token (str) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “[PAD]”.
cls_token (str) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".
mask_token (str) – A special token representing a masked token, used in the masked language modeling task, where the model tries to predict the original unmasked token. Defaults to "[MASK]".
Examples
from paddlenlp.transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
tokens = tokenizer('He was a puppeteer')
#{'input_ids': [101, 9245, 9947, 143, 11227, 9586, 8418, 8854, 8180, 102],
# 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
property vocab_size¶
Return the size of the vocabulary.
- Returns
The size of the vocabulary.
- Return type
int
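A quick usage sketch; the exact number depends on the checkpoint's vocabulary file:
from paddlenlp.transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
print(tokenizer.vocab_size)  # integer size of the loaded vocabulary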
get_vocab()[source]¶
Returns the vocabulary as a dictionary of token to index.
tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.
- Returns
The vocabulary.
- Return type
Dict[str, int]
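A short sketch of the documented equivalence, using the [CLS] special token as a token guaranteed to be in the vocab:
vocab = tokenizer.get_vocab()
token = '[CLS]'
# get_vocab()[token] and convert_tokens_to_ids(token) agree for in-vocab tokens
assert vocab[token] == tokenizer.convert_tokens_to_ids(token)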
convert_tokens_to_string(tokens)[source]¶
Converts a sequence of tokens (a list of strings) into a single string. Since WordPiece introduces ## to mark subwords, the ## is also removed when converting.
- Parameters
tokens (list) – A list of string representing tokens to be converted.
- Returns
Converted string from tokens.
- Return type
str
Examples
from paddlenlp.transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
tokens = tokenizer.tokenize('He was a puppeteer')
'''
['he', 'was', 'a', 'puppet', '##eer']
'''
strings = tokenizer.convert_tokens_to_string(tokens)
'''
he was a puppeteer
'''
num_special_tokens_to_add(pair=False)[source]¶
Returns the number of tokens added when encoding a sequence with special tokens.
- Parameters
pair (bool) – Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.
- Returns
Number of tokens added to sequences.
- Return type
int
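Based on the sequence formats documented for build_inputs_with_special_tokens below, the expected counts would be (a sketch, not verified output):
tokenizer.num_special_tokens_to_add()           # 2 -> [CLS] and [SEP]
tokenizer.num_special_tokens_to_add(pair=True)  # 3 -> [CLS], [SEP], [SEP]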
build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]¶
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.
A RoBERTa sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
- Parameters
token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
- Returns
A list of input_ids with the appropriate special tokens.
- Return type
List[int]
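A minimal sketch; the token IDs below are arbitrary placeholders, and the comments follow the formats documented above:
ids_a = [9245, 9947]  # hypothetical token ids for sequence A
ids_b = [143, 11227]  # hypothetical token ids for sequence B
single = tokenizer.build_inputs_with_special_tokens(ids_a)
# [CLS] A [SEP] -> [cls_id, 9245, 9947, sep_id]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
# [CLS] A [SEP] B [SEP] -> [cls_id, 9245, 9947, sep_id, 143, 11227, sep_id]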
build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]¶
Build an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.
A RoBERTa offset_mapping has the following format:
single sequence: (0,0) X (0,0)
pair of sequences: (0,0) A (0,0) B (0,0)
- Parameters
offset_mapping_0 (List[tuple]) – List of wordpiece offsets to which the special tokens will be added.
offset_mapping_1 (List[tuple], optional) – Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.
- Returns
A list of wordpiece offsets with the appropriate offsets of special tokens.
- Return type
List[tuple]
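A sketch of the documented format; the (0,0) entries stand in for the [CLS] and [SEP] tokens, and the input offsets are placeholders:
offsets_a = [(0, 2), (2, 5)]  # hypothetical wordpiece offsets for A
offsets_b = [(0, 3)]          # hypothetical wordpiece offsets for B
tokenizer.build_offset_mapping_with_special_tokens(offsets_a)
# expected: [(0, 0), (0, 2), (2, 5), (0, 0)]
tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)
# expected: [(0, 0), (0, 2), (2, 5), (0, 0), (0, 3), (0, 0)]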
create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]¶
Create a mask from the two sequences passed, to be used in a sequence-pair classification task.
A RoBERTa sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
- Parameters
token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
- Returns
A list of token_type_ids according to the given sequence(s).
- Return type
List[int]
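A sketch following the mask format above (the ids are arbitrary placeholders):
ids_a = [9245, 9947]
ids_b = [143, 11227]
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# 0s cover [CLS] A [SEP], 1s cover B [SEP]:
# expected: [0, 0, 0, 0, 1, 1, 1]
tokenizer.create_token_type_ids_from_sequences(ids_a)
# expected: [0, 0, 0, 0]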
get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]¶
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's encode methods.
- Parameters
token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.
- Returns
A list of integers that are either 0 or 1: 1 for a special token, 0 for a sequence token.
- Return type
List[int]
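A sketch with placeholder ids; with already_has_special_tokens left as False, the mask describes the sequence as it would look after special tokens are added:
tokenizer.get_special_tokens_mask([9245, 9947])
# expected: [1, 0, 0, 1]  -> [CLS] a1 a2 [SEP]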
class RobertaBPETokenizer(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)[source]¶
Bases: paddlenlp.transformers.gpt.tokenizer.GPTTokenizer
Constructs a RoBERTa tokenizer based on byte-level Byte-Pair-Encoding (BPE).
This tokenizer inherits from GPTTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass.
- Parameters
vocab_file (str) – Path to the vocab file. The vocab file contains a mapping from vocabulary strings to indices.
merges_file (str) – Path to the merges file. The merges file is used to split the input sentence into "subword" units. The vocab file is then used to encode those units as indices.
errors (str) – Paradigm to follow when decoding bytes to UTF-8. Defaults to 'replace'.
Examples
from paddlenlp.transformers import RobertaBPETokenizer

tokenizer = RobertaBPETokenizer.from_pretrained('roberta-base')
tokens = tokenizer('This is a simple Paddle')
#{'input_ids': [0, 713, 16, 10, 2007, 221, 33151, 2],
# 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]}
get_vocab()[source]¶
Returns the vocabulary as a dictionary of token to index.
tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.
- Returns
The vocabulary.
- Return type
Dict[str, int]
build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]¶
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.
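Unlike the Chinese tokenizer, this class uses '<s>' and '</s>' as its cls and sep tokens. Assuming the standard RoBERTa layout, which the offset-mapping format documented below also implies (note the doubled separator between pairs), a sketch with placeholder ids:
from paddlenlp.transformers import RobertaBPETokenizer

tokenizer = RobertaBPETokenizer.from_pretrained('roberta-base')
ids_a = [713, 16]  # hypothetical token ids for sequence A
ids_b = [2007]     # hypothetical token ids for sequence B
single = tokenizer.build_inputs_with_special_tokens(ids_a)
# <s> A </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
# <s> A </s></s> B </s>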
get_offset_mapping(text)[source]¶
Returns the map from each token to the start and end character indices of that token in the input text. Modified from https://github.com/bojone/bert4keras/blob/master/bert4keras/tokenizers.py#L372
- Parameters
text (str) – Input text.
split_tokens (Optional[List[str]]) – Tokens that have already been split, which can accelerate the operation.
- Returns
The offset map of input text.
- Return type
list
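A brief usage sketch; each returned entry should be a (start, end) character span in the input text:
offsets = tokenizer.get_offset_mapping('This is a simple Paddle')
for start, end in offsets:
    print(start, end)  # character span of one token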
build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]¶
Build an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.
A RoBERTa offset_mapping has the following format:
single sequence: (0,0) X (0,0)
pair of sequences: (0,0) A (0,0) (0,0) B (0,0)
- Parameters
offset_mapping_0 (List[tuple]) – List of wordpiece offsets to which the special tokens will be added.
offset_mapping_1 (List[tuple], optional) – Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.
- Returns
A list of wordpiece offsets with the appropriate offsets of special tokens.
- Return type
List[tuple]
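A sketch of the pair format above; note the two adjacent (0,0) entries for the doubled separator, and that the input offsets are placeholders:
offsets_a = [(0, 4), (5, 7)]  # hypothetical offsets for A
offsets_b = [(0, 6)]          # hypothetical offsets for B
tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)
# expected: [(0, 0), (0, 4), (5, 7), (0, 0), (0, 0), (0, 6), (0, 0)]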
get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]¶
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's encode methods.
- Parameters
token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.
- Returns
A list of integers that are either 0 or 1: 1 for a special token, 0 for a sequence token.
- Return type
List[int]
create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]¶
Create the token type IDs corresponding to the sequences passed.
Should be overridden in a subclass if the model has a special way of building those.
- Parameters
token_ids_0 (List[int]) – The first tokenized sequence.
token_ids_1 (List[int], optional) – The second tokenized sequence. Defaults to None.
- Returns
The token type ids.
- Return type
List[int]
convert_tokens_to_string(tokens)[source]¶
Converts a sequence of tokens (a list of strings) into a single string.
num_special_tokens_to_add(pair=False)[source]¶
Returns the number of tokens added when encoding a sequence with special tokens.
- Parameters
pair (bool) – Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.
- Returns
Number of tokens added to sequences.
- Return type
int
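Given the offset-mapping format documented above, with one special token on each side of a single sequence and a doubled separator between pairs, the expected counts would be (a sketch, not verified output):
tokenizer.num_special_tokens_to_add()           # 2 -> <s> and </s>
tokenizer.num_special_tokens_to_add(pair=True)  # 4 -> <s>, </s>, </s>, </s>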
prepare_for_tokenization(text, is_split_into_words=False, **kwargs)[source]¶
Performs any necessary transformations before tokenization.
This method should pop its arguments from kwargs and return the remaining kwargs as well. The kwargs are checked at the end of the encoding process to be sure all the arguments have been used.
- Parameters
text (str) – The text to prepare.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace), which it will tokenize. This is useful for NER or token classification.
kwargs – Keyword arguments to use for the tokenization.
- Returns
The prepared text and the unused kwargs.
- Return type
Tuple[str, Dict[str, Any]]
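A minimal usage sketch; the exact transformation depends on the tokenizer's settings (e.g. the add_prefix_space argument in this class's constructor), so the comments below are indicative only:
prepared_text, unused_kwargs = tokenizer.prepare_for_tokenization('This is a simple Paddle')
# prepared_text may differ from the input (for instance, a leading space
# may be added when prefix-space handling applies); unused_kwargs is a
# dict of any keyword arguments that were not consumed here.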