tokenizer

class UNIMOTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer
Constructs a UNIMO tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then a WordPiece tokenizer to split the results into subwords.
This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

- Parameters
  - vocab_file (str) – The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.
  - do_lower_case (bool, optional) – Whether or not to lowercase the input when tokenizing. Defaults to True.
  - unk_token (str) – A special token representing the unknown (out-of-vocabulary) token. An unknown token is set to unk_token in order to be converted to an ID. Defaults to "[UNK]".
  - sep_token (str) – A special token separating two different sentences in the same input. Defaults to "[SEP]".
  - pad_token (str) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".
  - cls_token (str) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".
  - mask_token (str) – A special token representing a masked token, used in the masked language modeling task, where the model tries to predict the original unmasked token. Defaults to "[MASK]".
Examples

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    encoded_inputs = tokenizer('He was a puppeteer')
    # encoded_inputs
    # {
    #     'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
    #     'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    # }
property vocab_size

Return the size of the vocabulary.

- Returns
  The size of the vocabulary.
- Return type
  int
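A minimal usage sketch; the printed value depends on the pretrained vocabulary file:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    # Prints an int; the exact value depends on the vocab file.
    print(tokenizer.vocab_size)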
static load_vocabulary(filepath, unk_token=None, pad_token=None, bos_token=None, eos_token=None, **kwargs)

Instantiate an instance of Vocab from a file, reserving all tokens by using Vocab.from_dict. The file contains one token per line, and the line number is the index of the corresponding token.

- Parameters
  - filepath (str) – Path of the file used to construct the vocabulary.
  - unk_token (str) – Special token for the unknown token. If not needed, it can also be None. Defaults to None.
  - pad_token (str) – Special token for the padding token. If not needed, it can also be None. Defaults to None.
  - bos_token (str) – Special token for the bos token. If not needed, it can also be None. Defaults to None.
  - eos_token (str) – Special token for the eos token. If not needed, it can also be None. Defaults to None.
  - **kwargs (dict) – Keyword arguments for Vocab.from_dict.
- Returns
  An instance of Vocab.
- Return type
  Vocab
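A minimal sketch, assuming a local one-token-per-line file named vocab.txt (the filename is hypothetical) and paddlenlp.data.Vocab's to_indices helper:

    from paddlenlp.transformers import UNIMOTokenizer

    # 'vocab.txt' is a hypothetical vocabulary file, one token per line.
    vocab = UNIMOTokenizer.load_vocabulary('vocab.txt', unk_token='[UNK]', pad_token='[PAD]')
    # The index of '[UNK]' is its line number in the file.
    print(vocab.to_indices('[UNK]'))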
get_vocab()

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

- Returns
  The vocabulary.
- Return type
  Dict[str, int]
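For example, the equivalence described above can be checked directly:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    vocab = tokenizer.get_vocab()
    # For a token in the vocab, get_vocab() agrees with convert_tokens_to_ids.
    assert vocab['[CLS]'] == tokenizer.convert_tokens_to_ids('[CLS]')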
convert_tokens_to_string(tokens)

Converts a sequence of tokens (list of strings) into a single string. Since WordPiece introduces ## to mark subwords, ## is also removed when converting.

- Parameters
  - tokens (list) – A list of strings representing tokens to be converted.
- Returns
  Converted string from tokens.
- Return type
  str
Examples

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    tokens = tokenizer.tokenize('He was a puppeteer')
    strings = tokenizer.convert_tokens_to_string(tokens)
    # 'he was a puppeteer'
num_special_tokens_to_add(pair=False)

Returns the number of tokens added when encoding a sequence with special tokens.

- Parameters
  - pair (bool) – Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.
- Returns
  Number of special tokens added to sequences.
- Return type
  int
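Given the sequence formats described under build_inputs_with_special_tokens below, the expected counts are 2 for a single sequence and 3 for a pair; a quick sketch:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    tokenizer.num_special_tokens_to_add(pair=False)  # 2: [CLS] ... [SEP]
    tokenizer.num_special_tokens_to_add(pair=True)   # 3: [CLS] ... [SEP] ... [SEP]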
build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A UNIMO sequence has the following format:

- single sequence: [CLS] X [SEP]
- pair of sequences: [CLS] A [SEP] B [SEP]

- Parameters
  - token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
  - token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
- Returns
  List of input_ids with the appropriate special tokens.
- Return type
  List[int]
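A short sketch, reusing the sentence from the class example above, where 1 and 2 are the [CLS] and [SEP] IDs:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
    tokenizer.build_inputs_with_special_tokens(ids)
    # [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2]  ->  [CLS] X [SEP]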
merge_subword(tokens)

Converts the subwords in a sequence of tokens (list of strings) to whole words, also removing ## when converting.

- Parameters
  - tokens (List[str]) – A list of strings representing tokens to be converted.
- Returns
  Converted sequence of whole words.
- Return type
  List[str]
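An illustrative sketch; the actual subword split depends on the vocabulary:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    # Illustrative tokens; real splits depend on the vocab file.
    tokenizer.merge_subword(['he', 'was', 'a', 'pupp', '##eteer'])
    # ['he', 'was', 'a', 'puppeteer']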
build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)

Build offset maps from a pair of offset maps by concatenating and adding the offsets of special tokens.

A UNIMO offset_mapping has the following format:

- single sequence: (0,0) X (0,0)
- pair of sequences: (0,0) A (0,0) B (0,0)

- Parameters
  - offset_mapping_0 (List[tuple]) – List of char offsets to which the special tokens will be added.
  - offset_mapping_1 (List[tuple], optional) – Optional second list of char offsets for offset mapping pairs. Defaults to None.
- Returns
  List of char offsets with the appropriate offsets of special tokens.
- Return type
  List[tuple]
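A sketch with hypothetical char offsets for a two-token sequence; the (0, 0) entries are the special-token offsets:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    # Hypothetical offsets for two tokens covering 'He was'.
    tokenizer.build_offset_mapping_with_special_tokens([(0, 2), (3, 6)])
    # [(0, 0), (0, 2), (3, 6), (0, 0)]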
create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Create a mask from the two sequences passed, to be used in a sequence-pair classification task.

A UNIMO sequence pair mask has the following format:

    0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
    | first sequence      | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

- Parameters
  - token_ids_0 (List[int]) – List of IDs.
  - token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
- Returns
  List of token_type_ids according to the given sequence(s).
- Return type
  List[int]
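A sketch, assuming the mask counts special tokens as in the format above ([CLS] A [SEP] as 0s, B [SEP] as 1s):

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    # Two IDs in the first sequence, three in the second (illustrative values).
    tokenizer.create_token_type_ids_from_sequences([11, 12], [21, 22, 23])
    # [0, 0, 0, 0, 1, 1, 1, 1]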
gen_encode(source, title=None, target=None, max_seq_len=512, max_title_len=128, max_target_len=128, return_position_ids=True, return_token_type_ids=True, return_attention_mask=True, return_length=False, add_start_token_for_decoding=False, pad_to_max_seq_len=False, return_tensors=False, is_split_into_words=False, continuous_position=False)

Main method for encoding the source for generation. It returns a dictionary containing the encoded sequence and other relevant information, which meets the input format requirements of the UNIMO-text model.

- Parameters
  - source (str) – The source text of generation. It should be a string.
  - target (str, optional) – The target text of generation. It should be set when training the model and should be None when running inference. Defaults to None.
  - title (str, optional) – Additional information for some generation tasks, such as summarization. Defaults to None.
  - max_seq_len (int, optional) – The maximum encoded sequence length. Defaults to 512.
  - max_target_len (int, optional) – The maximum encoded sequence length of the input target. Defaults to 128.
  - max_title_len (int, optional) – The maximum encoded sequence length of the input title. Defaults to 128.
  - return_position_ids (bool, optional) – Whether to return the position_ids. Defaults to True.
  - return_token_type_ids (bool, optional) – Whether to return the token_type_ids. Defaults to True.
  - return_attention_mask (bool, optional) – Whether to return the attention_mask. Defaults to True.
  - return_length (bool, optional) – Whether to return the length of the encoded sequence. Defaults to False.
  - add_start_token_for_decoding (bool, optional) – Whether to add the special token "[CLS]" at the end of the sequence, as the beginning of the target, when running inference, to force the model to start generating the target sequence. Defaults to False.
  - pad_to_max_seq_len (bool, optional) – Whether to pad the returned sequences to max_seq_len. Note that, in this method, returned sequences are padded on the left. Defaults to False.
  - return_tensors (bool, optional) – Whether to convert the returned sequences to Tensor. Defaults to False.
  - is_split_into_words (bool, optional) – Whether or not the input text (source, target and title) has been pretokenized. Defaults to False.
  - continuous_position (bool, optional) – Whether the position ids are continuous between source ids and target ids. Defaults to False.
- Returns
  A dictionary containing the encoded sequence and other relevant information, with the following fields:

  - input_ids (list[int]|Tensor): A list of indices of input tokens to be fed to the UNIMO-text model. If return_tensors is True, it is a Tensor with shape [1, sequence_length] and data type 'int64'.
  - token_type_ids (list[int]|Tensor, optional): A list of segment token indices indicating whether a token belongs to the dialogue target. If return_tensors is True, it is a Tensor with shape [1, sequence_length] and data type 'int64'. Returned when return_token_type_ids is set to True.
  - position_ids (list[int]|Tensor, optional): A list of position indices. If return_tensors is True, it is a Tensor with shape [1, sequence_length] and data type 'int64'. Returned when return_position_ids is set to True.
  - attention_mask (numpy.ndarray|Tensor, optional): A numpy.ndarray that prevents attention to some unwanted positions, with shape [sequence_length, sequence_length] and data type 'float32'. If return_tensors is True, it is a Tensor with shape [1, 1, sequence_length, sequence_length] and data type 'float32'. Returned when return_attention_mask is set to True.
  - seq_len (int, optional): The actual length of input_ids, excluding the pad token. Returned when return_length is set to True.
- Return type
  dict
Example

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    inputs = tokenizer.gen_encode('He was a puppeteer')
    # {
    #     'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
    #     'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #     'position_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    #     'attention_mask': array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
    #                              # ... 10 identical rows of zeros in total ...
    #                              [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
    # }
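A complementary training-time sketch with hypothetical texts, passing target so that it is encoded after the source; per the Returns section above, token_type_ids should then mark the target segment:

    from paddlenlp.transformers import UNIMOTokenizer

    tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
    # Hypothetical source/target pair for training-time encoding.
    inputs = tokenizer.gen_encode(
        source='He was a puppeteer',
        target='He crafted marionettes')
    # inputs['token_type_ids'] marks the target positions with 1s.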