tokenizer#

class UNIMOTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]#

Bases: PretrainedTokenizer

Constructs a UNIMO tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then uses a WordPiece tokenizer to split tokens into subwords.

This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

Parameters:
  • vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. Defaults to True.

  • unk_token (str) -- A special token representing an unknown (out-of-vocabulary) token. Tokens not found in the vocabulary are replaced with unk_token so that they can be converted to an ID. Defaults to "[UNK]".

  • sep_token (str) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".

  • cls_token (str) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".

  • mask_token (str) -- A special token representing a masked token. This is the token used in the masked language modeling task, which the model tries to predict in place of the original unmasked token. Defaults to "[MASK]".

Examples

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
encoded_inputs = tokenizer('He was a puppeteer')
# encoded_inputs
#{
#   'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
#   'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#}
property vocab_size#

Returns the size of the vocabulary.

Returns:

The size of the vocabulary.

Return type:

int
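
Examples

A minimal usage sketch; the printed value depends on the loaded pretrained vocabulary, so it is not hard-coded here:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
# prints the vocabulary size of 'unimo-text-1.0'
print(tokenizer.vocab_size)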

static load_vocabulary(filepath, unk_token=None, pad_token=None, bos_token=None, eos_token=None, **kwargs)[source]#

Instantiates an instance of Vocab from a file, reserving all tokens by using Vocab.from_dict. The file should contain one token per line, and the line number is used as the index of the corresponding token.

Parameters:
  • filepath (str) -- Path of the file used to construct the vocabulary.

  • unk_token (str) -- Special token for unknown tokens. May be None if not needed. Defaults to None.

  • pad_token (str) -- Special token for padding. May be None if not needed. Defaults to None.

  • bos_token (str) -- Special token for the beginning-of-sequence token. May be None if not needed. Defaults to None.

  • eos_token (str) -- Special token for the end-of-sequence token. May be None if not needed. Defaults to None.

  • **kwargs (dict) -- Keyword arguments for Vocab.from_dict.

Returns:

An instance of Vocab.

Return type:

Vocab
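
Examples

A minimal sketch; 'vocab.txt' stands for a hypothetical local vocabulary file with one token per line:

from paddlenlp.transformers import UNIMOTokenizer
# 'vocab.txt' is a placeholder path, not a file shipped with PaddleNLP
vocab = UNIMOTokenizer.load_vocabulary('vocab.txt', unk_token='[UNK]', pad_token='[PAD]')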

get_vocab()[source]#

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]
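
Examples

A minimal sketch illustrating the equivalence noted above:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
vocab = tokenizer.get_vocab()
token = '[CLS]'
assert vocab[token] == tokenizer.convert_tokens_to_ids(token)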

convert_tokens_to_string(tokens)[source]#

Converts a sequence of tokens (list of strings) into a single string. Since WordPiece introduces ## to mark subwords, the ## is also removed when converting.

Parameters:

tokens (list) -- A list of strings representing the tokens to be converted.

Returns:

The string converted from the tokens.

Return type:

str

Examples

from paddlenlp.transformers import UNIMOTokenizer

tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
tokens = tokenizer.tokenize('He was a puppeteer')

strings = tokenizer.convert_tokens_to_string(tokens)
'''
he was a puppeteer
'''
num_special_tokens_to_add(pair=False)[source]#

Returns the number of tokens added when encoding a sequence with special tokens.

Parameters:

pair (bool) -- Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.

Returns:

The number of special tokens added to sequences.

Return type:

int
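
Examples

A minimal sketch; given the [CLS] X [SEP] and [CLS] A [SEP] B [SEP] formats described below, a single sequence gains 2 special tokens and a pair gains 3:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
print(tokenizer.num_special_tokens_to_add())           # 2: [CLS] and [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))  # 3: [CLS] and two [SEP]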

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]#

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A UNIMO sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

Parameters:
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input IDs with the appropriate special tokens.

Return type:

List[int]
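
Examples

A minimal sketch; the concrete IDs of [CLS] and [SEP] depend on the loaded vocabulary, so they are read from the tokenizer rather than hard-coded:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
input_ids = tokenizer.build_inputs_with_special_tokens(token_ids)
# input_ids == [tokenizer.cls_token_id] + token_ids + [tokenizer.sep_token_id]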

merge_subword(tokens)[source]#

Converts the subwords in a sequence of tokens (list of strings) to whole words, removing ## in the process.

Parameters:

tokens (List[str]) -- A list of strings representing the tokens to be converted.

Returns:

The converted sequence of whole words.

Return type:

List[str]
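
Examples

A minimal sketch; the token split below is illustrative, and the actual subwords depend on the vocabulary:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
# '##eer' marks a WordPiece subword continuing the previous token
print(tokenizer.merge_subword(['he', 'was', 'a', 'puppet', '##eer']))
# ['he', 'was', 'a', 'puppeteer']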

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Builds an offset map from a pair of offset maps by concatenating and adding the offsets of special tokens.

A UNIMO offset_mapping has the following format:

  • single sequence: (0,0) X (0,0)

  • pair of sequences: (0,0) A (0,0) B (0,0)

Parameters:
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs. Defaults to None.

Returns:

List of char offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]
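
Examples

A minimal sketch with a hand-written offset mapping; the (0, 0) entries in the output stand for the added special tokens:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
# offsets of two tokens covering characters 0-2 and 3-6 of the source text
offset_mapping = [(0, 2), (3, 6)]
print(tokenizer.build_offset_mapping_with_special_tokens(offset_mapping))
# [(0, 0), (0, 2), (3, 6), (0, 0)]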

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#

Creates a mask from the two sequences passed, to be used in a sequence-pair classification task.

A UNIMO sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters:
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of token_type_ids according to the given sequence(s).

Return type:

List[int]
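
Examples

A minimal sketch with hand-written ID lists; the values only illustrate the 0/1 segment layout of the [CLS] A [SEP] B [SEP] format:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
token_ids_0 = [5, 6, 7]
token_ids_1 = [8, 9]
print(tokenizer.create_token_type_ids_from_sequences(token_ids_0, token_ids_1))
# [0, 0, 0, 0, 0, 1, 1, 1]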

gen_encode(source, title=None, target=None, max_seq_len=512, max_title_len=128, max_target_len=128, return_position_ids=True, return_token_type_ids=True, return_attention_mask=True, return_length=False, add_start_token_for_decoding=False, pad_to_max_seq_len=False, return_tensors=False, is_split_into_words=False, continuous_position=False)[source]#

Main method for encoding the source text for generation. It returns a dictionary containing the encoded sequence and other relevant information, which meets the input format requirements of the UNIMO-text model.

Parameters:
  • source (str) -- The source text for generation. It should be a string.

  • target (str, optional) -- The target text for generation. It should be set when training the model and should be None when running inference. Defaults to None.

  • title (str, optional) -- Additional information for some generation tasks, such as summarization. Defaults to None.

  • max_seq_len (int, optional) -- The maximum encoded sequence length. Defaults to 512.

  • max_target_len (int, optional) -- The maximum encoded sequence length of the input target. Defaults to 128.

  • max_title_len (int, optional) -- The maximum encoded sequence length of the input title. Defaults to 128.

  • return_position_ids (bool, optional) -- Whether to return the position_ids. Defaults to True.

  • return_token_type_ids (bool, optional) -- Whether to return the token_type_ids. Defaults to True.

  • return_attention_mask (bool, optional) -- Whether to return the attention_mask. Defaults to True.

  • return_length (bool, optional) -- Whether to return the length of the encoded sequence. Defaults to False.

  • add_start_token_for_decoding (bool, optional) -- Whether to add the special token "[CLS]" at the end of the sequence as the beginning of the target when running inference, to force the model to start generating the target sequence. Defaults to False.

  • pad_to_max_seq_len (bool, optional) -- Whether to pad the returned sequences to max_seq_len. Note that in this method, returned sequences are padded on the left. Defaults to False.

  • return_tensors (bool, optional) -- Whether to convert the returned sequences to Tensor. Defaults to False.

  • is_split_into_words (bool, optional) -- Whether or not the input text (source, target and title) has been pretokenized. Defaults to False.

  • continuous_position (bool, optional) -- Whether the position ids are continuous between the source ids and the target ids. Defaults to False.

Returns:

A dictionary containing the encoded sequence and other relevant information.

With the corresponding fields:

  • input_ids (list[int]|Tensor):

    A list of indices of input tokens to be fed to the UNIMO-text model. If return_tensors is True, it is a Tensor with shape [1, sequence_length] and data type 'int64'.

  • token_type_ids (list[int]|Tensor, optional):

    A list of segment token indices indicating whether each token belongs to the target sequence. If return_tensors is True, it is a Tensor with shape [1, sequence_length] and data type 'int64'. Returned when return_token_type_ids is set to True.

  • position_ids (list[int]|Tensor, optional):

    A list of position indices. If return_tensors is True, it is a Tensor with shape [1, sequence_length] and data type 'int64'. Returned when return_position_ids is set to True.

  • attention_mask (numpy.ndarray|Tensor, optional):

    A numpy.ndarray used to prevent attention to some unwanted positions, with shape [sequence_length, sequence_length] and data type 'float32'. If return_tensors is True, it is a Tensor with shape [1, 1, sequence_length, sequence_length] and data type 'float32'. Returned when return_attention_mask is set to True.

  • seq_len (int, optional):

    The actual length of input_ids, excluding the pad tokens. Returned when return_length is set to True.

Return type:

dict

Examples

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
inputs = tokenizer.gen_encode('He was a puppeteer')
#{'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
#'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
#'position_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
#'attention_mask': array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
#[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)}
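
A further sketch of training-time usage, assuming a toy target string; the concrete IDs depend on the vocabulary, so only the tensor shape is shown:

from paddlenlp.transformers import UNIMOTokenizer
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
# When training, pass both source and target; the target is encoded after the source
inputs = tokenizer.gen_encode('He was a puppeteer', target='He was an artist', return_tensors=True)
print(inputs['input_ids'].shape)  # [1, sequence_length]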