tokenizer

class T5Tokenizer(sentencepiece_model_file, do_lower_case=False, remove_space=True, keep_accents=True, eos_token='</s>', unk_token='<unk>', pad_token='<pad>', extra_ids=100, additional_special_tokens=[], **kwargs)[source]

Bases: paddlenlp.transformers.albert.tokenizer.AlbertEnglishTokenizer

Constructs a T5 tokenizer based on SentencePiece. This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to the superclass.

Parameters
  • sentencepiece_model_file (str) – The vocabulary file (ends with ‘.spm’) required to instantiate a SentencePiece tokenizer.

  • do_lower_case (bool) – Whether or not to lowercase the input when tokenizing. Defaults to False.

  • remove_space (bool) – Whether or not to remove spaces when tokenizing. Defaults to True.

  • keep_accents (bool) – Whether or not to keep accents when tokenizing. Defaults to True.

  • eos_token (str) – A special token representing the eos (end-of-sentence) token. Defaults to “</s>”.

  • unk_token (str) – A special token representing the unknown (out-of-vocabulary) token. Tokens that are not in the vocabulary are mapped to this token so that they can be converted to an ID. Defaults to “<unk>”.

  • pad_token (str) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “<pad>”.

  • extra_ids (int) – The number of extra sentinel tokens (“<extra_id_0>” through “<extra_id_{extra_ids-1}>”) appended to the vocabulary, used by T5 as span-corruption targets. Defaults to 100.

  • additional_special_tokens (list) – A list of additional special tokens to be added to the vocabulary. Defaults to [].

property vocab_size

Size of the base vocabulary (without the added tokens).

Type

int

build_inputs_with_special_tokens(token_ids_0, token_ids_1)[source]

Build model inputs from a sequence or a pair of sequences.

A T5 sequence has the following format:

  • single sequence: X </s>

  • pair of sequences: A </s> B </s>

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
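The eos-appending logic can be sketched in plain Python. The function below is a hypothetical re-implementation for illustration, not the PaddleNLP source; it assumes 1 is the id of “</s>”, its conventional position in the T5 vocabulary:

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None, eos_id=1):
    # Single sequence: X </s>
    if token_ids_1 is None:
        return token_ids_0 + [eos_id]
    # Pair of sequences: A </s> B </s>
    return token_ids_0 + [eos_id] + token_ids_1 + [eos_id]
```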

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Creates a token_type_ids mask from the given sequences. Since T5 does not make use of token type ids, the mask consists entirely of zeros.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of token_type_ids according to the given sequence(s).

Return type

List[int]
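Because T5 has only one segment id, the mask is just zeros covering each sequence plus its appended eos token. A hypothetical sketch (not the PaddleNLP source):

```python
def create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None):
    # Each sequence is followed by one </s>; T5 uses only segment id 0.
    if token_ids_1 is None:
        return [0] * (len(token_ids_0) + 1)
    return [0] * (len(token_ids_0) + 1 + len(token_ids_1) + 1)
```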

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Builds a mask indicating which positions in a token list correspond to special tokens. This method is called when adding special tokens using the tokenizer encode methods.

Parameters
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – List of ids of the second sequence.

  • already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns

The list of integers in the range [0, 1]:

1 for a special token, 0 for a sequence token.

Return type

List[int]
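The masking behaviour can be sketched as follows. This is a hypothetical re-implementation for illustration; the special_ids default of (0, 1) assumes the conventional T5 ids for “<pad>” and “</s>”:

```python
def get_special_tokens_mask(token_ids_0, token_ids_1=None,
                            already_has_special_tokens=False,
                            special_ids=(0, 1)):
    # If the list is already formatted, flag every special id in place.
    if already_has_special_tokens:
        return [1 if t in special_ids else 0 for t in token_ids_0]
    # Otherwise describe the mask the formatted sequence would have:
    # 0 for sequence tokens, 1 for each appended </s>.
    if token_ids_1 is None:
        return [0] * len(token_ids_0) + [1]
    return [0] * len(token_ids_0) + [1] + [0] * len(token_ids_1) + [1]
```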

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (strings) into a single string.
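SentencePiece pieces mark the start of a word with “▁” (U+2581), so detokenization amounts to concatenating the pieces and turning the marker back into a space. A minimal sketch of that idea (not the PaddleNLP source):

```python
def convert_tokens_to_string(tokens):
    # SentencePiece marks word boundaries with "▁" (U+2581);
    # joining the pieces and swapping the marker for a space restores the text.
    return "".join(tokens).replace("\u2581", " ").strip()
```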

decode(token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True)[source]

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters
  • token_ids (Union[List[int], Tensor]) – List of tokenized input ids.

  • skip_special_tokens (bool, optional) – Whether or not to remove special tokens in the decoding. Defaults to False.

  • clean_up_tokenization_spaces (bool, optional) – Whether or not to clean up the tokenization spaces. Defaults to True.

Returns

The decoded sentence.

Return type

str
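The two-step pipeline the docstring describes (ids to tokens, then tokens to a string) can be sketched over a toy vocabulary. Everything here is illustrative: the id_to_token mapping and the special_ids default of {0, 1} (assumed ids for “<pad>” and “</s>”) are stand-ins, not the real T5 vocabulary:

```python
def decode(token_ids, id_to_token, special_ids=frozenset({0, 1}),
           skip_special_tokens=False):
    # Optionally drop special tokens before mapping ids back to pieces.
    if skip_special_tokens:
        token_ids = [i for i in token_ids if i not in special_ids]
    tokens = [id_to_token[i] for i in token_ids]
    # Join SentencePiece pieces and restore word boundaries.
    return "".join(tokens).replace("\u2581", " ").strip()
```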

batch_decode(sequences, skip_special_tokens=False, clean_up_tokenization_spaces=True)[source]

Convert a list of lists of token ids into a list of strings by calling decode.

Parameters
  • sequences (Union[List[int], List[List[int]], Tensor]) – List of tokenized input ids.

  • skip_special_tokens (bool, optional) – Whether or not to remove special tokens in the decoding. Defaults to False.

  • clean_up_tokenization_spaces (bool, optional) – Whether or not to clean up the tokenization spaces. Defaults to True.

Returns

The list of decoded sentences.

Return type

List[str]

static clean_up_tokenization(out_string)[source]

Cleans up simple English tokenization artifacts, such as spaces before punctuation and abbreviated forms.

Parameters

out_string (str) – The text to clean up.

Returns

The cleaned-up string.

Return type

str
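A typical cleanup of this kind rewrites a fixed set of patterns: spaces before punctuation are collapsed and common English contractions are re-attached. The sketch below shows the idea with an assumed pattern list, not the exact set PaddleNLP uses:

```python
def clean_up_tokenization(out_string):
    # Collapse spaces before punctuation and re-attach common contractions.
    replacements = [
        (" .", "."), (" ?", "?"), (" !", "!"), (" ,", ","),
        (" ' ", "'"), (" n't", "n't"), (" 'm", "'m"),
        (" 's", "'s"), (" 've", "'ve"), (" 're", "'re"),
    ]
    for old, new in replacements:
        out_string = out_string.replace(old, new)
    return out_string
```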