tokenizer

class ReformerTokenizer(sentencepiece_model_file, do_lower_case=False, remove_space=True, keep_accents=False, eos_token='</s>', unk_token='<unk>', pad_token='<unk>', **kwargs)[source]

Bases: paddlenlp.transformers.albert.tokenizer.AlbertEnglishTokenizer

Constructs a Reformer tokenizer based on SentencePiece. This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass. A brief usage example follows the parameter list below.

Parameters
  • sentencepiece_model_file (str) – The vocabulary file (ends with ‘.spm’) required to instantiate a SentencePiece tokenizer.

  • do_lower_case (bool) – Whether or not to lowercase the input when tokenizing. Defaults to False.

  • remove_space (bool) – Whether or not to remove spaces when tokenizing. Defaults to True.

  • keep_accents (bool) – Whether or not to keep accents when tokenizing. Defaults to False.

  • eos_token (str) – A special token representing the eos (end-of-sentence) token. Defaults to “</s>”.

  • unk_token (str) – A special token representing the unknown (out-of-vocabulary) token. A token that is not in the vocabulary is set to unk_token in order to be converted to an ID. Defaults to “<unk>”.

  • pad_token (str) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “<unk>”.
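
Example (a minimal sketch; 'reformer-crime-and-punishment' is used here only as an illustrative checkpoint name; substitute a checkpoint registered in your PaddleNLP version):

    from paddlenlp.transformers import ReformerTokenizer

    # Load the SentencePiece vocabulary shipped with a pretrained checkpoint.
    tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')

    # Split raw text into SentencePiece sub-word tokens and map them to vocabulary IDs.
    tokens = tokenizer.tokenize("He was speaking to the judge.")
    ids = tokenizer.convert_tokens_to_ids(tokens)
    print(tokens)
    print(ids)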

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences; a short example follows the return type below.

A Reformer sequence has the following format:

  • single sequence: X </s>

  • pair of sequences: A </s> B </s>

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of input_ids with the appropriate special tokens.

Return type

List[int]
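
Example (a minimal sketch assuming the tokenizer instance from the class-level example above; the integer IDs are illustrative placeholders):

    # Single sequence: the IDs are returned in the documented single-sequence format.
    single = tokenizer.build_inputs_with_special_tokens([5, 6, 7])

    # Pair of sequences: both ID lists are combined following the documented
    # pair-of-sequences format.
    pair = tokenizer.build_inputs_with_special_tokens([5, 6, 7], [8, 9])
    print(single)
    print(pair)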

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a token_type_ids mask from a sequence or a pair of sequences; a short example follows the return type below.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of token_type_ids according to the given sequence(s).

Return type

List[int]
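
Example (a minimal sketch, continuing with the same tokenizer instance and illustrative IDs):

    # With a single sequence, only the first portion of the mask (all 0s) is returned.
    mask_single = tokenizer.create_token_type_ids_from_sequences([5, 6, 7])

    # With a pair of sequences, a token_type_id is produced for every position of the
    # combined input (the values used for the second portion depend on the model's
    # segment convention).
    mask_pair = tokenizer.create_token_type_ids_from_sequences([5, 6, 7], [8, 9])
    print(mask_single)
    print(mask_pair)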