tokenizer

class ReformerTokenizer(sentencepiece_model_file, do_lower_case=False, remove_space=True, keep_accents=False, eos_token='</s>', unk_token='<unk>', pad_token='<unk>', **kwargs)[source]

Bases: paddlenlp.transformers.albert.tokenizer.AlbertEnglishTokenizer

Constructs a Reformer tokenizer based on SentencePiece. This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to the superclass.

Parameters
  • sentencepiece_model_file (str) -- The vocabulary file (ends with '.spm') required to instantiate a SentencePiece tokenizer.

  • do_lower_case (bool) -- Whether or not to lowercase the input when tokenizing. Defaults to False.

  • remove_space (bool) -- Whether or not to remove spaces when tokenizing. Defaults to True.

  • keep_accents (bool) -- Whether or not to keep accents when tokenizing. Defaults to False.

  • eos_token (str) -- A special token representing the eos (end-of-sentence) token. Defaults to "</s>".

  • unk_token (str) -- A special token representing the unknown (out-of-vocabulary) token. A token that is not in the vocabulary is set to unk_token in order to be converted to an ID. Defaults to "<unk>".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "<unk>".
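The do_lower_case, remove_space, and keep_accents options describe text preprocessing performed before SentencePiece tokenization. The following is a minimal sketch of what those options imply, written in plain Python; the real logic lives in the AlbertEnglishTokenizer base class and may differ in detail:

```python
import unicodedata

def preprocess_text(inputs, do_lower_case=False, remove_space=True, keep_accents=False):
    """Sketch of the documented preprocessing options (not the library's code)."""
    if remove_space:
        # strip leading/trailing whitespace and collapse internal runs of spaces
        outputs = " ".join(inputs.strip().split())
    else:
        outputs = inputs
    if not keep_accents:
        # decompose accented characters and drop the combining marks
        outputs = unicodedata.normalize("NFKD", outputs)
        outputs = "".join(c for c in outputs if not unicodedata.combining(c))
    if do_lower_case:
        outputs = outputs.lower()
    return outputs
```

For example, with the defaults (remove_space=True, keep_accents=False), an input like "  Héllo   world " would come out as "Hello world".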

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences.

A Reformer sequence has the following format:

  • single sequence: `X`

  • pair of sequences: `A B`

Parameters
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
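Given the documented format (no special tokens inserted between or around the sequences), the behavior can be sketched as a simple concatenation. This is an illustrative stand-in, not the library's implementation:

```python
def build_inputs_with_special_tokens(token_ids_0, token_ids_1=None):
    """Sketch of the documented format: single sequence `X`, pair `A B`."""
    if token_ids_1 is None:
        return list(token_ids_0)
    # pair of sequences: the two ID lists are simply concatenated
    return list(token_ids_0) + list(token_ids_1)
```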

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a token type ID mask from the two sequences.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs.

Returns

List of token type IDs according to the given sequence(s).

Return type

List[int]
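The documentation above states that when token_ids_1 is None only the first portion of the mask (0s) is returned. A minimal sketch of that behavior follows, assuming (as is conventional for pair inputs) that the second sequence is marked with 1s; this assumption is not confirmed by the text above:

```python
def create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None):
    """Sketch only: first sequence -> 0s; second sequence -> 1s (assumed)."""
    if token_ids_1 is None:
        # only the first portion of the mask, as documented
        return [0] * len(token_ids_0)
    return [0] * len(token_ids_0) + [1] * len(token_ids_1)
```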