tokenizer#

class ReformerTokenizer(sentencepiece_model_file, do_lower_case=False, remove_space=True, keep_accents=True, eos_token='</s>', unk_token='<unk>', pad_token='<pad>', extra_ids=100, additional_special_tokens=[], sp_model_kwargs=None, **kwargs)[source]#

Bases: AlbertEnglishTokenizer

Constructs a Reformer tokenizer based on SentencePiece. This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

Parameters:
  • sentencepiece_model_file (str) -- The vocabulary file (ends with '.spm') required to instantiate a SentencePiece tokenizer.

  • do_lower_case (bool) -- Whether or not to lowercase the input when tokenizing. Defaults to False.

  • remove_space (bool) -- Whether or not to remove spaces when tokenizing. Defaults to True.

  • keep_accents (bool) -- Whether or not to keep accents when tokenizing. Defaults to True.

  • eos_token (str) -- A special token representing the eos (end-of-sentence) token. Defaults to "</s>".

  • unk_token (str) -- A special token representing the unknown (out-of-vocabulary) token. A token that is not in the vocabulary is mapped to unk_token in order to be converted to an ID. Defaults to "<unk>".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "<pad>".
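
The following is a minimal usage sketch. It assumes the class is importable from paddlenlp.transformers and is loaded through the inherited from_pretrained method; the checkpoint name below is only illustrative.

Example:

>>> from paddlenlp.transformers import ReformerTokenizer
>>> # The checkpoint name is illustrative; a local directory containing the
>>> # '.spm' vocabulary file can be passed instead.
>>> tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')
>>> tokens = tokenizer.tokenize("He was very happy.")
>>> ids = tokenizer.convert_tokens_to_ids(tokens)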

property vocab_size#

Size of the base vocabulary (without the added tokens).

Type:

int
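
Continuing the sketch above (the printed value depends on the loaded '.spm' file):

Example:

>>> print(tokenizer.vocab_size)
>>> # size of the SentencePiece vocabulary, excluding tokens added later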

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]#

Build model inputs from a sequence or a pair of sequences.

A Reformer sequence has the following format:

  • single sequence: X </s>

  • pair of sequences: A </s> B </s>

Parameters:
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input_ids with the appropriate special tokens.

Return type:

List[int]
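
An illustrative sketch, reusing the tokenizer from the example above; the literal IDs are placeholders and eos_id stands for the ID of '</s>':

Example:

>>> ids = tokenizer.build_inputs_with_special_tokens([5, 6, 7])
>>> # ids == [5, 6, 7, eos_id]                     single sequence: X </s>
>>> pair_ids = tokenizer.build_inputs_with_special_tokens([5, 6, 7], [8, 9])
>>> # pair_ids == [5, 6, 7, eos_id, 8, 9, eos_id]  pair of sequences: A </s> B </s>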

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Build an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs.

Returns:

List of char offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]
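
A sketch of the expected shape, assuming the appended special token contributes a (0, 0) offset, which is the usual convention:

Example:

>>> offsets = tokenizer.build_offset_mapping_with_special_tokens([(0, 2), (3, 7)])
>>> # offsets == [(0, 2), (3, 7), (0, 0)], the trailing (0, 0) standing for '</s>'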

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#

Create a token_type_ids mask from the two sequences.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters:
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs.

Returns:

List of token_type_ids according to the given sequence(s).

Return type:

List[int]
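
A sketch for the single-sequence case described above; every entry is 0:

Example:

>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences([5, 6, 7])
>>> # all entries are 0, e.g. [0, 0, 0, 0] if the trailing '</s>' position is counted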

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]#

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters:
  • token_ids_0 (List[int]) -- List of ids of the first sequence.

  • token_ids_1 (List[int], optional) -- List of ids of the second sequence.

  • already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns:

The list of integers in the range [0, 1]:

1 for a special token, 0 for a sequence token.

Return type:

List[int]
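
A sketch reusing the tokenizer from above, for the default single-sequence format X </s>:

Example:

>>> mask = tokenizer.get_special_tokens_mask([5, 6, 7])
>>> # mask == [0, 0, 0, 1]; the final 1 marks the appended '</s>'
>>> # With already_has_special_tokens=True, pass IDs that already contain the
>>> # special tokens and the mask is computed over them directly.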

convert_tokens_to_string(tokens)[source]#

Converts a sequence of tokens (strings) into a single string.
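
A sketch of a round trip from text to tokens and back, reusing the tokenizer from above:

Example:

>>> tokens = tokenizer.tokenize("He was very happy.")
>>> text = tokenizer.convert_tokens_to_string(tokens)
>>> # text approximately reproduces the input, modulo SentencePiece normalization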