tokenizer#

class BigBirdTokenizer(sentencepiece_model_file, do_lower_case=False, remove_space=True, keep_accents=True, eos_token='</s>', unk_token='<unk>', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', extra_ids=100, additional_special_tokens=[], sp_model_kwargs=None, encoding='utf8', **kwargs)[source]#

Bases: AlbertEnglishTokenizer

Constructs a BigBird tokenizer based on SentencePiece.

This tokenizer inherits from PretrainedTokenizer which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

Parameters:
  • sentencepiece_model_file (str) -- The vocabulary file (ends with '.spm') required to instantiate a SentencePiece tokenizer.

  • do_lower_case (bool) -- Whether or not to lowercase the input when tokenizing. Defaults to `False`.

  • unk_token (str) -- A special token representing the unknown (out-of-vocabulary) token. A token that is not in the vocabulary is set to unk_token in order to be converted to an ID. Defaults to "<unk>".

  • sep_token (str) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".

  • cls_token (str) -- A special token used for sequence classification. It is the last token of the sequence when built with special tokens. Defaults to "[CLS]".

  • mask_token (str) -- A special token representing a masked token. This is the token used in the masked language modeling task, for which the model tries to predict the original unmasked token. Defaults to "[MASK]".

Raises:

ValueError -- If file sentencepiece_model_file doesn't exist.
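A minimal usage sketch (not part of the original reference). It assumes the paddlenlp.transformers import path and the pretrained weight name 'bigbird-base-uncased'; any SentencePiece '.spm' file can also be passed directly via the sentencepiece_model_file argument.

>>> from paddlenlp.transformers import BigBirdTokenizer
>>> # 'bigbird-base-uncased' is an assumed weight name; substitute one available in your installation
>>> tokenizer = BigBirdTokenizer.from_pretrained('bigbird-base-uncased')
>>> tokenizer.tokenize("He was a puppeteer")   # SentencePiece sub-word pieces
>>> tokenizer.vocab_size                       # size of the base vocabulary (int)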

property vocab_size#

Size of the base vocabulary (without the added tokens).

Type:

int

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]#

Build model inputs from a sequence or a pair of sequences.

A BigBird sequence has the following format:

  • single sequence: X </s>

  • pair of sequences: A </s> B </s>

Parameters:
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input IDs with the appropriate special tokens.

Return type:

List[int]
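A hedged sketch of the format described above, using a tokenizer loaded as in the constructor example ('bigbird-base-uncased' is an assumed weight name). The returned IDs depend on the loaded vocabulary; the comments only restate the documented layout.

>>> from paddlenlp.transformers import BigBirdTokenizer
>>> tokenizer = BigBirdTokenizer.from_pretrained('bigbird-base-uncased')
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("nice to meet you"))
>>> tokenizer.build_inputs_with_special_tokens(ids_a)         # single sequence:   X </s>
>>> tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # pair of sequences: A </s> B </s>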

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Build an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs.

Returns:

List of char offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]
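An illustrative sketch with made-up character offsets, again using an assumed weight name. It assumes the usual convention that special-token positions are filled with (0, 0) placeholder offsets; verify against your installed version.

>>> from paddlenlp.transformers import BigBirdTokenizer
>>> tokenizer = BigBirdTokenizer.from_pretrained('bigbird-base-uncased')
>>> offsets_a = [(0, 5), (6, 11)]   # made-up char spans for the first text's tokens
>>> offsets_b = [(0, 4), (5, 9)]    # made-up char spans for the second text's tokens
>>> tokenizer.build_offset_mapping_with_special_tokens(offsets_a)
>>> tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)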

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#

Create a token type ID mask from the two sequences.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters:
  • token_ids_0 (List[int]) -- List of IDs.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs.

Returns:

List of token type IDs according to the given sequence(s).

Return type:

List[int]
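A short sketch with placeholder token IDs (the specific values do not matter for this method); the tokenizer is loaded as in the earlier examples with an assumed weight name.

>>> from paddlenlp.transformers import BigBirdTokenizer
>>> tokenizer = BigBirdTokenizer.from_pretrained('bigbird-base-uncased')
>>> ids_a = [31, 51, 99]   # placeholder IDs for the first sequence
>>> ids_b = [15, 5]        # placeholder IDs for the second sequence
>>> tokenizer.create_token_type_ids_from_sequences(ids_a)         # only 0s (single sequence)
>>> tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)  # first segment 0s, second segment distinguished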

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]#

Retrieve a mask identifying special tokens from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters:
  • token_ids_0 (List[int]) -- List of ids of the first sequence.

  • token_ids_1 (List[int], optional) -- List of ids of the second sequence.

  • already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns:

The list of integers in the range [0, 1]:

1 for a special token, 0 for a sequence token.

Return type:

List[int]
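A sketch showing both calling conventions, with placeholder IDs and a tokenizer loaded as above (assumed weight name).

>>> from paddlenlp.transformers import BigBirdTokenizer
>>> tokenizer = BigBirdTokenizer.from_pretrained('bigbird-base-uncased')
>>> ids_a = [31, 51, 99]                                     # placeholder IDs without special tokens
>>> tokenizer.get_special_tokens_mask(ids_a)                 # mask as if special tokens were being added
>>> seq = tokenizer.build_inputs_with_special_tokens(ids_a)  # sequence that really contains special tokens
>>> tokenizer.get_special_tokens_mask(seq, already_has_special_tokens=True)  # 1 = special token, 0 = sequence token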

convert_tokens_to_string(tokens)[source]#

Converts a sequence of tokens (strings) into a single string.
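A round-trip sketch (assumed weight name as above): the SentencePiece pieces produced by tokenize() are joined back into plain text, so detokenization approximately reconstructs the input.

>>> from paddlenlp.transformers import BigBirdTokenizer
>>> tokenizer = BigBirdTokenizer.from_pretrained('bigbird-base-uncased')
>>> pieces = tokenizer.tokenize("He was a puppeteer")   # SentencePiece sub-word pieces
>>> tokenizer.convert_tokens_to_string(pieces)          # roughly reconstructs the original sentence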