tokenizer_utils_base#

class AddedToken(content: str = <factory>, single_word: bool = False, lstrip: bool = False, rstrip: bool = False, normalized: bool = True)[source]#

Bases: object

AddedToken represents a token to be added to a Tokenizer. An AddedToken can have special options defining the way it should behave.

class FastEncoding[source]#

Bases: object

This is a dummy class reserved for the fast tokenizer.

class ExplicitEnum(value)[source]#

Bases: Enum

Enum with a more explicit error message for missing values.

class PaddingStrategy(value)[source]#

Bases: ExplicitEnum

Possible values for the padding argument in [PretrainedTokenizerBase.__call__]. Useful for tab-completion in an IDE.

class TensorType(value)[source]#

Bases: ExplicitEnum

Possible values for the return_tensors argument in [PretrainedTokenizerBase.__call__]. Useful for tab-completion in an IDE.

to_py_obj(obj)[source]#

Convert a Paddle tensor, a NumPy array or a Python list to a Python list.
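A minimal usage sketch (assuming paddle and numpy are installed; the import path mirrors this module's name):

```python
import numpy as np
import paddle

from paddlenlp.transformers.tokenizer_utils_base import to_py_obj

# Each call below returns a plain (possibly nested) Python list.
to_py_obj(paddle.to_tensor([1, 2, 3]))  # [1, 2, 3]
to_py_obj(np.array([[1, 2], [3, 4]]))   # [[1, 2], [3, 4]]
to_py_obj([1, 2, 3])                    # [1, 2, 3]
```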

class TruncationStrategy(value)[source]#

Bases: ExplicitEnum

Possible values for the truncation argument in [PretrainedTokenizerBase.__call__]. Useful for tab-completion in an IDE.

class CharSpan(start: int, end: int)[source]#

Bases: tuple

Character span in the original string.

Parameters:
  • start (int) -- Index of the first character in the original string.

  • end (int) -- Index of the character following the last character in the original string.

start: int#

Alias for field number 0

end: int#

Alias for field number 1

class TokenSpan(start: int, end: int)[source]#

Bases: tuple

Token span in an encoded string (list of tokens).

Parameters:
  • start (int) -- Index of the first token in the span.

  • end (int) -- Index of the token following the last token in the span.

start: int#

Alias for field number 0

end: int#

Alias for field number 1

class BatchEncoding(data: Dict[str, Any] | None = None, encoding: FastEncoding | Sequence[FastEncoding] | None = None, tensor_type: str | None = None, prepend_batch_axis: bool = False, n_sequences: int | None = None)[source]#

Bases: UserDict

Holds the output of the [PretrainedTokenizerBase.__call__], [PretrainedTokenizerBase.encode_plus] and [PretrainedTokenizerBase.batch_encode_plus] methods (tokens, attention_masks, etc).

This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.

Parameters:
  • data (dict) -- Dictionary of lists/arrays/tensors returned by the __call__/encode/batch_encode methods ('input_ids', 'attention_mask', etc.).

  • tensor_type (Union[None, str, TensorType], optional) -- You can give a tensor_type here to convert the lists of integers into Paddle/NumPy tensors at initialization.

  • prepend_batch_axis (bool, optional, defaults to False) -- Whether or not to add a batch axis when converting to tensors (see tensor_type above).
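A BatchEncoding can be used exactly like a dict. A minimal sketch (assuming the bert-base-uncased vocabulary can be downloaded):

```python
from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("Hello world!")  # dict-like encoded inputs

print(list(encoded.keys()))  # e.g. ['input_ids', 'token_type_ids']
print(encoded["input_ids"])  # token ids, including the special tokens
```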

property n_sequences: int | None#

The number of sequences used to generate each sample from the batch encoded in this [BatchEncoding]. Currently can be one of None (unknown), 1 (a single sentence) or 2 (a pair of sentences)

Type:

Optional[int]

property is_fast: bool#

Indicate whether this [BatchEncoding] was generated from the result of a [PretrainedFastTokenizer] or not.

Type:

bool

keys() a set-like object providing a view on D's keys[source]#
values() an object providing a view on D's values[source]#
items() a set-like object providing a view on D's items[source]#
property encodings: List[FastEncoding] | None#

The list of all encodings from the tokenization process. Returns None if the input was tokenized through a Python (i.e., not fast) tokenizer.

Type:

Optional[List[FastEncoding]]

tokens(batch_index: int = 0) List[str][source]#

Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).

Parameters:

batch_index (int, optional, defaults to 0) -- The index to access in the batch.

Returns:

The list of tokens at that index.

Return type:

List[str]

sequence_ids(batch_index: int = 0) List[int | None][source]#

Return a list mapping the tokens to the id of their original sentences:

  • None for special tokens added around or between sequences,

  • 0 for tokens corresponding to words in the first sequence,

  • 1 for tokens corresponding to words in the second sequence when a pair of sequences was jointly encoded.

Parameters:

batch_index (int, optional, defaults to 0) -- The index to access in the batch.

Returns:

A list indicating the sequence id corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding sequence.

Return type:

List[Optional[int]]
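A sketch of how these lookups read, assuming fast_tok is a fast tokenizer instance (slow Python tokenizers expose no encodings, so these methods are unavailable there); the printed values are illustrative:

```python
# `fast_tok` is assumed to be a PretrainedFastTokenizer instance.
enc = fast_tok("How are you?", "Fine, thanks.")

print(enc.tokens())        # subword tokens, including special tokens
print(enc.sequence_ids())  # e.g. [None, 0, 0, 0, 0, None, 1, 1, 1, 1, None]
```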

words(batch_index: int = 0) List[int | None][source]#

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.

Parameters:

batch_index (int, optional, defaults to 0) -- The index to access in the batch.

Returns:

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return type:

List[Optional[int]]

word_ids(batch_index: int = 0) List[int | None][source]#

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.

Parameters:

batch_index (int, optional, defaults to 0) -- The index to access in the batch.

Returns:

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return type:

List[Optional[int]]
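For instance (again assuming a fast tokenizer instance fast_tok; the exact subwords depend on the vocabulary):

```python
enc = fast_tok("unbelievable results")

print(enc.tokens())    # e.g. ['[CLS]', 'un', '##believ', '##able', 'results', '[SEP]']
print(enc.word_ids())  # e.g. [None, 0, 0, 0, 1, None]
```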

token_to_sequence(batch_or_token_index: int, token_index: int | None = None) int[source]#

Get the index of the sequence represented by the given token. In the general use case, this method returns 0 for a single sequence or the first sequence of a pair, and 1 for the second sequence of a pair.

Can be called as:

  • self.token_to_sequence(token_index) if batch size is 1

  • self.token_to_sequence(batch_index, token_index) if batch size is greater than 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters:
  • batch_or_token_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) -- If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns:

Index of the sequence containing the given token.

Return type:

int

token_to_word(batch_or_token_index: int, token_index: int | None = None) int[source]#

Get the index of the word corresponding to (i.e., comprising) an encoded token in a sequence of the batch.

Can be called as:

  • self.token_to_word(token_index) if batch size is 1

  • self.token_to_word(batch_index, token_index) if batch size is greater than 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters:
  • batch_or_token_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) -- If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns:

Index of the word in the input sequence.

Return type:

int
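A sketch with pre-tokenized input (assuming a fast tokenizer instance fast_tok):

```python
enc = fast_tok(["My", "friends"], is_split_into_words=True)

# Map a token position back to the user-provided word it came from
# (position 0 is typically a special token such as [CLS]).
print(enc.token_to_word(2))  # e.g. 1, i.e. the word "friends"
```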

word_to_tokens(batch_or_word_index: int, word_index: int | None = None, sequence_index: int = 0) TokenSpan | None[source]#

Get the encoded token span corresponding to a word in a sequence of the batch.

Token spans are returned as a [TokenSpan] with:

  • start -- Index of the first token.

  • end -- Index of the token following the last token.

Can be called as:

  • self.word_to_tokens(word_index, sequence_index=0) if batch size is 1

  • self.word_to_tokens(batch_index, word_index, sequence_index=0) if batch size is greater than or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters:
  • batch_or_word_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.

  • word_index (int, optional) -- If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.

  • sequence_index (int, optional, defaults to 0) -- If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.

Returns:

Optional [TokenSpan] -- Span of tokens in the encoded sequence. Returns None if no tokens correspond to the word.
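For example (same fast_tok assumption; end indices are exclusive):

```python
enc = fast_tok(["My", "friends", "are", "cool"], is_split_into_words=True)

span = enc.word_to_tokens(1)  # TokenSpan for the word "friends"
print(span.start, span.end)
print(enc.tokens()[span.start:span.end])  # the subword tokens of that word
```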

token_to_chars(batch_or_token_index: int, token_index: int | None = None) CharSpan[source]#

Get the character span corresponding to an encoded token in a sequence of the batch.

Character spans are returned as a [CharSpan] with:

  • start -- Index of the first character in the original string associated to the token.

  • end -- Index of the character following the last character in the original string associated to the token.

Can be called as:

  • self.token_to_chars(token_index) if batch size is 1

  • self.token_to_chars(batch_index, token_index) if batch size is greater than or equal to 1

Parameters:
  • batch_or_token_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) -- If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns:

Span of characters in the original string.

Return type:

[CharSpan]

char_to_token(batch_or_char_index: int, char_index: int | None = None, sequence_index: int = 0) int[source]#

Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.

Can be called as:

  • self.char_to_token(char_index) if batch size is 1

  • self.char_to_token(batch_index, char_index) if batch size is greater than or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters:
  • batch_or_char_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.

  • char_index (int, optional) -- If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.

  • sequence_index (int, optional, defaults to 0) -- If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.

Returns:

Index of the token.

Return type:

int
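Round-tripping between characters and tokens (same fast_tok assumption):

```python
text = "Hello world"
enc = fast_tok(text)

tok_idx = enc.char_to_token(6)      # token covering the character "w"
span = enc.token_to_chars(tok_idx)  # CharSpan back into the original text
print(text[span.start:span.end])    # e.g. "world"
```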

word_to_chars(batch_or_word_index: int, word_index: int | None = None, sequence_index: int = 0) CharSpan[source]#

Get the character span in the original string corresponding to a given word in a sequence of the batch.

Character spans are returned as a CharSpan NamedTuple with:

  • start: index of the first character in the original string

  • end: index of the character following the last character in the original string

Can be called as:

  • self.word_to_chars(word_index) if batch size is 1

  • self.word_to_chars(batch_index, word_index) if batch size is greater than or equal to 1

Parameters:
  • batch_or_word_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.

  • word_index (int, optional) -- If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.

  • sequence_index (int, optional, defaults to 0) -- If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.

Returns:

Span(s) of the associated character or characters in the string. CharSpan is a NamedTuple with:

  • start: index of the first character associated to the token in the original string

  • end: index of the character following the last character associated to the token in the original string

Return type:

CharSpan or List[CharSpan]

char_to_word(batch_or_char_index: int, char_index: int | None = None, sequence_index: int = 0) int[source]#

Get the word in the original string corresponding to a character in the original string of a sequence of the batch.

Can be called as:

  • self.char_to_word(char_index) if batch size is 1

  • self.char_to_word(batch_index, char_index) if batch size is greater than 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters:
  • batch_or_char_index (int) -- Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.

  • char_index (int, optional) -- If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.

  • sequence_index (int, optional, defaults to 0) -- If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.

Returns:

Index or indices of the associated word(s) in the original string.

Return type:

int or List[int]

convert_to_tensors(tensor_type: str | TensorType | None = None, prepend_batch_axis: bool = False)[source]#

Convert the inner content to tensors.

Parameters:
  • tensor_type (str or [TensorType], optional) -- The type of tensors to use. If str, should be one of the values of the enum [TensorType]. If None, no modification is done.

  • prepend_batch_axis (bool, optional, defaults to False) -- Whether or not to add the batch dimension during the conversion.
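For example (a sketch reusing the tokenizer loaded in the earlier BatchEncoding example):

```python
enc = tokenizer("Hello world!")
enc.convert_to_tensors(tensor_type="pd", prepend_batch_axis=True)
print(type(enc["input_ids"]))  # paddle.Tensor, with a leading batch axis
```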

class SpecialTokensMixin(verbose=True, **kwargs)[source]#

Bases: object

A mixin derived by [PretrainedTokenizer] to handle specific behaviors related to special tokens. In particular, this class holds the attributes which can be used to directly access these special tokens in a model-independent manner, and allows setting and updating the special tokens.

Parameters:
  • bos_token (str or AddedToken, optional) -- A special token representing the beginning of a sentence.

  • eos_token (str or AddedToken, optional) -- A special token representing the end of a sentence.

  • unk_token (str or AddedToken, optional) -- A special token representing an out-of-vocabulary token.

  • sep_token (str or AddedToken, optional) -- A special token separating two different sentences in the same input (used by BERT for instance).

  • pad_token (str or AddedToken, optional) -- A special token used to make arrays of tokens the same size for batching purposes. It will then be ignored by attention mechanisms or loss computation.

  • cls_token (str or AddedToken, optional) -- A special token representing the class of the input (used by BERT for instance).

  • mask_token (str or AddedToken, optional) -- A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT).

  • additional_special_tokens (tuple or list of str or AddedToken, optional) -- A tuple or a list of additional special tokens.

sanitize_special_tokens() int[source]#

Make sure that all the special tokens attributes of the tokenizer (tokenizer.mask_token, tokenizer.cls_token, etc.) are in the vocabulary.

Add the missing ones to the vocabulary if needed.

Returns:

The number of tokens added to the vocabulary during the operation.

Return type:

int

add_special_tokens(special_tokens_dict: Dict[str, str | AddedToken]) int[source]#

Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).

Note: when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the [resize_token_embeddings] method.

Using add_special_tokens will ensure your special tokens can be used in several ways:

  • Special tokens are carefully handled by the tokenizer (they are never split).

  • You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.

When possible, special tokens are already registered for provided pretrained models (for instance, [BertTokenizer]'s cls_token is already registered to be '[CLS]' and XLM's is registered to be '</s>').

Parameters:

special_tokens_dict (dictionary str to str or AddedToken) --

Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens].

Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).

Returns:

Number of tokens added to the vocabulary.

Return type:

int

Examples:

```python
# Let's see how to add a new classification token to GPT-2
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == "<CLS>"
```

add_tokens(new_tokens: str | AddedToken | List[str | AddedToken], special_tokens: bool = False) int[source]#

Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary.

Note: when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the [resize_token_embeddings] method.

Parameters:
  • new_tokens (str, AddedToken or a list of str or AddedToken) -- Tokens are only added if they are not already in the vocabulary. AddedToken wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.

  • special_tokens (bool, optional, defaults to False) -- Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like [CLS] or [MASK] are usually not lower-cased, for instance).

Returns:

Number of tokens added to the vocabulary.

Return type:

int

Examples:

```python
# Let's see how to increase the vocabulary of the BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```

property bos_token: str#

Beginning of sentence token. Log an error if used while not having been set.

Type:

str

property eos_token: str#

End of sentence token. Log an error if used while not having been set.

Type:

str

property unk_token: str#

Unknown token. Log an error if used while not having been set.

Type:

str

property sep_token: str#

Separation token, to separate context and query in an input sequence. Log an error if used while not having been set.

Type:

str

property pad_token: str#

Padding token. Log an error if used while not having been set.

Type:

str

property cls_token: str#

Classification token, to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.

Type:

str

property mask_token: str#

Mask token, to use when training a model with masked-language modeling. Log an error if used while not having been set.

Type:

str

property additional_special_tokens: List[str]#

All the additional special tokens you may want to use. Log an error if used while not having been set.

Type:

List[str]

property pad_token_type_id: int#

Id of the padding token type in the vocabulary.

Type:

int

property bos_token_id: int | None#

Id of the beginning of sentence token in the vocabulary. Returns None if the token has not been set.

Type:

Optional[int]

property eos_token_id: int | None#

Id of the end of sentence token in the vocabulary. Returns None if the token has not been set.

Type:

Optional[int]

property unk_token_id: int | None#

Id of the unknown token in the vocabulary. Returns None if the token has not been set.

Type:

Optional[int]

property sep_token_id: int | None#

Id of the separation token in the vocabulary, to separate context and query in an input sequence. Returns None if the token has not been set.

Type:

Optional[int]

property pad_token_id: int | None#

Id of the padding token in the vocabulary. Returns None if the token has not been set.

Type:

Optional[int]

property cls_token_id: int | None#

Id of the classification token in the vocabulary, to extract a summary of an input sequence leveraging self-attention along the full depth of the model.

Returns None if the token has not been set.

Type:

Optional[int]

property mask_token_id: int | None#

Id of the mask token in the vocabulary, used when training a model with masked-language modeling. Returns None if the token has not been set.

Type:

Optional[int]

property additional_special_tokens_ids: List[int]#

Ids of all the additional special tokens in the vocabulary. Log an error if used while not having been set.

Type:

List[int]

property special_tokens_map: Dict[str, str | List[str]]#

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).

Convert potential tokens of AddedToken type to string.

Type:

Dict[str, Union[str, List[str]]]

property special_tokens_map_extended: Dict[str, str | AddedToken | List[str | AddedToken]]#

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).

Don't convert tokens of AddedToken type to string so they can be used to control more finely how special tokens are tokenized.

Type:

Dict[str, Union[str, AddedToken, List[Union[str, AddedToken]]]]

property all_special_tokens: List[str]#

All the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.

Convert tokens of AddedToken type to string.

Type:

List[str]

property all_special_tokens_extended: List[str | AddedToken]#

All the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.

Don't convert tokens of AddedToken type to string so they can be used to control more finely how special tokens are tokenized.

Type:

List[Union[str, AddedToken]]

property all_special_ids: List[int]#

List the ids of the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.

Type:

List[int]
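A quick sketch of these accessors (assuming the BERT tokenizer from the earlier example; actual tokens and ids depend on the model):

```python
print(tokenizer.cls_token, tokenizer.sep_token)  # e.g. [CLS] [SEP]
print(tokenizer.pad_token_id)                    # e.g. 0
print(tokenizer.special_tokens_map)              # attribute name -> token string
print(tokenizer.all_special_ids)                 # ids of all special tokens
```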

class PretrainedTokenizerBase(**kwargs)[source]#

Bases: SpecialTokensMixin

Base class for [PretrainedTokenizer].

Class attributes (overridden by derived classes)

  • resource_files_names (Dict[str, str]) -- A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

  • pretrained_resource_files_map (Dict[str, Dict[str, str]]) -- A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level being the short-cut-names of the pretrained models with, as associated values, the url to the associated pretrained vocabulary file.

  • max_model_input_sizes (Dict[str, Optional[int]]) -- A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

  • pretrained_init_configuration (Dict[str, Dict[str, Any]]) -- A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the [from_pretrained] method.

  • model_input_names (List[str]) -- A list of inputs expected in the forward pass of the model.

  • padding_side (str) -- The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.

  • truncation_side (str) -- The default value for the side on which the model should have truncation applied. Should be 'right' or 'left'.

Parameters:
  • model_max_length (int, optional) -- The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with [from_pretrained], this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).

  • padding_side (str, optional) -- The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.

  • truncation_side (str, optional) -- The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.

  • model_input_names (List[string], optional) -- The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.

  • bos_token (str or AddedToken, optional) -- A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.

  • eos_token (str or AddedToken, optional) -- A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.

  • unk_token (str or AddedToken, optional) -- A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.

  • sep_token (str or AddedToken, optional) -- A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.

  • pad_token (str or AddedToken, optional) -- A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.

  • cls_token (str or AddedToken, optional) -- A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.

  • mask_token (str or AddedToken, optional) -- A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.

  • additional_special_tokens (tuple or list of str or AddedToken, optional) -- A tuple or a list of additional special tokens. Add them here to ensure they won't be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.

property max_len_single_sentence: int#

The maximum length of a sentence that can be fed to the model.

Type:

int

property max_len_sentences_pair: int#

The maximum combined length of a pair of sentences that can be fed to the model.

Type:

int

get_vocab() Dict[str, int][source]#

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]
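The equivalence noted above, as a short sketch (reusing the tokenizer from the earlier example):

```python
vocab = tokenizer.get_vocab()
token = tokenizer.tokenize("hello")[0]
assert vocab[token] == tokenizer.convert_tokens_to_ids(token)
```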

classmethod from_pretrained(pretrained_model_name_or_path, *args, **kwargs)[source]#

Creates an instance of PretrainedTokenizer. Related resources are loaded by specifying the name of a built-in pretrained model, the name of a community-contributed pretrained model, or a local file directory path.

Parameters:
  • pretrained_model_name_or_path (str) --

    Name of pretrained model or dir path to load from. The string can be:

    • Name of built-in pretrained model

    • Name of a community-contributed pretrained model.

    • Local directory path which contains tokenizer related resources and tokenizer config file ("tokenizer_config.json").

  • from_hf_hub (bool, optional) -- Whether to load from the Huggingface Hub.

  • subfolder (str, optional) -- Only works when loading from the Huggingface Hub.

  • *args (tuple) -- Positional arguments for the tokenizer __init__ method. If provided, use these as positional argument values for tokenizer initialization.

  • **kwargs (dict) -- Keyword arguments for the tokenizer __init__ method. If provided, use these to update pre-defined keyword argument values for tokenizer initialization.

Returns:

An instance of PretrainedTokenizer.

Return type:

PretrainedTokenizer

Example

from paddlenlp.transformers import BertTokenizer

# Name of built-in pretrained model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Name of community-contributed pretrained model
tokenizer = BertTokenizer.from_pretrained('yingyibiao/bert-base-uncased-sst-2-finetuned')

# Load from local directory path
tokenizer = BertTokenizer.from_pretrained('./my_bert/')

save_pretrained(save_directory, filename_prefix: str | None = None, **kwargs)[source]#

Save the tokenizer configuration and related resources to files under save_directory. The tokenizer configuration is saved into the file indicated by tokenizer_config_file (i.e., tokenizer_config.json), and resources are saved into the files indicated by resource_files_names by calling self.save_resources(save_directory).

The save_directory can then be passed to from_pretrained as the pretrained_model_name_or_path argument to re-load the tokenizer.

Parameters:
  • save_directory (str) -- Directory to save files into.

  • filename_prefix (str, optional) -- A prefix to add to the names of the files saved by the tokenizer.

Example

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.save_pretrained('trained_model')
# reload from save_directory
tokenizer = BertTokenizer.from_pretrained('trained_model')

save_resources(save_directory)[source]#

Save tokenizer-related resources to the files indicated by resource_files_names under save_directory by copying them directly. Override this method if necessary.

Parameters:

save_directory (str) -- Directory to save files into.

save_to_hf_hub(repo_id: str, private: bool | None = None, subfolder: str | None = None, commit_message: str | None = None, revision: str | None = None, create_pr: bool = False)[source]#

Uploads all elements of this tokenizer to a new HuggingFace Hub repository.

Parameters:
  • repo_id (str) -- Repository name for your model/tokenizer in the Hub.

  • private (bool, optional) -- Whether the model/tokenizer is set to private.

  • subfolder (str, optional) -- Push to a subfolder of the repo instead of the root.

  • commit_message (str, optional) -- Defaults to f"Upload {path_in_repo} with huggingface_hub".

  • revision (str, optional) --

  • create_pr (bool, optional) -- If revision is not set, the PR is opened against the "main" branch. If revision is set and is a branch, the PR is opened against this branch. If revision is set and is not a branch name (for example, a commit oid), a RevisionNotFoundError is returned by the server.

Returns: The url of the commit of your model in the given repository.

save_to_aistudio(repo_id, private=True, license='Apache License 2.0', exist_ok=True, subfolder=None, **kwargs)[source]#

Uploads all elements of this model to a new AiStudio Hub repository.

Parameters:
  • repo_id (str) -- Repository name for your model/tokenizer in the Hub.

  • token (str) -- Your token for the Hub.

  • private (bool, optional) -- Whether the model/tokenizer is set to private. Defaults to True.

  • license (str) -- The license of your model/tokenizer. Defaults to "Apache License 2.0".

  • exist_ok (bool, optional) -- Whether to override an existing repository. Defaults to True.

  • subfolder (str, optional) -- Push to a subfolder of the repo instead of the root.

tokenize(text: str, pair: str | None = None, add_special_tokens: bool = False, **kwargs) List[str][source]#

Converts a string into a sequence of tokens, replacing unknown tokens with the unk_token.

Parameters:
  • text (str) -- The sequence to be encoded.

  • pair (str, optional) -- A second sequence to be encoded with the first.

  • add_special_tokens (bool, optional, defaults to False) -- Whether or not to add the special tokens associated with the corresponding model.

  • kwargs (additional keyword arguments, optional) -- Will be passed to the underlying model specific encode method. See details in [__call__]

Returns:

The list of tokens.

Return type:

List[str]
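For example (reusing the tokenizer from the earlier example; the exact subwords depend on the vocabulary):

```python
print(tokenizer.tokenize("Hello world!"))
# e.g. ['hello', 'world', '!'] for an uncased BERT vocabulary
```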

__call__(text: str | List[str] | List[List[str]], text_pair: str | List[str] | List[List[str]] | None = None, max_length: int | None = None, stride: int = 0, is_split_into_words: bool | str = False, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = False, return_position_ids: bool | None = None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_length: bool = False, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_dict: bool = True, return_offsets_mapping: bool = False, add_special_tokens: bool = True, pad_to_multiple_of: int | None = None, return_tensors: str | TensorType | None = None, verbose: bool = True, **kwargs)[source]#

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports sequence or sequence pair as input, and batch input is allowed. self.encode() or self.batch_encode() would be called separately for single or batch input depending on input format and is_split_into_words argument.

Parameters:
  • text (str, List[str] or List[List[str]]) -- The sequence or batch of sequences to be processed. One sequence is a string or a list of strings depending on whether it has been pretokenized. If each sequence is provided as a list of strings (pretokenized), you must set is_split_into_words as True to disambiguate with a batch of sequences.

  • text_pair (str, List[str] or List[List[str]], optional) -- Same as text argument, while it represents for the latter sequence of the sequence pair.

  • max_length (int, optional) -- If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those overflowing tokens will be added to the returned dictionary when return_overflowing_tokens is True. Defaults to None.

  • stride (int, optional) -- Only available for batch input of sequence pairs, and mainly for question answering usage. For QA, text represents questions and text_pair represents contexts. If stride is set to a positive number, the context will be split into multiple spans, where stride defines the number of (tokenized) tokens to skip from the start of one span to get the next span, thus producing a bigger batch than the inputs so as to include all spans. Moreover, 'overflow_to_sample' and 'offset_mapping', preserving the original example and position information, will be added to the returned dictionary. Defaults to 0.

  • is_split_into_words (Union[bool, str], optional) -- Whether the text has been pretokenized. True means the text consists of words that should still be tokenized; 'token' means the text consists of tokens that have already been tokenized and should not be tokenized again. Defaults to False.

  • padding (bool, str or [PaddingStrategy], optional) --

    Activates and controls padding. Accepts the following values:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

    Defaults to False.

  • truncation (bool, str or [TruncationStrategy], optional) --

    Activates and controls truncation. Accepts the following values:

    • True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

    Defaults to False.

  • return_position_ids (bool, optional) -- Whether to include tokens position ids in the returned dictionary. Defaults to False.

  • return_token_type_ids (bool, optional) -- Whether to include token type ids in the returned dictionary. Defaults to True.

  • return_attention_mask (bool, optional) -- Whether to include the attention mask in the returned dictionary. Defaults to False.

  • return_length (bool, optional) -- Whether to include the length of each encoded inputs in the returned dictionary. Defaults to False.

  • return_overflowing_tokens (bool, optional) -- Whether to include overflowing token information in the returned dictionary. Defaults to False.

  • return_special_tokens_mask (bool, optional) -- Whether to include special tokens mask information in the returned dictionary. Defaults to False.

  • return_dict (bool, optional) --

    Decide the format for returned encoded batch inputs. Only works when input is a batch of data.

    - If True, encoded inputs would be a dictionary like:
        {'input_ids': [[1, 4444, 4385, 1545, 6712],[1, 4444, 4385]],
        'token_type_ids': [[0, 0, 0, 0, 0], [0, 0, 0]]}
    - If False, encoded inputs would be a list like:
        [{'input_ids': [1, 4444, 4385, 1545, 6712],
          'token_type_ids': [0, 0, 0, 0, 0]},
         {'input_ids': [1, 4444, 4385], 'token_type_ids': [0, 0, 0]}]
    

    Defaults to True.

  • return_offsets_mapping (bool, optional) -- Whether to include, in the returned dictionary, the list of pairs preserving the index of the start and end chars in the original input for each token. Automatically set to True when stride > 0. Defaults to False.

  • add_special_tokens (bool, optional) -- Whether to add the special tokens associated with the corresponding model to the encoded inputs. Defaults to True

  • pad_to_multiple_of (int, optional) -- If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). Defaults to None.

  • return_tensors (str or [TensorType], optional) --

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • 'pd': Return Paddle paddle.Tensor objects.

    • 'np': Return Numpy np.ndarray objects.

    Defaults to None.

  • verbose (bool, optional) -- Whether or not to print more information and warnings. Defaults to True.

Returns:

The dict has the following optional items:

  • input_ids (list[int] or list[list[int]]): List of token ids to be fed to a model.

  • position_ids (list[int] or list[list[int]], optional): List of token position ids to be fed to a model. Included when return_position_ids is True

  • token_type_ids (list[int] or list[list[int]], optional): List of token type ids to be fed to a model. Included when return_token_type_ids is True.

  • attention_mask (list[int] or list[list[int]], optional): List of integers valued 0 or 1, where 0 specifies paddings and should not be attended to by the model. Included when return_attention_mask is True.

  • seq_len (int or list[int], optional): The input_ids length. Included when return_length is True.

  • overflowing_tokens (list[int] or list[list[int]], optional): List of overflowing tokens. Included when max_length is specified and return_overflowing_tokens is True.

  • num_truncated_tokens (int or list[int], optional): The number of overflowing tokens. Included when max_length is specified and return_overflowing_tokens is True.

  • special_tokens_mask (list[int] or list[list[int]], optional): List of integers valued 0 or 1, with 0 specifying special added tokens and 1 specifying sequence tokens. Included when return_special_tokens_mask is True.

  • offset_mapping (list[int], optional): List of pairs preserving the index of the start and end chars in the original input for each token. For a special token, the index pair is (0, 0). Included when return_overflowing_tokens is True or stride > 0.

  • overflow_to_sample (int or list[int], optional): Index of example from which this feature is generated. Included when stride works.

Return type:

dict or list[dict] (for batch input)
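An end-to-end sketch for a single input and a padded batch (reusing the tokenizer from the earlier example):

```python
single = tokenizer("How are you?", return_attention_mask=True)
print(single["input_ids"])

batch = tokenizer(
    ["How are you?", "Fine, thanks."],
    max_length=16,
    padding=True,
    truncation=True,
)
print(batch["input_ids"])  # two id lists padded to the same length
```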

encode(text, text_pair=None, add_special_tokens=True, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = False, max_length: int | None = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: int | None = None, return_tensors: str | TensorType | None = None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, return_position_ids=None, **kwargs) BatchEncoding[source]#

Tokenize and prepare for the model a sequence or a pair of sequences.

Parameters:
  • text (str, List[str] or List[int]) -- The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

  • text_pair (str, List[str] or List[int], optional) -- Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
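For example (a sketch; per the signature above, the return value is a dict-like BatchEncoding):

```python
enc = tokenizer.encode("How are you?")
print(enc["input_ids"])  # token ids, including the special tokens
```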

encode_plus(text: str | List[str] | List[int], text_pair: str | List[str] | List[int] | None = None, add_special_tokens: bool = True, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy | None = None, max_length: int | None = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: int | None = None, return_tensors: str | TensorType | None = None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) BatchEncoding[source]#

Tokenize and prepare for the model a sequence or a pair of sequences.

Note: this method is deprecated; __call__ should be used instead.

Parameters:
  • text (str, List[str] or List[int] (the latter only for not-fast tokenizers)) -- The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

  • text_pair (str, List[str] or List[int], optional) -- Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

batch_encode(batch_text_or_text_pairs: List[str] | List[Tuple[str, str]] | List[List[str]] | List[Tuple[List[str], List[str]]] | List[List[int]] | List[Tuple[List[int], List[int]]], max_length=None, stride: int = 0, is_split_into_words: bool = False, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = False, return_position_ids=None, return_token_type_ids=None, return_attention_mask=None, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False, return_dict=True, return_offsets_mapping=False, add_special_tokens=True, pad_to_multiple_of: int | None = None, return_tensors: str | TensorType | None = None, verbose: bool = True, **kwargs) BatchEncoding[source]#

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports batch inputs of sequences or sequence pairs.

Parameters:

batch_text_or_text_pairs (list) -- The element of the list can be a sequence or a sequence pair, and the sequence is a string or a list of strings depending on whether it has been pretokenized. If each sequence is provided as a list of strings (pretokenized), you must set is_split_into_words as True to disambiguate it from a sequence pair.

Returns:

A dict (or a list of dicts) with the same optional items as described for __call__.

Return type:

dict or list[dict]

pad(encoded_inputs: BatchEncoding | List[BatchEncoding] | Dict[str, List[int]] | Dict[str, List[List[int]]] | List[Dict[str, List[int]]], padding: bool | str | PaddingStrategy = True, max_length: int | None = None, pad_to_multiple_of: int | None = None, return_attention_mask: bool | None = None, return_tensors: str | TensorType | None = None, verbose: bool = True) BatchEncoding[source]#

Pad a single encoded input or a batch of encoded inputs up to a predefined length or to the max sequence length in the batch.

The padding side (left/right) and padding token ids are defined at the tokenizer level (with self.padding_side, self.pad_token_id and self.pad_token_type_id).

Note: if the encoded_inputs passed are a dictionary of numpy arrays or Paddle tensors, the result will use the same type unless you provide a different tensor type with return_tensors.

Parameters:
  • encoded_inputs ([BatchEncoding], list of [BatchEncoding], Dict[str, List[int]], Dict[str, List[List[int]] or List[Dict[str, List[int]]]) --

    Tokenized inputs. Can represent one input ([BatchEncoding] or Dict[str, List[int]]) or a batch of tokenized inputs (list of [BatchEncoding], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) so you can use this method during preprocessing as well as in a Paddle Dataloader collate function.

    Instead of List[int] you can have tensors (numpy arrays, Paddle tensors), see the note above for the return type.

  • padding (bool, str or [PaddingStrategy], optional, defaults to True) --

    Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

  • max_length (int, optional) -- Maximum length of the returned list and optionally padding length (see above).

  • pad_to_multiple_of (int, optional) --

    If set will pad the sequence to a multiple of the provided value.

    This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • return_attention_mask (bool, optional) --

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer's default, defined by the return_outputs attribute.


  • return_tensors (str or [TensorType], optional) --

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • 'pd': Return Paddle paddle.Tensor objects.

    • 'np': Return Numpy np.ndarray objects.

  • verbose (bool, optional, defaults to True) -- Whether or not to print more information and warnings.
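A collate-style sketch (reusing the tokenizer from the earlier example; features are produced without padding and padded later in one call):

```python
features = [tokenizer("How are you?"), tokenizer("Fine.")]
batch = tokenizer.pad(features, padding="longest", return_tensors="np")
print(batch["input_ids"].shape)  # (2, length of the longest sequence)
```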

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: List[int] | None = None) List[int][source]#

Create the token type IDs corresponding to the sequences passed.

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • token_ids_0 (List[int]) -- The first tokenized sequence.

  • token_ids_1 (List[int], optional) -- The second tokenized sequence.

Returns:

The token type ids.

Return type:

List[int]

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: List[int] | None = None) List[int][source]#

Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens.

This implementation does not add special tokens and this method should be overridden in a subclass.

Parameters:
  • token_ids_0 (List[int]) -- The first tokenized sequence.

  • token_ids_1 (List[int], optional) -- The second tokenized sequence.

Returns:

The model input with special tokens.

Return type:

List[int]

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Build offset map from a pair of offset map by concatenating and adding offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs.

Returns:

List of char offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]

prepare_for_model(ids, pair_ids=None, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = False, max_length: int | None = None, stride: int = 0, pad_to_multiple_of: int | None = None, return_tensors: str | TensorType | None = None, return_position_ids=None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False, return_offsets_mapping=False, add_special_tokens=True, verbose: bool = True, prepend_batch_axis: bool = False, **kwargs)[source]#

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports sequence or sequence pair as input, and batch input is not allowed.

truncate_sequences(ids: List[int], pair_ids: List[int] | None = None, num_tokens_to_remove: int = 0, truncation_strategy: str | TruncationStrategy = 'longest_first', stride: int = 0) Tuple[List[int], List[int], List[int]][source]#

Truncates a sequence pair in-place following the strategy.

Parameters:
  • ids (List[int]) -- Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • pair_ids (List[int], optional) -- Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • num_tokens_to_remove (int, optional, defaults to 0) -- Number of tokens to remove using the truncation strategy.

  • truncation_strategy (str or [TruncationStrategy], optional, defaults to 'longest_first') --

    The strategy to follow for truncation. Can be:

    • 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • stride (int, optional, defaults to 0) -- If set to a positive number, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.

Returns:

The truncated ids, the truncated pair_ids and the list of overflowing tokens. Note: the longest_first strategy returns an empty list of overflowing tokens if a pair of sequences (or a batch of pairs) is provided.

Return type:

Tuple[List[int], List[int], List[int]]

convert_tokens_to_string(tokens: List[str]) str[source]#

Converts a sequence of tokens into a single string. The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.

Parameters:

tokens (List[str]) -- The tokens to join into a string.

Returns:

The joined tokens.

Return type:

str

batch_decode(sequences: List[int] | List[List[int]] | ndarray | Tensor, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True, **kwargs) List[str][source]#

Convert a list of lists of token ids into a list of strings by calling decode.

Parameters:
  • sequences (Union[List[int], List[List[int]], np.ndarray, paddle.Tensor]) -- List of tokenized input ids. Can be obtained using the __call__ method.

  • skip_special_tokens (bool, optional, defaults to False) -- Whether or not to remove special tokens in the decoding.

  • clean_up_tokenization_spaces (bool, optional, defaults to True) -- Whether or not to clean up the tokenization spaces.

  • kwargs (additional keyword arguments, optional) -- Will be passed to the underlying model specific decode method.

Returns:

The list of decoded sentences.

Return type:

List[str]

decode(token_ids: int | List[int] | ndarray | Tensor, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True, **kwargs) str[source]#

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters:
  • token_ids (Union[int, List[int], np.ndarray, paddle.Tensor]) -- List of tokenized input ids. Can be obtained using the __call__ method.

  • skip_special_tokens (bool, optional, defaults to False) -- Whether or not to remove special tokens in the decoding.

  • clean_up_tokenization_spaces (bool, optional, defaults to True) -- Whether or not to clean up the tokenization spaces.

  • kwargs (additional keyword arguments, optional) -- Will be passed to the underlying model specific decode method.

Returns:

The decoded sentence.

Return type:

str
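A round-trip sketch (reusing the tokenizer from the earlier example):

```python
enc = tokenizer("How are you?")
print(tokenizer.decode(enc["input_ids"], skip_special_tokens=True))
# e.g. "how are you?" for an uncased vocabulary

batch = tokenizer(["How are you?", "Fine."])
print(tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True))
```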

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: List[int] | None = None, already_has_special_tokens: bool = False) List[int][source]#

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model or encode_plus methods.

Parameters:
  • token_ids_0 (List[int]) -- List of ids of the first sequence.

  • token_ids_1 (List[int], optional) -- List of ids of the second sequence.

  • already_has_special_tokens (bool, optional, defaults to False) -- Whether or not the token list is already formatted with special tokens for the model.

Returns:

1 for a special token, 0 for a sequence token.

Return type:

A list of integers in the range [0, 1]

static clean_up_tokenization(out_string: str) str[source]#

Clean up a list of simple English tokenization artifacts, like spaces before punctuation marks and abbreviated forms.

Parameters:

out_string (str) -- The text to clean up.

Returns:

The cleaned-up string.

Return type:

str