tokenizer
class FunnelTokenizer(vocab_file, do_lower_case=True, unk_token='<unk>', sep_token='<sep>', pad_token='<pad>', cls_token='<cls>', mask_token='<mask>', bos_token='<s>', eos_token='</s>', wordpieces_prefix='##', **kwargs) [source]

Bases: paddlenlp.transformers.bert.tokenizer.BertTokenizer
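Example (a minimal usage sketch; the checkpoint name ``funnel-transformer/small`` is an assumption, substitute any Funnel checkpoint registered in PaddleNLP):

    from paddlenlp.transformers import FunnelTokenizer

    # Load the vocabulary of a pretrained Funnel checkpoint (name assumed).
    tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')

    # Tokenize and build model-ready inputs in one call.
    encoded = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    print(encoded['input_ids'])
    print(encoded['token_type_ids'])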
property vocab_size [source]

Returns the size of the vocabulary.

- Returns
  The size of the vocabulary.
- Return type
  int
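Continuing the sketch above, the property reads directly off the loaded vocabulary:

    # The value depends on the checkpoint's vocab file.
    print(tokenizer.vocab_size)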
tokenize(text) [source]

End-to-end tokenization for BERT models.

- Parameters
  text (str) -- The text to be tokenized.
- Returns
  A list of strings representing the converted tokens.
- Return type
  list
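Example (reusing the ``tokenizer`` instance from above; the exact pieces depend on the checkpoint's vocabulary):

    tokens = tokenizer.tokenize("He was a puppeteer")
    # WordPiece splits rare words into subwords marked with '##', e.g.
    # ['he', 'was', 'a', 'puppet', '##eer']
    print(tokens)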
convert_tokens_to_string(tokens) [source]

Converts a sequence of tokens (a list of strings) into a single string. Since WordPiece introduces ``##`` to concatenate subwords, ``##`` is also removed when converting.

- Parameters
  tokens (list) -- A list of strings representing the tokens to be converted.
- Returns
  Converted string from tokens.
- Return type
  str
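Example (the inverse direction, reusing the token list from the sketch above):

    tokens = ['he', 'was', 'a', 'puppet', '##eer']
    # '##' continuation markers are stripped while rejoining subwords.
    print(tokenizer.convert_tokens_to_string(tokens))
    # -> 'he was a puppeteer'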
num_special_tokens_to_add(pair=False) [source]

Returns the number of tokens added when encoding a sequence with special tokens.

Note

This encodes the inputs and counts the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

- Parameters
  pair -- If set to True, returns the number of added tokens for a sequence pair; if set to False, returns the number of added tokens for a single sequence.
- Returns
  Number of tokens added to sequences.
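Example (the counts below assume a BERT-style scheme of one cls plus one sep for a single sequence, and one extra sep for a pair; actual values depend on the model):

    print(tokenizer.num_special_tokens_to_add(pair=False))  # e.g. 2
    print(tokenizer.num_special_tokens_to_add(pair=True))   # e.g. 3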
build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None) [source]

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

A BERT offset_mapping has the following format:

- single sequence: ``(0,0) X (0,0)``
- pair of sequences: ``(0,0) A (0,0) B (0,0)``

- Parameters
  offset_mapping_0 (List[tuple]) -- List of char offsets to which the special tokens will be added.
  offset_mapping_1 (List[tuple], optional) -- Optional second list of char offsets for offset mapping pairs.
- Returns
  List of char offsets with the appropriate offsets of special tokens.
- Return type
  List[tuple]
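Example (a sketch with hand-made offsets; the values are illustrative):

    offsets_a = [(0, 2), (3, 6)]   # char spans of two tokens in the first text
    offsets_b = [(0, 5)]           # char span of one token in the second text
    mapping = tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)
    # Special tokens get the placeholder span (0, 0), matching the pair format
    # documented above:
    # [(0, 0), (0, 2), (3, 6), (0, 0), (0, 5), (0, 0)]
    print(mapping)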
create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None) [source]

Creates a mask from the two sequences passed, to be used in a sequence-pair classification task.

A BERT sequence pair mask has the following format:

    0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
    | first sequence      | second sequence |

If ``token_ids_1`` is ``None``, this method only returns the first portion of the mask (0s).

- Parameters
  token_ids_0 (List[int]) -- A list of ``inputs_ids`` for the first sequence.
  token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.
- Returns
  List of token_type_ids according to the given sequence(s).
- Return type
  List[int]
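Example (a sketch for a sequence pair; ids are obtained by chaining ``tokenize`` and ``convert_tokens_to_ids``):

    ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("first text"))
    ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("second text"))
    token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
    # 0s cover cls + first sequence + sep; 1s cover second sequence + sep.
    print(token_type_ids)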
get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False) [source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer ``encode`` methods.

- Parameters
  token_ids_0 (List[int]) -- List of ids of the first sequence.
  token_ids_1 (List[int], optional) -- List of ids of the second sequence.
  already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.
- Returns
  The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
- Return type
  List[int]
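Example (a sketch on a bare id list with no special tokens added yet):

    ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("short text"))
    mask = tokenizer.get_special_tokens_mask(ids)
    # 1 marks positions where special tokens will sit in the built sequence,
    # 0 marks ordinary sequence tokens, e.g. [1, 0, 0, 1] for cls ... sep.
    print(mask)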
truncate_sequences(ids, pair_ids=None, num_tokens_to_remove=0, truncation_strategy='longest_first', stride=0) [source]

Truncates a sequence pair in place to the maximum length.

- Parameters
  ids -- List of tokenized input ids. Can be obtained from a string by chaining the ``tokenize`` and ``convert_tokens_to_ids`` methods.
  pair_ids -- Optional second list of input ids. Can be obtained from a string by chaining the ``tokenize`` and ``convert_tokens_to_ids`` methods.
  num_tokens_to_remove (int, optional, defaults to 0) -- Number of tokens to remove using the truncation strategy.
  truncation_strategy -- String selected in the following options:
    - 'longest_first' (default): Iteratively reduce the inputs sequence until the input is under max_seq_len, starting from the longest one at each token (when there is a pair of input sequences). Overflowing tokens only contain overflow from the first sequence.
    - 'only_first': Only truncate the first sequence. Raises an error if the first sequence is shorter than or equal to num_tokens_to_remove.
    - 'only_second': Only truncate the second sequence.
    - 'do_not_truncate': Do not truncate (raises an error if the input sequence is longer than max_seq_len).
  stride (int, optional, defaults to 0) -- If set to a number along with max_seq_len, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
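Example (a sketch using dummy ids; it assumes the method returns the truncated ids, pair ids, and overflowing tokens, as in the base PaddleNLP tokenizer):

    ids = list(range(10))      # stand-in for tokenized input ids
    pair_ids = list(range(4))
    # 'longest_first' keeps removing from whichever list is currently longer.
    ids, pair_ids, overflowing = tokenizer.truncate_sequences(
        ids, pair_ids=pair_ids, num_tokens_to_remove=3,
        truncation_strategy='longest_first')
    print(len(ids), len(pair_ids))  # 7 4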
batch_encode(batch_text_or_text_pairs, max_seq_len=512, pad_to_max_seq_len=False, stride=0, is_split_into_words=False, truncation_strategy='longest_first', return_position_ids=False, return_token_type_ids=True, return_attention_mask=False, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False) [source]

Performs tokenization and uses the tokenized tokens to prepare model inputs. It supports batch inputs of sequences or sequence pairs.

- Parameters
  batch_text_or_text_pairs -- Each element of the list can be a sequence or a sequence pair, and a sequence is a string or a list of strings depending on whether it has been pretokenized. If each sequence is provided as a list of strings (pretokenized), you must set is_split_into_words to True to disambiguate it from a sequence pair.
  max_seq_len (int, optional) -- If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those overflowing tokens will be added to the returned dictionary when return_overflowing_tokens is True. Defaults to 512.
  stride (int, optional) -- Only available for batch input of sequence pairs, and mainly for question answering usage. In the QA case, text represents questions and text_pair represents contexts. If stride is set to a positive number, the context will be split into multiple spans, where stride defines the number of (tokenized) tokens to skip from the start of one span to get the next span; this produces a bigger batch than the inputs so as to include all spans. Moreover, 'overflow_to_sample' and 'offset_mapping', preserving the original example and position information, will be added to the returned dictionary. Defaults to 0.
  pad_to_max_seq_len (bool, optional) -- If set to True, the returned sequences will be padded up to max_seq_len according to the padding side (self.padding_side) and the padding token id. Defaults to False.
  truncation_strategy (str, optional) -- String selected in the following options:
    - 'longest_first' (default): Iteratively reduce the inputs sequence until the input is under max_seq_len, starting from the longest one at each token (when there is a pair of input sequences).
    - 'only_first': Only truncate the first sequence.
    - 'only_second': Only truncate the second sequence.
    - 'do_not_truncate': Do not truncate (raise an error if the input sequence is longer than max_seq_len).
    Defaults to 'longest_first'.
  return_position_ids (bool, optional) -- Whether to include token position ids in the returned dictionary. Defaults to False.
  return_token_type_ids (bool, optional) -- Whether to include token type ids in the returned dictionary. Defaults to True.
  return_attention_mask (bool, optional) -- Whether to include the attention mask in the returned dictionary. Defaults to False.
  return_length (bool, optional) -- Whether to include the length of each encoded input in the returned dictionary. Defaults to False.
  return_overflowing_tokens (bool, optional) -- Whether to include overflowing token information in the returned dictionary. Defaults to False.
  return_special_tokens_mask (bool, optional) -- Whether to include special tokens mask information in the returned dictionary. Defaults to False.
- Returns
  Each dict has the following optional items:
  - input_ids (list[int]): List of token ids to be fed to a model.
  - position_ids (list[int], optional): List of token position ids to be fed to a model. Included when return_position_ids is True.
  - token_type_ids (list[int], optional): List of token type ids to be fed to a model. Included when return_token_type_ids is True.
  - attention_mask (list[int], optional): List of integers valued 0 or 1, where 0 specifies paddings and should not be attended to by the model. Included when return_attention_mask is True.
  - seq_len (int, optional): The input_ids length. Included when return_length is True.
  - overflowing_tokens (list[int], optional): List of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.
  - num_truncated_tokens (int, optional): The number of overflowing tokens. Included when max_seq_len is specified and return_overflowing_tokens is True.
  - special_tokens_mask (list[int], optional): List of integers valued 0 or 1, with 1 specifying special added tokens and 0 specifying sequence tokens. Included when return_special_tokens_mask is True.
  - offset_mapping (list[tuple], optional): List of pairs preserving the index of the start and end chars in the original input for each token. For a special token, the index pair is (0, 0). Included when stride works.
  - overflow_to_sample (int, optional): Index of the example from which this feature is generated. Included when stride works.
- Return type
  list[dict]
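Example (a question-answering style sketch; the texts are illustrative, and a long context is split into overlapping spans when stride is positive):

    pairs = [
        ('Who discovered penicillin?',
         'Alexander Fleming discovered penicillin in 1928 in London.'),
    ]
    features = tokenizer.batch_encode(
        pairs,
        max_seq_len=32,
        stride=8,
        pad_to_max_seq_len=True,
        return_attention_mask=True)
    for feature in features:
        # One dict per span; 'overflow_to_sample' maps a span back to its example.
        print(feature['input_ids'], feature.get('overflow_to_sample'))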
rematch(text) [source]

Changed from https://github.com/bojone/bert4keras/blob/master/bert4keras/tokenizers.py#L372