tokenizer#
Tokenization classes for LUKE.
- class LukeTokenizer(vocab_file, entity_file, merges_file, do_lower_case=True, unk_token='<unk>', sep_token='</s>', pad_token='<pad>', cls_token='<s>', mask_token='<mask>', **kwargs)[source]#
Bases: RobertaBPETokenizer
Constructs a LUKE tokenizer. Like the RobertaBPETokenizer it derives from, it tokenizes text into subwords using byte-level Byte-Pair Encoding (BPE), and it additionally uses an entity vocabulary to encode entity mentions.
This tokenizer inherits from PretrainedTokenizer, which contains most of the main methods. For more information regarding those methods, please refer to this superclass.
- Parameters:
vocab_file (str) – The vocabulary file path (ends with ‘.json’) required to instantiate the BPE tokenizer.
entity_file (str) – The entity vocabulary file path (ends with ‘.tsv’) required to instantiate an EntityTokenizer.
merges_file (str) – The BPE merges file path required to instantiate the BPE tokenizer.
do_lower_case (bool) – Whether or not to lowercase the input when tokenizing. Defaults to `True`.
unk_token (str) – A special token representing the unknown (out-of-vocabulary) token. An unknown token is set to unk_token in order to be converted to an ID. Defaults to “<unk>”.
sep_token (str) – A special token separating two different sentences in the same input. Defaults to “</s>”.
pad_token (str) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “<pad>”.
cls_token (str) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to “<s>”.
mask_token (str) – A special token representing a masked token. This is the token used in the masked language modeling task, for which the model tries to predict the original unmasked token. Defaults to “<mask>”.
Examples
from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
tokens = tokenizer('Beyoncé lives in Los Angeles', entity_spans=[(0, 7), (17, 28)])
# {'input_ids': [0, 40401, 261, 12695, 1074, 11, 1287, 1422, 2], 'entity_ids': [1657, 32]}
- property sep_token_id#
Id of the separation token in the vocabulary, to separate context and query in an input sequence. Returns None if the token has not been set.
- Type:
Optional[int]
- property cls_token_id#
Id of the classification token in the vocabulary, to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Returns None if the token has not been set.
- Type:
Optional[int]
- property pad_token_id#
Id of the padding token in the vocabulary. Returns None if the token has not been set.
- Type:
Optional[int]
- property unk_token_id#
Id of the unknown token in the vocabulary. Returns None if the token has not been set.
- Type:
Optional[int]
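A quick sketch of reading these ids, reusing the 'luke-large' weights from the example above (the concrete integer values depend on the loaded vocabulary):

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
print(tokenizer.sep_token_id)  # id of '</s>'
print(tokenizer.cls_token_id)  # id of '<s>'
print(tokenizer.pad_token_id)  # id of '<pad>'
print(tokenizer.unk_token_id)  # id of '<unk>'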
- tokenize(text, add_prefix_space=False)[source]#
Tokenize a string.
- Parameters:
text (str) – The sentence to be tokenized.
add_prefix_space (bool) – Whether to begin the text with at least one space, so that the first word is encoded the same way it would be in the middle of a sentence; byte-level BPE tokenizers such as GPT-2’s (and LUKE’s) treat a leading space as part of the token. Defaults to False.
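A minimal sketch of calling tokenize, assuming the 'luke-large' checkpoint from the earlier example (the exact subword pieces depend on the vocabulary and merges):

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
tokens = tokenizer.tokenize('Beyoncé lives in Los Angeles', add_prefix_space=True)
print(tokens)  # a list of BPE subword strings covering the input text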
- convert_tokens_to_string(tokens)[source]#
Converts a sequence of tokens (strings) into a single string.
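A round-trip sketch under the same 'luke-large' assumption; byte-level BPE detokenization should recover the original text:

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
tokens = tokenizer.tokenize('Beyoncé lives in Los Angeles')
text = tokenizer.convert_tokens_to_string(tokens)
print(text)  # 'Beyoncé lives in Los Angeles'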
- add_special_tokens(token_list: List[int] | Dict)[source]#
Add special tokens to the tokenizer if you need them.
- Parameters:
token_list (List[int], Dict[List[int]]) – The special tokens to add. If you provide a Dict, its key must be “additional_special_tokens” and its value must be the token list.
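A sketch of the documented Dict form; the '<ent>' and '<ent2>' markers are illustrative placeholders, not tokens guaranteed to exist in the shipped vocabulary:

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
# the Dict key must be "additional_special_tokens", per the parameter description above
tokenizer.add_special_tokens({'additional_special_tokens': ['<ent>', '<ent2>']})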
- entity_encode(text, entities, max_mention_length, entity_spans, ent_sep=0, offset_a=1)[source]#
Converts string entities into their numeric (ID-based) representation.
- get_offset_mapping(text)[source]#
Returns a mapping from each token to the start and end character indices of the span it covers in the input text. Modified from bojone/bert4keras.
- Parameters:
text (str) – Input text.
split_tokens (Optional[List[str]]) – Tokens that have already been split, which can speed up the operation.
- Returns:
The offset map of input text.
- Return type:
list
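A sketch of the returned structure, again assuming 'luke-large' (the exact boundaries depend on the BPE merges):

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
offsets = tokenizer.get_offset_mapping('Beyoncé lives in Los Angeles')
print(offsets)  # a list of (char_start, char_end) pairs, one per token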
- create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#
Create a mask from the two sequences passed to be used in a sequence-pair classification task.
A Luke sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).
- Parameters:
token_ids_0 (List[int]) – A list of input_ids for the first sequence.
token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
- Returns:
List of token_type_id according to the given sequence(s).
- Return type:
List[int]
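A small sketch of the mask shape, assuming 'luke-large' and RoBERTa-style special-token handling (the length contributed by special tokens comes from the tokenizer itself):

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('Hello'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('world'))
mask = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(mask)  # 0s over the first sequence and its special tokens, then 1s over the second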
- num_special_tokens_to_add(pair=False)[source]#
Returns the number of tokens added when encoding a sequence with special tokens.
- Parameters:
pair (bool) – Whether the input is a sequence pair or a single sequence. Defaults to False (a single sequence).
- Returns:
Number of tokens added to sequences.
- Return type:
int
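A sketch comparing both modes, assuming 'luke-large'; with a RoBERTa-style scheme one would expect 2 for a single sequence ('<s>' and '</s>') and 4 for a pair, but the exact counts come from the tokenizer itself:

from paddlenlp.transformers import LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-large')
print(tokenizer.num_special_tokens_to_add())           # single sequence
print(tokenizer.num_special_tokens_to_add(pair=True))  # sequence pair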