tokenizer

Tokenization classes for LUKE.

class LukeTokenizer(vocab_file, entity_file, merges_file, do_lower_case=True, unk_token='<unk>', sep_token='</s>', pad_token='<pad>', cls_token='<s>', mask_token='<mask>', **kwargs)[source]

Bases: paddlenlp.transformers.roberta.tokenizer.RobertaBPETokenizer

Constructs a LUKE tokenizer. It is backed by the RoBERTa byte-level Byte-Pair-Encoding (BPE) tokenizer, which splits text into subwords, and additionally uses an entity vocabulary to encode entity spans.

This tokenizer inherits from PretrainedTokenizer which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

Parameters
  • vocab_file (str) -- The vocabulary file path (ends with '.json') required to instantiate the tokenizer.

  • merges_file (str) -- The merges file path required by the byte-level BPE tokenizer.

  • entity_file (str) -- The entity vocabulary file path (ends with '.tsv') required to instantiate an EntityTokenizer.

  • do_lower_case (bool) -- Whether or not to lowercase the input when tokenizing. Defaults to `True`.

  • unk_token (str) -- A special token representing the unknown (out-of-vocabulary) token. An unknown token is set to be unk_token in order to be converted to an ID. Defaults to "<unk>".

  • sep_token (str) -- A special token separating two different sentences in the same input. Defaults to "</s>".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "<pad>".

  • cls_token (str) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "<s>".

  • mask_token (str) -- A special token representing a masked token, used in the masked language modeling task, where the model tries to predict the original unmasked token. Defaults to "<mask>".

Examples

from paddlenlp.transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained('luke-large')

tokens = tokenizer('Beyoncé lives in Los Angeles', entity_spans=[(0, 7), (17, 28)])
#{'input_ids': [0, 40401, 261, 12695, 1074, 11, 1287, 1422, 2], 'entity_ids': [1657, 32]}
get_entity_vocab()[source]

Get the entity vocabulary.
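
Example (a minimal sketch; it assumes the returned vocabulary behaves like a dict mapping entity names to ids, and 'Los Angeles' is only an illustrative entity name):

from paddlenlp.transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained('luke-large')
entity_vocab = tokenizer.get_entity_vocab()
# Number of entities in the vocabulary
print(len(entity_vocab))
# Look up the id of one entity, assuming a name-to-id mapping
print(entity_vocab.get('Los Angeles'))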

tokenize(text, add_prefix_space=False)[source]

Tokenize a string into subword tokens.

Parameters
  • text (str) -- The sentence to be tokenized.

  • add_prefix_space (bool, optional) -- Whether to begin the sentence with at least one space, so that the leading word is tokenized the same way as it would be elsewhere in the sentence (GPT-2 and LUKE byte-level BPE tokenizers are sensitive to a leading space). Defaults to False.
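
Example (an illustrative sketch of the effect of add_prefix_space; the exact subword tokens are not shown because they depend on the BPE vocabulary):

tokenizer = LukeTokenizer.from_pretrained('luke-large')
# Leading word tokenized as a sentence-initial word
tokens = tokenizer.tokenize('Beyoncé lives in Los Angeles')
# With a prefix space, the leading word is tokenized as if it appeared mid-sentence
tokens_prefixed = tokenizer.tokenize('Beyoncé lives in Los Angeles', add_prefix_space=True)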

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (strings) into a single string.
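
Example (a round-trip sketch continuing from the tokenize example above; the recovered string is expected to match the original text up to whitespace handling):

tokens = tokenizer.tokenize('Beyoncé lives in Los Angeles', add_prefix_space=True)
text = tokenizer.convert_tokens_to_string(tokens)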

add_special_tokens(token_list: Union[List[int], Dict])[source]

Add special tokens to the tokenizer if you need them.

Parameters

token_list (List[int], Dict[List[int]]) -- The special token list you provide. If you provide a Dict, the key of the Dict must be "additional_special_tokens" and the value must be a token list.
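
Example (a minimal sketch of the Dict form described above; the marker tokens '<ent>' and '<ent2>' are illustrative choices, not required names):

# The key must be "additional_special_tokens"; the value is the list of new tokens
tokenizer.add_special_tokens({'additional_special_tokens': ['<ent>', '<ent2>']})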

convert_entity_to_id(entity: str)[source]

Convert an entity string to its id in the entity vocabulary.
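
Example (a small sketch; it assumes 'Los Angeles' is present in the entity vocabulary loaded from entity_file):

entity_id = tokenizer.convert_entity_to_id('Los Angeles')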

entity_encode(text, entities, max_mention_length, entity_spans, ent_sep=0, offset_a=1)[source]

Convert string entities into their numerical (id-based) entity features.

get_offset_mapping(text)[source]

Returns the map of tokens to the start and end character indices of each token in the original text. Modified from https://github.com/bojone/bert4keras/blob/master/bert4keras/tokenizers.py#L372

Parameters
  • text (str) -- Input text.

  • split_tokens (Optional[List[str]]) -- Tokens that have already been split, which can speed up the operation.

Returns

The offset map of the input text.

Return type

list
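
Example (a usage sketch; each returned item is assumed to be a (start, end) character span aligned with one token of the input text):

text = 'Beyoncé lives in Los Angeles'
offset_mapping = tokenizer.get_offset_mapping(text)
for start, end in offset_mapping:
    # Print the piece of the original text covered by each token
    print(text[start:end])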

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

A Luke sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) -- A list of input_ids for the first sequence.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of token_type_id values according to the given sequence(s).

Return type

List[int]
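
Example (a minimal call sketch with placeholder id lists; the ids themselves carry no meaning here):

ids_a = [100, 200, 300]
ids_b = [400, 500]
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# 0s mark the first sequence and 1s mark the second, as in the format above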

num_special_tokens_to_add(pair=False)[source]

Returns the number of tokens added when encoding a sequence with special tokens.

Parameters

pair (bool) -- Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.

Returns

Number of special tokens added to sequences.

Return type

int
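
Example (a quick sketch comparing the single-sequence and sequence-pair counts):

num_single = tokenizer.num_special_tokens_to_add(pair=False)
num_pair = tokenizer.num_special_tokens_to_add(pair=True)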

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.
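
Example (a minimal sketch with placeholder ids; since LukeTokenizer follows the RoBERTa BPE tokenizer, the output is expected to wrap the ids with RoBERTa-style special tokens such as <s> and </s>, which is an assumption rather than a statement from this page):

ids_a = [100, 200, 300]
ids_b = [400, 500]
input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)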