tokenizer#

class ErnieTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]#

Bases: PretrainedTokenizer

Constructs an ERNIE tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then a WordPiece tokenizer to split the result into subwords.

This tokenizer inherits from PretrainedTokenizer which contains most of the main methods. For more information regarding those methods, please refer to this superclass.

Parameters:
  • vocab_file (str) – The vocabulary file path (ends with ‘.txt’) required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool, optional) – Whether or not to lowercase the input when tokenizing. Defaults to True.

  • unk_token (str, optional) – A special token representing an unknown (out-of-vocabulary) token. Out-of-vocabulary tokens are mapped to unk_token so that they can be converted to an ID. Defaults to “[UNK]”.

  • sep_token (str, optional) – A special token separating two different sentences in the same input. Defaults to “[SEP]”.

  • pad_token (str, optional) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “[PAD]”.

  • cls_token (str, optional) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to “[CLS]”.

  • mask_token (str, optional) – A special token representing a masked token. This is the token used in the masked language modeling task, in which the model tries to predict the original unmasked token. Defaults to “[MASK]”.

Examples

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')

encoded_inputs = tokenizer('He was a puppeteer')
# encoded_inputs:
# {'input_ids': [1, 4444, 4385, 1545, 6712, 10062, 9568, 9756, 9500, 2],
#  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}

property vocab_size#

Returns the size of the vocabulary.

Returns:

The size of the vocabulary.

Return type:

int
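
For example, a quick way to inspect the size of the loaded vocabulary (the exact number depends on the pretrained model's vocab.txt):

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
print(tokenizer.vocab_size)  # int: number of entries in the loaded vocabulary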

extend_chinese_char()[source]#

For a char-level model such as ERNIE, ##-prefixed Chinese character tokens need to be added to the vocabulary to carry segment (within-word position) information.
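
A minimal sketch of the idea, not the actual implementation: every single Chinese character already in the vocabulary gets a ##-prefixed counterpart, so a character inside a word can be distinguished from one that starts a word.

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
vocab = tokenizer.get_vocab()
# Single-character CJK tokens already present in the vocab ...
chinese_chars = [tok for tok in vocab if len(tok) == 1 and '\u4e00' <= tok <= '\u9fff']
# ... and the "##" variants a char-level model would add for non-initial positions.
extended = ['##' + ch for ch in chinese_chars if '##' + ch not in vocab]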

get_vocab()[source]#

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]
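
The documented equivalence can be checked directly; [CLS] is guaranteed to be in the vocabulary:

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
vocab = tokenizer.get_vocab()
token = tokenizer.cls_token  # '[CLS]'
assert vocab[token] == tokenizer.convert_tokens_to_ids(token)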

convert_tokens_to_string(tokens)[source]#

Converts a sequence of tokens (list of strings) into a single string. Since WordPiece introduces ## to mark subword continuations, the ## prefixes are also removed when converting.

Parameters:

tokens (List[str]) – A list of strings representing the tokens to be converted.

Returns:

Converted string from tokens.

Return type:

str

Examples

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')

tokens = tokenizer.tokenize('He was a puppeteer')
strings = tokenizer.convert_tokens_to_string(tokens)
# strings: 'he was a puppeteer'

num_special_tokens_to_add(pair=False)[source]#

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes inputs and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters:

pair (bool, optional) – Whether the input is a sequence pair or a single sequence. Defaults to False, in which case the input is treated as a single sequence.

Returns:

Number of tokens added to sequences.

Return type:

int
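
Given the [CLS] X [SEP] and [CLS] A [SEP] B [SEP] formats described below, the expected counts are 2 and 3:

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
print(tokenizer.num_special_tokens_to_add())           # 2: [CLS] and [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))  # 3: [CLS] and two [SEP]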

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]#

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

An ERNIE sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

Parameters:
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input IDs with the appropriate special tokens.

Return type:

List[int]
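
For example, using the ernie-1.0 special token IDs visible in the class-level example ([CLS] = 1, [SEP] = 2) and two token IDs taken from that same example:

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
ids = [4444, 4385]  # IDs of two ordinary tokens
print(tokenizer.build_inputs_with_special_tokens(ids))
# [1, 4444, 4385, 2], i.e. [CLS] + ids + [SEP]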

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Build an offset mapping from a pair of offset mappings by concatenating them and adding the offsets of special tokens.

An ERNIE offset_mapping has the following format:

  • single sequence: (0,0) X (0,0)

  • pair of sequences: (0,0) A (0,0) B (0,0)

Parameters:
  • offset_mapping_0 (List[tuple]) – List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) – Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.

Returns:

A list of wordpiece offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]
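
A short sketch with hand-made offsets; the tuples below are hypothetical char offsets, and the (0, 0) entries in the output stand for [CLS] and [SEP]:

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
offsets = [(0, 2), (3, 6)]  # hypothetical char offsets for two tokens
print(tokenizer.build_offset_mapping_with_special_tokens(offsets))
# [(0, 0), (0, 2), (3, 6), (0, 0)]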

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

An ERNIE sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters:
  • token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of token_type_ids according to the given sequence(s).

Return type:

List[int]
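
Following the mask format above, a pair of short ID lists yields zeros over [CLS] A [SEP] and ones over B [SEP]:

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
print(tokenizer.create_token_type_ids_from_sequences([5, 6], [7, 8, 9]))
# [0, 0, 0, 0, 1, 1, 1, 1] -> four 0s for [CLS] 5 6 [SEP], four 1s for 7 8 9 [SEP]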

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]#

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters:
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

  • already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns:

The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type:

List[int]
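
A short check against the [CLS] A [SEP] B [SEP] layout; special-token positions come back as 1, sequence tokens as 0:

from paddlenlp.transformers import ErnieTokenizer
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
print(tokenizer.get_special_tokens_mask([5, 6], [7, 8, 9]))
# [1, 0, 0, 1, 0, 0, 0, 1]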

class ErnieTinyTokenizer(vocab_file, sentencepiece_model_file, word_dict, do_lower_case=True, encoding='utf8', unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]#

Bases: PretrainedTokenizer

Constructs an ErnieTiny tokenizer. It uses dict.wordseg.pickle to cut the text into words, and SentencePiece to cut the words into sub-words.

Parameters:
  • vocab_file (str) – The file path of the vocabulary.

  • sentencepiece_model_file (str) – The file path of the SentencePiece model.

  • word_dict (str) – The file path of the word vocabulary, which is used to do Chinese word segmentation.

  • do_lower_case (bool, optional) – Whether or not to lowercase the input when tokenizing. Defaults to True.

  • unk_token (str, optional) – A special token representing an unknown (out-of-vocabulary) token. Out-of-vocabulary tokens are mapped to unk_token so that they can be converted to an ID. Defaults to “[UNK]”.

  • sep_token (str, optional) – A special token separating two different sentences in the same input. Defaults to “[SEP]”.

  • pad_token (str, optional) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to “[PAD]”.

  • cls_token (str, optional) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to “[CLS]”.

  • mask_token (str, optional) – A special token representing a masked token. This is the token used in the masked language modeling task, in which the model tries to predict the original unmasked token. Defaults to “[MASK]”.

Examples

from paddlenlp.transformers import ErnieTinyTokenizer
tokenizer = ErnieTinyTokenizer.from_pretrained('ernie-tiny')
inputs = tokenizer('He was a puppeteer')
'''
{'input_ids': [3, 941, 977, 16690, 269, 11346, 11364, 1337, 13742, 1684, 5],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
'''
property vocab_size#

Returns the size of the vocabulary.

Returns:

The size of the vocabulary.

Return type:

int

convert_tokens_to_string(tokens)[source]#

Converts a sequence of tokens (list of strings) into a single string. Since WordPiece introduces ## to mark subword continuations, the ## prefixes are also removed when converting.

Parameters:

tokens (list) – A list of strings representing the tokens to be converted.

Returns:

Converted string from tokens.

Return type:

str

Examples

from paddlenlp.transformers import ErnieTinyTokenizer
tokenizer = ErnieTinyTokenizer.from_pretrained('ernie-tiny')
tokens = tokenizer.tokenize('He was a puppeteer')
# ['▁h', '▁e', '▁was', '▁a', '▁pu', 'pp', 'e', '▁te', 'er']
strings = tokenizer.convert_tokens_to_string(tokens)

save_resources(save_directory)[source]#

Save tokenizer related resources to files under save_directory.

Parameters:

save_directory (str) – Directory to save files into.
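
A minimal usage sketch; the directory name is hypothetical, and the saved files correspond to the resources listed in the constructor (vocabulary, SentencePiece model, word dict):

import os
from paddlenlp.transformers import ErnieTinyTokenizer

tokenizer = ErnieTinyTokenizer.from_pretrained('ernie-tiny')
save_dir = './ernie_tiny_tokenizer'  # hypothetical output directory
os.makedirs(save_dir, exist_ok=True)
tokenizer.save_resources(save_dir)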

num_special_tokens_to_add(pair=False)[source]#

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes inputs and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters:

pair (bool, optional) – Whether the input is a sequence pair or a single sequence. Defaults to False, in which case the input is treated as a single sequence.

Returns:

Number of tokens added to sequences.

Return type:

int

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]#

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

An ERNIE sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

Parameters:
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of input IDs with the appropriate special tokens.

Return type:

List[int]

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]#

Build an offset mapping from a pair of offset mappings by concatenating them and adding the offsets of special tokens.

An ERNIE offset_mapping has the following format:

  • single sequence: (0,0) X (0,0)

  • pair of sequences: (0,0) A (0,0) B (0,0)

Parameters:
  • offset_mapping_0 (List[tuple]) – List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) – Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.

Returns:

List of wordpiece offsets with the appropriate offsets of special tokens.

Return type:

List[tuple]

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]#

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

An ERNIE sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters:
  • token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns:

List of token_type_ids according to the given sequence(s).

Return type:

List[int]

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]#

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters:
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

  • already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns:

The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type:

List[int]

get_vocab()[source]#

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]