tokenizer

class BasicTokenizer(do_lower_case=True)[source]

Bases: object

Runs basic tokenization (punctuation splitting, lower casing, etc.).

Parameters

do_lower_case (bool) -- Whether or not to lowercase the input when tokenizing. Defaults to True.

tokenize(text)[source]

Tokenizes a piece of text using basic tokenizer.

Parameters

text (str) -- A piece of text.

Returns

A list of tokens.

Return type

list(str)

Examples

from paddlenlp.transformers import BasicTokenizer
basictokenizer = BasicTokenizer()
tokens = basictokenizer.tokenize('He was a puppeteer')
'''
['he', 'was', 'a', 'puppeteer']
'''
class BertTokenizer(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]')[source]

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer

Constructs a BERT tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then uses a WordPiece tokenizer to split the result into subwords.

Parameters
  • vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool) -- Whether or not to lowercase the input when tokenizing. Defaults to True.

  • unk_token (str) -- A special token representing the unknown (out-of-vocabulary) token. A token that is not in the vocabulary is set to unk_token in order to be converted to an ID. Defaults to "[UNK]".

  • sep_token (str) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".

  • cls_token (str) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".

  • mask_token (str) -- A special token representing a masked token. This is the token used in the masked language modeling task, in which the model tries to predict the original value of the masked tokens. Defaults to "[MASK]".

Examples

from paddlenlp.transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

inputs = tokenizer('He was a puppeteer')
print(inputs)

'''
{'input_ids': [101, 2002, 2001, 1037, 13997, 11510, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0]}
'''
property vocab_size

Returns the size of the vocabulary.

Returns

The size of the vocabulary.

Return type

int
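
As an illustrative sketch, the property can be read from a loaded tokenizer. The printed value depends on the vocabulary that was loaded (30522 for the standard 'bert-base-uncased' vocab, assuming no extra tokens have been added):

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(berttokenizer.vocab_size)
'''
30522
'''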

tokenize(text)[source]

Converts a string to a list of tokens.

Parameters

text (str) -- The text to be tokenized.

Returns

A list of strings representing the converted tokens.

Return type

list(str)

Examples

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = berttokenizer.tokenize('He was a puppeteer')

'''
['he', 'was', 'a', 'puppet', '##eer']
'''
convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (list of strings) to a single string. Since WordPiece introduces ## to mark subwords, the ## prefixes are also removed when converting.

Parameters

tokens (list) -- A list of strings representing the tokens to be converted.

Returns

Converted string from tokens.

Return type

str

Examples

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = berttokenizer.tokenize('He was a puppeteer')
'''
['he', 'was', 'a', 'puppet', '##eer']
'''
strings = berttokenizer.convert_tokens_to_string(tokens)
'''
he was a puppeteer
'''
num_special_tokens_to_add(pair=False)[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Parameters

pair (bool) -- Whether the input is a sequence pair or a single sequence. Defaults to False, in which case the input is treated as a single sequence.

Returns

Number of special tokens added to the sequence(s).

Return type

int
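
A minimal usage sketch: for BERT the counts follow from the [CLS]/[SEP] scheme described below, so a single sequence is expected to add 2 special tokens and a pair 3:

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(berttokenizer.num_special_tokens_to_add())           # single sequence: [CLS] X [SEP]
print(berttokenizer.num_special_tokens_to_add(pair=True))  # sequence pair: [CLS] A [SEP] B [SEP]
'''
2
3
'''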

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A BERT sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]

Parameters
  • token_ids_0 (List[int]) -- List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
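
A sketch assuming the 'bert-base-uncased' vocabulary, where [CLS] and [SEP] map to IDs 101 and 102 (as in the encoding example above):

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
token_ids = berttokenizer.convert_tokens_to_ids(berttokenizer.tokenize('He was a puppeteer'))
print(berttokenizer.build_inputs_with_special_tokens(token_ids))
'''
[101, 2002, 2001, 1037, 13997, 11510, 102]
'''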

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

A BERT offset_mapping has the following format:

  • single sequence: (0,0) X (0,0)

  • pair of sequences: (0,0) A (0,0) B (0,0)

Parameters
  • offset_mapping_0 (List[tuple]) -- List of wordpiece offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) -- Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.

Returns

A list of wordpiece offsets with the appropriate offsets of special tokens.

Return type

List[tuple]
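
An illustrative sketch with hypothetical wordpiece offsets for a single sequence; (0, 0) entries are expected to be added for [CLS] and [SEP]:

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Hypothetical character offsets of two wordpieces in the original text.
offset_mapping = [(0, 2), (3, 6)]
print(berttokenizer.build_offset_mapping_with_special_tokens(offset_mapping))
'''
[(0, 0), (0, 2), (3, 6), (0, 0)]
'''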

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

A BERT sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) -- A list of inputs_ids for the first sequence.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

Returns

A list of token_type_ids according to the given sequence(s).

Return type

List[int]
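
A minimal sketch with hypothetical token IDs (the actual ID values do not affect the mask); per the format above, the first portion covers [CLS] A [SEP] and the second covers B [SEP]:

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Hypothetical IDs for two short sequences, without special tokens.
token_ids_0 = [2002, 2001]
token_ids_1 = [1037]
print(berttokenizer.create_token_type_ids_from_sequences(token_ids_0, token_ids_1))
'''
[0, 0, 0, 0, 1, 1]
'''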

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves a mask identifying special tokens from a token list that has no special tokens added yet. This method is called when adding special tokens using the tokenizer encode methods.

Parameters
  • token_ids_0 (List[int]) -- A list of inputs_ids for the first sequence.

  • token_ids_1 (List[int], optional) -- Optional second list of IDs for sequence pairs. Defaults to None.

  • already_has_special_tokens (bool, optional) -- Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns

A list of integers which are either 0 or 1: 1 for a special token, 0 for a sequence token.

Return type

List[int]
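
A minimal sketch with hypothetical token IDs and the default already_has_special_tokens=False; the mask marks the positions where [CLS] and [SEP] would be inserted:

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Hypothetical IDs for a single sequence, without special tokens.
token_ids = [2002, 2001, 1037]
print(berttokenizer.get_special_tokens_mask(token_ids))
'''
[1, 0, 0, 0, 1]
'''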

class WordpieceTokenizer(vocab, unk_token, max_input_chars_per_word=100)[source]

Bases: object

Runs WordPiece tokenization.

Parameters
  • vocab (Vocab|dict) -- Vocab of the word piece tokenizer.

  • unk_token (str) -- A specific token to replace all unknown tokens.

  • max_input_chars_per_word (int) -- If a word's length is more than max_input_chars_per_word, it will be treated as an unknown word. Defaults to 100.

tokenize(text)[source]

Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform tokenization using the given vocabulary.

Parameters

text -- A single token or whitespace separated tokens. This should have already been passed through BasicTokenizer.

Returns

A list of wordpiece tokens.

Return type

list(str)

Examples

from paddlenlp.transformers import BertTokenizer, WordpieceTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
vocab = berttokenizer.vocab
unk_token = berttokenizer.unk_token

wordpiecetokenizer = WordpieceTokenizer(vocab, unk_token)
inputs = wordpiecetokenizer.tokenize("unaffable")
print(inputs)
'''
["un", "##aff", "##able"]
'''