tokenizer
class BasicTokenizer(do_lower_case=True, never_split=None, tokenize_chinese_chars=True, strip_accents=None)

Bases: object

Runs basic tokenization (punctuation splitting, lower casing, etc.).
Parameters
- do_lower_case (bool) – Whether to lowercase the input when tokenizing. Defaults to True.
- never_split (Iterable) – Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True.
- tokenize_chinese_chars (bool) – Whether to tokenize Chinese characters.
- strip_accents (bool) – Whether to strip all accents. If this option is not specified, it will be determined by the value of lowercase (as in the original BERT).
tokenize(text, never_split=None)

Tokenizes a piece of text using the basic tokenizer.
Parameters
- text (str) – A piece of text.
- never_split (List[str]) – List of tokens not to split.

Returns
A list of tokens.

Return type
list (str)
Examples

from paddlenlp.transformers import BasicTokenizer

basictokenizer = BasicTokenizer()
tokens = basictokenizer.tokenize('He was a puppeteer')
'''
['he', 'was', 'a', 'puppeteer']
'''
class BertTokenizer(vocab_file, do_lower_case=True, do_basic_tokenize=True, never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', tokenize_chinese_chars=True, strip_accents=None, **kwargs)

Bases: paddlenlp.transformers.tokenizer_utils.PretrainedTokenizer

Constructs a BERT tokenizer. It uses a basic tokenizer to do punctuation splitting, lower casing and so on, and then a WordPiece tokenizer to split into subwords.
Parameters
- vocab_file (str) – The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.
- do_lower_case (bool, optional) – Whether to lowercase the input when tokenizing. Defaults to True.
- do_basic_tokenize (bool, optional) – Whether to use a basic tokenizer before a WordPiece tokenizer. Defaults to True.
- never_split (Iterable, optional) – Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True. Defaults to None.
- unk_token (str, optional) – A special token representing the unknown (out-of-vocabulary) token. An unknown token is set to unk_token in order to be converted to an ID. Defaults to "[UNK]".
- sep_token (str, optional) – A special token separating two different sentences in the same input. Defaults to "[SEP]".
- pad_token (str, optional) – A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".
- cls_token (str, optional) – A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".
- mask_token (str, optional) – A special token representing a masked token, used in the masked language modeling task where the model tries to predict the original unmasked token. Defaults to "[MASK]".
- tokenize_chinese_chars (bool, optional) – Whether to tokenize Chinese characters. Defaults to True.
- strip_accents (bool, optional) – Whether to strip all accents. If this option is not specified, it will be determined by the value of lowercase (as in the original BERT). Defaults to None.
Examples

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer('He was a puppeteer')
print(inputs)
'''
{'input_ids': [101, 2002, 2001, 1037, 13997, 11510, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0]}
'''
property vocab_size

Returns the size of the vocabulary.

Returns
The size of the vocabulary.

Return type
int
get_vocab()

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns
The vocabulary.

Return type
Dict[str, int]
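A minimal sketch of the equivalence noted above, reusing the 'bert-base-uncased' tokenizer from the other examples; the concrete sizes and IDs depend on the vocabulary file:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
vocab = tokenizer.get_vocab()

# vocab_size reports the same number of entries as get_vocab().
print(tokenizer.vocab_size == len(vocab))

# For an in-vocab token, both lookups return the same ID.
print(vocab['puppet'] == tokenizer.convert_tokens_to_ids('puppet'))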
convert_tokens_to_string(tokens)

Converts a sequence of tokens (list of strings) into a single string. Since WordPiece introduces ## to mark subwords, the ## is removed when converting.

Parameters
- tokens (list) – A list of strings representing the tokens to be converted.

Returns
Converted string from tokens.

Return type
str
Examples

from paddlenlp.transformers import BertTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = berttokenizer.tokenize('He was a puppeteer')
'''
['he', 'was', 'a', 'puppet', '##eer']
'''
strings = berttokenizer.convert_tokens_to_string(tokens)
'''
he was a puppeteer
'''
num_special_tokens_to_add(pair=False)

Returns the number of tokens added when encoding a sequence with special tokens.

Parameters
- pair (bool) – Whether the input is a sequence pair or a single sequence. Defaults to False, meaning the input is a single sequence.

Returns
Number of tokens added to sequences.

Return type
int
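As a sketch of the counts for BERT, reusing the 'bert-base-uncased' tokenizer from the examples above: a single sequence gains [CLS] and [SEP], and a sequence pair gains one more [SEP]:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

print(tokenizer.num_special_tokens_to_add(pair=False))  # 2, for [CLS] ... [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))   # 3, for [CLS] A [SEP] B [SEP]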
build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

A BERT sequence has the following format:

- single sequence: [CLS] X [SEP]
- pair of sequences: [CLS] A [SEP] B [SEP]

Parameters
- token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns
List of input_ids with the appropriate special tokens.

Return type
List[int]
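A sketch of the two formats above, using IDs produced by tokenize and convert_tokens_to_ids; the exact IDs, including those of [CLS] and [SEP], depend on the 'bert-base-uncased' vocabulary:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He made puppets'))

# Single sequence: [CLS] A [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a))

# Pair of sequences: [CLS] A [SEP] B [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))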
build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)

Builds an offset map from a pair of offset maps by concatenating and adding the offsets of special tokens.

A BERT offset_mapping has the following format:

- single sequence: (0,0) X (0,0)
- pair of sequences: (0,0) A (0,0) B (0,0)

Parameters
- offset_mapping_0 (List[tuple]) – List of wordpiece offsets to which the special tokens will be added.
- offset_mapping_1 (List[tuple], optional) – Optional second list of wordpiece offsets for offset mapping pairs. Defaults to None.

Returns
A list of wordpiece offsets with the appropriate offsets of special tokens.

Return type
List[tuple]
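A sketch with hand-written character offsets for two wordpieces; per the formats above, each special token contributes a (0, 0) entry:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Character offsets of the wordpieces of "he was" (written by hand for illustration).
offsets = [(0, 2), (3, 6)]

# Single sequence: (0,0) X (0,0)
print(tokenizer.build_offset_mapping_with_special_tokens(offsets))
# Expected: [(0, 0), (0, 2), (3, 6), (0, 0)]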
create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Creates a mask from the two sequences passed, to be used in a sequence-pair classification task.

A BERT sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
- token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.

Returns
List of token_type_ids according to the given sequence(s).

Return type
List[int]
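A sketch of the mask layout above, with token IDs obtained as in the earlier examples: the 0s cover [CLS] A [SEP] and the 1s cover B [SEP]:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He made puppets'))

# Only the first portion (all 0s) when token_ids_1 is None.
print(tokenizer.create_token_type_ids_from_sequences(ids_a))

# 0s for [CLS] A [SEP], 1s for B [SEP].
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))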
get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's encode method.

Parameters
- token_ids_0 (List[int]) – A list of inputs_ids for the first sequence.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs. Defaults to None.
- already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns
A list of integers that are either 0 or 1: 1 for a special token, 0 for a sequence token.

Return type
List[int]
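A sketch, keeping already_has_special_tokens at its default of False, so the mask describes where special tokens would be placed around the given IDs:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('He was a puppeteer'))

# 1 marks positions reserved for special tokens ([CLS] ... [SEP]),
# 0 marks positions holding regular sequence tokens.
print(tokenizer.get_special_tokens_mask(ids))
# Expected: [1, 0, 0, 0, 0, 0, 1], given the 5 wordpieces from the earlier example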
class WordpieceTokenizer(vocab, unk_token, max_input_chars_per_word=100)

Bases: object

Runs WordPiece tokenization.

Parameters
- vocab (Vocab|dict) – Vocab of the word piece tokenizer.
- unk_token (str) – A specific token to replace all unknown tokens.
- max_input_chars_per_word (int) – If a word's length is more than max_input_chars_per_word, it will be treated as an unknown word. Defaults to 100.
tokenize(text)

Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform tokenization using the given vocabulary.

Parameters
- text – A single token or whitespace separated tokens. This should have already been passed through BasicTokenizer.

Returns
A list of wordpiece tokens.

Return type
list (str)
Examples

from paddlenlp.transformers import BertTokenizer, WordpieceTokenizer

berttokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
vocab = berttokenizer.vocab
unk_token = berttokenizer.unk_token
wordpiecetokenizer = WordpieceTokenizer(vocab, unk_token)
inputs = wordpiecetokenizer.tokenize("unaffable")
print(inputs)
'''
["un", "##aff", "##able"]
'''