tokenizer

class BertJapaneseTokenizer(vocab_file, do_lower_case=False, do_word_tokenize=True, do_subword_tokenize=True, word_tokenizer_type='mecab', subword_tokenizer_type='wordpiece', never_split=None, mecab_kwargs=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]

Bases: BertTokenizer

Construct a BERT tokenizer for Japanese text, based on a MecabTokenizer.

Parameters:
  • vocab_file (str) -- The vocabulary file path (ends with '.txt') required to instantiate a WordpieceTokenizer.

  • do_lower_case (bool, optional) -- Whether or not to lowercase the input when tokenizing. Defaults to `False`.

  • do_word_tokenize (bool, optional) -- Whether to do word tokenization. Defaults to `True`.

  • do_subword_tokenize (bool, optional) -- Whether to do subword tokenization. Defaults to `True`.

  • word_tokenizer_type (str, optional) -- Type of word tokenizer. Defaults to `mecab`.

  • subword_tokenizer_type (str, optional) -- Type of subword tokenizer. Defaults to `wordpiece`.

  • never_split (bool, optional) -- Kept for backward compatibility purposes. Defaults to `None`.

  • mecab_kwargs (dict, optional) -- Dictionary of keyword arguments passed to the MecabTokenizer constructor.

  • unk_token (str) -- A special token representing an unknown (out-of-vocabulary) token. A token that is not in the vocabulary is set to unk_token so that it can be converted to an ID. Defaults to "[UNK]".

  • sep_token (str) -- A special token separating two different sentences in the same input. Defaults to "[SEP]".

  • pad_token (str) -- A special token used to make arrays of tokens the same size for batching purposes. Defaults to "[PAD]".

  • cls_token (str) -- A special token used for sequence classification. It is the first token of the sequence when built with special tokens. Defaults to "[CLS]".

  • mask_token (str) -- A special token representing a masked token. This is the token used in the masked language modeling task, where the model tries to predict the original unmasked token. Defaults to "[MASK]".

Example

from paddlenlp.transformers import BertJapaneseTokenizer
tokenizer = BertJapaneseTokenizer.from_pretrained('iverxin/bert-base-japanese/')

inputs = tokenizer('こんにちは')
print(inputs)

'''
{'input_ids': [2, 10350, 25746, 28450, 3], 'token_type_ids': [0, 0, 0, 0, 0]}
'''
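Building on the example above, the same tokenizer can also encode a sentence pair, which is where sep_token, cls_token and the token_type_ids segments become visible. The sketch below assumes the base tokenizer's `text_pair` keyword is available; the exact `input_ids` depend on the checkpoint's vocabulary, so only the structure of the output is indicated.

# Encode a sentence pair: [CLS] sentence_a [SEP] sentence_b [SEP]
pair_inputs = tokenizer('こんにちは', text_pair='元気ですか')
print(pair_inputs['input_ids'])       # ids for [CLS] ... [SEP] ... [SEP]
print(pair_inputs['token_type_ids'])  # 0 for the first segment, 1 for the second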
class MecabTokenizer(do_lower_case=False, never_split=None, normalize_text=True, mecab_dic='ipadic', mecab_option=None)[source]

Bases: object

Runs basic tokenization with the MeCab morphological parser.

tokenize(text, never_split=None, **kwargs)[source]

Tokenizes a piece of text.
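A minimal usage sketch for the word-level tokenizer. It assumes the MeCab Python bindings (e.g. fugashi with the ipadic dictionary) are installed, and that MecabTokenizer can be imported from paddlenlp.transformers; depending on the version it may instead need to be imported from paddlenlp.transformers.bert_japanese.tokenizer.

from paddlenlp.transformers import MecabTokenizer

# Word-level tokenization with the MeCab morphological parser
word_tokenizer = MecabTokenizer(do_lower_case=False, mecab_dic='ipadic')
print(word_tokenizer.tokenize('こんにちは、世界'))
# e.g. ['こんにちは', '、', '世界'] (the exact segmentation depends on the dictionary)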

class CharacterTokenizer(vocab, unk_token, normalize_text=True)[source]

Bases: object

Runs character tokenization.

tokenize(text)[source]

Tokenizes a piece of text into characters.

For example, input = "apple" will return as output ["a", "p", "p", "l", "e"].

Parameters:

text -- A single token or whitespace separated tokens. This should have already been passed through BasicTokenizer.

Returns:

A list of characters.
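A minimal sketch mirroring the "apple" example above. It assumes CharacterTokenizer can be imported from paddlenlp.transformers (it may also live under paddlenlp.transformers.bert_japanese.tokenizer) and that vocab is a token-to-id mapping; characters not found in the vocabulary are expected to be replaced by unk_token.

from paddlenlp.transformers import CharacterTokenizer

# A toy vocabulary: any mapping whose keys are the known characters
toy_vocab = {'[UNK]': 0, 'a': 1, 'p': 2, 'l': 3, 'e': 4}
char_tokenizer = CharacterTokenizer(vocab=toy_vocab, unk_token='[UNK]')

print(char_tokenizer.tokenize('apple'))
# ['a', 'p', 'p', 'l', 'e']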