data_collator#

class DataCollatorWithPadding(tokenizer: PretrainedTokenizerBase, padding: bool | str | PaddingStrategy = True, max_length: int | None = None, pad_to_multiple_of: int | None = None, return_tensors: str = 'pd', return_attention_mask: bool | None = None)[source]#

Bases: object

Data collator that will dynamically pad the inputs to the longest sequence in the batch.

Parameters:

tokenizer (paddlenlp.transformers.PretrainedTokenizer) -- The tokenizer used for encoding the data.
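The dynamic-padding behavior can be sketched in plain Python, without depending on paddlenlp. The `pad_batch` helper below is illustrative only (not the real API); `pad_id` stands in for the tokenizer's `pad_token_id`:

```python
# Minimal sketch of the dynamic padding performed by DataCollatorWithPadding:
# every "input_ids" list is padded to the longest sequence in the batch, and
# an attention_mask marks real tokens (1) versus padding (0).
def pad_batch(features, pad_id=0, pad_to_multiple_of=None):
    max_len = max(len(f["input_ids"]) for f in features)
    if pad_to_multiple_of is not None:
        # Round max_len up to the next multiple (useful for Tensor Cores).
        max_len = ((max_len + pad_to_multiple_of - 1) // pad_to_multiple_of) * pad_to_multiple_of
    batch = {"input_ids": [], "attention_mask": []}
    for f in features:
        ids = f["input_ids"]
        pad_len = max_len - len(ids)
        batch["input_ids"].append(ids + [pad_id] * pad_len)
        batch["attention_mask"].append([1] * len(ids) + [0] * pad_len)
    return batch

batch = pad_batch([{"input_ids": [5, 6, 7]}, {"input_ids": [8, 9]}], pad_to_multiple_of=4)
```

With `pad_to_multiple_of=4`, the longest sequence (length 3) is rounded up to 4, so both examples come out four ids long.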

default_data_collator(features: List[InputDataClass], return_tensors='pd') → Dict[str, Any][source]#

Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:

  • label: handles a single value (int or float) per object

  • label_ids: handles a list of values per object

Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for an example of how it is useful.
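The special handling of the `label` and `label_ids` keys can be illustrated with a plain-Python sketch (the helper name below is hypothetical, not the real implementation): both keys are renamed to `labels`, and every other key is stacked as-is.

```python
# Sketch of default_data_collator's key handling: "label" holds one scalar
# per example, "label_ids" a list per example; either is collected under the
# "labels" key. All other keys pass through unchanged.
def simple_default_collate(features):
    batch = {}
    first = features[0]
    if "label" in first:
        batch["labels"] = [f["label"] for f in features]
    elif "label_ids" in first:
        batch["labels"] = [f["label_ids"] for f in features]
    for key in first:
        if key not in ("label", "label_ids"):
            batch[key] = [f[key] for f in features]
    return batch

batch = simple_default_collate(
    [{"input_ids": [1, 2], "label": 0}, {"input_ids": [3, 4], "label": 1}]
)
```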

class DefaultDataCollator(return_tensors: str = 'pd')[source]#

Bases: DataCollatorMixin

Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:

  • label: handles a single value (int or float) per object

  • label_ids: handles a list of values per object

Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for an example of how it is useful. This is an object (like other data collators) rather than a pure function like default_data_collator, which can be helpful if you need to set a return_tensors value at initialization.

Parameters:

return_tensors (str) -- The type of array to return: a Paddle Tensor ("pd") or a NumPy array ("np").

class DataCollatorForTokenClassification(tokenizer: PretrainedTokenizerBase, padding: bool | str | PaddingStrategy = True, max_length: int | None = None, pad_to_multiple_of: int | None = None, label_pad_token_id: int = -100, return_tensors: str = 'pd')[source]#

Bases: DataCollatorMixin

Data collator that will dynamically pad the inputs received, as well as the labels.

Parameters:
  • tokenizer ([PretrainedTokenizer] or [PretrainedFasterTokenizer]) -- The tokenizer used for encoding the data.

  • padding (bool, str or [PaddingStrategy], optional, defaults to True) --

    Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).

  • max_length (int, optional) -- Maximum length of the returned list and optionally padding length (see above).

  • pad_to_multiple_of (int, optional) --

    If set will pad the sequence to a multiple of the provided value.

    This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • label_pad_token_id (int, optional, defaults to -100) -- The id to use when padding the labels (-100 will be automatically ignored by PaddlePaddle loss functions).

  • return_tensors (str) -- The type of tensor to return. Allowable values are "np" and "pd".
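The label-padding rule above can be sketched in plain Python (the helper is illustrative, not the real implementation): labels are padded with `label_pad_token_id` so that padded positions are skipped by the loss.

```python
# Sketch of how DataCollatorForTokenClassification pads labels alongside
# input_ids: label positions added by padding get -100, which PaddlePaddle
# loss functions ignore.
def pad_token_cls_batch(features, pad_id=0, label_pad_token_id=-100):
    max_len = max(len(f["input_ids"]) for f in features)
    out = {"input_ids": [], "labels": []}
    for f in features:
        pad_len = max_len - len(f["input_ids"])
        out["input_ids"].append(f["input_ids"] + [pad_id] * pad_len)
        out["labels"].append(f["labels"] + [label_pad_token_id] * pad_len)
    return out

batch = pad_token_cls_batch(
    [{"input_ids": [11, 12, 13], "labels": [1, 2, 3]},
     {"input_ids": [14], "labels": [4]}]
)
```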

class DataCollatorForSeq2Seq(tokenizer: PretrainedTokenizerBase, model: Any | None = None, padding: bool | str | PaddingStrategy = True, max_length: int | None = None, pad_to_multiple_of: int | None = None, label_pad_token_id: int = -100, return_tensors: str = 'pd', return_attention_mask: bool | None = None, max_label_length: int | None = None)[source]#

Bases: object

Data collator that will dynamically pad the inputs received, as well as the labels.

Parameters:
  • tokenizer ([PretrainedTokenizer] or [PretrainedFasterTokenizer]) -- The tokenizer used for encoding the data.

  • model ([PreTrainedModel]) --

    The model that is being trained. If set and the model has a prepare_decoder_input_ids_from_labels method, it is used to prepare the decoder_input_ids.

    This is useful when using label_smoothing to avoid calculating the loss twice.

  • padding (bool, str or [PaddingStrategy], optional, defaults to True) --

    Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).

  • max_length (int, optional) -- Maximum length of the returned list and optionally padding length (see above).

  • pad_to_multiple_of (int, optional) --

    If set will pad the sequence to a multiple of the provided value.

    This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • label_pad_token_id (int, optional, defaults to -100) -- The id to use when padding the labels (-100 will be automatically ignored by PaddlePaddle loss functions).

  • return_tensors (str) -- The type of tensor to return. Allowable values are "np" and "pd".

  • max_label_length (int, optional, defaults to None) -- If set, pad the labels to max_label_length.
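The two seq2seq-specific steps can be sketched without paddlenlp: pad the labels with `label_pad_token_id`, then derive decoder_input_ids by shifting the labels right. The shift shown here is the conventional one; the model's actual prepare_decoder_input_ids_from_labels hook may differ, and the token ids are illustrative.

```python
# Sketch of DataCollatorForSeq2Seq's label handling: pad labels to a common
# length with -100, then shift right to build decoder_input_ids (prepend the
# decoder start token, drop the last position, and replace -100 with the real
# pad id so the decoder only sees valid token ids).
def collate_seq2seq_labels(label_lists, label_pad_token_id=-100,
                           decoder_start_token_id=0, pad_token_id=0):
    max_len = max(len(l) for l in label_lists)
    labels = [l + [label_pad_token_id] * (max_len - len(l)) for l in label_lists]
    decoder_input_ids = [
        [decoder_start_token_id]
        + [t if t != label_pad_token_id else pad_token_id for t in l[:-1]]
        for l in labels
    ]
    return labels, decoder_input_ids

labels, dec = collate_seq2seq_labels([[7, 8, 9], [5]])
```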

class DataCollatorForLanguageModeling(tokenizer: PretrainedTokenizerBase, mlm: bool = True, mlm_probability: float = 0.15, pad_to_multiple_of: int | None = None, return_tensors: str = 'pd')[source]#

Bases: DataCollatorMixin

Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length.

Parameters:
  • tokenizer ([PreTrainedTokenizer] or [PreTrainedTokenizerFast]) -- The tokenizer used for encoding the data.

  • mlm (bool, optional, defaults to True) -- Whether or not to use masked language modeling. If set to False, the labels are the same as the inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for non-masked tokens and the value to predict for the masked tokens.
  • mlm_probability (float, optional, defaults to 0.15) -- The probability with which to (randomly) mask tokens in the input, when mlm is set to True.

  • pad_to_multiple_of (int, optional) -- If set will pad the sequence to a multiple of the provided value.

  • return_tensors (str) -- The type of tensor to return. Allowable values are "np" and "pd".

<Tip> For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the "special_tokens_mask" key, as returned by a [PreTrainedTokenizer] or a [PreTrainedTokenizerFast] with the argument return_special_tokens_mask=True. </Tip>
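The 80/10/10 masking rule can be sketched with NumPy (the helper below is illustrative, not the real numpy_mask_tokens; token ids and vocab size are made up): roughly mlm_probability of the non-special tokens become prediction targets, of which 80% are replaced by [MASK], 10% by a random token, and 10% are left unchanged.

```python
import numpy as np

# Sketch of MLM masking: labels keep the original id at masked positions and
# -100 everywhere else, so only masked positions contribute to the loss.
def mask_tokens(inputs, special_tokens_mask, mask_token_id=103, vocab_size=1000,
                mlm_probability=0.15, rng=None):
    rng = rng or np.random.default_rng(0)
    inputs = inputs.copy()
    labels = inputs.copy()
    prob = np.full(inputs.shape, mlm_probability)
    prob[special_tokens_mask.astype(bool)] = 0.0   # never mask special tokens
    masked = rng.random(inputs.shape) < prob
    labels[~masked] = -100
    replace = masked & (rng.random(inputs.shape) < 0.8)
    inputs[replace] = mask_token_id                # 80% of targets: [MASK]
    random = masked & ~replace & (rng.random(inputs.shape) < 0.5)
    inputs[random] = rng.integers(0, vocab_size, size=int(random.sum()))  # 10%: random id
    return inputs, labels                          # remaining 10%: unchanged

ids = np.arange(2, 34).reshape(2, 16)
special = np.zeros_like(ids)
masked_inputs, labels = mask_tokens(ids, special)
```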

paddle_mask_tokens(inputs: Any, special_tokens_mask: Any | None = None) → Tuple[Any, Any][source]#

Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.

numpy_mask_tokens(inputs: Any, special_tokens_mask: Any | None = None) → Tuple[Any, Any][source]#

Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.

class DataCollatorForWholeWordMask(tokenizer: PretrainedTokenizerBase, mlm: bool = True, mlm_probability: float = 0.15, pad_to_multiple_of: int | None = None, return_tensors: str = 'pd')[source]#

Bases: DataCollatorForLanguageModeling

Data collator used for language modeling that masks entire words.

  • collates batches of tensors, honoring their tokenizer's pad_token

  • preprocesses batches for masked language modeling

<Tip> This collator relies on details of the implementation of subword tokenization by [BertTokenizer], specifically that subword tokens are prefixed with ##. For tokenizers that do not adhere to this scheme, this collator will produce an output that is roughly equivalent to [DataCollatorForLanguageModeling]. </Tip>
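The whole-word grouping this collator relies on can be sketched in plain Python (the helper is illustrative, not the real implementation): WordPiece continuation pieces start with "##", so a word's pieces can be regrouped and masked as one unit.

```python
# Sketch of whole-word mask candidate grouping: each inner list collects the
# token indices that make up one whole word; special tokens are skipped.
def whole_word_candidates(tokens):
    candidates = []
    for i, tok in enumerate(tokens):
        if tok in ("[CLS]", "[SEP]"):
            continue
        if tok.startswith("##") and candidates:
            candidates[-1].append(i)   # continuation piece joins the previous word
        else:
            candidates.append([i])     # a new word starts here
    return candidates

groups = whole_word_candidates(["[CLS]", "un", "##believ", "##able", "fun", "[SEP]"])
```

Masking then picks whole groups (e.g. indices 1-3 together), rather than individual subword positions.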

paddle_mask_tokens(inputs: Any, mask_labels: Any) → Tuple[Any, Any][source]#

Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing 'mask_labels' means we use whole word masking (wwm); indices are masked directly according to its reference.

numpy_mask_tokens(inputs: Any, mask_labels: Any) → Tuple[Any, Any][source]#

Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing 'mask_labels' means we use whole word masking (wwm); indices are masked directly according to its reference.