data_collator¶
class DataCollatorWithPadding(tokenizer: paddlenlp.transformers.tokenizer_utils_base.PretrainedTokenizerBase, padding: Union[bool, str, paddlenlp.transformers.tokenizer_utils_base.PaddingStrategy] = True, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_tensors: str = 'pd', return_attention_mask: Optional[bool] = None)[source]¶

Bases: object
Data collator that will dynamically pad the inputs to the longest sequence in the batch.
Parameters

    tokenizer (paddlenlp.transformers.PretrainedTokenizer) – The tokenizer used for encoding the data.
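A minimal usage sketch (the checkpoint name "bert-base-uncased" below is only an illustrative assumption, not part of this API):

from paddlenlp.data import DataCollatorWithPadding
from paddlenlp.transformers import AutoTokenizer

# "bert-base-uncased" is only an example checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer)

# Encode two sentences of different lengths; no padding is applied yet.
features = [tokenizer(text) for text in ["A short sentence.", "A slightly longer example sentence."]]

# The collator pads every example to the longest length in the batch and
# returns a dict of Paddle tensors (return_tensors='pd' by default).
batch = collator(features)
print(batch["input_ids"].shape)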
default_data_collator(features: List[InputDataClass], return_tensors='pd') → Dict[str, Any][source]¶

Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:

    label: handles a single value (int or float) per object
    label_ids: handles a list of values per object

Does not do any additional preprocessing: property names of the input object will be used as the corresponding inputs to the model. See glue and ner for examples of how it is useful.
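A minimal sketch with made-up feature values; note that this collator does not pad, so list-valued fields should already share the same length:

from paddlenlp.data import default_data_collator

# Toy features; the values are illustrative only.
features = [
    {"input_ids": [1, 2, 3, 0], "label": 0},
    {"input_ids": [4, 5, 6, 7], "label": 1},
]

# Returns a dict of Paddle tensors (return_tensors='pd' by default), with the
# special handling described above applied to the "label" key.
batch = default_data_collator(features)
print({name: value.shape for name, value in batch.items()})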
class DefaultDataCollator(return_tensors: str = 'pd')[source]¶

Bases: paddlenlp.data.data_collator.DataCollatorMixin

Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:

    label: handles a single value (int or float) per object
    label_ids: handles a list of values per object

Does not do any additional preprocessing: property names of the input object will be used as the corresponding inputs to the model. See glue and ner for examples of how it is useful.

This is an object (like other data collators) rather than a pure function like default_data_collator. This can be helpful if you need to set a return_tensors value at initialization.

Parameters

    return_tensors (str) – Whether to return a Paddle Tensor or a numpy array.
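A minimal sketch of the class-based variant; here the return type is fixed at initialization, assuming the numpy option ("np") suggested by the return_tensors parameter (feature values are illustrative):

from paddlenlp.data import DefaultDataCollator

# Configure the return type once, at construction time.
collator = DefaultDataCollator(return_tensors="np")

features = [
    {"input_ids": [1, 2, 3, 0], "label": 0},
    {"input_ids": [4, 5, 6, 7], "label": 1},
]

batch = collator(features)  # dict of numpy arrays instead of Paddle tensors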
class DataCollatorForTokenClassification(tokenizer: paddlenlp.transformers.tokenizer_utils_base.PretrainedTokenizerBase, padding: Union[bool, str, paddlenlp.transformers.tokenizer_utils_base.PaddingStrategy] = True, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, label_pad_token_id: int = -100, return_tensors: str = 'pd')[source]¶

Bases: paddlenlp.data.data_collator.DataCollatorMixin
Data collator that will dynamically pad the inputs received, as well as the labels.
Parameters

    tokenizer ([PretrainedTokenizer] or [PretrainedFasterTokenizer]) – The tokenizer used for encoding the data.

    padding (bool, str or [PaddingStrategy], optional, defaults to True) – Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:

        True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
        'max_length': Pad to a maximum length specified with the argument max_length, or to the maximum acceptable input length for the model if that argument is not provided.
        False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).

    max_length (int, optional) – Maximum length of the returned list and optionally padding length (see above).

    pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

    label_pad_token_id (int, optional, defaults to -100) – The id to use when padding the labels (-100 will be automatically ignored by Paddle loss functions).

    return_tensors (str) – The type of Tensor to return. Allowable values are “pd” and “np”.
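A minimal sketch of padding token-classification features, where label lists of unequal length are padded with label_pad_token_id (the checkpoint name and token ids are illustrative assumptions):

from paddlenlp.data import DataCollatorForTokenClassification
from paddlenlp.transformers import AutoTokenizer

# "bert-base-uncased" is only an example checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForTokenClassification(tokenizer, label_pad_token_id=-100)

# Per-token label lists of different lengths; ids and labels are made up.
features = [
    {"input_ids": [101, 2023, 102], "labels": [0, 1, 0]},
    {"input_ids": [101, 2023, 2003, 1037, 102], "labels": [0, 1, 2, 1, 0]},
]

# input_ids are padded with the tokenizer's pad token, labels with -100.
batch = collator(features)
print(batch["input_ids"].shape, batch["labels"].shape)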
class DataCollatorForSeq2Seq(tokenizer: paddlenlp.transformers.tokenizer_utils_base.PretrainedTokenizerBase, model: Optional[Any] = None, padding: Union[bool, str, paddlenlp.transformers.tokenizer_utils_base.PaddingStrategy] = True, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, label_pad_token_id: int = -100, return_tensors: str = 'pd')[source]¶

Bases: object
Data collator that will dynamically pad the inputs received, as well as the labels.
Parameters

    tokenizer ([PretrainedTokenizer] or [PretrainedFasterTokenizer]) – The tokenizer used for encoding the data.

    model ([PreTrainedModel]) – The model that is being trained. If set and the model has a prepare_decoder_input_ids_from_labels method, it is used to prepare the decoder_input_ids. This is useful when using label_smoothing to avoid calculating the loss twice.

    padding (bool, str or [PaddingStrategy], optional, defaults to True) – Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:

        True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
        'max_length': Pad to a maximum length specified with the argument max_length, or to the maximum acceptable input length for the model if that argument is not provided.
        False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).

    max_length (int, optional) – Maximum length of the returned list and optionally padding length (see above).

    pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

    label_pad_token_id (int, optional, defaults to -100) – The id to use when padding the labels (-100 will be automatically ignored by Paddle loss functions).

    return_tensors (str) – The type of Tensor to return. Allowable values are “pd” and “np”.
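A minimal sketch of padding encoder inputs and decoder labels together; the checkpoint name "t5-small" and the example sentences are assumptions used only for illustration:

from paddlenlp.data import DataCollatorForSeq2Seq
from paddlenlp.transformers import AutoTokenizer

# "t5-small" is only an example checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(tokenizer, padding=True, label_pad_token_id=-100)

def make_feature(src, tgt):
    # Encode a source/target pair; the labels are the target token ids.
    feature = tokenizer(src)
    feature["labels"] = tokenizer(tgt)["input_ids"]
    return feature

features = [
    make_feature("translate English to German: Hello", "Hallo"),
    make_feature("translate English to German: How are you?", "Wie geht es dir?"),
]

# Inputs are padded with the pad token, labels with label_pad_token_id (-100).
batch = collator(features)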