modeling#

class FunnelModel(config: FunnelConfig)[source]#

Bases: FunnelPreTrainedModel

get_input_embeddings()[source]#

Get the input embeddings of the model.

Returns:

The input embedding layer of the model.

Return type:

nn.Embedding

set_input_embeddings(new_embeddings)[source]#

Set new input embeddings for the model.

Parameters:

new_embeddings (nn.Embedding) – the new input embedding layer for the model

Raises:

NotImplementedError – If the model has not implemented the set_input_embeddings method.
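
The two accessors above can be used to inspect or swap out the word embedding layer. A minimal sketch, assuming the funnel-transformer/small pretrained weights are available through from_pretrained:

import paddle.nn as nn
from paddlenlp.transformers import FunnelModel

model = FunnelModel.from_pretrained("funnel-transformer/small")

# Inspect the current input embedding layer.
old_embeddings = model.get_input_embeddings()
vocab_size, hidden_size = old_embeddings.weight.shape

# Replace it with a freshly initialized embedding layer of the same shape.
model.set_input_embeddings(nn.Embedding(vocab_size, hidden_size))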

forward(input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]#

Defines the computation performed at every call of the model.

Parameters:
  • input_ids (Tensor, optional) – Indices of input sequence tokens in the vocabulary, of shape (batch_size, sequence_length).

  • attention_mask (Tensor, optional) – Mask to avoid performing attention on padding token indices. Values selected in [0, 1]: 1 for tokens that are attended to, 0 for tokens that are masked out.

  • token_type_ids (Tensor, optional) – Segment token indices to indicate the first and second portions of the inputs. Values selected in [0, 1].

  • inputs_embeds (Tensor, optional) – Pre-computed input embeddings, passed instead of input_ids.

  • output_attentions (bool, optional) – Whether to return the attention weights of all layers.

  • output_hidden_states (bool, optional) – Whether to return the hidden states of all layers.

  • return_dict (bool, optional) – Whether to return a structured model output instead of a plain tuple.
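
A minimal forward-pass sketch, assuming FunnelTokenizer and the funnel-transformer/small weights are available in paddlenlp.transformers:

import paddle
from paddlenlp.transformers import FunnelModel, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")

# Tokenize a single sentence and add a batch dimension.
inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# Returns the hidden states of the last encoder block.
outputs = model(**inputs)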

class FunnelForSequenceClassification(config, num_classes=2)[source]#

Bases: FunnelPreTrainedModel

base_model_class#

alias of FunnelModel

forward(input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]#
labels (paddle.Tensor of shape (batch_size,), optional):

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if config.num_labels > 1, a classification loss is computed (cross-entropy).
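
A minimal sketch of computing the classification loss with labels; passing num_classes through from_pretrained is assumed to be forwarded to the model as with other PaddleNLP models:

import paddle
from paddlenlp.transformers import FunnelForSequenceClassification, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small", num_classes=2)

inputs = tokenizer("This is a great movie!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
labels = paddle.to_tensor([1])  # shape (batch_size,)

# When labels are given, the output additionally contains the classification loss.
outputs = model(**inputs, labels=labels)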

class FunnelForTokenClassification(config, num_classes=2)[source]#

Bases: FunnelPreTrainedModel

forward(input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]#
labels (paddle.Tensor of shape (batch_size, sequence_length), optional):

Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
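
A minimal sketch for token-level prediction; the num_classes value below is a placeholder:

import paddle
from paddlenlp.transformers import FunnelForTokenClassification, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForTokenClassification.from_pretrained("funnel-transformer/small", num_classes=2)

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# Without labels the model returns per-token logits of shape
# (batch_size, sequence_length, num_classes).
outputs = model(**inputs)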

class FunnelForQuestionAnswering(config)[source]#

Bases: FunnelPreTrainedModel

forward(input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]#
start_positions (paddle.Tensor of shape (batch_size,), optional):

Labels for the position (index) of the start of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account when computing the loss.

end_positions (paddle.Tensor of shape (batch_size,), optional):

Labels for the position (index) of the end of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account when computing the loss.
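
A minimal sketch for extractive question answering; the (question, context) pair and the start/end positions below are placeholders:

import paddle
from paddlenlp.transformers import FunnelForQuestionAnswering, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForQuestionAnswering.from_pretrained("funnel-transformer/small")

# Encode a (question, context) pair.
inputs = tokenizer("Who develops PaddleNLP?", "PaddleNLP is developed by PaddlePaddle.")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# Without positions the model returns start/end logits over the sequence;
# with start_positions/end_positions it also computes the span loss.
start_positions = paddle.to_tensor([5])
end_positions = paddle.to_tensor([7])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)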