modeling#

class ErnieGramModel(config: ErnieGramConfig)[source]#

Bases: ErnieGramPretrainedModel

The bare ERNIE-Gram Model transformer outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters:

config (ErnieGramConfig) – An instance of ErnieGramConfig used to construct ErnieGramModel.

forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, past_key_values: Tuple[Tuple[Tensor]] | None = None, use_cache: bool | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • token_type_ids (Tensor, optional) –

    Segment token indices to indicate first and second portions of the inputs. Indices can be either 0 or 1:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means no segment embeddings are added to the token embeddings.

  • position_ids (Tensor, optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. Its shape is [batch_size, sequence_length] and its data type is int32 or int64. Defaults to None.

  • attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, masked tokens have False values and the others have True values. When the data type is int, masked tokens have 0 values and the others have 1 values. When the data type is float, masked tokens have -INF values and the others have 0 values. It is a tensor whose shape can be broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]; for example, its shape can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length] or [batch_size, num_attention_heads, sequence_length, sequence_length]. ERNIE uses whole-word masking, so all sub-tokens of a word share the same value; for example, for the word “使用”, the tokens “使” and “用” will have the same value. Defaults to None, which means no positions are masked out (see the padded-batch sketch after the example below).

  • inputs_embeds (Tensor, optional) – If you want to control how the input_ids indices are converted into associated vectors, you can pass an embedded representation directly instead of input_ids.

  • past_key_values (tuple(tuple(Tensor)), optional) – The length of the tuple equals the number of layers, and each inner tuple has 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head) containing the precomputed key and value hidden states of the attention blocks. If past_key_values is used, the user can optionally input only the last input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length). See the incremental-decoding sketch after this list.

  • use_cache (bool, optional) – If set to True, the key and value states of all attention layers are returned as past_key_values and can be used to speed up decoding. Defaults to None.

  • output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

  • output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

  • return_dict (bool, optional) – Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
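The use_cache / past_key_values pair enables incremental decoding: run the full prefix once, keep the returned cache, then feed only the newest token together with that cache. A minimal sketch of the call pattern, assuming the HF-style past_key_values attribute on the returned ModelOutput (ERNIE-Gram is an encoder, so treat this as an illustration of the API rather than a generation recipe):

import paddle
from paddlenlp.transformers import ErnieGramModel, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramModel.from_pretrained('ernie-gram-zh')

# Step 1: encode the full prefix once and keep the cache.
prefix = tokenizer("欢迎使用")
input_ids = paddle.to_tensor([prefix["input_ids"]])
outputs = model(input_ids=input_ids, use_cache=True, return_dict=True)
cache = outputs.past_key_values  # assumed attribute name on the ModelOutput

# Step 2: feed only the newest token together with the cached states.
next_ids = paddle.to_tensor([[tokenizer.convert_tokens_to_ids("百")]])
step = model(input_ids=next_ids, past_key_values=cache, use_cache=True, return_dict=True)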

Returns:

Returns tuple (sequence_output, pooled_output).

With the fields:

  • sequence_output (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • pooled_output (Tensor):

    The output of the first token ([CLS]) in the sequence. We “pool” the model by simply taking the hidden state corresponding to the first token (see the pooling sketch after the return type below). Its data type should be float32 and its shape is [batch_size, hidden_size].

Return type:

tuple
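The pooled output above is a BERT-style transform of the first ([CLS]) token’s hidden state: a Linear layer followed by tanh. A minimal standalone sketch of that computation (the exact pooler inside ErnieGramModel may differ slightly; this only illustrates the idea):

import paddle
import paddle.nn as nn

hidden_size = 768
# Stand-in for the encoder output; in practice this is sequence_output from forward().
sequence_output = paddle.randn([2, 8, hidden_size])

# Pool by passing the first ([CLS]) token's hidden state through Linear + tanh.
dense = nn.Linear(hidden_size, hidden_size)
pooled_output = paddle.tanh(dense(sequence_output[:, 0]))  # [batch_size, hidden_size]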

Example

import paddle
from paddlenlp.transformers import ErnieGramModel, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramModel.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
sequence_output, pooled_output = model(**inputs)
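When batching sequences of different lengths, an explicit attention_mask keeps the model from attending to padding. A minimal sketch that pads manually and builds the mask from the pad token id (tokenizer.pad_token_id is the standard PaddleNLP tokenizer attribute assumed here):

import paddle
from paddlenlp.transformers import ErnieGramModel, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramModel.from_pretrained('ernie-gram-zh')

texts = ["欢迎使用百度飞桨!", "你好"]
encoded = [tokenizer(t)["input_ids"] for t in texts]
max_len = max(len(ids) for ids in encoded)

pad_id = tokenizer.pad_token_id
input_ids = paddle.to_tensor(
    [ids + [pad_id] * (max_len - len(ids)) for ids in encoded]
)
# 1 for real tokens, 0 for padding; broadcastable to
# [batch_size, num_attention_heads, sequence_length, sequence_length].
attention_mask = (input_ids != pad_id).astype("int64")

sequence_output, pooled_output = model(
    input_ids=input_ids, attention_mask=attention_mask
)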
class ErnieGramPretrainedModel(*args, **kwargs)[source]#

Bases: PretrainedModel

An abstract class for pretrained ERNIE-Gram models. It provides ERNIE-Gram related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration, base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

config_class#

alias of ErnieGramConfig

base_model_class#

alias of ErnieGramModel
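Because config_class aliases ErnieGramConfig, a randomly initialized model can be built directly from a config rather than from pretrained weights, which is handy for quick architecture experiments. A hedged sketch (num_hidden_layers, hidden_size and num_attention_heads are the standard config field names and assumed here):

from paddlenlp.transformers import ErnieGramConfig, ErnieGramModel

# Build an untrained, deliberately small ERNIE-Gram for fast experiments.
config = ErnieGramConfig(num_hidden_layers=2, hidden_size=128, num_attention_heads=2)
model = ErnieGramModel(config)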

class ErnieGramForSequenceClassification(config: ErnieGramConfig)[source]#

Bases: ErnieGramPretrainedModel

ERNIE-Gram Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.

Parameters:

config (ErnieGramConfig) – An instance of ErnieGramConfig used to construct ErnieGramForSequenceClassification.

forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ErnieGramModel.

  • token_type_ids (Tensor, optional) – See ErnieGramModel.

  • position_ids (Tensor, optional) – See ErnieGramModel.

  • attention_mask (Tensor, optional) – See ErnieGramModel.

  • labels (Tensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., num_labels - 1]. If num_labels == 1, a regression loss is computed (Mean-Square loss); if num_labels > 1, a classification loss is computed (Cross-Entropy).

  • inputs_embeds (Tensor, optional) – See ErnieGramModel.

  • output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

  • output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

  • return_dict (bool, optional) – Whether to return a SequenceClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

Returns tensor logits, the classification logits of the input text. Its shape is [batch_size, num_labels] and its data type is float32.

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import ErnieGramForSequenceClassification, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramForSequenceClassification.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
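When labels are passed together with return_dict=True, the forward pass also computes the loss described above. A hedged sketch, assuming the loss/logits attribute names on the SequenceClassifierOutput object referenced earlier:

import paddle
from paddlenlp.transformers import ErnieGramForSequenceClassification, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramForSequenceClassification.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
labels = paddle.to_tensor([1])  # one class index per example in the batch

outputs = model(**inputs, labels=labels, return_dict=True)
loss, logits = outputs.loss, outputs.logits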
class ErnieGramForTokenClassification(config: ErnieGramConfig)[source]#

Bases: ErnieGramPretrainedModel

ERNIE-Gram Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.

Parameters:

config (ErnieGramConfig) – An instance of ErnieGramConfig used to construct ErnieGramForTokenClassification.

forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ErnieGramModel.

  • token_type_ids (Tensor, optional) – See ErnieGramModel.

  • position_ids (Tensor, optional) – See ErnieGramModel.

  • attention_mask (Tensor, optional) – See ErnieGramModel.

  • labels (Tensor of shape (batch_size, sequence_length), optional) – Labels for computing the token classification loss. Indices should be in [0, ..., num_labels - 1].

  • inputs_embeds (Tensor, optional) – See ErnieGramModel.

  • output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

  • output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

  • return_dict (bool, optional) – Whether to return a TokenClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

Returns tensor logits, the token classification logits of the input. Its shape is [batch_size, sequence_length, num_labels] and its data type is float32.

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import ErnieGramForTokenClassification, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramForTokenClassification.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
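The per-token logits are usually turned into label predictions with an argmax over the last axis; a short sketch:

import paddle
from paddlenlp.transformers import ErnieGramForTokenClassification, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramForTokenClassification.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)

# One predicted label id per token.
predictions = paddle.argmax(logits, axis=-1)  # [batch_size, sequence_length]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predictions[0].tolist()):
    print(token, label_id)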
class ErnieGramForQuestionAnswering(config: ErnieGramConfig)[source]#

Bases: ErnieGramPretrainedModel

ERNIE-Gram Model with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.

Parameters:

config (ErnieGramConfig) – An instance of ErnieGramConfig used to construct ErnieGramForQuestionAnswering.

forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, start_positions: Tensor | None = None, end_positions: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ErnieGramModel.

  • token_type_ids (Tensor, optional) – See ErnieGramModel.

  • position_ids (Tensor, optional) – See ErnieGramModel.

  • attention_mask (Tensor, optional) – See ErnieGramModel.

  • inputs_embeds (Tensor, optional) – See ErnieGramModel.

  • start_positions (Tensor of shape (batch_size,), optional) – Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • end_positions (Tensor of shape (batch_size,), optional) – Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

  • output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

  • return_dict (bool, optional) – Whether to return a QuestionAnsweringModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import ErnieGramForQuestionAnswering, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramForQuestionAnswering.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
start_logits, end_logits = model(**inputs)
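The two logit tensors are typically decoded into an answer span by taking an argmax over each; a hedged sketch (a real QA pipeline would also restrict the span to the context segment and enforce start <= end):

import paddle
from paddlenlp.transformers import ErnieGramForQuestionAnswering, ErnieGramTokenizer

tokenizer = ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
model = ErnieGramForQuestionAnswering.from_pretrained('ernie-gram-zh')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
start_logits, end_logits = model(**inputs)

# Greedy span decoding: independent argmax over start and end positions.
start = int(paddle.argmax(start_logits, axis=-1)[0])
end = int(paddle.argmax(end_logits, axis=-1)[0])
answer_ids = inputs["input_ids"][0][start:end + 1].tolist()
print(tokenizer.convert_ids_to_tokens(answer_ids))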