modeling

class NeZhaModel(vocab_size, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, max_relative_position=64, layer_norm_eps=1e-12, use_relative_position=True)[source]

Bases: paddlenlp.transformers.nezha.modeling.NeZhaPretrainedModel

The bare NeZha Model transformer outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters
  • vocab_size (int) -- Vocabulary size of inputs_ids in NeZhaModel. Defines the number of different tokens that can be represented by the inputs_ids passed when calling NeZhaModel.

  • hidden_size (int, optional) -- Dimensionality of the embedding layer, encoder layers and the pooler layer. Defaults to 768.

  • num_hidden_layers (int, optional) -- Number of hidden layers in the Transformer encoder. Defaults to 12.

  • num_attention_heads (int, optional) -- Number of attention heads for each attention layer in the Transformer encoder. Defaults to 12.

  • intermediate_size (int, optional) -- Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 3072.

  • hidden_act (str, optional) -- The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other activation function supported by Paddle can be used. Defaults to "gelu".

  • hidden_dropout_prob (float, optional) -- The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.1.

  • attention_probs_dropout_prob (float, optional) -- The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. Defaults to 0.1.

  • max_position_embeddings (int, optional) -- The size of the position embedding table, which dictates the maximum supported length of an input sequence. Defaults to 512.

  • type_vocab_size (int, optional) -- The vocabulary size of token_type_ids. Defaults to 2.

  • initializer_range (float, optional) --

    The standard deviation of the normal initializer. Defaults to 0.02.

    Note

    A normal_initializer initializes weight matrices as normal distributions. See NeZhaPretrainedModel.init_weights() for how weights are initialized in NeZhaModel.

  • max_relative_position (int, optional) -- The maximum relative distance covered by the relative position encoding, which dictates the largest distance between two tokens that receives a distinct relative position embedding. Defaults to 64.

  • layer_norm_eps (float, optional) -- The small value added to the variance in LayerNorm to prevent division by zero. Defaults to 1e-12.

  • use_relative_position (bool, optional) -- Whether or not to use relative position embedding. Defaults to True.
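
A minimal construction sketch; the vocab_size of 21128 below is a hypothetical illustration value (use the size of your own vocabulary), and all other arguments keep their documented defaults:

from paddlenlp.transformers import NeZhaModel

# Build a randomly initialized NeZha encoder from configuration values.
model = NeZhaModel(vocab_size=21128)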

forward(input_ids, token_type_ids=None, attention_mask=None)[source]

The NeZhaModel forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • token_type_ids (Tensor, optional) --

    Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions and indices can be either 0 or 1:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means we don't add segment embeddings.

  • attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid performing attention on unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor whose shape broadcasts to [batch_size, num_attention_heads, sequence_length, sequence_length]. For example, its shape can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length] or [batch_size, num_attention_heads, sequence_length, sequence_length]. NeZha uses whole-word masking, so all sub-tokens of a word share the same value; for example, with "使用" as a word, "使" and "用" have the same value. Defaults to None, which means no positions are masked.

Returns

Returns tuple (sequence_output, pooled_output).

With the fields:

  • sequence_output (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • pooled_output (Tensor):

    The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].

Return type

tuple

Example

import paddle
from paddlenlp.transformers import NeZhaModel, NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaModel.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
sequence_output, pooled_output = model(**inputs)
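
A follow-up sketch passing an explicit attention_mask; the all-ones mask below is a hypothetical stand-in for a real padding mask:

# 1 = may be attended to, 0 = masked out; here every position is kept.
attention_mask = paddle.ones_like(inputs['input_ids'])
sequence_output, pooled_output = model(**inputs, attention_mask=attention_mask)
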
class NeZhaPretrainedModel(name_scope=None, dtype='float32')[source]

Bases: paddlenlp.transformers.model_utils.PretrainedModel

An abstract class for pretrained NeZha models. It provides NeZha related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map, base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

init_weights(layer)[source]

Initialization hook that initializes the weights of the given layer.

base_model_class

alias of paddlenlp.transformers.nezha.modeling.NeZhaModel
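
A brief usage sketch of the shared machinery; pretrained_init_configuration is the registry of named NeZha checkpoints that from_pretrained() accepts:

from paddlenlp.transformers import NeZhaModel

# List the built-in checkpoint names, e.g. 'nezha-base-chinese'.
print(list(NeZhaModel.pretrained_init_configuration.keys()))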

class NeZhaForPretraining(nezha)[source]

Bases: paddlenlp.transformers.nezha.modeling.NeZhaPretrainedModel

NeZha Model with pretraining tasks on top.

Parameters

nezha (NeZhaModel) -- An instance of NeZhaModel.

forward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=None, next_sentence_label=None)[source]

The NeZhaForPretraining forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See NeZhaModel.

  • token_type_ids (Tensor, optional) -- See NeZhaModel.

  • attention_mask (Tensor, optional) -- See NeZhaModel.

  • masked_lm_labels (Tensor, optional) -- The labels for the masked language modeling task, aligned with prediction_scores along the batch and sequence dimensions. Its data type should be int64 and its shape is [batch_size, sequence_length, 1].

  • next_sentence_label (Tensor, optional) -- The labels for the next sentence prediction task. Its data type should be int64 and its shape is [batch_size, 1].

Returns

Returns Tensor total_loss if masked_lm_labels is not None. Returns tuple (prediction_scores, seq_relationship_score) if masked_lm_labels is None.

With the fields:

  • total_loss (Tensor):

    The aggregated loss of the pretraining tasks. Its data type should be float32.

  • prediction_scores (Tensor):

    The scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

  • seq_relationship_score (Tensor):

    The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].

Return type

Tensor or tuple
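
Example

A usage sketch in the style of the other examples in this module; when no labels are passed, forward returns the (prediction_scores, seq_relationship_score) tuple:

import paddle
from paddlenlp.transformers import NeZhaForPretraining, NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForPretraining.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
prediction_scores, seq_relationship_score = model(**inputs)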

class NeZhaForSequenceClassification(nezha, num_classes=2, dropout=None)[source]

Bases: paddlenlp.transformers.nezha.modeling.NeZhaPretrainedModel

NeZha Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.

Parameters
  • nezha (NeZhaModel) -- An instance of NeZhaModel.

  • num_classes (int, optional) -- The number of classes. Defaults to 2.

  • dropout (float, optional) -- The dropout probability for output of NeZha. If None, use the same value as hidden_dropout_prob of NeZhaModel instance nezha. Defaults to None.

forward(input_ids, token_type_ids=None, attention_mask=None)[source]

The NeZhaForSequenceClassification forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See NeZhaModel.

  • token_type_ids (Tensor, optional) -- See NeZhaModel.

  • attention_mask (Tensor, optional) -- See NeZhaModel.

Returns

Returns tensor logits, a tensor of the input text classification logits. Its shape is [batch_size, num_classes] and its data type is float32.

Return type

Tensor

Example

import paddle
from paddlenlp.transformers import NeZhaForSequenceClassification
from paddlenlp.transformers import NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForSequenceClassification.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
outputs = model(**inputs)

logits = outputs
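
An optional follow-up converting logits to class probabilities with paddle.nn.functional.softmax (a common post-processing step, not part of this API):

probs = paddle.nn.functional.softmax(logits, axis=-1)
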
class NeZhaPretrainingHeads(hidden_size, vocab_size, hidden_act, embedding_weights=None)[source]

Bases: paddle.fluid.dygraph.layers.Layer

Performs the language modeling task and the next sentence classification task.

Parameters
  • hidden_size (int) -- See NeZhaModel.

  • vocab_size (int) -- See NeZhaModel.

  • hidden_act (str) -- Activation function used in the language modeling task.

  • embedding_weights (Tensor, optional) -- Decoding weights used to map hidden_states to logits of the masked token prediction. Its data type should be float32 and its shape is [vocab_size, hidden_size]. Defaults to None, which means use the same weights of the embedding layer.

forward(sequence_output, pooled_output)[source]

Parameters
  • sequence_output (Tensor) -- Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • pooled_output (Tensor) -- The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].

Returns

Returns tuple (prediction_scores, seq_relationship_score).

With the fields:

  • prediction_scores (Tensor):

    The scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

  • seq_relationship_score (Tensor):

    The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].

Return type

tuple
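
Example

A minimal sketch with random tensors; the vocab_size of 21128 and the tensor shapes are hypothetical illustration values:

import paddle
from paddlenlp.transformers.nezha.modeling import NeZhaPretrainingHeads

heads = NeZhaPretrainingHeads(hidden_size=768, vocab_size=21128, hidden_act='gelu')
sequence_output = paddle.randn([1, 13, 768])   # [batch_size, sequence_length, hidden_size]
pooled_output = paddle.randn([1, 768])         # [batch_size, hidden_size]
prediction_scores, seq_relationship_score = heads(sequence_output, pooled_output)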

class NeZhaForTokenClassification(nezha, num_classes=2, dropout=None)[source]

Bases: paddlenlp.transformers.nezha.modeling.NeZhaPretrainedModel

NeZha Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.

Parameters
  • nezha (NeZhaModel) -- An instance of NeZhaModel.

  • num_classes (int, optional) -- The number of classes. Defaults to 2.

  • dropout (float, optional) -- The dropout probability for output of NeZha. If None, use the same value as hidden_dropout_prob of NeZhaModel instance nezha. Defaults to None.

forward(input_ids, token_type_ids=None, attention_mask=None)[source]

The NeZhaForTokenClassification forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See NeZhaModel.

  • token_type_ids (Tensor, optional) -- See NeZhaModel.

  • attention_mask (Tensor, optional) -- See NeZhaModel.

Returns

Returns tensor logits, a tensor of the input token classification logits. Its shape is [batch_size, sequence_length, num_classes] and its data type is float32.

Return type

Tensor

Example

import paddle
from paddlenlp.transformers import NeZhaForTokenClassification
from paddlenlp.transformers import NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForTokenClassification.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
outputs = model(**inputs)

logits = outputs
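
An optional follow-up picking the highest-scoring class per token with paddle.argmax (a common post-processing step, not part of this API):

preds = paddle.argmax(logits, axis=-1)  # shape: [batch_size, sequence_length]
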
class NeZhaForQuestionAnswering(nezha, dropout=None)[source]

Bases: paddlenlp.transformers.nezha.modeling.NeZhaPretrainedModel

NeZha with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.

Parameters
  • nezha (NeZhaModel) -- An instance of NeZhaModel.

  • dropout (float, optional) -- The dropout probability for output of NeZha. If None, use the same value as hidden_dropout_prob of NeZhaModel instance nezha. Defaults to None.

forward(input_ids, token_type_ids=None, attention_mask=None)[source]

The NeZhaForQuestionAnswering forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See NeZhaModel.

  • token_type_ids (Tensor, optional) -- See NeZhaModel.

  • attention_mask (Tensor, optional) -- See NeZhaModel.

Returns

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type

tuple

Example

import paddle
from paddlenlp.transformers import NeZhaForQuestionAnswering
from paddlenlp.transformers import NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForQuestionAnswering.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
outputs = model(**inputs)

start_logits = outputs[0]
end_logits = outputs[1]
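
An optional follow-up doing greedy span decoding with paddle.argmax (a common post-processing choice, not part of this API):

start_index = paddle.argmax(start_logits, axis=-1)
end_index = paddle.argmax(end_logits, axis=-1)
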
class NeZhaForMultipleChoice(nezha, num_choices=2, dropout=None)[source]

Bases: paddlenlp.transformers.nezha.modeling.NeZhaPretrainedModel

NeZha Model with a linear layer on top of the hidden-states output layer, designed for multiple choice tasks like RocStories/SWAG tasks.

Parameters
  • nezha (NeZhaModel) -- An instance of NeZhaModel.

  • num_choices (int, optional) -- The number of choices. Defaults to 2.

  • dropout (float, optional) -- The dropout probability for output of NeZha. If None, use the same value as hidden_dropout_prob of NeZhaModel instance nezha. Defaults to None.

forward(input_ids, token_type_ids=None, attention_mask=None)[source]

The NeZhaForMultipleChoice forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See NeZhaModel.

  • token_type_ids (Tensor, optional) -- See NeZhaModel.

  • attention_mask (Tensor, optional) -- See NeZhaModel.

Returns

Returns tensor reshaped_logits, a tensor of the input multiple choice classification logits. Its shape is [batch_size, num_choices] and its data type is float32.

Return type

Tensor
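
Example

A usage sketch assuming input_ids is stacked to shape [batch_size, num_choices, sequence_length]; the tokenizer padding arguments below are illustrative and may differ across PaddleNLP versions:

import paddle
from paddlenlp.transformers import NeZhaForMultipleChoice, NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForMultipleChoice.from_pretrained('nezha-base-chinese', num_choices=2)

# Encode each candidate to the same length, then stack to
# [batch_size, num_choices, sequence_length].
choices = ["欢迎使用百度飞桨!", "欢迎使用PaddleNLP!"]
encoded = [tokenizer(c, max_seq_len=16, pad_to_max_seq_len=True) for c in choices]
input_ids = paddle.to_tensor([[e['input_ids'] for e in encoded]])
reshaped_logits = model(input_ids=input_ids)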