modeling
- class NeZhaModel(config: NeZhaConfig)[source]
Bases: NeZhaPretrainedModel
The bare NeZha Model transformer outputting raw hidden-states.
This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods. This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.
- Parameters:
  vocab_size (int) – Vocabulary size of inputs_ids in NeZhaModel. Defines the number of different tokens that can be represented by the inputs_ids passed when calling NeZhaModel.
  hidden_size (int, optional) – Dimensionality of the embedding layer, encoder layers and the pooler layer. Defaults to 768.
  num_hidden_layers (int, optional) – Number of hidden layers in the Transformer encoder. Defaults to 12.
  num_attention_heads (int, optional) – Number of attention heads for each attention layer in the Transformer encoder. Defaults to 12.
  intermediate_size (int, optional) – Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 3072.
  hidden_act (str, optional) – The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other Paddle supported activation functions are supported. Defaults to "gelu".
  hidden_dropout_prob (float, optional) – The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.1.
  attention_probs_dropout_prob (float, optional) – The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. Defaults to 0.1.
  max_position_embeddings (int, optional) – The maximum value of the dimensionality of position encoding, which dictates the maximum supported length of an input sequence. Defaults to 512.
  type_vocab_size (int, optional) – The vocabulary size of token_type_ids. Defaults to 16.
  initializer_range (float, optional) – The standard deviation of the normal initializer. Defaults to 0.02.
    Note: A normal_initializer initializes weight matrices as normal distributions. See NeZhaPretrainedModel.init_weights() for how weights are initialized in NeZhaModel.
  max_relative_embeddings (int, optional) – The maximum value of the dimensionality of relative position encoding, which dictates the maximum supported relative distance between two positions. Defaults to 64.
  layer_norm_eps (float, optional) – The small value added to the variance in LayerNorm to prevent division by zero. Defaults to 1e-12.
  use_relative_position (bool, optional) – Whether or not to use relative position embeddings. Defaults to True.
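For illustration, a minimal sketch of building an untrained model from such a configuration (assuming the fields above are passed as keyword arguments to NeZhaConfig; the values shown are arbitrary):

from paddlenlp.transformers import NeZhaConfig, NeZhaModel

# Hypothetical small configuration; unspecified fields keep the
# defaults documented above.
config = NeZhaConfig(
    vocab_size=21128,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    use_relative_position=True,
)
model = NeZhaModel(config)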
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
The NeZhaModel forward method overrides the __call__() special method.
- Parameters:
  input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].
  token_type_ids (Tensor, optional) – Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions, and the indices can be either 0 or 1: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means no segment embeddings are added.
  attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid attending to some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor whose shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. For example, its shape can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length] or [batch_size, num_attention_heads, sequence_length, sequence_length]. We use whole-word masking in NeZha, so all tokens of a word share the same value; for example, for the word "使用", the tokens "使" and "用" have the same value. Defaults to None, which means no positions are masked (see the sketch after the example below).
  inputs_embeds (Tensor, optional) – If you want to control how to convert inputs_ids indices into associated vectors, you can pass an embedded representation directly instead of passing inputs_ids.
  output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.
  output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.
  return_dict (bool, optional) – Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns:
  An instance of BaseModelOutputWithPoolingAndCrossAttentions if return_dict=True. Otherwise it returns a tuple of tensors corresponding to the ordered and not None (depending on the input arguments) fields of BaseModelOutputWithPoolingAndCrossAttentions.
Example
import paddle
from paddlenlp.transformers import NeZhaModel, NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaModel.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!", return_tensors='pd')
output = model(**inputs)
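The attention_mask parameter above can also be built explicitly; a minimal sketch, assuming the mask simply marks the tokenizer's padding positions (the batch texts are arbitrary):

# Two sequences of different lengths; padding positions are masked out.
batch = tokenizer(
    ["欢迎使用百度飞桨!", "NeZha"],
    padding=True,
    return_tensors='pd',
)
# Boolean mask: True for real tokens, False for padding.
attention_mask = batch['input_ids'] != tokenizer.pad_token_id
output = model(
    input_ids=batch['input_ids'],
    token_type_ids=batch['token_type_ids'],
    attention_mask=attention_mask,
)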
- get_input_embeddings()[source]
Get the input embedding layer of the model.
- Returns:
  the input embedding of the model
- Return type:
  nn.Embedding
- set_input_embeddings(value)[source]
Set a new input embedding layer for the model, as shown in the sketch below.
- Parameters:
  value (Embedding) – the new input embedding for the model
- Raises:
  NotImplementedError – if the model has not implemented the set_input_embeddings method
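For example, a minimal sketch of swapping in a larger embedding table with these two methods (the new vocabulary size 30000 is an arbitrary assumption; the fresh table is randomly initialized):

import paddle.nn as nn

old_embeddings = model.get_input_embeddings()   # nn.Embedding
vocab_size, hidden_size = old_embeddings.weight.shape

# Hypothetical resize: a fresh, randomly initialized embedding table.
new_embeddings = nn.Embedding(30000, hidden_size)
model.set_input_embeddings(new_embeddings)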
- class NeZhaPretrainedModel(*args, **kwargs)[source]
Bases: PretrainedModel
An abstract class for pretrained NeZha models. It provides NeZha related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.
- config_class
alias of NeZhaConfig
- base_model_class
alias of NeZhaModel
- class NeZhaForPretraining(config: NeZhaConfig)[source]
Bases: NeZhaPretrainedModel
NeZha Model with pretraining tasks on top.
- Parameters:
  config (NeZhaConfig) – An instance of NeZhaConfig used to construct NeZhaForPretraining.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, masked_lm_labels: Tensor | None = None, next_sentence_label: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
The NeZhaForPretraining forward method overrides the __call__() special method.
- Parameters:
  input_ids (Tensor) – See NeZhaModel.
  token_type_ids (Tensor, optional) – See NeZhaModel.
  attention_mask (Tensor, optional) – See NeZhaModel.
  inputs_embeds (Tensor, optional) – See NeZhaModel.
  masked_lm_labels (Tensor, optional) – The labels for masked language modeling, whose dimensionality is equal to that of prediction_scores. Its data type should be int64 and its shape is [batch_size, sequence_length, 1]. See the sketch below.
  next_sentence_label (Tensor, optional) – The labels of the next sentence prediction task, whose dimensionality is equal to that of seq_relation_labels. Its data type should be int64 and its shape is [batch_size, 1].
  output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.
  output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.
  return_dict (bool, optional) – Whether to return a NeZhaForPreTrainingOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns:
  An instance of NeZhaForPreTrainingOutput if return_dict=True. Otherwise it returns a tuple of tensors corresponding to the ordered and not None (depending on the input arguments) fields of NeZhaForPreTrainingOutput.
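The source gives no example for this head; a minimal sketch of the pretraining forward pass, assuming random placeholder labels purely for shape illustration:

import paddle
from paddlenlp.transformers import NeZhaForPretraining, NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForPretraining.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!", return_tensors='pd')
seq_len = inputs['input_ids'].shape[1]

# Placeholder labels for illustration only, shaped as documented above.
masked_lm_labels = paddle.randint(0, tokenizer.vocab_size, [1, seq_len, 1])
next_sentence_label = paddle.to_tensor([[0]])

# With both label tensors provided, the output includes the pretraining loss.
outputs = model(
    **inputs,
    masked_lm_labels=masked_lm_labels,
    next_sentence_label=next_sentence_label,
)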
- class NeZhaForSequenceClassification(config: NeZhaConfig)[source]
Bases: NeZhaPretrainedModel
NeZha Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.
- Parameters:
  config (NeZhaConfig) – An instance of NeZhaConfig used to construct NeZhaForSequenceClassification.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
The NeZhaForSequenceClassification forward method overrides the __call__() special method.
- Parameters:
  input_ids (Tensor) – See NeZhaModel.
  token_type_ids (Tensor, optional) – See NeZhaModel.
  attention_mask (Tensor, optional) – See NeZhaModel.
  inputs_embeds (Tensor, optional) – See NeZhaModel.
  labels (Tensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., num_labels - 1]. If num_labels == 1 a regression loss is computed (Mean-Square loss); if num_labels > 1 a classification loss is computed (Cross-Entropy). See the sketch after the example below.
  output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.
  output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.
  return_dict (bool, optional) – Whether to return a SequenceClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns:
  Returns tensor logits, a tensor of the input text classification logits. Shape as [batch_size, num_classes] and dtype as float32.
- Return type:
  Tensor
Example
import paddle
from paddlenlp.transformers import NeZhaForSequenceClassification
from paddlenlp.transformers import NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForSequenceClassification.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!", return_tensors='pd')
outputs = model(**inputs)
logits = outputs[0]
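When labels is passed, the loss described above is returned as well; a short hedged sketch (the class index 1 and return_dict=True are assumptions for illustration):

labels = paddle.to_tensor([1])  # hypothetical class index for the single example
outputs = model(**inputs, labels=labels, return_dict=True)
loss, logits = outputs.loss, outputs.logits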
- class NeZhaForTokenClassification(config: NeZhaConfig)[source]
Bases: NeZhaPretrainedModel
NeZha Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.
- Parameters:
  config (NeZhaConfig) – An instance of NeZhaConfig used to construct NeZhaForTokenClassification.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
The NeZhaForTokenClassification forward method overrides the __call__() special method.
- Parameters:
  input_ids (Tensor) – See NeZhaModel.
  token_type_ids (Tensor, optional) – See NeZhaModel.
  attention_mask (Tensor, optional) – See NeZhaModel.
  inputs_embeds (Tensor, optional) – See NeZhaModel.
  labels (Tensor of shape (batch_size, sequence_length), optional) – Labels for computing the token classification loss. Indices should be in [0, ..., num_labels - 1].
  output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.
  output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.
  return_dict (bool, optional) – Whether to return a TokenClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns:
  An instance of TokenClassifierOutput if return_dict=True. Otherwise it returns a tuple of tensors corresponding to the ordered and not None (depending on the input arguments) fields of TokenClassifierOutput.
Example
import paddle
from paddlenlp.transformers import NeZhaForTokenClassification
from paddlenlp.transformers import NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForTokenClassification.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!", return_tensors='pd')
outputs = model(**inputs)
logits = outputs[0]
- class NeZhaForQuestionAnswering(config: NeZhaConfig)[source]
Bases: NeZhaPretrainedModel
NeZha with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.
- Parameters:
  config (NeZhaConfig) – An instance of NeZhaConfig used to construct NeZhaForQuestionAnswering.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, start_positions: Tensor | None = None, end_positions: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
The NeZhaForQuestionAnswering forward method overrides the __call__() special method.
- Parameters:
  input_ids (Tensor) – See NeZhaModel.
  token_type_ids (Tensor, optional) – See NeZhaModel.
  attention_mask (Tensor, optional) – See NeZhaModel.
  inputs_embeds (Tensor, optional) – See NeZhaModel.
  start_positions (Tensor of shape (batch_size,), optional) – Labels for the position (index) of the start of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
  end_positions (Tensor of shape (batch_size,), optional) – Labels for the position (index) of the end of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
  output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.
  output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.
  return_dict (bool, optional) – Whether to return a QuestionAnsweringModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns:
  Returns tuple (start_logits, end_logits).
  With the fields:
  start_logits (Tensor): A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].
  end_logits (Tensor): A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].
- Return type:
  tuple
Example
import paddle
from paddlenlp.transformers import NeZhaForQuestionAnswering
from paddlenlp.transformers import NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
model = NeZhaForQuestionAnswering.from_pretrained('nezha-base-chinese')

inputs = tokenizer("欢迎使用百度飞桨!", return_tensors='pd')
outputs = model(**inputs)
start_logits = outputs[0]
end_logits = outputs[1]
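To turn the two logit tensors into an answer span, a common follow-up (a sketch, not part of the documented API) is to take the argmax of each and decode the tokens in between:

# Most likely start/end token indices for each example in the batch.
start_index = paddle.argmax(start_logits, axis=-1)
end_index = paddle.argmax(end_logits, axis=-1)

# Decode the predicted span of the first example back to tokens.
s, e = int(start_index[0]), int(end_index[0])
span_ids = inputs['input_ids'][0][s:e + 1]
answer_tokens = tokenizer.convert_ids_to_tokens(span_ids.tolist())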
- class NeZhaForMultipleChoice(config: NeZhaConfig)[source]
Bases: NeZhaPretrainedModel
NeZha Model with a linear layer on top of the hidden-states output layer, designed for multiple choice tasks like RocStories/SWAG tasks.
- Parameters:
  config (NeZhaConfig) – An instance of NeZhaConfig used to construct NeZhaForMultipleChoice.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
The NeZhaForMultipleChoice forward method overrides the __call__() special method.
- Parameters:
  input_ids (Tensor) – See NeZhaModel.
  token_type_ids (Tensor, optional) – See NeZhaModel.
  attention_mask (Tensor, optional) – See NeZhaModel.
  inputs_embeds (Tensor, optional) – See NeZhaModel.
  labels (Tensor of shape (batch_size,), optional) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above.)
  output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.
  output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.
  return_dict (bool, optional) – Whether to return a MultipleChoiceModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns:
  Returns tensor reshaped_logits, a tensor of the input multiple choice classification logits. Shape as [batch_size, num_classes] and dtype as float32.
- Return type:
  Tensor
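The source gives no example for this head; a hedged sketch of scoring two candidate texts (the texts and num_choices=2 are assumptions, and inputs are reshaped to [batch_size, num_choices, sequence_length] before the forward pass):

import paddle
from paddlenlp.transformers import NeZhaForMultipleChoice, NeZhaTokenizer

tokenizer = NeZhaTokenizer.from_pretrained('nezha-base-chinese')
# num_choices=2 is an assumed configuration override for this sketch.
model = NeZhaForMultipleChoice.from_pretrained('nezha-base-chinese', num_choices=2)

# One example with two hypothetical candidate texts.
choices = ["欢迎使用百度飞桨!", "欢迎使用PaddleNLP!"]
encoded = tokenizer(choices, padding=True, return_tensors='pd')

# Reshape flat [num_choices, seq_len] tensors to [batch_size, num_choices, seq_len].
input_ids = encoded['input_ids'].unsqueeze(0)
token_type_ids = encoded['token_type_ids'].unsqueeze(0)

reshaped_logits = model(input_ids=input_ids, token_type_ids=token_type_ids)
best_choice = paddle.argmax(reshaped_logits, axis=-1)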