modeling
- class SkepModel(config: SkepConfig) [source]
The bare SKEP Model outputting raw hidden-states.
This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods. This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.
For more details, refer to SKEP.
Parameters:
- vocab_size (int, optional, defaults to 12800) -- Vocabulary size of the SKEP model. Defines the number of different tokens that can be represented by the input_ids passed when calling SkepModel.
- hidden_size (int, optional, defaults to 768) -- Dimensionality of the embedding layer, encoder layers and pooler layer.
- num_hidden_layers (int, optional, defaults to 12) -- Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 12) -- Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (int, optional, defaults to 3072) -- Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are first projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size.
- hidden_act (str, optional, defaults to "relu") -- The non-linear activation function in the encoder and pooler. "gelu", "relu" and any other activation function supported by Paddle are allowed.
- hidden_dropout_prob (float, optional, defaults to 0.1) -- The dropout probability for all fully connected layers in the embeddings and encoder.
- attention_probs_dropout_prob (float, optional, defaults to 0.1) -- The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets.
- max_position_embeddings (int, optional, defaults to 512) -- The maximum sequence length that this model might ever be used with. Typically set this to something large (e.g., 512, 1024 or 2048).
- type_vocab_size (int, optional, defaults to 4) -- The vocabulary size of the token_type_ids passed into SkepModel.
- initializer_range (float, optional, defaults to 0.02) -- The standard deviation of the normal initializer. Note: a normal initializer initializes weight matrices as normal distributions. See SkepPretrainedModel.init_weights() for how weights are initialized in SkepModel.
- pad_token_id (int, optional, defaults to 0) -- The index of the padding token in the token vocabulary.
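For instance, a minimal sketch of building a randomly initialized model from a custom configuration; passing these parameters as keyword arguments to SkepConfig follows the usual PaddleNLP config pattern and is an assumption here, not taken from this page.

    from paddlenlp.transformers import SkepConfig, SkepModel

    # A smaller-than-default configuration; each keyword mirrors a parameter above.
    config = SkepConfig(
        vocab_size=12800,
        hidden_size=256,
        num_hidden_layers=4,
        num_attention_heads=4,
        intermediate_size=1024,
    )
    model = SkepModel(config)  # randomly initialized, not pretrained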
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, past_key_values: Tuple[Tuple[Tensor]] | None = None, use_cache: bool | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None) [source]
The SkepModel forward method, overrides the __call__() special method.
Parameters:
- input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].
- token_type_ids (Tensor, optional) -- Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions and indices can be either 0 or 1: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means no segment embeddings are added.
- position_ids (Tensor, optional) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Its shape is (batch_size, num_tokens) and its dtype is int64. Defaults to None.
- attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid attending to some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor with a shape broadcastable to [batch_size, num_attention_heads, sequence_length, sequence_length]. For example, its shape can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length] or [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no position is masked (a sketch that builds such a mask follows the example below).
- inputs_embeds (Tensor, optional) -- If you want to control how input_ids indices are converted into associated vectors, you can pass an embedded representation directly instead of passing input_ids.
- past_key_values (tuple(tuple(Tensor)), optional) -- The length of the tuple equals the number of layers, and each inner tuple has 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head) which contain precomputed key and value hidden states of the attention blocks. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
- use_cache (bool, optional) -- If set to True, past_key_values key value states are returned. Defaults to None.
- output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to False.
- output_attentions (bool, optional) -- Whether to return the attention tensors of all attention layers. Defaults to False.
- return_dict (bool, optional) -- Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
Returns:
An instance of BaseModelOutputWithPoolingAndCrossAttentions if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of BaseModelOutputWithPoolingAndCrossAttentions.
If the result is a tuple, returns the tuple (sequence_output, pooled_output) with the fields:
- sequence_output (Tensor): Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].
- pooled_output (Tensor): The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].
Example

    import paddle
    from paddlenlp.transformers import SkepModel, SkepTokenizer

    tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
    model = SkepModel.from_pretrained('skep_ernie_2.0_large_en')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    output = model(**inputs)
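As referenced in the attention_mask description above, a minimal sketch that builds an explicit boolean padding mask and reads the named fields of the return_dict=True output; the batch-padding tokenizer call, the pad id of 0, and the field names last_hidden_state and pooler_output (mirroring BaseModelOutputWithPoolingAndCrossAttentions) are assumptions.

    import paddle
    from paddlenlp.transformers import SkepModel, SkepTokenizer

    tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
    model = SkepModel.from_pretrained('skep_ernie_2.0_large_en')

    # Two sequences of different lengths, padded to a common length
    # (assumes the tokenizer accepts list inputs with padding=True).
    batch = tokenizer(["A short sentence.", "A somewhat longer sentence."], padding=True)
    input_ids = paddle.to_tensor(batch["input_ids"])

    # Boolean mask: True for real tokens, False for padding (pad id assumed to be 0).
    attention_mask = input_ids != 0

    outputs = model(input_ids=input_ids, attention_mask=attention_mask, return_dict=True)
    sequence_output = outputs.last_hidden_state  # [batch_size, sequence_length, hidden_size]
    pooled_output = outputs.pooler_output        # [batch_size, hidden_size]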
- class SkepPretrainedModel(*args, **kwargs) [source]
An abstract class for pretrained Skep models. It provides the Skep-related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.
- config_class
alias of SkepConfig
- class SkepForSequenceClassification(config: SkepConfig) [source]
SKEP Model with a linear layer on top of the pooled output, designed for sequence classification/regression tasks like GLUE tasks.
Parameters:
- config (SkepConfig) -- An instance of SkepConfig used to construct SkepForSequenceClassification.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, labels: Tensor | None = None, inputs_embeds: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None) [source]
The SkepForSequenceClassification forward method, overrides the __call__() special method.
Parameters:
- input_ids (Tensor) -- See SkepModel.
- token_type_ids (Tensor, optional) -- See SkepModel.
- position_ids (Tensor, optional) -- See SkepModel.
- attention_mask (Tensor, optional) -- See SkepModel.
- labels (Tensor of shape (batch_size,), optional) -- Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., num_labels - 1]. If num_labels == 1, a regression loss is computed (Mean-Square loss); if num_labels > 1, a classification loss is computed (Cross-Entropy). A sketch of passing labels follows the example below.
- inputs_embeds (Tensor, optional) -- See SkepModel.
- output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to False.
- output_attentions (bool, optional) -- Whether to return the attention tensors of all attention layers. Defaults to False.
- return_dict (bool, optional) -- Whether to return a SequenceClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.
Returns:
An instance of SequenceClassifierOutput if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of SequenceClassifierOutput.
Example

    import paddle
    from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer

    tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
    model = SkepForSequenceClassification.from_pretrained('skep_ernie_2.0_large_en')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    logits = model(**inputs)
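As noted in the labels description above, a minimal sketch of computing the classification loss by passing labels; the loss and logits field names (mirroring SequenceClassifierOutput) and the binary gold label are assumptions.

    import paddle
    from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer

    tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
    model = SkepForSequenceClassification.from_pretrained('skep_ernie_2.0_large_en')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

    # One gold label for the single-sequence batch; shape [batch_size].
    labels = paddle.to_tensor([1])

    outputs = model(**inputs, labels=labels, return_dict=True)
    loss = outputs.loss      # cross-entropy, since num_labels > 1 for this head
    logits = outputs.logits  # [batch_size, num_labels]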
- class SkepForTokenClassification(config: SkepConfig) [source]
SKEP Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.
Parameters:
- config (SkepConfig) -- An instance of SkepConfig used to construct SkepForTokenClassification.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, labels: Tensor | None = None, inputs_embeds: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None) [source]
The SkepForTokenClassification forward method, overrides the __call__() special method.
Parameters:
- input_ids (Tensor) -- See SkepModel.
- token_type_ids (Tensor, optional) -- See SkepModel.
- position_ids (Tensor, optional) -- See SkepModel.
- attention_mask (Tensor, optional) -- See SkepModel.
- labels (Tensor of shape (batch_size, sequence_length), optional) -- Labels for computing the token classification loss. Indices should be in [0, ..., num_labels - 1]. A sketch of passing labels follows the example below.
- inputs_embeds (Tensor, optional) -- See SkepModel.
- output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to False.
- output_attentions (bool, optional) -- Whether to return the attention tensors of all attention layers. Defaults to False.
- return_dict (bool, optional) -- Whether to return a TokenClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.
Returns:
An instance of TokenClassifierOutput if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of TokenClassifierOutput.
Example

    import paddle
    from paddlenlp.transformers import SkepForTokenClassification, SkepTokenizer

    tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
    model = SkepForTokenClassification.from_pretrained('skep_ernie_2.0_large_en')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    logits = model(**inputs)
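As noted in the labels description above, a minimal sketch of passing per-token labels and decoding tag indices from the logits; the all-zero gold tags are placeholders, and the loss and logits field names (mirroring TokenClassifierOutput) are assumptions.

    import paddle
    from paddlenlp.transformers import SkepForTokenClassification, SkepTokenizer

    tokenizer = SkepTokenizer.from_pretrained('skep_ernie_2.0_large_en')
    model = SkepForTokenClassification.from_pretrained('skep_ernie_2.0_large_en')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

    # Placeholder gold tags, one per token; shape [batch_size, sequence_length].
    labels = paddle.zeros_like(inputs["input_ids"])

    outputs = model(**inputs, labels=labels, return_dict=True)
    loss = outputs.loss                                   # token-level cross-entropy
    predictions = paddle.argmax(outputs.logits, axis=-1)  # [batch_size, sequence_length]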
- class SkepCrfForTokenClassification(config: SkepConfig) [source]
SKEP CRF Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.
Parameters:
- config (SkepConfig) -- An instance of SkepConfig used to construct SkepCrfForTokenClassification.
- forward(input_ids: Tensor | None = None, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, seq_lens: Tensor | None = None, labels: Tensor | None = None, inputs_embeds: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None) [source]
The SkepCrfForTokenClassification forward method, overrides the __call__() special method.
Parameters:
- input_ids (Tensor) -- See SkepModel.
- token_type_ids (Tensor, optional) -- See SkepModel.
- position_ids (Tensor, optional) -- See SkepModel.
- attention_mask (Tensor, optional) -- See SkepModel.
- seq_lens (Tensor, optional) -- The input length tensor storing the real length of each sequence, so the CRF is computed correctly. Its data type should be int64 and its shape is [batch_size]. Defaults to None.
- labels (Tensor, optional) -- The input label tensor. Its data type should be int64 and its shape is [batch_size, sequence_length].
- inputs_embeds (Tensor, optional) -- See SkepModel.
- output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to False.
- output_attentions (bool, optional) -- Whether to return the attention tensors of all attention layers. Defaults to False.
- return_dict (bool, optional) -- Whether to return a TokenClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.
Returns:
An instance of TokenClassifierOutput if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of TokenClassifierOutput.
If return_dict is False, returns the tensor loss if labels is not None; otherwise, returns the tensor prediction.
- loss (Tensor): The CRF loss. Its data type is float32 and its shape is [batch_size].
- prediction (Tensor): The prediction tensor containing the highest-scoring tag indices. Its data type is int64 and its shape is [batch_size, sequence_length].