modeling
- class ErnieCtmPretrainedModel(*args, **kwargs)[source]
Bases: PretrainedModel
An abstract class for pretrained ErnieCtm models. It provides the ErnieCtm-related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map, and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.
- config_class
alias of ErnieCtmConfig
- base_model_class
alias of ErnieCtmModel
- class ErnieCtmModel(config: ErnieCtmConfig)[source]
Bases: ErnieCtmPretrainedModel
The bare ErnieCtm Model transformer outputting raw hidden-states.
This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.
This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.
- Parameters:
- vocab_size (int) – Vocabulary size of inputs_ids in ErnieCtmModel. It is also the vocabulary size of the token embedding matrix, and defines the number of different tokens that can be represented by the inputs_ids passed when calling ErnieCtmModel.
- embedding_size (int, optional) – Dimensionality of the embedding layer. Defaults to 128.
- hidden_size (int, optional) – Dimensionality of the encoder layers and the pooler layer. Defaults to 768.
- num_hidden_layers (int, optional) – Number of hidden layers in the Transformer encoder. Defaults to 12.
- num_attention_heads (int, optional) – Number of attention heads for each attention layer in the Transformer encoder. Defaults to 12.
- intermediate_size (int, optional) – Dimensionality of the feed-forward (FF) layer in the encoder. Input tensors to FF layers are first projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 3072.
- hidden_dropout_prob (float, optional) – The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.1.
- attention_probs_dropout_prob (float, optional) – The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. Defaults to 0.1.
- max_position_embeddings (int, optional) – The maximum dimensionality of the position encoding, which dictates the maximum supported length of an input sequence. Defaults to 512.
- type_vocab_size (int, optional) – The vocabulary size of token_type_ids. Defaults to 16.
- initializer_range (float, optional) – The standard deviation of the normal initializer for initializing all weight matrices. Defaults to 0.02.
- pad_token_id (int, optional) – The index of the padding token in the token vocabulary. Defaults to 0.
- use_content_summary (bool, optional) – Whether or not to add a content summary token. Defaults to True.
- content_summary_index (int, optional) – The index of the content summary token. Only valid when use_content_summary is True. Defaults to 1.
- cls_num (int, optional) – The number of CLS tokens. Only valid when use_content_summary is True. Defaults to 2.
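These constructor arguments correspond to fields of ErnieCtmConfig. As a minimal sketch, assuming ErnieCtmConfig accepts keyword arguments matching the parameters above (the overridden values below are hypothetical, chosen only to illustrate the mapping), a randomly initialized model can be built from a custom config:

import paddle
from paddlenlp.transformers import ErnieCtmConfig, ErnieCtmModel

# Hypothetical sizes for a smaller ErnieCtm; hidden_size must stay
# divisible by num_attention_heads (384 / 6 = 64 per head).
config = ErnieCtmConfig(
    vocab_size=23000,
    embedding_size=128,
    hidden_size=384,
    num_hidden_layers=6,
    num_attention_heads=6,
    intermediate_size=1536,
)
model = ErnieCtmModel(config)  # random weights; use from_pretrained() to load trained ones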
- get_input_embeddings()[source]
Gets the input embedding of the model.
- Returns:
The embedding of the model.
- Return type:
nn.Embedding
- set_input_embeddings(value)[source]
Sets a new input embedding for the model.
- Parameters:
value (Embedding) – The new embedding for the model.
- Raises:
NotImplementedError – If the model has not implemented the set_input_embeddings method.
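A minimal sketch of how these two accessors pair up, assuming model was built as above (the replacement table here is hypothetical and randomly initialized):

import paddle

emb = model.get_input_embeddings()             # a paddle.nn.Embedding
vocab_size, embedding_size = emb.weight.shape
# Swap in a fresh embedding table of the same shape.
model.set_input_embeddings(paddle.nn.Embedding(vocab_size, embedding_size))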
- forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None, inputs_embeds=None, content_clone=False, output_hidden_states=None, output_attentions=None, return_dict=None)[source]
The ErnieCtmModel forward method overrides the __call__() special method.
- Parameters:
- input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of the tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].
- token_type_ids (Tensor, optional) – Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions and indices can be either 0 or 1: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means no segment embeddings are added.
- position_ids (Tensor, optional) – Indices of the position of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as [batch_size, num_tokens] and dtype as int64. Defaults to None.
- attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float, or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor whose shape broadcasts to [batch_size, num_attention_heads, sequence_length, sequence_length]; for example, its shape can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length], or [batch_size, num_attention_heads, sequence_length, sequence_length]. ERNIE uses whole-word masking, so all tokens of a word share the same value; for example, with "使用" as a word, "使" and "用" will have the same value. Defaults to None, which means no positions are masked out.
- inputs_embeds (Tensor, optional) – Optionally, instead of passing input_ids you can directly pass an embedded representation of shape (batch_size, sequence_length, hidden_size). This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix provides. Defaults to None.
- content_clone (bool, optional) – Whether content_output is cloned from sequence_output. If set to True, content_output is cloned from sequence_output, which may cause the classification task to impact the sequence labeling task. Defaults to False.
- output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to None.
- output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers (currently not supported). Defaults to None.
- return_dict (bool, optional) – Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to None.
- Returns:
Returns tuple (sequence_output, pooled_output, content_output).
With the fields:
- sequence_output (Tensor): Sequence of output at the last layer of the model. Its data type should be float32 and it has a shape of [batch_size, sequence_length, hidden_size].
- pooled_output (Tensor): The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].
- content_output (Tensor): The output of the content summary token ([CLS1]) in the sequence. Its data type should be float32 and it has a shape of [batch_size, hidden_size].
- Return type:
tuple
Example
import paddle
from paddlenlp.transformers import ErnieCtmModel, ErnieCtmTokenizer

tokenizer = ErnieCtmTokenizer.from_pretrained('ernie-ctm')
model = ErnieCtmModel.from_pretrained('ernie-ctm')
inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
sequence_output, pooled_output, content_output = model(**inputs)
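A minimal sketch of passing an explicit attention mask, continuing the example above and assuming the padding id is 0 (the documented pad_token_id default):

# int mask convention from the parameter docs: 1 = attend, 0 = masked.
attention_mask = (inputs["input_ids"] != 0).astype("int64")
sequence_output, pooled_output, content_output = model(**inputs, attention_mask=attention_mask)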
- class ErnieCtmWordtagModel(config: ErnieCtmConfig)[source]
Bases: ErnieCtmPretrainedModel
ErnieCtmWordtag Model with a token classification head on top (a CRF layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.
- Parameters:
- ernie_ctm (ErnieCtmModel) – An instance of ErnieCtmModel.
- num_tag (int) – The number of different tags.
- crf_lr (float) – The learning rate of the CRF. Defaults to 100.
- forward(input_ids=None, token_type_ids=None, lengths=None, position_ids=None, attention_mask=None, inputs_embeds=None, tag_labels=None, output_hidden_states=None, output_attentions=None, return_dict=None, **kwargs)[source]
- Parameters:
- input_ids (Tensor) – See ErnieCtmModel.
- token_type_ids (Tensor, optional) – See ErnieCtmModel.
- position_ids (Tensor, optional) – See ErnieCtmModel.
- attention_mask (Tensor, optional) – See ErnieCtmModel.
- inputs_embeds (Tensor, optional) – See ErnieCtmModel.
- lengths (Tensor, optional) – The input lengths. Its dtype is int64 and it has a shape of [batch_size]. Defaults to None.
- tag_labels (Tensor, optional) – The tag labels used to compute the loss. Its dtype is float32 and it has a shape of [batch_size, sequence_length, num_tags]. Defaults to None.
- output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to None.
- output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers (currently not supported). Defaults to None.
- return_dict (bool, optional) – Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to None.
- Returns:
Returns tuple (seq_logits, cls_logits).
With the fields:
- seq_logits (Tensor): A tensor of the sequence tag prediction logits. Its data type should be float32 and its shape is [batch_size, sequence_length, num_tag].
- Return type:
tuple
Example
import paddle
from paddlenlp.transformers import ErnieCtmWordtagModel, ErnieCtmTokenizer

tokenizer = ErnieCtmTokenizer.from_pretrained('ernie-ctm')
model = ErnieCtmWordtagModel.from_pretrained('ernie-ctm', num_tag=2)
inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
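A minimal sketch of supplying the optional lengths argument, continuing the example above (the single sequence here is unpadded, so its length is simply the full sequence length):

# One entry per batch item; dtype int64, shape [batch_size].
lengths = paddle.to_tensor([inputs["input_ids"].shape[1]], dtype="int64")
logits = model(**inputs, lengths=lengths)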
- class ErnieCtmNptagModel(config: ErnieCtmConfig)[source]
Bases: ErnieCtmPretrainedModel
ErnieCtmNptag Model with a masked language modeling head on top.
- Parameters:
- ernie_ctm (ErnieCtmModel) – An instance of ErnieCtmModel.
- forward(input_ids=None, token_type_ids=None, attention_mask=None, position_ids=None, inputs_embeds=None, labels=None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
- Parameters:
- input_ids (Tensor) – See ErnieCtmModel.
- token_type_ids (Tensor, optional) – See ErnieCtmModel.
- attention_mask (Tensor, optional) – See ErnieCtmModel.
- position_ids (Tensor, optional) – See ErnieCtmModel.
- inputs_embeds (Tensor, optional) – See ErnieCtmModel.
- output_hidden_states (bool, optional) – See ErnieCtmModel.
- output_attentions (bool, optional) – See ErnieCtmModel.
- return_dict (bool, optional) – See ErnieCtmModel.
- Returns:
Returns tensor logits, the scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].
- Return type:
Tensor
Example
import paddle
from paddlenlp.transformers import ErnieCtmNptagModel, ErnieCtmTokenizer

tokenizer = ErnieCtmTokenizer.from_pretrained('ernie-ctm')
model = ErnieCtmNptagModel.from_pretrained('ernie-ctm')
inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
print(logits.shape)
# [1, 45, 23000]
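A minimal sketch of decoding these logits with a plain argmax, continuing the example above (illustrative only; this is not necessarily the decoding strategy the NPTag pipeline itself uses):

# Top-scoring vocabulary id at each position, then map ids back to tokens.
pred_ids = paddle.argmax(logits, axis=-1)                      # shape [1, 45]
tokens = tokenizer.convert_ids_to_tokens(pred_ids[0].tolist())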
- class ErnieCtmForTokenClassification(config: ErnieCtmConfig)[source]
Bases: ErnieCtmPretrainedModel
ErnieCtm Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER.
- Parameters:
- ernie_ctm (ErnieCtmModel) – An instance of ErnieCtmModel.
- num_tag (int, optional) – The number of classes. Defaults to 2.
- dropout (float, optional) – The dropout probability for the output of ErnieCtm. If None, uses the same value as hidden_dropout_prob of the ErnieCtmModel instance ernie_ctm. Defaults to None.
- forward(input_ids: Tensor, token_type_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]
- Parameters:
- input_ids (Tensor) – See ErnieCtmModel.
- token_type_ids (Tensor, optional) – See ErnieCtmModel.
- position_ids (Tensor, optional) – See ErnieCtmModel.
- attention_mask (Tensor, optional) – See ErnieCtmModel.
- inputs_embeds (Tensor, optional) – See ErnieCtmModel.
- labels (Tensor, optional) – Labels for the model to compute the loss with. Defaults to None.
- Returns:
Returns tensor logits, a tensor of the input token classification logits. Its shape is [batch_size, sequence_length, num_tag] and its dtype is float32.
- Return type:
Tensor
Example
import paddle
from paddlenlp.transformers import ErnieCtmForTokenClassification, ErnieCtmTokenizer

tokenizer = ErnieCtmTokenizer.from_pretrained('ernie-ctm')
model = ErnieCtmForTokenClassification.from_pretrained('ernie-ctm')
inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
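A minimal sketch of turning these logits into per-token tag ids, continuing the example above (a plain argmax over the tag dimension; mapping ids back to tag names would require a label list, which is not shown here):

# Predicted tag id per token, shape [batch_size, sequence_length].
pred_tags = paddle.argmax(logits, axis=-1)
print(pred_tags)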