modeling

class RobertaModel(vocab_size, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=16, initializer_range=0.02, pad_token_id=0)[source]

Bases: paddlenlp.transformers.roberta.modeling.RobertaPretrainedModel

The bare Roberta Model outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters
  • vocab_size (int) -- Vocabulary size of inputs_ids in RobertaModel. It also determines the size of the token embedding matrix, and defines the number of different tokens that can be represented by the inputs_ids passed when calling RobertaModel.

  • hidden_size (int, optional) -- Dimensionality of the embedding layer, encoder layers and pooler layer. Defaults to 768.

  • num_hidden_layers (int, optional) -- Number of hidden layers in the Transformer encoder. Defaults to 12.

  • num_attention_heads (int, optional) -- Number of attention heads for each attention layer in the Transformer encoder. Defaults to 12.

  • intermediate_size (int, optional) -- Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 3072.

  • hidden_act (str, optional) -- The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other paddle supported activation functions are supported. Defaults to "gelu".

  • hidden_dropout_prob (float, optional) -- The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.1.

  • attention_probs_dropout_prob (float, optional) -- The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. Defaults to 0.1.

  • max_position_embeddings (int, optional) -- The maximum number of position embeddings, which dictates the maximum supported length of an input sequence. Defaults to 512.

  • type_vocab_size (int, optional) -- The vocabulary size of the token_type_ids passed when calling RobertaModel. Defaults to 16.

  • initializer_range (float, optional) --

    The standard deviation of the normal initializer. Defaults to 0.02.

    Note

    A normal_initializer initializes weight matrices as normal distributions. See RobertaPretrainedModel._init_weights() for how weights are initialized in RobertaModel.

  • pad_token_id (int, optional) -- The index of the padding token in the token vocabulary. Defaults to 0. (A construction sketch using these parameters follows this list.)
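
A minimal construction sketch, assuming you want an untrained model built directly from these configuration values rather than loading a checkpoint with from_pretrained; the sizes below are illustrative and do not correspond to any released RoBERTa weights.

from paddlenlp.transformers import RobertaModel

# Illustrative (hypothetical) configuration; all weights are randomly initialized.
model = RobertaModel(
    vocab_size=21128,
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024)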

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]
Parameters
  • input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and its shape is [batch_size, sequence_length].

  • token_type_ids (Tensor, optional) --

    Segment token indices to indicate first and second portions of the inputs. Indices can be either 0 or 1:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    Its data type should be int64 and its shape is [batch_size, sequence_length]. Defaults to None, which means no segment embedding is added to the token embeddings.

  • position_ids (Tensor, optional) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Its data type should be int64 and its shape is [batch_size, sequence_length]. Defaults to None.

  • attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid attending to unwanted positions, usually the padding or subsequent positions. Its data type can be int, float or bool. When the data type is bool, masked tokens have False values and the others have True values. When the data type is int, masked tokens have 0 values and the others have 1 values. When the data type is float, masked tokens have -INF values and the others have 0 values. Its shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]; for example, it can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length] or [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no positions are masked. (See the mask-building sketch after this list.)
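
A minimal sketch of building an int attention_mask from padded inputs, assuming the pad token id is 0 (matching pad_token_id above); the extra padding tokens are appended only for illustration.

import paddle
from paddlenlp.transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
model = RobertaModel.from_pretrained('roberta-wwm-ext')

encoded = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
input_ids = paddle.to_tensor([encoded['input_ids'] + [0, 0]])            # pad with two extra pad tokens
token_type_ids = paddle.to_tensor([encoded['token_type_ids'] + [0, 0]])
attention_mask = (input_ids != 0).astype('int64')                        # 1 for real tokens, 0 for padding
sequence_output, pooled_output = model(input_ids,
                                       token_type_ids=token_type_ids,
                                       attention_mask=attention_mask)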

Returns

Returns tuple (sequence_output, pooled_output).

With the fields:

  • sequence_output (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • pooled_output (Tensor):

    The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].

Return type

tuple

Examples

import paddle
from paddlenlp.transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
model = RobertaModel.from_pretrained('roberta-wwm-ext')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
sequence_output, pooled_output = model(**inputs)

class RobertaPretrainedModel(name_scope=None, dtype='float32')[source]

Bases: paddlenlp.transformers.model_utils.PretrainedModel

An abstract class for pretrained RoBERTa models. It provides RoBERTa-related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

init_weights(layer)[source]

Initialization hook

base_model_class

alias of paddlenlp.transformers.roberta.modeling.RobertaModel

class RobertaForSequenceClassification(roberta, num_classes=2, dropout=None)[source]

Bases: paddlenlp.transformers.roberta.modeling.RobertaPretrainedModel

Roberta Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.

Parameters
  • roberta (RobertaModel) -- An instance of RobertaModel.

  • num_classes (int, optional) -- The number of classes. Defaults to 2.

  • dropout (float, optional) -- The dropout probability for the output of Roberta. If None, use the same value as hidden_dropout_prob of the RobertaModel instance roberta. Defaults to None. (See the composition sketch after this list.)
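
A minimal composition sketch, assuming a hypothetical 3-class task: the classification model can also be built from an existing RobertaModel instance instead of calling from_pretrained on the task class directly.

from paddlenlp.transformers import RobertaModel, RobertaForSequenceClassification

roberta = RobertaModel.from_pretrained('roberta-wwm-ext')
# num_classes=3 is illustrative; the classification head is randomly initialized.
model = RobertaForSequenceClassification(roberta, num_classes=3)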

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]
Parameters
  See RobertaModel.forward() for the description of input_ids, token_type_ids, position_ids and attention_mask.
Returns

Returns tensor logits, a tensor of the input text classification logits. Its data type should be float32 and it has a shape of [batch_size, num_classes].

Return type

Tensor

Examples

import paddle
from paddlenlp.transformers import RobertaForSequenceClassification, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
model = RobertaForSequenceClassification.from_pretrained('roberta-wwm-ext')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
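
A follow-up sketch, continuing the example above: turning the classification logits into class probabilities and a predicted class id.

import paddle
import paddle.nn.functional as F

probs = F.softmax(logits, axis=-1)          # shape: [batch_size, num_classes]
pred_class = paddle.argmax(probs, axis=-1)  # shape: [batch_size]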

class RobertaForTokenClassification(roberta, num_classes=2, dropout=None)[source]

Bases: paddlenlp.transformers.roberta.modeling.RobertaPretrainedModel

Roberta Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.

Parameters
  • roberta (RobertaModel) -- An instance of RobertaModel.

  • num_classes (int, optional) -- The number of classes. Defaults to 2.

  • dropout (float, optional) -- The dropout probability for the output of Roberta. If None, use the same value as hidden_dropout_prob of the RobertaModel instance roberta. Defaults to None.

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]
Parameters
  See RobertaModel.forward() for the description of input_ids, token_type_ids, position_ids and attention_mask.
Returns

Returns tensor logits, a tensor of the input token classification logits. Its data type should be float32 and its shape is [batch_size, sequence_length, num_classes].

Return type

Tensor

Examples

import paddle
from paddlenlp.transformers import RobertaForTokenClassification, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
model = RobertaForTokenClassification.from_pretrained('roberta-wwm-ext')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
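
A follow-up sketch, continuing the example above: turning the per-token logits into a predicted label id for each token.

import paddle

pred_label_ids = paddle.argmax(logits, axis=-1)   # shape: [batch_size, sequence_length]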

class RobertaForQuestionAnswering(roberta, dropout=None)[source]

Bases: paddlenlp.transformers.roberta.modeling.RobertaPretrainedModel

Roberta Model with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.

Parameters
  • roberta (RobertaModel) -- An instance of RobertaModel.

  • dropout (float, optional) -- The dropout probability for the output of Roberta. If None, use the same value as hidden_dropout_prob of the RobertaModel instance roberta. Defaults to None.

forward(input_ids, token_type_ids=None)[source]
Parameters
  See RobertaModel.forward() for the description of input_ids and token_type_ids.
Returns

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type

tuple

Examples

import paddle
from paddlenlp.transformers import RobertaForQuestionAnswering, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-wwm-ext')
model = RobertaForQuestionAnswering.from_pretrained('roberta-wwm-ext')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
start_logits, end_logits = model(**inputs)
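
A follow-up sketch, continuing the example above: picking the most likely answer span from the logits and mapping it back to tokens. The decoding is deliberately simplistic (it does not constrain start <= end or skip special tokens).

start = int(paddle.argmax(start_logits, axis=-1).numpy()[0])
end = int(paddle.argmax(end_logits, axis=-1).numpy()[0])
tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'].numpy()[0].tolist())
answer = ''.join(tokens[start:end + 1])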