modeling

- class BertModel(vocab_size, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=16, initializer_range=0.02, pad_token_id=0, pool_act='tanh')[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel
  The bare BERT Model transformer outputting raw hidden-states.

  This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods. This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.
  - Parameters

    - vocab_size (int) – Vocabulary size of inputs_ids in BertModel. Also the size of the token embedding matrix. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BertModel.
    - hidden_size (int, optional) – Dimensionality of the embedding layer, encoder layer and pooler layer. Defaults to 768.
    - num_hidden_layers (int, optional) – Number of hidden layers in the Transformer encoder. Defaults to 12.
    - num_attention_heads (int, optional) – Number of attention heads for each attention layer in the Transformer encoder. Defaults to 12.
    - intermediate_size (int, optional) – Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 3072.
    - hidden_act (str, optional) – The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other Paddle-supported activation functions are supported. Defaults to "gelu".
    - hidden_dropout_prob (float, optional) – The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.1.
    - attention_probs_dropout_prob (float, optional) – The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. Defaults to 0.1.
    - max_position_embeddings (int, optional) – The maximum value of the dimensionality of position encoding, which dictates the maximum supported length of an input sequence. Defaults to 512.
    - type_vocab_size (int, optional) – The vocabulary size of token_type_ids. Defaults to 16.
    - initializer_range (float, optional) – The standard deviation of the normal initializer. Defaults to 0.02.

      Note: A normal_initializer initializes weight matrices as normal distributions. See BertPretrainedModel.init_weights() for how weights are initialized in BertModel.

    - pad_token_id (int, optional) – The index of the padding token in the token vocabulary. Defaults to 0.
    - pool_act (str, optional) – The non-linear activation function in the pooling layer. Defaults to "tanh".
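  A minimal construction sketch (not part of the original documentation) showing how the configuration arguments above build a randomly initialized, untrained BertModel; the reduced sizes are illustrative only.

    import paddle
    from paddlenlp.transformers import BertModel

    # Small, illustrative configuration values; real BERT-base uses the defaults listed above.
    model = BertModel(vocab_size=30522,
                      hidden_size=128,
                      num_hidden_layers=2,
                      num_attention_heads=2,
                      intermediate_size=512)

    input_ids = paddle.randint(0, 30522, [1, 10])  # made-up token ids, dtype int64
    sequence_output, pooled_output = model(input_ids)
    print(sequence_output.shape)  # [1, 10, 128]
    print(pooled_output.shape)    # [1, 128]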
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None, output_hidden_states=False)[source]

    The BertModel forward method, overrides the __call__() special method.

    - Parameters
      - input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].
      - token_type_ids (Tensor, optional) – Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions and indices can be either 0 or 1: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means no segment embeddings are added.
      - position_ids (Tensor, optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as (batch_size, num_tokens) and dtype as int64. Defaults to None.
      - attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor whose shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no position is prevented from being attended to.
      - output_hidden_states (bool, optional) – Whether to return the output of each hidden layer. Defaults to False.
    - Returns

      Returns tuple (sequence_output, pooled_output) or (encoder_outputs, pooled_output).

      With the fields:

      - sequence_output (Tensor): Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].
      - pooled_output (Tensor): The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].
      - encoder_outputs (List(Tensor)): A list of Tensors containing the hidden-states of the model at each hidden layer in the Transformer encoder. The length of the list is num_hidden_layers. Each Tensor has a data type of float32 and a shape of [batch_size, sequence_length, hidden_size].

    - Return type

      tuple
    Example

      import paddle
      from paddlenlp.transformers import BertModel, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained('bert-wwm-chinese')
      model = BertModel.from_pretrained('bert-wwm-chinese')

      inputs = tokenizer("欢迎使用百度飞桨!")
      inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
      output = model(**inputs)
- class BertPretrainedModel(name_scope=None, dtype='float32')[source]

  Bases: paddlenlp.transformers.model_utils.PretrainedModel

  An abstract class for pretrained BERT models. It provides BERT-related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

  - base_model_class
- class BertForPretraining(bert)[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel

  Bert Model with pretraining tasks on top.
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None, masked_positions=None)[source]

    - Parameters
      - input_ids (Tensor) – See BertModel.
      - token_type_ids (Tensor, optional) – See BertModel.
      - position_ids (Tensor, optional) – See BertModel.
      - attention_mask (Tensor, optional) – See BertModel.
      - masked_positions (Tensor, optional) – See BertPretrainingHeads.
    - Returns

      Returns tuple (prediction_scores, seq_relationship_score).

      With the fields:

      - prediction_scores (Tensor): The scores of masked token prediction. Its data type should be float32. If masked_positions is None, its shape is [batch_size, sequence_length, vocab_size]. Otherwise, its shape is [batch_size, mask_token_num, vocab_size].
      - seq_relationship_score (Tensor): The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].

    - Return type

      tuple
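    Example

      A minimal usage sketch (not part of the original documentation); it assumes the 'bert-base-uncased' weights are downloadable and leaves masked_positions as None so scores cover every token position.

      import paddle
      from paddlenlp.transformers import BertForPretraining, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
      model = BertForPretraining.from_pretrained('bert-base-uncased')

      inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
      inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

      # masked_positions is None, so prediction_scores covers all token positions
      prediction_scores, seq_relationship_score = model(**inputs)
      print(prediction_scores.shape)       # [1, sequence_length, vocab_size]
      print(seq_relationship_score.shape)  # [1, 2]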
- class BertPretrainingCriterion(vocab_size)[source]

  Bases: paddle.fluid.dygraph.layers.Layer
  - Parameters

    - vocab_size (int) – Vocabulary size of inputs_ids in BertModel. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BertModel.
  - forward(prediction_scores, seq_relationship_score, masked_lm_labels, next_sentence_labels, masked_lm_scale)[source]

    - Parameters
      - prediction_scores (Tensor) – The scores of masked token prediction. Its data type should be float32. If masked_positions is None, its shape is [batch_size, sequence_length, vocab_size]. Otherwise, its shape is [batch_size, mask_token_num, vocab_size].
      - seq_relationship_score (Tensor) – The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].
      - masked_lm_labels (Tensor) – The labels of the masked language modeling; its dimensionality is equal to prediction_scores. Its data type should be int64. If masked_positions is None, its shape is [batch_size, sequence_length, 1]. Otherwise, its shape is [batch_size, mask_token_num, 1].
      - next_sentence_labels (Tensor) – The labels of the next sentence prediction task; the dimensionality of next_sentence_labels is equal to seq_relationship_score. Its data type should be int64 and its shape is [batch_size, 1].
      - masked_lm_scale (Tensor or int) – The scale of masked tokens, used for the normalization of the masked language modeling loss. If it is a Tensor, its data type should be int64 and its shape is equal to prediction_scores.
    - Returns

      The pretraining loss, equal to the sum of masked_lm_loss plus the mean of next_sentence_loss. Its data type should be float32 and its shape is [1].

    - Return type

      Tensor
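    Example

      A minimal shape-only sketch (not part of the original documentation); vocab_size, batch_size and mask_token_num are made-up illustration values, and the random tensors stand in for real model outputs and labels.

      import paddle
      from paddlenlp.transformers.bert.modeling import BertPretrainingCriterion

      vocab_size, batch_size, mask_token_num = 30522, 2, 3
      criterion = BertPretrainingCriterion(vocab_size)

      # Random stand-ins with the documented shapes
      prediction_scores = paddle.rand([batch_size, mask_token_num, vocab_size])
      seq_relationship_score = paddle.rand([batch_size, 2])
      masked_lm_labels = paddle.randint(0, vocab_size, [batch_size, mask_token_num, 1])
      next_sentence_labels = paddle.randint(0, 2, [batch_size, 1])

      loss = criterion(prediction_scores, seq_relationship_score,
                       masked_lm_labels, next_sentence_labels,
                       masked_lm_scale=mask_token_num)
      print(loss)  # a single float32 pretraining loss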
- class BertPretrainingHeads(hidden_size, vocab_size, activation, embedding_weights=None)[source]

  Bases: paddle.fluid.dygraph.layers.Layer

  Performs the language modeling task and the next sentence classification task.
  - Parameters

    - hidden_size (int) – See BertModel.
    - vocab_size (int) – See BertModel.
    - activation (str) – Activation function used in the language modeling task.
    - embedding_weights (Tensor, optional) – Decoding weights used to map hidden_states to logits of the masked token prediction. Its data type should be float32 and its shape is [vocab_size, hidden_size]. Defaults to None, which means the weights of the embedding layer are reused.
  - forward(sequence_output, pooled_output, masked_positions=None)[source]

    - Parameters
      - sequence_output (Tensor) – Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].
      - pooled_output (Tensor) – The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].
      - masked_positions (Tensor, optional) – A tensor that indicates the positions to be masked in the position embedding. Its data type should be int64 and its shape is [batch_size, mask_token_num]. mask_token_num is the number of masked tokens; it should be no bigger than sequence_length. Defaults to None, which means hidden-states of all tokens are used in masked token prediction.
    - Returns

      Returns tuple (prediction_scores, seq_relationship_score).

      With the fields:

      - prediction_scores (Tensor): The scores of masked token prediction. Its data type should be float32. If masked_positions is None, its shape is [batch_size, sequence_length, vocab_size]. Otherwise, its shape is [batch_size, mask_token_num, vocab_size].
      - seq_relationship_score (Tensor): The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].

    - Return type

      tuple
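    Example

      A minimal shape-only sketch (not part of the original documentation); hidden_size, vocab_size and the random inputs are illustrative stand-ins rather than outputs of a real BertModel.

      import paddle
      from paddlenlp.transformers.bert.modeling import BertPretrainingHeads

      hidden_size, vocab_size, batch_size, seq_len = 768, 30522, 2, 8
      heads = BertPretrainingHeads(hidden_size, vocab_size, activation='gelu')

      # Random stand-ins for the backbone outputs
      sequence_output = paddle.rand([batch_size, seq_len, hidden_size])
      pooled_output = paddle.rand([batch_size, hidden_size])

      # masked_positions is None, so scores are produced for every token position
      prediction_scores, seq_relationship_score = heads(sequence_output, pooled_output)
      print(prediction_scores.shape)       # [2, 8, 30522]
      print(seq_relationship_score.shape)  # [2, 2]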
- class BertForSequenceClassification(bert, num_classes=2, dropout=None)[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel

  Bert Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.
  - Parameters

    - bert (BertModel) – An instance of BertModel.
    - num_classes (int, optional) – The number of classes. Defaults to 2.
    - dropout (float, optional) – The dropout probability for the output of BERT. If None, use the same value as hidden_dropout_prob of the BertModel instance bert. Defaults to None.
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]

    The BertForSequenceClassification forward method, overrides the __call__() special method.
    - Parameters

      - input_ids (Tensor) – See BertModel.
      - token_type_ids (Tensor, optional) – See BertModel.
      - position_ids (Tensor, optional) – See BertModel.
      - attention_mask (Tensor, optional) – See BertModel.
    - Returns

      Returns tensor logits, a tensor of the input text classification logits. Shape as [batch_size, num_classes] and dtype as float32.

    - Return type

      Tensor
    Example

      import paddle
      from paddlenlp.transformers.bert.modeling import BertForSequenceClassification
      from paddlenlp.transformers.bert.tokenizer import BertTokenizer

      tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
      model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_classes=2)

      inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
      inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

      logits = model(**inputs)
      print(logits.shape)  # [1, 2]
- class BertForTokenClassification(bert, num_classes=2, dropout=None)[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel

  Bert Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.
  - Parameters

    - bert (BertModel) – An instance of BertModel.
    - num_classes (int, optional) – The number of classes. Defaults to 2.
    - dropout (float, optional) – The dropout probability for the output of BERT. If None, use the same value as hidden_dropout_prob of the BertModel instance bert. Defaults to None.
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]

    The BertForTokenClassification forward method, overrides the __call__() special method.
    - Parameters

      - input_ids (Tensor) – See BertModel.
      - token_type_ids (Tensor, optional) – See BertModel.
      - position_ids (Tensor, optional) – See BertModel.
      - attention_mask (Tensor, optional) – See BertModel.
    - Returns

      Returns tensor logits, a tensor of the input token classification logits. Shape as [batch_size, sequence_length, num_classes] and dtype as float32.

    - Return type

      Tensor
    Example

      import paddle
      from paddlenlp.transformers.bert.modeling import BertForTokenClassification
      from paddlenlp.transformers.bert.tokenizer import BertTokenizer

      tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
      model = BertForTokenClassification.from_pretrained('bert-base-cased', num_classes=2)

      inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
      inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

      logits = model(**inputs)
      print(logits.shape)  # [1, 13, 2]
- class BertForQuestionAnswering(bert, dropout=None)[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel

  Bert Model with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.

  - Parameters

    - bert (BertModel) – An instance of BertModel.
    - dropout (float, optional) – The dropout probability for the output of BERT. If None, use the same value as hidden_dropout_prob of the BertModel instance bert. Defaults to None.
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]

    The BertForQuestionAnswering forward method, overrides the __call__() special method.
    - Parameters

      - input_ids (Tensor) – See BertModel.
      - token_type_ids (Tensor, optional) – See BertModel.
      - position_ids (Tensor, optional) – See BertModel.
      - attention_mask (Tensor, optional) – See BertModel.
    - Returns

      Returns tuple (start_logits, end_logits).

      With the fields:

      - start_logits (Tensor): A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].
      - end_logits (Tensor): A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

    - Return type

      tuple
    Example

      import paddle
      from paddlenlp.transformers.bert.modeling import BertForQuestionAnswering
      from paddlenlp.transformers.bert.tokenizer import BertTokenizer

      tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
      model = BertForQuestionAnswering.from_pretrained('bert-base-cased')

      inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
      inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

      outputs = model(**inputs)
      start_logits = outputs[0]
      end_logits = outputs[1]
- class BertForMultipleChoice(bert, num_choices=2, dropout=None)[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel

  Bert Model with a linear layer on top of the hidden-states output layer, designed for multiple choice tasks like RocStories/SWAG tasks.
  - Parameters

    - bert (BertModel) – An instance of BertModel.
    - num_choices (int, optional) – The number of choices. Defaults to 2.
    - dropout (float, optional) – The dropout probability for the output of BERT. If None, use the same value as hidden_dropout_prob of the BertModel instance bert. Defaults to None.
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]

    The BertForMultipleChoice forward method, overrides the __call__() special method.

    - Parameters
      - input_ids (Tensor) – See BertModel and shape as [batch_size, num_choice, sequence_length].
      - token_type_ids (Tensor, optional) – See BertModel and shape as [batch_size, num_choice, sequence_length].
      - position_ids (Tensor, optional) – See BertModel and shape as [batch_size, num_choice, sequence_length].
      - attention_mask (list, optional) – See BertModel and shape as [batch_size, num_choice, sequence_length].
    - Returns

      Returns tensor reshaped_logits, a tensor of the multiple choice classification logits. Shape as [batch_size, num_choice] and dtype as float32.

    - Return type

      Tensor
    Example

      import paddle
      from paddlenlp.transformers import BertForMultipleChoice, BertTokenizer
      from paddlenlp.data import Pad, Dict

      tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
      model = BertForMultipleChoice.from_pretrained('bert-base-uncased', num_choices=2)

      data = [
          {
              "question": "how do you turn on an ipad screen?",
              "answer1": "press the volume button.",
              "answer2": "press the lock button.",
              "label": 1,
          },
          {
              "question": "how do you indent something?",
              "answer1": "leave a space before starting the writing",
              "answer2": "press the spacebar",
              "label": 0,
          },
      ]

      text = []
      text_pair = []
      for d in data:
          text.append(d["question"])
          text_pair.append(d["answer1"])
          text.append(d["question"])
          text_pair.append(d["answer2"])

      inputs = tokenizer(text, text_pair)
      batchify_fn = lambda samples, fn=Dict(
          {
              "input_ids": Pad(axis=0, pad_val=tokenizer.pad_token_id),  # input_ids
              "token_type_ids": Pad(
                  axis=0, pad_val=tokenizer.pad_token_type_id
              ),  # token_type_ids
          }
      ): fn(samples)
      inputs = batchify_fn(inputs)

      reshaped_logits = model(
          input_ids=paddle.to_tensor(inputs[0], dtype="int64"),
          token_type_ids=paddle.to_tensor(inputs[1], dtype="int64"),
      )
      print(reshaped_logits.shape)  # [2, 2]
- class BertForMaskedLM(bert)[source]

  Bases: paddlenlp.transformers.bert.modeling.BertPretrainedModel

  Bert Model with a masked language modeling head on top.
  - forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None)[source]

    - Parameters

      - input_ids (Tensor) – See BertModel.
      - token_type_ids (Tensor, optional) – See BertModel.
      - position_ids (Tensor, optional) – See BertModel.
      - attention_mask (Tensor, optional) – See BertModel.
    - Returns

      Returns tensor prediction_scores, the scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

    - Return type

      Tensor
    Example

      import paddle
      from paddlenlp.transformers import BertForMaskedLM, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
      model = BertForMaskedLM.from_pretrained('bert-base-uncased')

      inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
      inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

      logits = model(**inputs)
      print(logits.shape)  # [1, 13, 30522]