modeling#

class MegatronBertModel(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

The bare MegatronBert Model transformer outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertModel.

get_input_embeddings()[source]#

Get the input embeddings of the model.

Returns:

The input embeddings of the model.

Return type:

nn.Embedding

set_input_embeddings(value)[source]#

Set new input embeddings for the model.

Parameters:

value (Embedding) – The new input embeddings for the model.

Raises:

NotImplementedError – The model has not implemented the set_input_embeddings method.
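
Together, these two methods allow the embedding table to be inspected or swapped out, for example when enlarging the vocabulary. Below is a minimal sketch under that assumption; the new vocabulary size is purely illustrative.

import paddle
import paddle.nn as nn
from paddlenlp.transformers import MegatronBertModel

model = MegatronBertModel.from_pretrained('megatronbert-uncased')

# Inspect the current embedding table: [vocab_size, hidden_size].
old_embeddings = model.get_input_embeddings()
vocab_size, hidden_size = old_embeddings.weight.shape

# Build a larger table (illustrative new vocab size) and copy the old rows over.
new_embeddings = nn.Embedding(vocab_size + 8, hidden_size)
w = new_embeddings.weight.numpy()
w[:vocab_size] = old_embeddings.weight.numpy()
new_embeddings.weight.set_value(paddle.to_tensor(w))

model.set_input_embeddings(new_embeddings)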

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertModel forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • token_type_ids (Tensor, optional) –

    Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions and the indices can be either 0 or 1:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means we don’t add segment embeddings.

  • position_ids (Tensor, optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None.

  • attention_mask (Tensor, optional) –

    Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. If its data type is int, the values should be either 0 or 1:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    Its shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no positions are masked out.

Returns:

Returns tuple (sequence_output, pooled_output).

With the fields:

  • sequence_output (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • pooled_output (Tensor):

    The output of the first token ([CLS]) in the sequence. We “pool” the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import MegatronBertModel, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertModel.from_pretrained('megatronbert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
output = model(**inputs)
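
When batching sequences of different lengths, the shorter ones can be padded and an attention_mask of ones and zeros passed alongside, so attention skips the padding. A minimal sketch continuing the example above; the second sentence is illustrative, and a 2D mask is assumed to be broadcast to the 4D shape described for attention_mask.

# Pad two encoded sentences to a common length and build the matching mask.
texts = ["Welcome to use PaddlePaddle and PaddleNLP!", "PaddleNLP is easy to use."]
encoded = [tokenizer(t)['input_ids'] for t in texts]
max_len = max(len(ids) for ids in encoded)
input_ids = paddle.to_tensor(
    [ids + [tokenizer.pad_token_id] * (max_len - len(ids)) for ids in encoded])
attention_mask = paddle.to_tensor(
    [[1] * len(ids) + [0] * (max_len - len(ids)) for ids in encoded])

sequence_output, pooled_output = model(input_ids=input_ids, attention_mask=attention_mask)
print(sequence_output.shape)  # [2, max_len, hidden_size]
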
class MegatronBertPretrainedModel(*args, **kwargs)[source]#

Bases: PretrainedModel

An abstract class for pretrained MegatronBert models. It provides MegatronBert related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

config_class#

alias of MegatronBertConfig

base_model_class#

alias of MegatronBertModel

class MegatronBertForQuestionAnswering(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a question answering head on top, computing span start and end logits.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForQuestionAnswering.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertForQuestionAnswering forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

Returns:

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import MegatronBertForQuestionAnswering, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForQuestionAnswering.from_pretrained('megatronbert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
outputs = model(**inputs)

start_logits = outputs[0]
end_logits = outputs[1]
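
One common recipe for turning the logits into an answer span is to take the argmax of the start and end logits and decode the tokens in between. A minimal sketch continuing the example above:

# Pick the most likely start and end positions.
start_index = paddle.argmax(start_logits, axis=-1).item()
end_index = paddle.argmax(end_logits, axis=-1).item()

# Decode the predicted span back into tokens.
answer_ids = inputs['input_ids'][0][start_index:end_index + 1]
print(tokenizer.convert_ids_to_tokens(answer_ids.tolist()))
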
class MegatronBertForSequenceClassification(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a sequence classification head on top.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForSequenceClassification.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertForSequenceClassification forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

Returns:

Returns tensor logits, a tensor of the sequence classification logits. Its shape is [batch_size, num_classes] and its data type is float32.

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import MegatronBertForSequenceClassification, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForSequenceClassification.from_pretrained('megatronbert-uncased', num_labels=2)

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
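
A softmax turns the logits into class probabilities and an argmax picks the predicted label. A minimal sketch continuing the example above; the mapping of label ids to meanings depends on the fine-tuning data.

import paddle.nn.functional as F

probs = F.softmax(logits, axis=-1)  # shape [1, num_labels]
label_id = paddle.argmax(probs, axis=-1).item()
print(probs.numpy(), label_id)
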
class MegatronBertForNextSentencePrediction(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a next sentence prediction (classification) head on top.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForNextSentencePrediction.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertForNextSentencePrediction forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

Returns:

Returns Tensor seq_relationship_scores. The scores of next sentence prediction.

Its data type should be float32 and its shape is [batch_size, 2].

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import MegatronBertForNextSentencePrediction, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForNextSentencePrediction.from_pretrained('megatronbert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
seq_relationship_scores = model(**inputs)
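
Next sentence prediction is normally run on a sentence pair, with the tokenizer filling in token_type_ids for the two segments. A minimal sketch continuing the example above; reading index 0 of the two scores as “is the next sentence” follows the original BERT setup and is an assumption here.

import paddle.nn.functional as F

pair = tokenizer("Welcome to use PaddlePaddle.", "PaddleNLP is built on top of it.")
pair = {k: paddle.to_tensor([v]) for (k, v) in pair.items()}
scores = model(**pair)

probs = F.softmax(scores, axis=-1)  # shape [1, 2]
print(probs.numpy())
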
class MegatronBertForCausalLM(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a causal language modeling head on top.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForCausalLM.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertForCausalLM forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

Returns:

Returns Tensor prediction_scores. The prediction scores of the language modeling head.

Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import MegatronBertForCausalLM, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForCausalLM.from_pretrained('megatronbert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
prediction_scores = model(**inputs)
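
The prediction scores can be read out per position; for instance, the highest-scoring vocabulary token at each position is its argmax. A minimal sketch continuing the example above:

# Most likely token id at every position: shape [batch_size, sequence_length].
predicted_ids = paddle.argmax(prediction_scores, axis=-1)
print(tokenizer.convert_ids_to_tokens(predicted_ids[0].tolist()))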
class MegatronBertForPreTraining(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with pretraining heads on top: a masked language modeling head and a next sentence prediction head.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForPreTraining.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertForPreTraining forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

Returns:

Returns tuple (prediction_scores, seq_relationship_score).

With the fields:

  • prediction_scores (Tensor):

    The scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

  • seq_relationship_score (Tensor):

    The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import MegatronBertForPreTraining, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForPreTraining.from_pretrained('megatronbert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
prediction_scores, seq_relationship_score = model(**inputs)
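
The two outputs correspond to the two pretraining heads, which a quick shape check makes concrete. Continuing the example above:

print(prediction_scores.shape)       # [1, sequence_length, vocab_size] (masked LM head)
print(seq_relationship_score.shape)  # [1, 2] (next sentence head)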
class MegatronBertForMaskedLM(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a masked language modeling head on top.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForMaskedLM.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None)[source]#

The MegatronBertForMaskedLM forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

Returns:

Returns Tensor prediction_scores. The scores of masked token prediction.

Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import MegatronBertForMaskedLM, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForMaskedLM.from_pretrained('megatronbert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
prediction_scores = model(**inputs)
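
A typical use is to mask one token and read off the prediction at that position. A minimal sketch continuing the example above; it assumes the tokenizer exposes mask_token and mask_token_id, as PaddleNLP BERT-style tokenizers do.

text = "Welcome to use PaddlePaddle and {}!".format(tokenizer.mask_token)
inputs = tokenizer(text)
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
prediction_scores = model(**inputs)

# Find the masked position and take the highest-scoring vocabulary token there.
mask_pos = inputs['input_ids'][0].tolist().index(tokenizer.mask_token_id)
predicted_id = paddle.argmax(prediction_scores[0, mask_pos]).item()
print(tokenizer.convert_ids_to_tokens([predicted_id]))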
class MegatronBertForMultipleChoice(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a multiple choice classification head on top.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForMultipleChoice.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]#

The MegatronBertForMultipleChoice forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

Returns:

Returns Tensor reshaped_logits. A tensor of the multiple choice classification logits.

Shape as [batch_size, num_choices] and dtype as float32.

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import MegatronBertForMultipleChoice, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForMultipleChoice.from_pretrained('megatronbert-uncased')

# Multiple choice expects input_ids of shape [batch_size, num_choices, sequence_length];
# here two candidate sentences are encoded and padded for a single example.
choices = ["Welcome to use PaddlePaddle!", "Welcome to use PaddleNLP!"]
encoded = [tokenizer(c)['input_ids'] for c in choices]
max_len = max(len(ids) for ids in encoded)
encoded = [ids + [tokenizer.pad_token_id] * (max_len - len(ids)) for ids in encoded]
input_ids = paddle.to_tensor([encoded])  # shape [1, 2, max_len]
reshaped_logits = model(input_ids=input_ids)
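
The choice with the highest logit is the model's pick. A minimal sketch continuing the example above:

import paddle.nn.functional as F

probs = F.softmax(reshaped_logits, axis=-1)  # shape [1, num_choices]
best_choice = paddle.argmax(probs, axis=-1).item()
print(choices[best_choice], probs.numpy())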
class MegatronBertForTokenClassification(config: MegatronBertConfig)[source]#

Bases: MegatronBertPretrainedModel

MegatronBert Model with a token classification head on top.

Parameters:

config (MegatronBertConfig) – An instance of MegatronBertConfig used to construct MegatronBertForTokenClassification.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None)[source]#

The MegatronBertForTokenClassification forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See MegatronBertModel.

  • attention_mask (Tensor, optional) – See MegatronBertModel.

  • token_type_ids (Tensor, optional) – See MegatronBertModel.

  • position_ids (Tensor, optional) – See MegatronBertModel.

Returns:

Returns tensor logits, a tensor of the input token classification logits.

Shape as [batch_size, sequence_length, num_classes] and dtype as float32.

Return type:

Tensor

Example

import paddle
from paddlenlp.transformers import MegatronBertForTokenClassification, MegatronBertTokenizer

tokenizer = MegatronBertTokenizer.from_pretrained('megatronbert-uncased')
model = MegatronBertForTokenClassification.from_pretrained('megatronbert-uncased', num_labels=2)

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
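
Per-token label predictions follow from an argmax over the class axis. A minimal sketch continuing the example above:

# One predicted class id per token: shape [batch_size, sequence_length].
predicted_labels = paddle.argmax(logits, axis=-1)
print(predicted_labels.numpy())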