modeling

class MobileBertModel(vocab_size, embedding_size=128, hidden_size=512, hidden_dropout_prob=0.0, max_position_embeddings=512, type_vocab_size=2, layer_norm_eps=1e-12, pad_token_id=1, trigram_input=True, normalization_type='no_norm', num_hidden_layers=24, use_bottleneck=True, num_feedforward_networks=4, num_attention_heads=4, true_hidden_size=128, use_bottleneck_attention=False, attention_probs_dropout_prob=0.1, intermediate_size=512, intra_bottleneck_size=128, hidden_act='relu', classifier_activation=False, initializer_range=0.02, key_query_shared_bottleneck=True, add_pooling_layer=True)[source]

Bases: paddlenlp.transformers.mobilebert.modeling.MobileBertPretrainedModel

The bare MobileBert Model transformer outputting raw hidden-states. This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods. This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters
  • vocab_size (int) – Vocabulary size of inputs_ids in MobileBertModel. It is also the vocabulary size of the token embedding matrix. Defines the number of different tokens that can be represented by the inputs_ids passed when calling MobileBertModel.

  • embedding_size (int, optional) – Embedding dimensionality of lookup_table in the embedding layer. Defaults to 128.

  • hidden_size (int, optional) – Dimensionality of the embedding layer, encoder layer and pooler layer. Defaults to 512.

  • true_hidden_size (int, optional) – Dimensionality of input_tensor in the self-attention layer. Defaults to 128.

  • use_bottleneck_attention (bool, optional) – Whether to apply the bottleneck to the value tensor in the self-attention layer. Defaults to False.

  • key_query_shared_bottleneck (bool, optional) – Whether the key and query share the bottleneck layer. Defaults to True.

  • num_hidden_layers (int, optional) – Number of hidden layers in the Transformer encoder. Defaults to 24.

  • num_attention_heads (int, optional) – Number of attention heads for each attention layer in the Transformer encoder. Defaults to 4.

  • intermediate_size (int, optional) – Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 512.

  • hidden_act (str, optional) – The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other paddle supported activation functions are supported. Defaults to "relu".

  • hidden_dropout_prob (float, optional) – The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.0.

  • attention_probs_dropout_prob (float, optional) – The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention target. Defaults to 0.1.

  • max_position_embeddings (int, optional) – The maximum value of the dimensionality of position encoding, which dictates the maximum supported length of an input sequence. Defaults to 512.

  • type_vocab_size (int, optional) – The vocabulary size of token_type_ids. Defaults to 2.

  • initializer_range (float, optional) – The standard deviation of the normal initializer. Defaults to 0.02.

    Note: A normal_initializer initializes weight matrices as normal distributions. See MobileBertPretrainedModel.init_weights() for how weights are initialized in MobileBertModel.

  • pad_token_id (int, optional) – The index of padding token in the token vocabulary. Defaults to 1.

  • add_pooling_layer (bool, optional) – Whether to add a pooling layer on top of the encoder. Defaults to True.

  • classifier_activation (bool, optional) – Whether to use a non-linear activation function in the pooling layer. Defaults to False.
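
A minimal construction sketch (the vocab_size value below is illustrative; all other arguments fall back to the defaults documented above):

from paddlenlp.transformers import MobileBertModel

# Build a randomly initialized MobileBertModel; only vocab_size is required,
# the remaining arguments keep the defaults listed above.
model = MobileBertModel(vocab_size=30522)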

get_input_embeddings()[source]

Get the input embedding of the model.

Returns

The input embedding of the model.

Return type

nn.Embedding

set_input_embeddings(value)[source]

Set a new input embedding for the model.

Parameters

value (Embedding) – The new input embedding for the model.

Raises

NotImplementedError – If the model has not implemented the set_input_embeddings method.

get_head_mask(head_mask, num_hidden_layers, is_attention_chunked=False)[source]

Prepare the head mask if needed.

Parameters
  • head_mask (paddle.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) – The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard).

  • num_hidden_layers (int) – The number of hidden layers in the model.

  • is_attention_chunked (bool, optional) – Whether the attention scores are computed in chunks. Defaults to False.

Returns

A paddle.Tensor with shape [num_hidden_layers x batch x num_heads x seq_length x seq_length], or a list with [None] for each layer.

forward(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_hidden_states=None, output_attentions=None)[source]

The MobileBertModel forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • token_type_ids (Tensor, optional) – Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions and the indices can be either 0 or 1: 0 corresponds to a sentence A token and 1 corresponds to a sentence B token. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means we don't add segment embeddings.

  • position_ids (Tensor, optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as (batch_size, num_tokens) and dtype as int64. Defaults to None.

  • attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor with shape broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no positions are masked out.

  • head_mask (paddle.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) – The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard). Defaults to None.

  • output_hidden_states (bool, optional) – Whether to return the output of each hidden layer. Defaults to None.

  • output_attentions (bool, optional) – Whether to return the output of each self-attention layer. Defaults to None.

Returns

Returns tuple (sequence_output, pooled_output) or (encoder_outputs, pooled_output).

With the fields:

  • sequence_output (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • pooled_output (Tensor):

    The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].

  • encoder_outputs (List(Tensor)):

    A list of Tensors containing the hidden-states of the model at each hidden layer in the Transformer encoder. The length of the list is num_hidden_layers. Each Tensor has a data type of float32 and a shape of [batch_size, sequence_length, hidden_size].

Return type

tuple

Example
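
A minimal usage sketch, assuming the 'mobilebert-uncased' pretrained weights and the matching MobileBertTokenizer are available through from_pretrained:

import paddle
from paddlenlp.transformers import MobileBertModel, MobileBertTokenizer

tokenizer = MobileBertTokenizer.from_pretrained('mobilebert-uncased')
model = MobileBertModel.from_pretrained('mobilebert-uncased')

# Tokenize a single sentence and wrap each field in a batch dimension.
inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

sequence_output, pooled_output = model(**inputs)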

class MobileBertPretrainedModel(*args, **kwargs)[source]

Bases: paddlenlp.transformers.model_utils.PretrainedModel

An abstract class for pretrained MobileBert models. It provides MobileBert related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration, base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

base_model_class

alias of paddlenlp.transformers.mobilebert.modeling.MobileBertModel

class MobileBertForPreTraining(mobilebert)[source]

Bases: paddlenlp.transformers.mobilebert.modeling.MobileBertPretrainedModel

MobileBert Model with pretraining tasks on top.

Parameters

mobilebert (MobileBertModel) – An instance of MobileBertModel.

get_output_embeddings()[source]

To be overwritten for models with output embeddings.

Returns

The output embedding of the model.

Return type

Optional[Embedding]

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None)[source]

The MobileBertForPreTraining forward method, overrides the __call__() special method.

Parameters

See MobileBertModel for the description of input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions and output_hidden_states.

Returns

Returns tuple (prediction_scores, seq_relationship_score).

With the fields:

  • prediction_scores (Tensor):

    The scores of masked token prediction. Its data type should be float32. If masked_positions is None, its shape is [batch_size, sequence_length, vocab_size]. Otherwise, its shape is [batch_size, mask_token_num, vocab_size].

  • seq_relationship_score (Tensor):

    The scores of next sentence prediction. Its data type should be float32 and its shape is [batch_size, 2].

Return type

tuple
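
A minimal usage sketch, assuming the 'mobilebert-uncased' pretrained weights are available via from_pretrained; the output names follow the Returns description above:

import paddle
from paddlenlp.transformers import MobileBertForPreTraining, MobileBertTokenizer

tokenizer = MobileBertTokenizer.from_pretrained('mobilebert-uncased')
model = MobileBertForPreTraining.from_pretrained('mobilebert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# prediction_scores: [batch_size, sequence_length, vocab_size]
# seq_relationship_score: [batch_size, 2]
prediction_scores, seq_relationship_score = model(**inputs)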

class MobileBertForSequenceClassification(mobilebert, num_labels=2)[source]

Bases: paddlenlp.transformers.mobilebert.modeling.MobileBertPretrainedModel

MobileBert Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.

Parameters
  • mobilebert (MobileBertModel) – An instance of MobileBertModel.

  • num_labels (int, optional) – The number of labels. Defaults to 2.

forward(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None)[source]

The MobileBertForSequenceClassification forward method, overrides the __call__() special method.

Parameters

See MobileBertModel for the description of input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions and output_hidden_states.

Returns

Returns tensor logits, a tensor of the input text classification logits. Shape as [batch_size, num_labels] and dtype as float32.

Return type

Tensor

Example
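
A minimal usage sketch, assuming the 'mobilebert-uncased' pretrained weights are available via from_pretrained:

import paddle
from paddlenlp.transformers import MobileBertForSequenceClassification, MobileBertTokenizer

tokenizer = MobileBertTokenizer.from_pretrained('mobilebert-uncased')
model = MobileBertForSequenceClassification.from_pretrained('mobilebert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# logits: [batch_size, num_labels]
logits = model(**inputs)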

class MobileBertForQuestionAnswering(mobilebert)[source]

Bases: paddlenlp.transformers.mobilebert.modeling.MobileBertPretrainedModel

MobileBert Model with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.

Parameters

mobilebert (MobileBertModel) – An instance of MobileBertModel.

forward(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None)[source]

The MobileBertForQuestionAnswering forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) – See MobileBertModel.

  • token_type_ids (Tensor, optional) – See MobileBertModel.

  • position_ids (Tensor, optional) – See MobileBertModel.

  • head_mask (Tensor, optional) – See MobileBertModel.

  • attention_mask (Tensor, optional) – See MobileBertModel.

  • inputs_embeds (Tensor, optional) – See MobileBertModel.

  • output_attentions (bool, optional) – See MobileBertModel.

  • output_hidden_states (bool, optional) – See MobileBertModel.

  • start_positions (Tensor, optional) – Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

  • end_positions (Tensor, optional) – Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Returns

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type

tuple

Example
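
A minimal usage sketch, assuming the 'mobilebert-uncased' pretrained weights are available via from_pretrained:

import paddle
from paddlenlp.transformers import MobileBertForQuestionAnswering, MobileBertTokenizer

tokenizer = MobileBertTokenizer.from_pretrained('mobilebert-uncased')
model = MobileBertForQuestionAnswering.from_pretrained('mobilebert-uncased')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# start_logits and end_logits each have shape [batch_size, sequence_length]
start_logits, end_logits = model(**inputs)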