modeling
class TinyBertModel(config: paddlenlp.transformers.tinybert.configuration.TinyBertConfig)

Bases: paddlenlp.transformers.tinybert.modeling.TinyBertPretrainedModel
The bare TinyBERT Model transformer outputting raw hidden-states.
This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods. This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.
- Parameters
config (TinyBertConfig) – An instance of TinyBertConfig used to construct TinyBertModel.
get_input_embeddings() → paddle.nn.layer.common.Embedding

Get the input embedding of the TinyBERT pretrained model.

- Returns
the input embedding of TinyBERT
- Return type
nn.Embedding
set_input_embeddings(embedding: paddle.nn.layer.common.Embedding) → None

Set the input embedding with a new embedding value.

- Parameters
embedding (nn.Embedding) – the new embedding value
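A minimal sketch of reading and swapping the input embeddings, e.g. after resizing the vocabulary (the 'tinybert-4l-312d' checkpoint and the same-shape replacement are illustrative assumptions):

    import paddle
    from paddlenlp.transformers import TinyBertModel

    model = TinyBertModel.from_pretrained('tinybert-4l-312d')

    # Read the current word embedding layer and its [vocab_size, hidden_size] weight.
    old_embedding = model.get_input_embeddings()
    vocab_size, hidden_size = old_embedding.weight.shape

    # Build a replacement embedding (here: same shape, freshly initialized)
    # and install it back into the model.
    new_embedding = paddle.nn.Embedding(vocab_size, hidden_size)
    model.set_input_embeddings(new_embedding)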
forward(input_ids: Optional[paddle.Tensor] = None, token_type_ids: Optional[paddle.Tensor] = None, position_ids: Optional[paddle.Tensor] = None, attention_mask: Optional[paddle.Tensor] = None, inputs_embeds: Optional[paddle.Tensor] = None, past_key_values: Optional[Tuple[Tuple[paddle.Tensor]]] = None, use_cache: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_attentions: Optional[bool] = None, return_dict: Optional[bool] = None)

The TinyBertModel forward method, overrides the __call__() special method.

- Parameters
input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

token_type_ids (Tensor, optional) – Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions, and indices can be either 0 or 1: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means no segment embeddings are added.

position_ids (Tensor, optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as (batch_size, num_tokens) and dtype as int64. Defaults to None.

attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor with shape broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. For example, its shape can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length] or [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no position is prevented from being attended to.

inputs_embeds (Tensor, optional) – If you want to control how to convert input_ids indices into associated vectors, you can pass an embedded representation directly instead of passing input_ids.

past_key_values (tuple(tuple(Tensor)), optional) – The length of the tuple equals the number of layers, and each inner tuple has 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head) which contain precomputed key and value hidden states of the attention blocks. If past_key_values is used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).

use_cache (bool, optional) – If set to True, past_key_values key value states are returned. Defaults to None.

output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

return_dict (bool, optional) – Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns
An instance of BaseModelOutputWithPoolingAndCrossAttentions if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of BaseModelOutputWithPoolingAndCrossAttentions.

tuple: Returns tuple (encoder_output, pooled_output).

With the fields:

encoder_output (Tensor): Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

pooled_output (Tensor): The output of the first token ([CLS]) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].
Example
    import paddle
    from paddlenlp.transformers import TinyBertModel, TinyBertTokenizer

    tokenizer = TinyBertTokenizer.from_pretrained('tinybert-4l-312d')
    model = TinyBertModel.from_pretrained('tinybert-4l-312d')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    output = model(**inputs)
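With return_dict=True, the same call yields a BaseModelOutputWithPoolingAndCrossAttentions whose fields can be accessed by name; a sketch, assuming the conventional last_hidden_state and pooler_output field names of that output class:

    # Request a structured output instead of a tuple.
    outputs = model(**inputs, return_dict=True)

    # last_hidden_state: [batch_size, sequence_length, hidden_size]
    # pooler_output:     [batch_size, hidden_size]
    print(outputs.last_hidden_state.shape)
    print(outputs.pooler_output.shape)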
class TinyBertPretrainedModel(*args, **kwargs)

Bases: paddlenlp.transformers.model_utils.PretrainedModel
An abstract class for pretrained TinyBERT models. It provides TinyBERT-related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration and base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

config_class

alias of paddlenlp.transformers.tinybert.configuration.TinyBertConfig

base_model_class

alias of paddlenlp.transformers.tinybert.modeling.TinyBertModel
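These aliases are the class attributes the PretrainedModel machinery consults when loading checkpoints; a quick illustrative check:

    from paddlenlp.transformers import TinyBertPretrainedModel

    # Both aliases point at the TinyBERT-specific classes.
    print(TinyBertPretrainedModel.config_class)      # TinyBertConfig
    print(TinyBertPretrainedModel.base_model_class)  # TinyBertModel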
class
TinyBertForPretraining
(config: paddlenlp.transformers.tinybert.configuration.TinyBertConfig)[source]¶ Bases:
paddlenlp.transformers.tinybert.modeling.TinyBertPretrainedModel
TinyBert Model with pretraining tasks on top.
- Parameters
config (TinyBertConfig) – An instance of TinyBertConfig used to construct TinyBertForPretraining.
forward(input_ids: Optional[paddle.Tensor] = None, token_type_ids: Optional[paddle.Tensor] = None, position_ids: Optional[paddle.Tensor] = None, attention_mask: Optional[paddle.Tensor] = None, inputs_embeds: Optional[paddle.Tensor] = None, output_hidden_states: Optional[bool] = None, output_attentions: Optional[bool] = None, return_dict: Optional[bool] = None)

The TinyBertForPretraining forward method, overrides the __call__() special method.

- Parameters
input_ids (Tensor) – See TinyBertModel.

token_type_ids (Tensor, optional) – See TinyBertModel.

position_ids (Tensor, optional) – See TinyBertModel.

attention_mask (Tensor, optional) – See TinyBertModel.
- Returns
Returns tensor sequence_output, the sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].
- Return type
Tensor
Example
    import paddle
    from paddlenlp.transformers.tinybert.modeling import TinyBertForPretraining
    from paddlenlp.transformers.tinybert.tokenizer import TinyBertTokenizer

    tokenizer = TinyBertTokenizer.from_pretrained('tinybert-4l-312d')
    model = TinyBertForPretraining.from_pretrained('tinybert-4l-312d')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    outputs = model(**inputs)
    logits = outputs[0]
class TinyBertForSequenceClassification(config: paddlenlp.transformers.tinybert.configuration.TinyBertConfig)

Bases: paddlenlp.transformers.tinybert.modeling.TinyBertPretrainedModel

TinyBERT Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.
- Parameters
config (TinyBertConfig) – An instance of TinyBertConfig used to construct TinyBertForSequenceClassification.
forward(input_ids: Optional[paddle.Tensor] = None, token_type_ids: Optional[paddle.Tensor] = None, position_ids: Optional[paddle.Tensor] = None, attention_mask: Optional[paddle.Tensor] = None, labels: Optional[paddle.Tensor] = None, inputs_embeds: Optional[paddle.Tensor] = None, output_hidden_states: Optional[bool] = None, output_attentions: Optional[bool] = None, return_dict: Optional[bool] = None)

The TinyBertForSequenceClassification forward method, overrides the __call__() special method.

- Parameters
input_ids (Tensor) – See TinyBertModel.

token_type_ids (Tensor, optional) – See TinyBertModel.

position_ids (Tensor, optional) – See TinyBertModel.

attention_mask (Tensor, optional) – See TinyBertModel.

labels (Tensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., num_classes - 1]. If num_classes == 1 a regression loss is computed (Mean-Square loss); if num_classes > 1 a classification loss is computed (Cross-Entropy).

output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

return_dict (bool, optional) – Whether to return a SequenceClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns
An instance of SequenceClassifierOutput if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of SequenceClassifierOutput.
Example
    import paddle
    from paddlenlp.transformers.tinybert.modeling import TinyBertForSequenceClassification
    from paddlenlp.transformers.tinybert.tokenizer import TinyBertTokenizer

    tokenizer = TinyBertTokenizer.from_pretrained('tinybert-4l-312d')
    model = TinyBertForSequenceClassification.from_pretrained('tinybert-4l-312d')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    outputs = model(**inputs)
    logits = outputs[0]
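When labels are supplied together with return_dict=True, the loss can be read off the structured output; a sketch (the label value and the .loss/.logits field names follow the usual SequenceClassifierOutput convention and are assumptions here):

    # Single-example batch with a class label.
    labels = paddle.to_tensor([1])

    outputs = model(**inputs, labels=labels, return_dict=True)
    loss = outputs.loss      # scalar classification loss
    logits = outputs.logits  # [batch_size, num_classes]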
class TinyBertForQuestionAnswering(config: paddlenlp.transformers.tinybert.configuration.TinyBertConfig)

Bases: paddlenlp.transformers.tinybert.modeling.TinyBertPretrainedModel
TinyBERT Model with a linear layer on top of the hidden-states output to compute span_start_logits and span_end_logits, designed for question-answering tasks like SQuAD.

- Parameters
config (TinyBertConfig) – An instance of TinyBertConfig used to construct TinyBertForQuestionAnswering.
forward(input_ids: Optional[paddle.Tensor] = None, token_type_ids: Optional[paddle.Tensor] = None, position_ids: Optional[paddle.Tensor] = None, attention_mask: Optional[paddle.Tensor] = None, inputs_embeds: Optional[paddle.Tensor] = None, start_positions: Optional[paddle.Tensor] = None, end_positions: Optional[paddle.Tensor] = None, output_hidden_states: Optional[bool] = None, output_attentions: Optional[bool] = None, return_dict: Optional[bool] = None)

- Parameters
input_ids (Tensor) – See TinyBertModel.

token_type_ids (Tensor, optional) – See TinyBertModel.

position_ids (Tensor, optional) – See TinyBertModel.

attention_mask (Tensor, optional) – See TinyBertModel.

start_positions (Tensor of shape (batch_size,), optional) – Labels for the position (index) of the start of the labelled span, for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

end_positions (Tensor of shape (batch_size,), optional) – Labels for the position (index) of the end of the labelled span, for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

return_dict (bool, optional) – Whether to return a QuestionAnsweringModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns
Returns tuple (start_logits, end_logits).

With the fields:

start_logits (Tensor): A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

end_logits (Tensor): A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

- Return type
tuple
Example
    import paddle
    from paddlenlp.transformers import TinyBertForQuestionAnswering, TinyBertTokenizer

    tokenizer = TinyBertTokenizer.from_pretrained('tinybert-6l-768d-zh')
    model = TinyBertForQuestionAnswering.from_pretrained('tinybert-6l-768d-zh')

    inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
    inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
    # forward returns the (start_logits, end_logits) tuple documented above.
    start_logits, end_logits = model(**inputs)
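To turn the two logit tensors into an answer span, a common post-processing step is to take the argmax of each and decode the tokens in between; an illustrative sketch (a real pipeline would also constrain start <= end and mask special tokens):

    # Pick the most likely start/end token indices (greedy decoding sketch).
    start_index = int(paddle.argmax(start_logits, axis=-1)[0])
    end_index = int(paddle.argmax(end_logits, axis=-1)[0])

    # Recover the answer tokens from the encoded input.
    answer_token_ids = inputs['input_ids'][0][start_index:end_index + 1]
    answer_tokens = tokenizer.convert_ids_to_tokens(answer_token_ids.tolist())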
class TinyBertForMultipleChoice(config: paddlenlp.transformers.tinybert.configuration.TinyBertConfig)

Bases: paddlenlp.transformers.tinybert.modeling.TinyBertPretrainedModel
TinyBERT Model with a linear layer on top of the hidden-states output layer, designed for multiple choice tasks like RocStories/SWAG tasks.

- Parameters
config (TinyBertConfig) – An instance of TinyBertConfig used to construct TinyBertForMultipleChoice.
forward(input_ids: Optional[paddle.Tensor] = None, token_type_ids: Optional[paddle.Tensor] = None, position_ids: Optional[paddle.Tensor] = None, attention_mask: Optional[paddle.Tensor] = None, inputs_embeds: Optional[paddle.Tensor] = None, labels: Optional[paddle.Tensor] = None, output_hidden_states: Optional[bool] = None, output_attentions: Optional[bool] = None, return_dict: Optional[bool] = None)

The TinyBertForMultipleChoice forward method, overrides the __call__() special method.

- Parameters
input_ids (Tensor) – See TinyBertModel, with shape [batch_size, num_choice, sequence_length].

token_type_ids (Tensor, optional) – See TinyBertModel, with shape [batch_size, num_choice, sequence_length].

position_ids (Tensor, optional) – See TinyBertModel, with shape [batch_size, num_choice, sequence_length].

attention_mask (Tensor, optional) – See TinyBertModel, with shape [batch_size, num_choice, sequence_length].

labels (Tensor of shape (batch_size,), optional) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors (see input_ids above).

output_hidden_states (bool, optional) – Whether to return the hidden states of all layers. Defaults to False.

output_attentions (bool, optional) – Whether to return the attention tensors of all attention layers. Defaults to False.

return_dict (bool, optional) – Whether to return a MultipleChoiceModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.
- Returns
Returns tensor reshaped_logits, a tensor of the multiple choice classification logits. Its shape is [batch_size, num_choice] and its dtype is float32.
- Return type
Tensor
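A minimal usage sketch in the style of the examples above (the two-choice batch construction and the 'tinybert-4l-312d' checkpoint are illustrative assumptions):

    import paddle
    from paddlenlp.transformers import TinyBertForMultipleChoice, TinyBertTokenizer

    tokenizer = TinyBertTokenizer.from_pretrained('tinybert-4l-312d')
    model = TinyBertForMultipleChoice.from_pretrained('tinybert-4l-312d')

    # Encode two candidate texts; forward expects input_ids of shape
    # [batch_size, num_choice, sequence_length].
    choices = [
        "Welcome to use PaddlePaddle and PaddleNLP!",
        "Welcome to use PaddlePaddle and PaddleOCR!",
    ]
    encoded = [tokenizer(text)['input_ids'] for text in choices]

    # Pad both choices to the same length before stacking.
    max_len = max(len(ids) for ids in encoded)
    encoded = [ids + [tokenizer.pad_token_id] * (max_len - len(ids)) for ids in encoded]

    input_ids = paddle.to_tensor([encoded])       # [1, 2, sequence_length]
    reshaped_logits = model(input_ids=input_ids)  # [1, 2]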