modeling#

class XLMModel(config: XLMConfig)[source]#

Bases: XLMPretrainedModel

The bare XLM Model transformer outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters:

config (XLMConfig) – An instance of XLMConfig.

forward(input_ids=None, langs=None, attention_mask=None, position_ids=None, lengths=None, cache=None, output_attentions=False, output_hidden_states=False)[source]#

The XLMModel forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • langs (Tensor, optional) – A parallel sequence of tokens used to indicate the language of each token in the input. Indices are language ids, which can be obtained from language names using the two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the language name to language id mapping is in model.config['lang2id'] (a dictionary mapping strings to ints). Shape as [batch_size, sequence_length] and dtype as int64. Defaults to None.

  • attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have 0.0 values and the others have 1.0 values. It is a tensor whose shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no positions are masked.

  • position_ids (Tensor, optional) – Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as [batch_size, sequence_length] and dtype as int64. Defaults to None.

  • lengths (Tensor, optional) – Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above); it is kept here for compatibility. Indices selected in [0, ..., sequence_length]. Shape as [batch_size] and dtype as int64. Defaults to None.

  • cache (Tuple[Tuple[Tensor]], optional) – Contains pre-computed hidden-states (keys and values in the attention blocks) as computed by the model. Can be used to speed up sequential decoding. The input_ids whose past states are given to this model should not be passed as input_ids, since they have already been computed. Defaults to None.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. Defaults to False.

  • output_hidden_states (bool, optional) – Whether or not to return the output of all hidden layers. Defaults to False.

Returns:

Returns tuple (last_hidden_state, hidden_states, attentions)

With the fields:

  • last_hidden_state (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • hidden_states (tuple(Tensor), optional):

    Returned when output_hidden_states=True is passed. Tuple of Tensor (one for the output of the embeddings + one for the output of each layer). Each Tensor has a data type of float32 and its shape is [batch_size, sequence_length, hidden_size].

  • attentions (tuple(Tensor), optional):

    Returned when output_attentions=True is passed. Tuple of Tensor (one for each layer). Each Tensor has a data type of float32 and its shape is [batch_size, num_heads, sequence_length, sequence_length].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import XLMModel, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMModel.from_pretrained("xlm-mlm-tlm-xnli15-1024")

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!", lang="en")
inputs = {k: paddle.to_tensor([v], dtype="int64") for k, v in inputs.items()}
inputs["langs"] = paddle.ones_like(inputs["input_ids"]) * tokenizer.lang2id["en"]

last_hidden_state = model(**inputs)[0]
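
An illustrative continuation of the example above (not part of the original example): supplying an explicit integer attention_mask (1 = attend, 0 = masked) and requesting all hidden states. The mask construction here is a sketch; with a single unpadded sequence it is all ones.

# Build an integer mask from the padding token id (all ones here, no padding).
attention_mask = (inputs["input_ids"] != tokenizer.pad_token_id).astype("int64")
outputs = model(**inputs, attention_mask=attention_mask, output_hidden_states=True)
last_hidden_state, hidden_states = outputs[0], outputs[1]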
get_input_embeddings()[source]#

Get the input embeddings of the model.

Returns:

The input embedding layer of the model.

Return type:

nn.Embedding

set_input_embeddings(value)[source]#

Set new input embeddings for the model.

Parameters:

value (Embedding) – The new input embedding for the model.

Raises:

NotImplementedError – If the model has not implemented the set_input_embeddings method.
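
A minimal sketch of reading and replacing the input embeddings, assuming an XLMModel loaded as in the example above; the freshly initialized replacement embedding is purely illustrative.

import paddle

emb = model.get_input_embeddings()             # a paddle.nn.Embedding
vocab_size, hidden_size = emb.weight.shape
# Swap in a new embedding of the same shape (illustrative only).
model.set_input_embeddings(paddle.nn.Embedding(vocab_size, hidden_size))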

class XLMPretrainedModel(*args, **kwargs)[source]#

Bases: PretrainedModel

An abstract class for pretrained XLM models. It provides XLM related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration, base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

config_class#

alias of XLMConfig

base_model_class#

alias of XLMModel

class XLMWithLMHeadModel(config: XLMConfig)[source]#

Bases: XLMPretrainedModel

The XLM Model transformer with a masked language modeling head on top (linear layer with weights tied to the input embeddings).

Parameters:

config (XLMConfig) – An instance of XLMConfig.

forward(input_ids=None, langs=None, attention_mask=None, position_ids=None, lengths=None, cache=None, labels=None)[source]#

The XLMWithLMHeadModel forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See XLMModel.

  • langs (Tensor, optional) – See XLMModel.

  • attention_mask (Tensor, optional) – See XLMModel.

  • position_ids (Tensor, optional) – See XLMModel.

  • lengths (Tensor, optional) – See XLMModel.

  • cache (Tuple[Tuple[Tensor]], optional) – See XLMModel.

  • labels (Tensor, optional) – The labels for computing the masked language modeling loss. Indices are selected in [-100, 0, ..., vocab_size-1]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., vocab_size-1]. Shape as [batch_size, sequence_length] and dtype as int64. Defaults to None.

Returns:

Returns tuple (loss, logits). With the fields:

  • loss (Tensor):

    Returned when labels is provided. Language modeling loss (for next-token prediction). Its data type should be float32 and its shape is [1,].

  • logits (Tensor):

    Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import XLMWithLMHeadModel, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-tlm-xnli15-1024')
model = XLMWithLMHeadModel.from_pretrained('xlm-mlm-tlm-xnli15-1024')

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!", lang="en")
inputs = {k: paddle.to_tensor([v], dtype="int64") for k, v in inputs.items()}
inputs["langs"] = paddle.ones_like(inputs["input_ids"]) * tokenizer.lang2id["en"]
inputs["labels"] = inputs["input_ids"]

loss, logits = model(**inputs)
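
An illustrative follow-up, not part of the original example: greedy per-position predictions can be read off the logits with an argmax, and decoded back to tokens through the tokenizer (any masking or decoding logic is left out).

# Predicted token ids at each position, shape [batch_size, sequence_length].
pred_ids = paddle.argmax(logits, axis=-1)
pred_tokens = tokenizer.convert_ids_to_tokens(pred_ids[0].tolist())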
class XLMForSequenceClassification(config: XLMConfig)[source]#

Bases: XLMPretrainedModel

The XLMModel with a sequence classification head on top (a linear layer). XLMForSequenceClassification uses the first token to do the classification.

Parameters:

config (XLMConfig) – An instance of XLMConfig.

forward(input_ids=None, langs=None, attention_mask=None, position_ids=None, lengths=None)[source]#

The XLMForSequenceClassification forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See XLMModel.

  • langs (Tensor, optional) – See XLMModel.

  • attention_mask (Tensor, optional) – See XLMModel.

  • position_ids (Tensor, optional) – See XLMModel.

  • lengths (Tensor, optional) – See XLMModel.

Returns:

A tensor of the input text classification logits. Shape as [batch_size, num_classes] and dtype as float32.

Return type:

logits (Tensor)

Example

import paddle
from paddlenlp.transformers import XLMForSequenceClassification, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-tlm-xnli15-1024", num_classes=2)

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!", lang="en")
inputs = {k: paddle.to_tensor([v], dtype="int64") for k, v in inputs.items()}
inputs["langs"] = paddle.ones_like(inputs["input_ids"]) * tokenizer.lang2id["en"]

logits = model(**inputs)
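
As an illustrative follow-up (not part of the original example), the logits can be turned into class probabilities and a predicted label:

import paddle.nn.functional as F

# Class probabilities, shape [batch_size, num_classes].
probs = F.softmax(logits, axis=-1)
# Predicted class id per input, shape [batch_size].
pred_label = paddle.argmax(probs, axis=-1)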
class XLMForTokenClassification(config: XLMConfig)[source]#

Bases: XLMPretrainedModel

XLMModel with a linear layer on top of the hidden-states output layer, designed for token classification tasks such as NER.

Parameters:

config (XLMConfig) – An instance of XLMConfig.

forward(input_ids=None, langs=None, attention_mask=None, position_ids=None, lengths=None)[source]#

The XLMForTokenClassification forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See XLMModel.

  • langs (Tensor, optional) – See XLMModel.

  • attention_mask (Tensor, optional) – See XLMModel.

  • position_ids (Tensor, optional) – See XLMModel.

  • lengths (Tensor, optional) – See XLMModel.

Returns:

A tensor of the input token classification logits. Shape as [batch_size, sequence_length, num_classes] and dtype as float32.

Return type:

logits (Tensor)

Example

import paddle
from paddlenlp.transformers import XLMForTokenClassification, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMForTokenClassification.from_pretrained("xlm-mlm-tlm-xnli15-1024", num_classes=2)

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!", lang="en")
inputs = {k: paddle.to_tensor([v], dtype="int64") for k, v in inputs.items()}
inputs["langs"] = paddle.ones_like(inputs["input_ids"]) * tokenizer.lang2id["en"]

logits = model(**inputs)
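
An illustrative follow-up: per-token tag predictions (e.g. NER tag indices) are the argmax over the class dimension.

# Predicted class id per token, shape [batch_size, sequence_length].
pred_tags = paddle.argmax(logits, axis=-1)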
class XLMForQuestionAnsweringSimple(config: XLMConfig)[source]#

Bases: XLMPretrainedModel

XLMModel with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).

Parameters:

config (XLMConfig) – An instance of XLMConfig.

forward(input_ids=None, langs=None, attention_mask=None, position_ids=None, lengths=None)[source]#

The XLMForQuestionAnsweringSimple forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See XLMModel.

  • langs (Tensor, optional) – See XLMModel.

  • attention_mask (Tensor, optional) – See XLMModel.

  • position_ids (Tensor, optional) – See XLMModel.

  • lengths (Tensor, optional) – See XLMModel.

Returns:

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the input token classification logits indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import XLMForQuestionAnsweringSimple, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-tlm-xnli15-1024")

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!", lang="en")
inputs = {k: paddle.to_tensor([v], dtype="int64") for k, v in inputs.items()}
inputs["langs"] = paddle.ones_like(inputs["input_ids"]) * tokenizer.lang2id["en"]

outputs = model(**inputs)

start_logits = outputs[0]
end_logits = outputs[1]
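
An illustrative, greedy way to read a span off the two logit tensors (not part of the original example; a real pipeline would also constrain start <= end and map positions back to text):

# Highest-scoring start and end positions, each of shape [batch_size].
start_index = paddle.argmax(start_logits, axis=-1)
end_index = paddle.argmax(end_logits, axis=-1)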
class XLMForMultipleChoice(config: XLMConfig)[source]#

Bases: XLMPretrainedModel

XLMModel with a linear layer on top of the hidden-states output layer, designed for multiple choice tasks like RocStories/SWAG.

Parameters:

config (XLMConfig) – An instance of XLMConfig.

forward(input_ids=None, langs=None, attention_mask=None, position_ids=None, lengths=None)[source]#

The XLMForMultipleChoice forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – See XLMModel; shape as [batch_size, num_choice, sequence_length].

  • langs (Tensor, optional) – See XLMModel; shape as [batch_size, num_choice, sequence_length].

  • attention_mask (Tensor, optional) – See XLMModel; shape as [batch_size, num_choice, sequence_length].

  • position_ids (Tensor, optional) – See XLMModel; shape as [batch_size, num_choice, sequence_length].

  • lengths (Tensor, optional) – See XLMModel; shape as [batch_size, num_choice].

Returns:

A tensor of the multiple choice classification logits. Shape as [batch_size, num_choice] and dtype as float32.

Return type:

reshaped_logits (Tensor)

Example

import paddle
from paddlenlp.transformers import XLMForMultipleChoice, XLMTokenizer
from paddlenlp.data import Pad

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMForMultipleChoice.from_pretrained("xlm-mlm-tlm-xnli15-1024", num_choices=2)

data = [
    {
        "question": "how do you turn on an ipad screen?",
        "answer1": "press the volume button.",
        "answer2": "press the lock button.",
        "label": 1,
    },
    {
        "question": "how do you indent something?",
        "answer1": "leave a space before starting the writing",
        "answer2": "press the spacebar",
        "label": 0,
    },
]
text = []
text_pair = []
for d in data:
    text.append(d["question"])
    text_pair.append(d["answer1"])
    text.append(d["question"])
    text_pair.append(d["answer2"])

inputs = tokenizer(text, text_pair, lang="en")
input_ids = Pad(axis=0, pad_val=tokenizer.pad_token_id)(inputs["input_ids"])
input_ids = paddle.to_tensor(input_ids, dtype="int64")
langs = paddle.ones_like(input_ids) * tokenizer.lang2id["en"]

reshaped_logits = model(
    input_ids=input_ids,
    langs=langs,
)
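
As an illustrative follow-up (not part of the original example), the predicted choice per question is the argmax over the choice dimension:

# Index of the highest-scoring choice per question, shape [batch_size].
pred_choice = paddle.argmax(reshaped_logits, axis=-1)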