modeling#

class ReformerModel(config: ReformerConfig)[source]#

Bases: ReformerPretrainedModel

The bare Reformer Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters:
  • tie_word_embeddings (bool, optional) – Whether to tie input and output embeddings. Defaults to False.

  • is_decoder (bool, optional) – Whether or not to use a causal mask in addition to the attention_mask passed to ReformerModel. When using the Reformer for causal language modeling, this argument should be set to True. Defaults to True.

  • chunk_size_feed_forward (int, optional) – The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. Defaults to 0.

  • pad_token_id (int, optional) – The id of the padding token. Defaults to 0.

  • hash_seed (int, optional) – Seed that can be used to make locality sensitive hashing in LSHSelfAttention deterministic. This should only be set for testing purposes. For evaluation and training purposes, hash_seed should be left as None to ensure fully random rotations in the locality sensitive hashing scheme. Defaults to None.

  • vocab_size (int, optional) – Vocabulary size of inputs_ids in ReformerModel. It is also the vocabulary size of the token embedding matrix. Defines the number of different tokens that can be represented by the inputs_ids passed when calling ReformerModel. Defaults to 258.

  • attention_head_size (int, optional) – Dimensionality of the projected key, query and value vectors. Defaults to 128.

  • hidden_size (int, optional) – Dimensionality of the embedding layer and encoder layers. Defaults to 1024.

  • num_attention_heads (int, optional) – Number of attention heads for each attention layer in the Transformer encoder. Defaults to 8.

  • num_hashes (int, optional) – Number of hashing rounds (e.g., number of random rotations) in the locality sensitive hashing scheme. The higher num_hashes, the more accurate the LSHSelfAttention becomes, but also the more memory and time intensive the hashing becomes. Defaults to 4.

  • num_hidden_layers (int, optional) – Number of hidden layers in the Transformer encoder. Defaults to 12.

  • num_buckets (int or List[int], optional) – Number of buckets the key query vectors can be “hashed into” using the locality sensitive hashing scheme. Each query key vector is hashed into a hash in 1, ..., num_buckets. The number of buckets can also be factorized into a list for improved memory complexity. In this case, each query key vector is hashed into a hash in 1-1, 1-2, ..., num_buckets[0]-1, ..., num_buckets[0]-num_buckets[1] if num_buckets is factorized into two factors. The number of buckets (or the product of the factors) should approximately equal sequence length / lsh_chunk_length. If num_buckets is not set, a good value is calculated on the fly. Defaults to 512.

  • lsh_attn_chunk_length (int, optional) – Length of chunk which attends to itself in LSHSelfAttention. Chunking reduces memory complexity from sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk length (chunked self attention). Defaults to 256.

  • local_attn_chunk_length (int, optional) – Length of chunk which attends to itself in LocalSelfAttention. Chunking reduces memory complexity from sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk length (chunked self attention). Defaults to 128.

  • lsh_num_chunks_after (int, optional) – Number of following neighbouring chunks to attend to, in addition to the chunk itself, in the LSHSelfAttention layer. Defaults to 0.

  • lsh_num_chunks_before (int, optional) – Number of previous neighbouring chunks to attend to, in addition to the chunk itself, in the LSHSelfAttention layer. Defaults to 1.

  • local_num_chunks_after (int, optional) – Number of following neighbouring chunks to attend to, in addition to the chunk itself, in the LocalSelfAttention layer. Defaults to 0.

  • local_num_chunks_before (int, optional) – Number of previous neighbouring chunks to attend to, in addition to the chunk itself, in the LocalSelfAttention layer. Defaults to 1.

  • hidden_act (str, optional) – The non-linear activation function (function or string) in the feed forward layer in the residual attention block. If string, "gelu", "relu", "tanh", "mish" and "gelu_new" are supported. Defaults to "relu".

  • feed_forward_size (int, optional) – Dimensionality of the feed_forward layer in the residual attention block. Defaults to 4096.

  • hidden_dropout_prob (float, optional) – The dropout ratio for all fully connected layers in the embeddings and encoder. Defaults to 0.2.

  • lsh_attention_probs_dropout_prob (float, optional) – The dropout ratio for the attention probabilities in LSHSelfAttention. Defaults to 0.1.

  • local_attention_probs_dropout_prob (float, optional) – The dropout ratio for the attention probabilities in LocalSelfAttention. Defaults to 0.2.

  • max_position_embeddings (int, optional) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). Defaults to 65536.

  • initializer_range (float, optional) –

    The standard deviation of the normal initializer. Defaults to 0.02.

    Note

    A normal_initializer initializes weight matrices as normal distributions. See ReformerPretrainedModel._init_weights() for how weights are initialized in ReformerModel.

  • layer_norm_eps (float, optional) – The epsilon used by the layer normalization layers. Defaults to 1e-12.

  • axial_pos_embds (bool, optional) – Whether or not to use axial position embeddings. Defaults to True.

  • axial_pos_shape (List[int], optional) – The position dims of the axial position encodings. During training, the product of the position dims has to be equal to the sequence length. Defaults to [128, 512].

  • axial_pos_embds_dim (List[int], optional) – The embedding dims of the axial position encodings. The sum of the embedding dims has to be equal to the hidden size. Defaults to [256, 768].

  • axial_norm_std (float, optional) – The standard deviation of the normal_initializer for initializing the weight matrices of the axial positional encodings. Defaults to 1.0.

  • chunk_size_lm_head (int, optional) – The chunk size of the final language model feed forward head layer. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. Defaults to 0.

  • attn_layers (List[str], optional) – List of attention layer types in ascending order. It can be chosen between a LSHSelfAttention layer ("lsh") and a LocalSelfAttention layer ("local"). Defaults to ["local", "local", "lsh", "local", "local", "local", "lsh", "local", "local", "local", "lsh", "local"].
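
The following is a minimal sketch of constructing a randomly initialized ReformerModel from a ReformerConfig. It assumes ReformerConfig accepts the fields documented above as keyword arguments; any field that is not overridden keeps the default listed above.

from paddlenlp.transformers import ReformerConfig, ReformerModel

# Override a few documented fields; everything else keeps its default.
config = ReformerConfig(
    num_hashes=2,      # fewer LSH hashing rounds: faster but slightly less accurate
    is_decoder=True,   # use a causal mask, as required for causal language modeling
    hash_seed=None,    # keep LSH rotations fully random (recommended outside tests)
)
model = ReformerModel(config)  # randomly initialized weights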

get_input_embeddings()[source]#

Get the input embedding of the model.

Returns:

The input embedding of the model.

Return type:

nn.Embedding

set_input_embeddings(value)[source]#

Set a new input embedding for the model.

Parameters:

value (Embedding) – The new input embedding for the model.

Raises:

NotImplementedError – If the model has not implemented the set_input_embeddings method.
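
A short sketch showing how these two accessors can be combined, for example to swap in a larger token embedding table. The new vocabulary size of 320 below is purely illustrative.

import paddle.nn as nn
from paddlenlp.transformers import ReformerModel

model = ReformerModel.from_pretrained('reformer-crime-and-punishment')

old_embedding = model.get_input_embeddings()   # an nn.Embedding
hidden_size = old_embedding.weight.shape[1]

# Hypothetical replacement: a larger vocabulary with the same hidden size.
new_embedding = nn.Embedding(320, hidden_size)
model.set_input_embeddings(new_embedding)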

forward(input_ids: Tensor | None = None, attention_mask: Tensor | None = None, position_ids: Tensor | None = None, num_hashes: int | None = None, cache: List[Tuple[Tensor]] | None = None, use_cache: bool | None = False, inputs_embeds: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#

The ReformerModel forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor) – Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • attention_mask (Tensor, optional) – Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int or float. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have 0.0 values and the others have 1.0 values. It is a tensor with shape broadcasted to [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no positions are masked.

  • position_ids (Tensor, optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as [batch_size, num_tokens] and dtype as int64. Defaults to None.

  • num_hashes (int, optional) – The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites the default defined in config["num_hashes"]. Defaults to None.

  • cache (List[tuple(Tensor, Tensor)], optional) – List of tuple(Tensor, Tensor) of length config["num_hidden_layers"], with the first element being the previous buckets of shape [batch_size, num_heads, num_hashes, sequence_length] and the second being the previous hidden_states of shape [batch_size, sequence_length, hidden_size]. Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed up sequential decoding. Defaults to None.

  • use_cache (bool, optional) – Whether or not to use cache. If set to True, cache states are returned and can be used to speed up decoding. Defaults to False.

  • inputs_embeds (Tensor, optional) – Optionally, instead of passing input_ids you can pass an embedded representation directly. This is useful if you want more control over how to convert input_ids indices into associated vectors. Defaults to None.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. Defaults to False.

  • output_hidden_states (bool, optional) – Whether or not to return the output of all hidden layers. Defaults to False.

  • return_dict (bool, optional) – Whether to return a ModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

An instance of BaseModelOutputWithPoolingAndCrossAttentions if return_dict=True. Otherwise it returns a tuple of tensors corresponding to ordered and not None (depending on the input arguments) fields of BaseModelOutputWithPoolingAndCrossAttentions.

Example

import paddle
from paddlenlp.transformers import ReformerModel, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')
model = ReformerModel.from_pretrained('reformer-crime-and-punishment')
model.eval()

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}

outputs = model(**inputs)
last_hidden_state = outputs[0]
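
As a follow-up, when return_dict=True is passed the forward returns a BaseModelOutputWithPoolingAndCrossAttentions, so the same result can be read by field name instead of by position:

outputs = model(**inputs, return_dict=True)
last_hidden_state = outputs.last_hidden_state
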
class ReformerPretrainedModel(*args, **kwargs)[source]#

Bases: PretrainedModel

An abstract class for pretrained Reformer models. It provides Reformer related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration, base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

config_class#

alias of ReformerConfig

base_model_class#

alias of ReformerModel

class ReformerForSequenceClassification(config: ReformerConfig)[source]#

Bases: ReformerPretrainedModel

The Reformer Model transformer with a sequence classification head on top (linear layer).

Parameters:
  • reformer (ReformerModel) – An instance of ReformerModel.

  • num_classes (int, optional) – The number of classes. Defaults to 2.

  • dropout (float, optional) – The dropout probability for output of Reformer. If None, use the same value as hidden_dropout_prob of ReformerModel instance reformer. Defaults to None.

forward(input_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, num_hashes: int | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ReformerModel.

  • position_ids (Tensor, optional) – See ReformerModel.

  • attention_mask (Tensor, optional) – See ReformerModel.

  • num_hashes (int, optional) – See ReformerModel.

  • inputs_embeds (Tensor, optional) – See ReformerModel.

  • labels (Tensor, optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., num_classes - 1]. If num_classes == 1, a regression loss is computed (mean-square loss); if num_classes > 1, a classification loss is computed (cross-entropy). Shape is [batch_size,] and dtype is int64.

  • output_attentions (bool, optional) – See ReformerModel.

  • output_hidden_states (bool, optional) – See ReformerModel.

  • return_dict (bool, optional) – Whether to return a SequenceClassifierOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

Returns tuple (loss, logits, hidden_states, attentions).

With the fields:

  • loss (Tensor):

    Returned when labels is provided. Classification (or regression if num_classes==1) loss. Its data type should be float32 and its shape is [1,].

  • logits (Tensor):

    Classification (or regression if num_classes==1) scores (before SoftMax). Its data type should be float32 and its shape is [batch_size, num_classes].

  • hidden_states (tuple(Tensor)):

    See ReformerModel.

  • attentions (tuple(Tensor)):

    See ReformerModel.

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import ReformerForSequenceClassification, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')
model = ReformerForSequenceClassification.from_pretrained('reformer-crime-and-punishment', is_decoder=False)
model.eval()

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
output = model(**inputs, labels=paddle.to_tensor([0]))

loss = output[0]
logits = output[1]
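
A small follow-up showing how the classification logits from the example above can be turned into class probabilities and a predicted label with standard Paddle ops:

import paddle.nn.functional as F

probs = F.softmax(logits, axis=-1)            # shape [batch_size, num_classes]
pred_label = paddle.argmax(probs, axis=-1)    # predicted class id per example
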
class ReformerForQuestionAnswering(config: ReformerConfig)[source]#

Bases: ReformerPretrainedModel

Reformer Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits).

Parameters:
  • reformer (ReformerModel) – An instance of ReformerModel.

  • dropout (float, optional) – The dropout probability for output of Reformer. If None, use the same value as hidden_dropout_prob of ReformerModel instance reformer. Defaults to None.

forward(input_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, num_hashes: int | None = None, start_positions: Tensor | None = None, end_positions: Tensor | None = None, inputs_embeds: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ReformerModel.

  • position_ids (Tensor, optional) – See ReformerModel.

  • attention_mask (Tensor, optional) – See ReformerModel.

  • num_hashes (int, optional) – See ReformerModel.

  • start_positions (Tensor, optional) – Labels for the position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. Shape is [batch_size,] and dtype is int64.

  • end_positions (Tensor, optional) – Labels for the position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. Shape is [batch_size,] and dtype is int64.

  • inputs_embeds (Tensor, optional) – See ReformerModel.

  • output_attentions (bool, optional) – See ReformerModel.

  • output_hidden_states (bool, optional) – See ReformerModel.

  • return_dict (bool, optional) – Whether to return a QuestionAnsweringModelOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

Returns tuple (loss, logits, hidden_states, attentions).

With the fields:

  • loss (Tensor):

    Returned when start_positions and end_positions are provided. Total span-extraction loss, computed from the cross-entropy of the start and end positions. Its data type should be float32 and its shape is [1,].

  • start_logits (Tensor):

    A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • hidden_states (tuple(Tensor)):

    See ReformerModel.

  • attentions (tuple(Tensor)):

    See ReformerModel.

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import ReformerForQuestionAnswering, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('reformer-crime-and-punishment', is_decoder=False)
model.eval()

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
outputs = model(**inputs)

start_logits = outputs[0]
end_logits = outputs[1]
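
A small follow-up sketch showing one simple way to turn the start and end logits above into an answer span via greedy argmax; tokenizer.convert_ids_to_tokens is the standard tokenizer helper, and a real pipeline would additionally ensure end_index >= start_index:

start_index = paddle.argmax(start_logits, axis=-1).item()
end_index = paddle.argmax(end_logits, axis=-1).item()

answer_ids = inputs["input_ids"][0][start_index:end_index + 1]
answer_tokens = tokenizer.convert_ids_to_tokens(answer_ids.tolist())
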
class ReformerModelWithLMHead(config: ReformerConfig)[source]#

Bases: ReformerPretrainedModel

The Reformer Model transformer with a language modeling head on top.

Parameters:

reformer (ReformerModel) – An instance of ReformerModel.

get_output_embeddings()[source]#

To be overwritten by models with output embeddings.

Returns:

The output embedding of the model.

Return type:

Optional[Embedding]

forward(input_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, num_hashes: int | None = None, cache: List[Tuple[Tensor]] | None = None, use_cache: bool | None = False, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ReformerModel.

  • position_ids (Tensor, optional) – See ReformerModel.

  • attention_mask (Tensor, optional) – See ReformerModel.

  • num_hashes (int, optional) – See ReformerModel.

  • cache (List[tuple(Tensor, Tensor)], optional) – See ReformerModel.

  • use_cache (bool, optional) – See ReformerModel.

  • inputs_embeds (Tensor, optional) – See ReformerModel.

  • labels (Tensor, optional) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., vocab_size]. Shape is [batch_size, sequence_length] and dtype is int64.

  • output_attentions (bool, optional) – See ReformerModel.

  • output_hidden_states (bool, optional) – See ReformerModel.

Returns:

Returns tuple (loss, logits, cache, hidden_states, attentions).

With the fields:

  • loss (Tensor):

    Returned when labels is provided. Language modeling loss (for next-token prediction). Its data type should be float32 and its shape is [1,].

  • logits (Tensor):

    Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

  • cache (List[tuple(Tensor, Tensor)]):

    See ReformerModel.

  • hidden_states (tuple(Tensor)):

    See ReformerModel.

  • attentions (tuple(Tensor)):

    See ReformerModel.

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import ReformerModelWithLMHead, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')
model = ReformerModelWithLMHead.from_pretrained('reformer-crime-and-punishment')
model.eval()

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
output = model(**inputs, labels=inputs["input_ids"])

loss = output[0]
logits = output[1]
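
As a quick follow-up, assuming the loss above is the mean token-level cross-entropy (the usual convention), perplexity follows directly:

# Perplexity of the model on this input, assuming `loss` is the mean cross-entropy.
perplexity = paddle.exp(loss)
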
class ReformerForMaskedLM(config: ReformerConfig)[source]#

Bases: ReformerPretrainedModel

The Reformer Model transformer with a masked language modeling head on top.

Parameters:

reformer (ReformerModel) – An instance of ReformerModel.

get_output_embeddings()[source]#

To be overwritten by models with output embeddings.

Returns:

The output embedding of the model.

Return type:

Optional[Embedding]

forward(input_ids: Tensor | None = None, position_ids: Tensor | None = None, attention_mask: Tensor | None = None, num_hashes: int | None = None, inputs_embeds: Tensor | None = None, labels: Tensor | None = None, output_hidden_states: bool | None = None, output_attentions: bool | None = None, return_dict: bool | None = None)[source]#
Parameters:
  • input_ids (Tensor) – See ReformerModel.

  • position_ids (Tensor, optional) – See ReformerModel.

  • attention_mask (Tensor, optional) – See ReformerModel.

  • num_hashes (int, optional) – See ReformerModel.

  • inputs_embeds (Tensor, optional) – See ReformerModel.

  • labels (Tensor, optional) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., vocab_size]. Shape is [batch_size, sequence_length] and dtype is int64.

  • output_attentions (bool, optional) – See ReformerModel.

  • output_hidden_states (bool, optional) – See ReformerModel.

  • return_dict (bool, optional) – Whether to return a MaskedLMOutput object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

Returns tuple (loss, logits, hidden_states, attentions).

With the fields:

  • loss (Tensor):

    Returned when labels is provided. Masked language modeling loss. Its data type should be float32 and its shape is [1,].

  • logits (Tensor):

    Prediction scores of the masked language modeling head (scores for each vocabulary token before SoftMax). Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

  • hidden_states (tuple(Tensor)):

    See ReformerModel.

  • attentions (tuple(Tensor)):

    See ReformerModel.

Return type:

tuple

Example

import paddle
from paddlenlp.transformers import ReformerForMaskedLM, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained('reformer-crime-and-punishment')
model = ReformerForMaskedLM.from_pretrained('reformer-crime-and-punishment', is_decoder=False)
model.eval()

inputs = tokenizer("Welcome to use PaddlePaddle and PaddleNLP!")
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
output = model(**inputs, labels=inputs["input_ids"])

loss = output[0]
logits = output[1]
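
A small follow-up: greedy per-position token predictions can be read off the logits from the example above (tokenizer.convert_ids_to_tokens is the standard tokenizer helper):

pred_ids = paddle.argmax(logits, axis=-1)      # shape [batch_size, sequence_length]
pred_tokens = tokenizer.convert_ids_to_tokens(pred_ids[0].tolist())
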
class ReformerLayer(config: ReformerConfig, layer_id=0)[source]#

Bases: Layer

forward(prev_attn_output, hidden_states, attention_mask=None, num_hashes=None, cache=None, use_cache=False, orig_sequence_length=None, output_attentions=False)[source]#

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments