modeling

class LukeModel(vocab_size=50267, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=514, type_vocab_size=1, entity_vocab_size=500000, entity_emb_size=256, initializer_range=0.02, pad_token_id=1, entity_pad_token_id=0)[source]

Bases: paddlenlp.transformers.luke.modeling.LukePretrainedModel

The bare Luke Model transformer outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters
  • vocab_size (int, optional) -- Vocabulary size of input_ids in LukeModel. It is also the size of the token embedding matrix, and defines the number of different tokens that can be represented by the input_ids passed when calling LukeModel. Defaults to 50267.

  • hidden_size (int, optional) -- Dimensionality of the embedding layer, encoder layer and pooler layer. Defaults to 768.

  • num_hidden_layers (int, optional) -- Number of hidden layers in the Transformer encoder. Defaults to 12.

  • num_attention_heads (int, optional) -- Number of attention heads for each attention layer in the Transformer encoder. Defaults to 12.

  • intermediate_size (int, optional) -- Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are first projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size. Defaults to 3072.

  • hidden_act (str, optional) -- The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other activation function supported by Paddle can be used. Defaults to "gelu".

  • hidden_dropout_prob (float, optional) -- The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to 0.1.

  • attention_probs_dropout_prob (float, optional) -- The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. Defaults to 0.1.

  • max_position_embeddings (int, optional) -- The maximum value of the dimensionality of position encoding, which dictates the maximum supported length of an input sequence. Defaults to 514.

  • type_vocab_size (int, optional) -- The vocabulary size of token_type_ids. Defaults to 1.

  • entity_vocab_size (int, optional) -- Vocabulary size of entity_ids in LukeModel. It is also the size of the entity embedding matrix, and defines the number of different entities that can be represented by the entity_ids passed when calling LukeModel. Defaults to 500000.

  • entity_emb_size (int, optional) -- Dimensionality of the entity embedding layer. Defaults to 256.

  • initializer_range (float, optional) --

    The standard deviation of the normal initializer. Defaults to 0.02.

    Note

    A normal initializer initializes weight matrices from a normal distribution. See LukePretrainedModel.init_weights() for how weights are initialized in LukeModel.

  • pad_token_id (int, optional) -- The index of the padding token in the token vocabulary. Defaults to 1.

  • entity_pad_token_id (int, optional) -- The index of the padding token in the entity vocabulary. Defaults to 0.
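
For a quick illustration of the constructor parameters above, a LukeModel can also be instantiated directly with custom hyperparameters. This is a minimal sketch that builds a small, randomly initialized model; the reduced sizes are illustrative, and from_pretrained (as in the examples below) remains the usual entry point:

from paddlenlp.transformers import LukeModel

# Sketch: a reduced configuration; unspecified arguments keep the
# defaults documented above, and weights are randomly initialized.
model = LukeModel(
    num_hidden_layers=2,
    num_attention_heads=4,
    hidden_size=256,
    intermediate_size=1024,
    entity_emb_size=64)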

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None, entity_ids=None, entity_position_ids=None, entity_token_type_ids=None, entity_attention_mask=None)[source]

The LukeModel forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • token_type_ids (Tensor, optional) --

    Segment token indices to indicate different portions of the inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the inputs have two portions, and indices can be either 0 or 1:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    Its data type should be int64 and it has a shape of [batch_size, sequence_length]. Defaults to None, which means we don't add segment embeddings.

  • position_ids (Tensor, optional) -- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as (batch_size, num_tokens) and dtype as int64. Defaults to None.

  • attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. It is a tensor whose shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means no positions are masked; a mask-construction sketch follows this parameter list.

  • entity_ids (Tensor, optional) -- Indices of entity sequence tokens in the entity vocabulary. They are numerical representations of entities that build the entity input sequence. Its data type should be int64 and it has a shape of [batch_size, entity_sequence_length].

  • entity_position_ids (Tensor, optional) -- Indices of positions of each entity sequence tokens in the position embeddings. Selected in the range [0, max_position_embeddings - 1]. Shape as (batch_size, num_entity_tokens) and dtype as int64. Defaults to None.

  • entity_token_type_ids (Tensor, optional) --

    Segment entity token indices to indicate different portions of the entity inputs. Selected in the range [0, type_vocab_size - 1]. If type_vocab_size is 2, the entity inputs have two portions, and indices can be either 0 or 1:

    • 0 corresponds to a sentence A entity token,

    • 1 corresponds to a sentence B entity token.

    Its data type should be int64 and it has a shape of [batch_size, entity_sequence_length]. Defaults to None.

  • entity_attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid performing attention on some unwanted positions in the entity sequence, usually the paddings or the subsequent positions. Its data type can be int, float or bool, with the same conventions as attention_mask. This tensor is concatenated with attention_mask before attention is computed.
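
A boolean attention_mask can be derived directly from input_ids. A minimal sketch, assuming pad_token_id=1 (the LukeModel default) and illustrative token ids:

import paddle

# Sketch: True marks real tokens, False marks padding positions.
input_ids = paddle.to_tensor([[0, 31414, 232, 2, 1, 1]], dtype='int64')
attention_mask = input_ids != 1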

Returns

Returns tuple (word_hidden_state, entity_hidden_state, pooled_output).

With the fields:

  • word_hidden_state (Tensor):

    Sequence of hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, sequence_length, hidden_size].

  • entity_hidden_state (Tensor):

    Sequence of entity hidden-states at the last layer of the model. Its data type should be float32 and its shape is [batch_size, entity_sequence_length, hidden_size].

  • pooled_output (Tensor):

    The output of the first token (<s>) in the sequence. We "pool" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and its shape is [batch_size, hidden_size].

Return type

tuple

Examples

import paddle
from paddlenlp.transformers import LukeModel, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-base')
model = LukeModel.from_pretrained('luke-base')

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character span of "Beyoncé" in text
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True)
# Add a batch dimension and convert the tokenizer outputs to tensors.
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
word_hidden_state, entity_hidden_state, pooled_output = model(**inputs)
class LukePretrainedModel(name_scope=None, dtype='float32')[source]

Bases: paddlenlp.transformers.model_utils.PretrainedModel

An abstract class for pretrained Luke models. It provides Luke related model_config_file, pretrained_init_configuration, resource_files_names, pretrained_resource_files_map, base_model_prefix for downloading and loading pretrained models. See PretrainedModel for more details.

init_weights(layer)[source]

Initialization hook
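
For illustration, a hook of this kind typically redraws weight matrices from a normal distribution whose standard deviation is initializer_range. The following is only a sketch of the idea, not the exact PaddleNLP implementation; normal_init_hook is a hypothetical name:

import paddle
import paddle.nn as nn

def normal_init_hook(layer, initializer_range=0.02):
    # Sketch: re-initialize Linear and Embedding weights from
    # N(0, initializer_range**2); applied to every sublayer via
    # model.apply(normal_init_hook).
    if isinstance(layer, (nn.Linear, nn.Embedding)):
        layer.weight.set_value(
            paddle.normal(mean=0.0, std=initializer_range,
                          shape=layer.weight.shape))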

base_model_class

alias of paddlenlp.transformers.luke.modeling.LukeModel

class LukeForEntitySpanClassification(luke, num_classes)[source]

Bases: paddlenlp.transformers.luke.modeling.LukePretrainedModel

The LUKE model with a span classification head on top (a linear layer on top of the hidden states output) for tasks such as named entity recognition.

Parameters
  • luke (LukeModel) -- An instance of LukeModel.

  • num_classes (int) -- The number of classes.

forward(entity_start_positions, entity_end_positions, input_ids, token_type_ids=None, position_ids=None, attention_mask=None, entity_ids=None, entity_position_ids=None, entity_token_type_ids=None, entity_attention_mask=None)[source]

The LukeForEntitySpanClassification forward method, overrides the __call__() special method.

Parameters
  • entity_start_positions (Tensor) -- The start positions of entities in the sequence. Its data type should be int64.

  • entity_end_positions (Tensor) -- The end positions of entities in the sequence. Its data type should be int64.

  • input_ids (Tensor) -- See LukeModel.

  • token_type_ids (Tensor, optional) -- See LukeModel.

  • position_ids (Tensor, optional) -- See LukeModel.

  • attention_mask (Tensor, optional) -- See LukeModel.

  • entity_ids (Tensor, optional) -- See LukeModel.

  • entity_position_ids (Tensor, optional) -- See LukeModel.

  • entity_token_type_ids (Tensor, optional) -- See LukeModel.

  • entity_attention_mask (Tensor, optional) -- See LukeModel.

Returns

Returns tensor logits, the entity span classification logits. Shape as [batch_size, num_entities, num_classes] and dtype as float32.

Return type

Tensor

Examples

import paddle
from paddlenlp.transformers import LukeForEntitySpanClassification, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-base')
model = LukeForEntitySpanClassification.from_pretrained('luke-base', num_classes=2)

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
# Token-level start/end positions of the entity span within input_ids.
inputs['entity_start_positions'] = paddle.to_tensor([[1]], dtype='int64')
inputs['entity_end_positions'] = paddle.to_tensor([[2]], dtype='int64')
logits = model(**inputs)
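
Continuing the example, the highest-scoring class for each entity span can be read off the logits:

# Sketch: predicted class index per entity span,
# shape [batch_size, num_entities].
predictions = paddle.argmax(logits, axis=-1)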
class LukeForEntityPairClassification(luke, num_classes)[source]

Bases: paddlenlp.transformers.luke.modeling.LukePretrainedModel

The LUKE model with a classification head on top (a linear layer on top of the hidden states of the two entity tokens) for entity pair classification tasks, such as TACRED.

Parameters
  • luke (LukeModel) -- An instance of LukeModel.

  • num_classes (int) -- The number of classes.

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None, entity_ids=None, entity_position_ids=None, entity_token_type_ids=None, entity_attention_mask=None)[source]

The LukeForEntityPairClassification forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See LukeModel.

  • token_type_ids (Tensor, optional) -- See LukeModel.

  • position_ids (Tensor, optional) -- See LukeModel.

  • attention_mask (Tensor, optional) -- See LukeModel.

  • entity_ids (Tensor, optional) -- See LukeModel.

  • entity_position_ids (Tensor, optional) -- See LukeModel.

  • entity_token_type_ids (Tensor, optional) -- See LukeModel.

  • entity_attention_mask (Tensor, optional) -- See LukeModel.

Returns

Returns tensor logits, the entity pair classification logits. Shape as [batch_size, num_classes] and dtype as float32.

Return type

Tensor

Examples

import paddle
from paddlenlp.transformers import LukeForEntityPairClassification, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-base')
model = LukeForEntityPairClassification.from_pretrained('luke-base', num_classes=2)

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
class LukeForEntityClassification(luke, num_classes)[source]

Bases: paddlenlp.transformers.luke.modeling.LukePretrainedModel

The LUKE model with a classification head on top (a linear layer on top of the hidden state of the first entity token) for entity classification tasks, such as Open Entity.

Parameters
  • luke (LukeModel) -- An instance of LukeModel.

  • num_classes (int) -- The number of classes.

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None, entity_ids=None, entity_position_ids=None, entity_token_type_ids=None, entity_attention_mask=None)[source]

The LukeForEntityClassification forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See LukeModel.

  • token_type_ids (Tensor, optional) -- See LukeModel.

  • position_ids (Tensor, optional) -- See LukeModel.

  • attention_mask (Tensor, optional) -- See LukeModel.

  • entity_ids (Tensor, optional) -- See LukeModel.

  • entity_position_ids (Tensor, optional) -- See LukeModel.

  • entity_token_type_ids (Tensor, optional) -- See LukeModel.

  • entity_attention_mask (Tensor, optional) -- See LukeModel.

Returns

Returns tensor logits, the entity classification logits. Shape as [batch_size, num_classes] and dtype as float32.

Return type

Tensor

Examples

import paddle
from paddlenlp.transformers import LukeForEntityClassification, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-base')
model = LukeForEntityClassification.from_pretrained('luke-base', num_classes=2)

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
class LukeForMaskedLM(luke)[source]

Bases: paddlenlp.transformers.luke.modeling.LukePretrainedModel

Luke Model with a masked language modeling head on top.

Parameters
  • luke (LukeModel) -- An instance of LukeModel.

forward(input_ids, token_type_ids=None, position_ids=None, attention_mask=None, entity_ids=None, entity_position_ids=None, entity_token_type_ids=None, entity_attention_mask=None)[source]

The LukeForMaskedLM forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See LukeModel.

  • token_type_ids (Tensor, optional) -- See LukeModel.

  • position_ids (Tensor, optional) -- See LukeModel.

  • attention_mask (Tensor, optional) -- See LukeModel.

  • entity_ids (Tensor, optional) -- See LukeModel.

  • entity_position_ids (Tensor, optional) -- See LukeModel.

  • entity_token_type_ids (Tensor, optional) -- See LukeModel.

  • entity_attention_mask (Tensor, optional) -- See LukeModel.

Returns

Returns tuple (logits, entity_logits).

With the fields:

  • logits (Tensor):

    The scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].

  • entity_logits (Tensor):

    The scores of masked entity prediction. Its data type should be float32 and its shape is [batch_size, entity_length, entity_vocab_size].

Return type

tuple

Examples

import paddle
from paddlenlp.transformers import LukeForMaskedLM, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-base')
model = LukeForMaskedLM.from_pretrained('luke-base')

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits, entity_logits = model(**inputs)
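
Continuing the example, the most likely token id at each position can be recovered from the logits and mapped back to tokens (a sketch, assuming the tokenizer exposes the standard PaddleNLP convert_ids_to_tokens method):

# Sketch: most likely vocabulary id for every input position.
predicted_ids = paddle.argmax(logits, axis=-1)
tokens = tokenizer.convert_ids_to_tokens(predicted_ids[0].numpy().tolist())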
class LukeForQuestionAnswering(luke)[source]

Bases: paddlenlp.transformers.luke.modeling.LukePretrainedModel

The LUKE model for extractive question-answering tasks, computing span start logits and span end logits.

Parameters
  • luke (LukeModel) -- An instance of LukeModel.

forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None, entity_ids=None, entity_position_ids=None, entity_token_type_ids=None, entity_attention_mask=None)[source]

The LukeForQuestionAnswering forward method, overrides the __call__() special method.

Parameters
  • input_ids (Tensor) -- See LukeModel.

  • token_type_ids (Tensor, optional) -- See LukeModel.

  • position_ids (Tensor, optional) -- See LukeModel.

  • attention_mask (Tensor, optional) -- See LukeModel.

  • entity_ids (Tensor, optional) -- See LukeModel.

  • entity_position_ids (Tensor, optional) -- See LukeModel.

  • entity_token_type_ids (Tensor, optional) -- See LukeModel.

  • entity_attention_mask (Tensor, optional) -- See LukeModel.

Returns

Returns tuple (start_logits, end_logits).

With the fields:

  • start_logits (Tensor):

    A tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

  • end_logits (Tensor):

    A tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].

Return type

tuple

Examples

import paddle
from paddlenlp.transformers import LukeForQuestionAnswering, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained('luke-base')
model = LukeForQuestionAnswering.from_pretrained('luke-base')

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
start_logits, end_logits = model(**inputs)
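
Continuing the example, a greedy answer span can be extracted from the two logit tensors:

# Sketch: indices of the most likely start and end positions,
# shape [batch_size].
start_index = paddle.argmax(start_logits, axis=-1)
end_index = paddle.argmax(end_logits, axis=-1)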