modeling

class BlenderbotModel(vocab_size, bos_token_id=1, pad_token_id=0, eos_token_id=2, decoder_start_token_id=1, d_model=1280, num_encoder_layers=2, num_decoder_layers=12, encoder_attention_heads=32, decoder_attention_heads=32, encoder_ffn_dim=5120, decoder_ffn_dim=5120, dropout=0.1, activation_function='gelu', attention_dropout=0.0, activation_dropout=0.0, max_position_embeddings=128, init_std=0.02, scale_embedding=True, normalize_before=True)[source]

Bases: paddlenlp.transformers.blenderbot.modeling.BlenderbotPretrainedModel

Construct a bare Blenderbot Model.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods the library implements for all its models.

This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters
  • vocab_size (int) -- Vocabulary size of the Blenderbot model.

  • bos_token_id (int, optional) -- The id for the beginning-of-sentence token. Defaults to 1.

  • pad_token_id (int, optional) -- The id for the padding token. Defaults to 0.

  • eos_token_id (int, optional) -- The id for the end-of-sentence token. Defaults to 2.

  • decoder_start_token_id (int, optional) -- The id indicating the start of decoding sentence. Defaults to 1.

  • d_model (int, optional) -- Dimensionality of the layers and the pooler layer. Defaults to 1280.

  • num_encoder_layers (int, optional) -- Number of Transformer encoder layers for BlenderbotEncoder. Defaults to 2.

  • num_decoder_layers (int, optional) -- Number of Transformer decoder layers for BlenderbotDecoder. Defaults to 12.

  • encoder_attention_heads (int, optional) -- Number of attention heads for each Transformer encoder layer in BlenderbotEncoder. Defaults to 32.

  • decoder_attention_heads (int, optional) -- Number of attention heads for each Transformer decoder layer in BlenderbotDecoder. Defaults to 32.

  • encoder_ffn_dim (int, optional) -- Dimensionality of the feed-forward layer for each Transformer encoder layer in BlenderbotEncoder. Defaults to 5120.

  • decoder_ffn_dim (int, optional) -- Dimensionality of the feed-forward layer for each Transformer decoder layer in BlenderbotDecoder. Defaults to 5120.

  • dropout (float, optional) -- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. Defaults to 0.1.

  • activation_function (str, optional) -- The non-linear activation function (function or string) in the encoder and pooler. "gelu", "relu" and any other paddle supported activation functions are supported. Defaults to "gelu".

  • attention_dropout (float, optional) -- The dropout ratio for the attention probabilities. Defaults to 0.0.

  • activation_dropout (float, optional) -- The dropout ratio for activations inside the fully connected layer. Defaults to 0.0.

  • max_position_embeddings (int, optional) -- The max position index of an input sequence. Defaults to 128.

  • init_std (float, optional) -- The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Defaults to 0.02.

  • scale_embedding (bool, optional) -- Indicate whether to scale embeddings by dividing by sqrt(d_model). Defaults to True.

  • normalize_before (bool, optional) -- Indicate whether to put layer normalization into the preprocessing of the MHA and FFN sub-layers. If True, pre-processing is layer normalization and post-processing includes dropout and residual connection. Otherwise, there is no pre-processing and post-processing includes dropout, residual connection and layer normalization. Defaults to True.
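
The constructor arguments above can also be passed directly to build a randomly initialized model, rather than loading weights via from_pretrained. A minimal sketch, assuming the constructor signature shown at the top of this entry; the vocab_size value is purely illustrative and not tied to any released checkpoint:

from paddlenlp.transformers import BlenderbotModel

# Randomly initialized model built from the documented constructor arguments.
# vocab_size below is an illustrative value, not the size of any released vocabulary.
model = BlenderbotModel(vocab_size=8008,
                        d_model=1280,
                        num_encoder_layers=2,
                        num_decoder_layers=12,
                        encoder_attention_heads=32,
                        decoder_attention_heads=32)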

forward(input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, encoder_output=None, use_cache=False, cache=None, **kwargs)[source]
Parameters
  • input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • attention_mask (Tensor, optional) --

    Mask to indicate whether to perform attention on each input token or not. The values should be either 0 or 1. The attention scores will be set to -infinity for any positions in the mask that are 0, and will be unchanged for positions that are 1.

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    Its data type should be float32 and it has a shape of [batch_size, sequence_length]. Defaults to None.

  • decoder_input_ids (Tensor, optional) -- If not provided, decoder_input_ids will be automatically generated based on decoder_start_token_id and input_ids.

  • decoder_attention_mask (Tensor, optional) -- If not provided, the default decoder_attention_mask will be a tensor whose upper triangular part is filled with -np.inf; its shape will be (decoder_length, decoder_length). A sketch of such a mask follows this parameter list.

  • encoder_output (Tensor, optional) -- The output of the encoder. If not provided, an encoder_output will be generated from BlenderbotEncoder. Defaults to None.

  • use_cache (bool, optional) -- Indicates whether to use cache to speed up decoding. Defaults to False.

  • cache (list, optional) -- It is a list, and each element in the list is a tuple (incremental_cache, static_cache). See paddle.nn.TransformerDecoder.gen_cache for more details. It is only used for inference and should be None for training. Defaults to None.
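
As an illustration of the default decoder_attention_mask described above (a causal mask whose upper triangular part is -inf), here is a minimal sketch of how such a mask could be built by hand; decoder_length is a hypothetical value chosen for illustration:

import paddle

decoder_length = 8  # hypothetical decoder sequence length
# Fill the matrix with -inf, then keep only the strictly upper triangular part;
# the diagonal and everything below it become 0.0.
causal_mask = paddle.triu(
    paddle.full([decoder_length, decoder_length], float("-inf"), dtype="float32"),
    diagonal=1)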

Returns

If use_cache=False, the return will be the last hidden state of decoder with shape of [batch_size, seq_lens, hidden_size]. seq_lens corresponds to the length of input sequence. Otherwise, the return will be a tuple of (decoder_output, cache). Please refer to class paddle.nn.TransformerDecoder for more information regarding cache.

Return type

Tensor|tuple

Example

import paddle
from paddlenlp.transformers import BlenderbotTokenizer, BlenderbotModel

# "blenderbot-400M-distill" is the pretrained weight of BlenderbotForConditionalGeneration,
# Therefore some weight of additional layers in BlenderbotForConditionalGeneration
# might not be loaded and used regarding the following sample code.
pretrained_model_name = "blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(pretrained_model_name)
model = BlenderbotModel.from_pretrained(pretrained_model_name)

sample_text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(sample_text, return_attention_mask=True, return_token_type_ids=False)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
decoder_output = model(**inputs)
get_encoder()[source]

This method is required for models with an encoder-decoder architecture.
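
For instance, the encoder output can be computed once and then reused. A minimal sketch, continuing from the forward example above and assuming the object returned by get_encoder() accepts the same input_ids tensor as BlenderbotEncoder.forward:

# Continuing from the BlenderbotModel example above.
encoder = model.get_encoder()
encoder_output = encoder(input_ids=inputs['input_ids'])
# Reuse the precomputed encoder output instead of re-encoding the input.
decoder_output = model(**inputs, encoder_output=encoder_output)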

class BlenderbotPretrainedModel(*args, **kwargs)[source]

Bases: paddlenlp.transformers.model_utils.PretrainedModel

An abstract class for pretrained Blenderbot models. It provides the Blenderbot-related model_config_file, resource_files_names, pretrained_resource_files_map, pretrained_init_configuration and base_model_prefix for downloading and loading pretrained models. Refer to PretrainedModel for more details.

init_weights(layer)[source]

Initialization hook

base_model_class

alias of paddlenlp.transformers.blenderbot.modeling.BlenderbotModel

class BlenderbotEncoder(vocab_size, embed_tokens=None, pad_token_id=0, d_model=1280, num_encoder_layers=2, encoder_attention_heads=32, encoder_ffn_dim=5120, dropout=0.1, activation_function='gelu', attention_dropout=0.0, activation_dropout=0.0, max_position_embeddings=128, init_std=0.02, scale_embedding=True, normalize_before=True)[source]

Bases: paddlenlp.transformers.blenderbot.modeling.BlenderbotPretrainedModel

The encoder of Blenderbot Model. Please refer to PretrainedModel or BlenderbotModel for more information regarding methods and arguments.

forward(input_ids, attention_mask=None)[source]
Returns

The last hidden states at the last layer of the encoder. Its data type should be float and it has a shape of (batch_size, seq_lens, hidden_size). seq_lens corresponds to the length of the input sequence.

Return type

Tensor

class BlenderbotDecoder(vocab_size, embed_tokens=None, pad_token_id=0, d_model=1280, num_decoder_layers=12, decoder_attention_heads=32, decoder_ffn_dim=5120, dropout=0.1, activation_function='gelu', attention_dropout=0.0, activation_dropout=0.0, max_position_embeddings=128, init_std=0.02, scale_embedding=True, normalize_before=True)[source]

Bases: paddlenlp.transformers.blenderbot.modeling.BlenderbotPretrainedModel

The decoder of Blenderbot Model. Please refer to PretrainedModel and BlenderbotModel for more information regarding methods and arguments.

forward(decoder_input_ids=None, decoder_attention_mask=None, encoder_output=None, memory_mask=None, use_cache=False, cache=None)[source]

Please refer to BlenderbotModel for more information regarding the arguments.

class BlenderbotForConditionalGeneration(blenderbot)[source]

Bases: paddlenlp.transformers.blenderbot.modeling.BlenderbotPretrainedModel

forward(input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, encoder_output=None, use_cache=False, cache=None, **kwargs)[source]

Please refer to BlenderbotModel for more information regarding the arguments.

Returns

If use_cache=False, the return will be a tensor with shape of [batch_size, seq_lens, hidden_size]. Otherwise, the return will be a tuple of (decoder_output, cache).

Return type

Tensor|tuple

Example


import paddle
from paddlenlp.transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

pretrained_model_name = "blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(pretrained_model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(pretrained_model_name)

sample_text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(sample_text, return_attention_mask=True, return_token_type_ids=False)
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

# Generate response using beam search
result_ids, scores = model.generate(input_ids=inputs['input_ids'],
                                    max_length=60,
                                    min_length=20,
                                    decode_strategy='beam_search',
                                    num_beams=10,
                                    length_penalty=0.65)

for sequence_ids in result_ids.numpy().tolist():
    print("User: ", sample_text)
    print("bot: ", tokenizer.convert_ids_to_string(sequence_ids))
    # "bot: That's unfortunate. Are they trying to lose weight?"

prepare_inputs_for_generation(decoder_input_ids, attention_mask=None, encoder_output=None, use_cache=True, cache=None, **kwargs)[source]

Prepare inputs for the decoder to generate sentences.

Returns

A dictionary containing necessary inputs for generating the next token.

Return type

dict

get_encoder()[source]

This method is required for models with an encoder-decoder architecture.

class BlenderbotForCausalLM(blenderbot)[source]

Bases: paddlenlp.transformers.blenderbot.modeling.BlenderbotPretrainedModel

Constructs the Blenderbot model for causal language modeling. This model is equivalent to the Blenderbot decoder without cross-attention.

forward(input_ids=None, attention_mask=None, use_cache=False, cache=None, **kwargs)[source]
Parameters
  • input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • attention_mask (Tensor, optional) --

    Mask to indicate whether to perform attention on each input token or not. The values should be either 0 or 1. The attention scores will be set to -infinity for any positions in the mask that are 0, and will be unchanged for positions that are 1.

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    Its data type should be float32 and it has a shape of [batch_size, sequence_length]. Defaults to None.

  • use_cache (bool, optional) -- Indicates whether to use cache to speed up decoding. Defaults to False.

  • cache (list, optional) -- It is a list, and each element in the list is a tuple (incremental_cache, static_cache). See paddle.nn.TransformerDecoder.gen_cache for more details. It is only used for inference and should be None for training. Defaults to None.

Returns

If use_cache=False, the return will be a tensor with shape of [batch_size, seq_lens, hidden_size]. Otherwise, the return will be a tuple of (lm_logits, cache).

Return type

Tensor|tuple

Example


import paddle
from paddlenlp.transformers import BlenderbotTokenizer, BlenderbotForCausalLM

use_cache = False
text = "My friends are cool but they eat too many carbs."
model_name = "blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForCausalLM.from_pretrained(model_name)
model.eval()
inputs = tokenizer(text)
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}

with paddle.no_grad():
    outputs = model(**inputs, use_cache=use_cache)
    # outputs is a tuple of (lm_logits, cache) if use_cache=True.
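
Since use_cache=False in the example above, outputs is just the language-model output tensor. A hedged sketch of picking the most likely next token from the last position, assuming the last axis of the output indexes the vocabulary (the usual convention for an LM head):

# Assumed layout: outputs[batch, position, vocab]; take the argmax at the last position.
next_token_ids = paddle.argmax(outputs[:, -1, :], axis=-1)
print(next_token_ids.numpy())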

prepare_inputs_for_generation(input_ids, attention_mask=None, use_cache=True, cache=None, **kwargs)[source]

Prepare inputs for the decoder to generate sentences.

Returns

A dictionary containing necessary inputs for generating the next token.

Return type

dict